CN116630490A - Automatic generation method of digital twin three-dimensional character action based on big data - Google Patents

Automatic generation method of digital twin three-dimensional character action based on big data

Info

Publication number
CN116630490A
Authority
CN
China
Prior art keywords
dimensional
center position
action
character
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310449142.5A
Other languages
Chinese (zh)
Inventor
尤海宁
赵丽
刘灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Normal University
Original Assignee
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Normal University
Priority to CN202310449142.5A
Publication of CN116630490A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of automatic three-dimensional action generation, and discloses a method for automatically generating digital twin three-dimensional character actions based on big data, which comprises the following steps: S1, acquiring a real-character video: selecting from a database a real-character action video for which a three-dimensional action is to be produced; S2, extracting three-dimensional skeleton key points; S3, extracting facial expression key points; S4, generating the three-dimensional character action. According to the invention, real characters and character action videos from every field can be stored in the database; extracting the three-dimensional skeleton key points yields the time-series coordinate sequence information of the skeleton key points for each real-character action, from which the three-dimensional avatar model automatically generates the corresponding three-dimensional character action; and extracting the facial expression key points ensures that each action corresponds to an expression, so that the character images created in animation-character modeling, game-character modeling, or modeling in other fields are more lifelike and closer to real characters.

Description

Automatic generation method of digital twin three-dimensional character action based on big data
Technical Field
The invention relates to the technical field of automatic three-dimensional action generation, and in particular to a method for automatically generating digital twin three-dimensional character actions based on big data.
Background
With the development of 3D technology in recent years, content in many fields is no longer limited to two dimensions, and more and more 3D cartoons and 3D games are being created. The actions of characters in these cartoons and games often rely on the principle of digital twinning to dynamically simulate real entities: the actions of a real entity are transferred to a virtual character, so that the virtual character's actions are essentially consistent with those of the real character.
Chinese patent publication No. CN111080776A discloses a processing method and system for three-dimensional data acquisition and reproduction of human body actions. That technique synchronously captures images of a person from multiple angles and rapidly completes three-dimensional modeling of the human body; based on the captured human motion videos, it estimates human posture through deep learning, extracts the human actions, performs three-dimensional reconstruction by combining them with a bone-bound three-dimensional human model, generates a human model endowed with the actions, displays it virtually, and reproduces the whole process of the human motion. The deep-learning method improves the accuracy with which the model computes human actions and provides the necessary support for acquiring and reproducing various human actions. However, like existing approaches such as OPENPOSE and OPENGL three-dimensional simulation, this automatic generation method produces three-dimensional actions by building a human joint-chain model: it generates the limb actions but does not automatically generate the facial actions corresponding to the real character's current action. The facial actions matching the created limb actions therefore usually have to be added later, so the facial expressions are often not lifelike, appear stiff and lack vitality, and are difficult to match with the limb actions. Accordingly, those skilled in the art provide a method for automatically generating digital twin three-dimensional character actions based on big data to solve the problems set forth in the background art.
Disclosure of Invention
The invention aims to provide a method for automatically generating digital twin three-dimensional character actions based on big data, so as to solve the problems raised in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the method for automatically generating digital twin three-dimensional character actions based on big data comprises the following steps:
S1, acquiring a real-character video: selecting from a database a real-character action video for which a three-dimensional action is to be produced;
S2, extracting three-dimensional skeleton key points: performing distortion correction on the video, and acquiring the time-series coordinate sequence information of the three-dimensional skeleton key points in the video with a key point algorithm;
S3, extracting facial expression key points: processing the character's face in the real-character action video to obtain the time-series coordinate sequence information of the two-dimensional face key points;
S4, generating the three-dimensional character action: inputting the time-series coordinate sequence of the two-dimensional face key points and the time-series coordinate sequence of the three-dimensional skeleton key points into a pre-constructed three-dimensional avatar model, which automatically generates a three-dimensional character action corresponding to the real-character action, thereby creating a lifelike action through the digital twinning principle.
As a still further aspect of the invention: the database in S1 contains real characters and character action videos from every field, and the database regularly selects and adds character action videos from the Internet to keep itself updated; the real-character action video in S1 includes videos of the same part of the same character shot from different angles at the same point in time.
As a still further aspect of the invention: the key point algorithm in S2 is a 3D human skeleton key point detection algorithm.
As a still further aspect of the invention: the three-dimensional skeleton key points in S2 are key parts of a three-dimensional skeleton model; the time-series coordinate sequence information is the coordinate information of the key points in three-dimensional skeleton coordinates.
As a still further aspect of the invention: the key parts are rotary joints, including the rotary joint between the head and the chest, between the left upper arm and the left forearm, between the left forearm and the left hand, between the right upper arm and the right forearm, between the right forearm and the right hand, between the left thigh and the left calf, between the left calf and the left foot, between the right thigh and the right calf, and between the right calf and the right foot.
As a still further aspect of the invention: the three-dimensional skeleton coordinates adopt a coordinate system based on the standard posture of a human body standing upright with both arms hanging naturally, specifically: the front of the human body is taken as the positive Z direction, the left side of the human body as the positive X direction, and the vertical upward direction as the positive Y direction; the base coordinate system is defined at the waist of the human body, with origin coordinates (0, 0, 0).
As a still further aspect of the invention: the character's face in S3 is processed as follows: a frontal face image corresponding to each action of the character in the real-character video is extracted; a two-dimensional coordinate system is defined on the frontal face, with the nose as the coordinate origin, the left side of the nose as the positive X direction, and the region above the nose as the positive Y direction; the coordinates of the important facial parts are extracted for each action of the character; and all coordinate points of all important parts form the time-series coordinate sequence information of the two-dimensional face key points.
As a still further aspect of the invention: the important parts comprise facial organs and facial muscles; the facial organs include the ears, nose, mouth, and eyes, and the facial muscles include the mandibular muscle, platysma, labial muscle, levator labii, masseter, dilator naris, nasal dorsum muscle, orbicularis oculi, frontalis, buccinator, orbicularis oris, lower zygomatic muscle, upper zygomatic muscle, levator of the nasal septum, temporalis, and corrugator (frown muscle).
As a still further aspect of the invention: the two-dimensional key points of the facial organs comprise the center position of the outer edge of each ear, the center position of the nose, the center position of the upper lip, the center position of the lower lip, and the center position of each eyeball; the two-dimensional key points of the facial muscles comprise the center positions of the mandibular muscle, the platysma, the labial muscle, the levator labii, the masseter, the dilator naris, the nasal dorsum muscle, the orbicularis oculi, the frontalis, the buccinator, the orbicularis oris, the lower zygomatic muscle, the upper zygomatic muscle, the levator of the nasal septum, the temporalis, and the corrugator.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, the real figures and figure action videos in each field can be stored through the database, the database is connected with the Internet, and new figure action videos can be selected and added in the Internet, so that the timing update of the database is ensured, and the action selection required by the modeling of the animation figures or the modeling of the game figures or the modeling in other fields is facilitated; extracting time sequence coordinate sequence information of three-dimensional skeleton key points capable of acquiring each real character action, automatically generating a three-dimensional character action corresponding to the real character action through a three-dimensional virtual image model, and creating an emulation action through a digital twinning principle; the facial expression key point extraction method can acquire time sequence coordinate sequence information of the facial two-dimensional key points corresponding to each action, so that each action corresponds to one expression, the character image created by the modeling of the animation character or the modeling of the game character or the modeling of other fields is more anthropomorphic, the character image tends to be true, and the situation that the created character image is slow and hard to accept although the action has no defects is avoided.
Detailed Description
In an embodiment of the invention, the method for automatically generating digital twin three-dimensional character actions based on big data comprises the following steps:
S1, acquiring a real-character video: selecting from a database a real-character action video for which a three-dimensional action is to be produced;
S2, extracting three-dimensional skeleton key points: performing distortion correction on the video, and acquiring the time-series coordinate sequence information of the three-dimensional skeleton key points in the video with a key point algorithm;
S3, extracting facial expression key points: processing the character's face in the real-character action video to obtain the time-series coordinate sequence information of the two-dimensional face key points;
S4, generating the three-dimensional character action: inputting the time-series coordinate sequence of the two-dimensional face key points and the time-series coordinate sequence of the three-dimensional skeleton key points into a pre-constructed three-dimensional avatar model, which automatically generates a three-dimensional character action corresponding to the real-character action, thereby creating a lifelike action through the digital twinning principle.
Preferably, the database in S1 contains real characters and character action videos from every field, and the database regularly selects and adds character action videos from the Internet to keep itself updated; the real-character action video in S1 includes videos of the same part of the same character shot from different angles at the same point in time.
Preferably, the key point algorithm in S2 is a 3D human skeleton key point detection algorithm.
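To make the flow concrete, the following is a minimal, illustrative Python sketch of steps S1 to S4. The patent names only a generic 3D human skeleton key point detection algorithm, so MediaPipe Pose (whose world landmarks are metric 3D coordinates centered between the hips, close to the waist-origin convention described below) and MediaPipe Face Mesh are assumed here purely as stand-ins, and avatar_model.drive() is a hypothetical hook for the pre-constructed three-dimensional avatar model.

    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose.Pose(static_image_mode=False)
    mp_face = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)

    def extract_keypoint_sequences(video_path):
        """S2 + S3: per-frame 3D skeleton key points and 2D face key points."""
        skeleton_seq, face_seq = [], []
        cap = cv2.VideoCapture(video_path)  # S1: video chosen from the database
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            pose_res = mp_pose.process(rgb)
            face_res = mp_face.process(rgb)
            if pose_res.pose_world_landmarks:  # metric 3D landmarks, hip-centered
                skeleton_seq.append([(lm.x, lm.y, lm.z)
                                     for lm in pose_res.pose_world_landmarks.landmark])
            if face_res.multi_face_landmarks:  # normalized 2D face landmarks
                face_seq.append([(lm.x, lm.y)
                                 for lm in face_res.multi_face_landmarks[0].landmark])
        cap.release()
        return skeleton_seq, face_seq

    # S4 (hypothetical hook): drive the pre-built avatar with both sequences.
    # skeleton_seq, face_seq = extract_keypoint_sequences("real_person_action.mp4")
    # avatar_model.drive(skeleton_seq, face_seq)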
Preferably, the three-dimensional skeleton key points in S2 are key parts of the three-dimensional skeleton model; the time-series coordinate sequence information is the coordinate information of the key points in three-dimensional skeleton coordinates.
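For illustration only, this time-series coordinate sequence information can be pictured as the following assumed data structure: one 3D coordinate per key point per frame, so that every key point traces a trajectory over the course of the action.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    Coordinate3D = Tuple[float, float, float]  # (x, y, z) in skeleton coordinates

    @dataclass
    class KeypointFrame:
        timestamp: float                      # seconds from the start of the video
        coordinates: Dict[str, Coordinate3D]  # key point name -> 3D position

    @dataclass
    class KeypointSequence:
        """Time-ordered coordinate sequence for one real-character action video."""
        frames: List[KeypointFrame]

        def trajectory(self, keypoint: str) -> List[Coordinate3D]:
            """The coordinate sequence of one key point over the whole action."""
            return [f.coordinates[keypoint]
                    for f in self.frames if keypoint in f.coordinates]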
Preferably, the key parts are rotary joints, including the rotary joint between the head and the chest, between the left upper arm and the left forearm, between the left forearm and the left hand, between the right upper arm and the right forearm, between the right forearm and the right hand, between the left thigh and the left calf, between the left calf and the left foot, between the right thigh and the right calf, and between the right calf and the right foot.
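Written out as parent-child part pairs (with illustrative identifier names, since the patent defines the joints only in prose), the nine rotary joints form the following joint chain:

    # The nine rotary joints above, as (parent part, child part) pairs.
    ROTARY_JOINTS = [
        ("head", "chest"),
        ("left_upper_arm", "left_forearm"),
        ("left_forearm", "left_hand"),
        ("right_upper_arm", "right_forearm"),
        ("right_forearm", "right_hand"),
        ("left_thigh", "left_calf"),
        ("left_calf", "left_foot"),
        ("right_thigh", "right_calf"),
        ("right_calf", "right_foot"),
    ]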
Preferably, the three-dimensional skeleton coordinates adopt a coordinate system based on the standard posture of a human body standing upright with both arms hanging naturally, specifically: the front of the human body is taken as the positive Z direction, the left side of the human body as the positive X direction, and the vertical upward direction as the positive Y direction; the base coordinate system is defined at the waist of the human body, with origin coordinates (0, 0, 0).
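A minimal numpy sketch of re-expressing detected key points in this body coordinate system follows. It assumes, purely for illustration, that the detector's axes are already parallel to the body axes, so only a translation to the waist origin is required; a full implementation would also rotate by the torso orientation in each frame.

    import numpy as np

    def to_body_frame(keypoints):
        """keypoints: name -> (x, y, z) in the detector's frame, incl. 'waist'.

        Returns the same points translated so the waist sits at (0, 0, 0),
        per the base coordinate system (X left, Y up, Z out of the body front).
        """
        origin = np.asarray(keypoints["waist"], dtype=float)
        return {name: tuple(np.asarray(p, dtype=float) - origin)
                for name, p in keypoints.items()}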
Preferably, the character's face in S3 is processed as follows: a frontal face image corresponding to each action of the character in the real-character video is extracted; a two-dimensional coordinate system is defined on the frontal face, with the nose as the coordinate origin, the left side of the nose as the positive X direction, and the region above the nose as the positive Y direction; the coordinates of the important facial parts are extracted for each action of the character; and all coordinate points of all important parts form the time-series coordinate sequence information of the two-dimensional face key points.
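A short sketch of this nose-centered normalization, assuming key points arrive as pixel coordinates under hypothetical names:

    def to_face_frame(face_points):
        """face_points: name -> (x, y) pixel coordinates on the frontal face
        image, including a 'nose' entry that becomes the coordinate origin."""
        nx, ny = face_points["nose"]
        # Image rows grow downward, so Y is negated to make 'up' positive;
        # whether X also needs mirroring depends on the camera convention.
        return {name: (x - nx, -(y - ny)) for name, (x, y) in face_points.items()}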
The important parts include facial organs and facial muscles: the facial organs include the ears, nose, mouth, and eyes, and the facial muscles include the mandibular muscle, platysma, labial muscle, levator labii, masseter, dilator naris, nasal dorsum muscle, orbicularis oculi, frontalis, buccinator, orbicularis oris, lower zygomatic muscle, upper zygomatic muscle, levator of the nasal septum, temporalis, and corrugator (frown muscle).
The two-dimensional key points of the facial organs comprise the center position of the outer edge of each ear, the center position of the nose, the center position of the upper lip, the center position of the lower lip, and the center position of each eyeball; the two-dimensional key points of the facial muscles comprise the center position of each facial muscle listed above.
The time-series coordinate sequence information of the two-dimensional face key points gives the three-dimensional character action generated from the real character action a corresponding expression: for example, a quarreling action corresponds to an angry expression, taking an examination corresponds to a solemn, concentrated expression, and seeing and hugging a friend corresponds to a happy expression.
The angry expression while quarreling includes the center position of the upper lip shifting upward, the center position of the lower lip shifting downward, the center position of the corrugator moving downward and toward the nose, the center position of the orbicularis oculi shifting upward, the center position of the levator labii shifting upward, the center position of the labial muscle shifting downward, and the like.
The solemn, concentrated expression during an examination includes the center positions of the upper and lower lips remaining unchanged, the center position of the corrugator moving downward and toward the nose, the center position of the orbicularis oculi shifting downward, and the center positions of the levator labii and the labial muscle remaining unchanged.
The happy expression when seeing and hugging a friend includes the center positions of the upper and lower zygomatic muscles shifting upward, the center position of the levator labii shifting toward the nose, and the like.
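These directional offsets can be collected into a rule table. The sketch below is an assumed encoding with hypothetical key-point names and unit displacement directions; the patent specifies only the directions of the offsets, not their magnitudes or any data format.

    # Directional offset rules per expression, as (dx, dy) unit directions
    # applied to the neutral key points in the nose-centered face frame.
    EXPRESSION_RULES = {
        "angry": {                               # e.g. quarreling
            "upper_lip_center": (0.0, 1.0),      # shifts upward
            "lower_lip_center": (0.0, -1.0),     # shifts downward
            "corrugator_center": (-1.0, -1.0),   # moves down and toward the nose
            "orbicularis_oculi_center": (0.0, 1.0),
            "levator_labii_center": (0.0, 1.0),
            "labial_muscle_center": (0.0, -1.0),
        },
        "solemn": {                              # e.g. sitting an examination
            "corrugator_center": (-1.0, -1.0),   # down and toward the nose
            "orbicularis_oculi_center": (0.0, -1.0),
            # lip key points stay in place for this expression
        },
        "happy": {                               # e.g. hugging a friend
            "upper_zygomatic_center": (0.0, 1.0),
            "lower_zygomatic_center": (0.0, 1.0),
            "levator_labii_center": (-1.0, 0.0), # shifts toward the nose
        },
    }

    def apply_expression(face_points, expression, scale=1.0):
        """Offset neutral face key points by the rule table for one expression."""
        out = dict(face_points)
        for name, (dx, dy) in EXPRESSION_RULES.get(expression, {}).items():
            if name in out:
                x, y = out[name]
                out[name] = (x + dx * scale, y + dy * scale)
        return out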
Therefore, each action the character makes during animation-character modeling or game-character modeling can be given a corresponding expression, so that the actions and expressions of the modeled character match.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is adopted for clarity only; the specification should be taken as a whole, and the technical solutions in the embodiments may be suitably combined to form other embodiments understandable to those skilled in the art.

Claims (9)

1. A method for automatically generating digital twin three-dimensional character actions based on big data, characterized by comprising the following steps:
S1, acquiring a real-character video: selecting from a database a real-character action video for which a three-dimensional action is to be produced;
S2, extracting three-dimensional skeleton key points: performing distortion correction on the video, and acquiring the time-series coordinate sequence information of the three-dimensional skeleton key points in the video with a key point algorithm;
S3, extracting facial expression key points: processing the character's face in the real-character action video to obtain the time-series coordinate sequence information of the two-dimensional face key points;
S4, generating the three-dimensional character action: inputting the time-series coordinate sequence of the two-dimensional face key points and the time-series coordinate sequence of the three-dimensional skeleton key points into a pre-constructed three-dimensional avatar model, which automatically generates a three-dimensional character action corresponding to the real-character action, thereby creating a lifelike action through the digital twinning principle.
2. The method for automatically generating digital twin three-dimensional character actions based on big data according to claim 1, wherein the database in S1 contains real characters and character action videos from every field, and the database regularly selects and adds character action videos from the Internet to keep itself updated; the real-character action video in S1 includes videos of the same part of the same character shot from different angles at the same point in time.
3. The method for automatically generating digital twin three-dimensional character actions based on big data according to claim 1, wherein the key point algorithm in S2 is a 3D human skeleton key point detection algorithm.
4. The method for automatically generating digital twin three-dimensional character actions based on big data according to claim 1, wherein the three-dimensional skeleton key points in S2 are key parts of a three-dimensional skeleton model; the time-series coordinate sequence information is the coordinate information of the key points in three-dimensional skeleton coordinates.
5. The method for automatically generating digital twin three-dimensional character actions based on big data according to claim 4, wherein the key parts are rotary joints, including the rotary joint between the head and the chest, between the left upper arm and the left forearm, between the left forearm and the left hand, between the right upper arm and the right forearm, between the right forearm and the right hand, between the left thigh and the left calf, between the left calf and the left foot, between the right thigh and the right calf, and between the right calf and the right foot.
6. The method for automatically generating digital twin three-dimensional character actions based on big data according to claim 5, wherein the three-dimensional skeleton coordinates adopt a coordinate system based on the standard posture of a human body standing upright with both arms hanging naturally, specifically: the front of the human body is taken as the positive Z direction, the left side of the human body as the positive X direction, and the vertical upward direction as the positive Y direction; the base coordinate system is defined at the waist of the human body, with origin coordinates (0, 0, 0).
7. The method for automatically generating digital twin three-dimensional character actions based on big data according to claim 1, wherein the character's face in S3 is processed as follows: a frontal face image corresponding to each action of the character in the real-character video is extracted; a two-dimensional coordinate system is defined on the frontal face, with the nose as the coordinate origin, the left side of the nose as the positive X direction, and the region above the nose as the positive Y direction; the coordinates of the important facial parts are extracted for each action of the character; and all coordinate points of all important parts form the time-series coordinate sequence information of the two-dimensional face key points.
8. The method for automatically generating digital twin three-dimensional character actions based on big data according to claim 7, wherein the important parts comprise facial organs and facial muscles; the facial organs include the ears, nose, mouth, and eyes, and the facial muscles include the mandibular muscle, platysma, labial muscle, levator labii, masseter, dilator naris, nasal dorsum muscle, orbicularis oculi, frontalis, buccinator, orbicularis oris, lower zygomatic muscle, upper zygomatic muscle, levator of the nasal septum, temporalis, and corrugator (frown muscle).
9. The method for automatically generating digital twin three-dimensional character actions based on big data according to claim 8, wherein the two-dimensional key points of the facial organs comprise the center position of the outer edge of each ear, the center position of the nose, the center position of the upper lip, the center position of the lower lip, and the center position of each eyeball; the two-dimensional key points of the facial muscles comprise the center positions of the mandibular muscle, the platysma, the labial muscle, the levator labii, the masseter, the dilator naris, the nasal dorsum muscle, the orbicularis oculi, the frontalis, the buccinator, the orbicularis oris, the lower zygomatic muscle, the upper zygomatic muscle, the levator of the nasal septum, the temporalis, and the corrugator.
CN202310449142.5A (priority date 2023-04-24, filing date 2023-04-24): Automatic generation method of digital twin three-dimensional character action based on big data. Status: Pending. Published as CN116630490A (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310449142.5A | 2023-04-24 | 2023-04-24 | Automatic generation method of digital twin three-dimensional character action based on big data

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310449142.5A | 2023-04-24 | 2023-04-24 | Automatic generation method of digital twin three-dimensional character action based on big data

Publications (1)

Publication Number | Publication Date
CN116630490A (en) | 2023-08-22

Family

ID=87620301

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310449142.5A (pending) | Automatic generation method of digital twin three-dimensional character action based on big data | 2023-04-24 | 2023-04-24

Country Status (1)

Country Link
CN (1) CN116630490A (en)


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination