WO2019154013A1 - Expression animation data processing method, computer device, and storage medium

Expression animation data processing method, computer device, and storage medium

Info

Publication number
WO2019154013A1
Authority
WO
WIPO (PCT)
Prior art keywords
expression
data
target
avatar
current
Application number
PCT/CN2019/071336
Other languages
English (en)
French (fr)
Inventor
郭艺帆
刘楠
薛丰
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to EP19751218.9A priority Critical patent/EP3751521A4/en
Publication of WO2019154013A1 publication Critical patent/WO2019154013A1/zh
Priority to US16/895,912 priority patent/US11270488B2/en

Classifications

    • G06T13/205 - 3D [Three Dimensional] animation driven by audio data
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G06V40/161 - Human faces: Detection; Localisation; Normalisation
    • G06V40/176 - Facial expression recognition: Dynamic expression
    • G06T2207/30201 - Indexing scheme for image analysis or image enhancement: Subject of image: Human being; Person: Face

Definitions

  • the present application relates to the field of computer technology, and in particular, to an expression animation data processing method, a computer readable storage medium, and a computer device.
  • avatar modeling technology has emerged.
  • the avatar model can form a synchronized corresponding expression according to the expression of the behavior subject in the video picture.
  • An expression animation data processing method is provided.
  • An expression animation data processing method comprising:
  • the computer device determines the position of the face in the image and obtains the avatar model;
  • the computer device acquires current expression data according to the position of the face in the image and the three-dimensional facial model;
  • the computer device acquires expression change data from the current expression data;
  • the computer device determines a target segmentation expression region that matches the expression change data, the target segmentation expression region being selected from the segmentation expression regions corresponding to the avatar model;
  • the computer device acquires target basic avatar data that matches the target segmentation expression region, and generates loaded expression data by combining the expression change data with the target basic avatar data; and
  • the computer device loads the loaded expression data into the target segmentation expression region to update the expression of the virtual animated image corresponding to the avatar model.
  • A computer device comprising a memory, a processor, and computer readable instructions stored on the memory and operable on the processor, the processor, when executing the computer readable instructions, performing the following steps:
  • the target segmentation expression region is selected from each segmentation expression region corresponding to the avatar model;
  • the loaded expression data is loaded into the target segmentation expression area to update the expression of the virtual animation image corresponding to the avatar model.
  • a computer readable storage medium having stored thereon computer readable instructions that, when executed by a processor, cause the processor to perform the following steps:
  • the target segmentation expression region is selected from each segmentation expression region corresponding to the avatar model;
  • the loaded expression data is loaded into the target segmentation expression area to update the expression of the virtual animated image corresponding to the avatar model.
  • FIG. 1 is an application environment diagram of an expression animation data processing method in an embodiment;
  • FIG. 2 is a schematic flowchart of an expression animation data processing method in an embodiment;
  • FIG. 3 is a schematic flowchart of an expression animation data processing method in another embodiment;
  • FIG. 4 is a schematic diagram of a moving part of a virtual animated image in an embodiment;
  • FIG. 5 is a schematic diagram of a bone for controlling a second moving part in an embodiment;
  • FIG. 6 is a schematic view showing the ear bending when the head (the first moving part) is rotated in an embodiment;
  • FIG. 7 is a schematic view showing the tongue extending when the mouth (the first moving part) is opened in an embodiment;
  • FIG. 8 is a flowchart showing the determination of bone control data in an embodiment;
  • FIG. 10 is a flowchart showing the determination of bone control data in still another embodiment;
  • FIG. 11 is a schematic flowchart of an expression animation data processing method in still another embodiment;
  • FIG. 12 is a schematic diagram of an interface displaying a virtual animated image in a terminal in an embodiment;
  • FIG. 13 is a schematic flowchart of an expression animation data processing method in still another embodiment;
  • FIG. 14 is a schematic flowchart of determining a target segmentation expression region in an embodiment;
  • FIG. 15 is a schematic flowchart of an expression animation data processing method in an embodiment;
  • FIG. 16 is a schematic diagram of the sub-basic avatar model set corresponding to each segmented expression region in an embodiment;
  • FIG. 17 is a schematic flowchart of generating loaded expression data in an embodiment;
  • FIG. 18 is a schematic flowchart of loading the loaded expression data into a target segmentation expression area in an embodiment;
  • FIG. 19 is a schematic flowchart of generating loaded expression data in an embodiment;
  • FIG. 20 is a schematic diagram of loading expression data according to weights in an embodiment;
  • FIG. 21 is a schematic flowchart of obtaining expression change data in an embodiment;
  • FIG. 22 is a schematic flowchart of obtaining expression change data in another embodiment;
  • FIG. 23 is a schematic diagram of a background image of the virtual environment in which a virtual animated image is placed in an embodiment;
  • FIG. 24 is a structural block diagram of an expression animation data processing apparatus in an embodiment;
  • FIG. 25 is a structural block diagram of an expression animation data processing apparatus in another embodiment;
  • FIG. 26 is a structural block diagram of an expression animation data processing apparatus in still another embodiment;
  • FIG. 27 is a structural block diagram of an expression animation data processing apparatus in still another embodiment;
  • FIG. 28 is a structural block diagram of a target segmentation expression area detecting module in an embodiment;
  • FIG. 29 is a structural block diagram of an expression animation data processing apparatus in another embodiment;
  • FIG. 30 is a structural block diagram of a virtual animated image update module in an embodiment;
  • FIG. 31 is a structural block diagram of a target basic avatar data acquisition module in an embodiment;
  • FIG. 32 is a structural block diagram of a computer device in an embodiment.
  • FIG. 1 is an application environment diagram of an expression animation data processing method in an embodiment.
  • the expression animation data processing method is applied to an expression animation data processing system.
  • The expression animation data processing system includes a terminal 110 and a server 120. After the face of the behavior subject is captured by the shooting and collecting device, the terminal 110 determines the position of the face in the image, acquires the avatar model, and acquires the current expression data of the behavior subject according to the three-dimensional facial model in the terminal. The expression change data is obtained from the current expression data, and the matching target segmentation expression region is determined according to the expression change data.
  • The target basic avatar data matching the target segmentation expression region is then acquired, the loaded expression data is generated according to the target basic avatar data, and the loaded expression data is loaded into the target segmentation expression region to update the expression of the virtual animated image corresponding to the avatar model.
  • the image of the collected behavior subject may be sent to the server 120, and the server acquires the current expression data of the behavior subject in the image according to the built-in three-dimensional facial model.
  • the expression change data is obtained from the current expression data, and the server determines the matched target segmentation expression area according to the expression change data.
  • The server obtains the target basic avatar data that matches the target segmentation expression region, generates loaded expression data according to the target basic avatar data, and sends the loaded expression data to the terminal; the terminal loads the loaded expression data into the target segmentation expression area to update the expression of the virtual animated image corresponding to the avatar model.
  • the terminal 110 and the server 120 are connected through a network.
  • the terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like.
  • the server 120 can be implemented by a stand-alone server or a server cluster composed of a plurality of servers.
  • An expression animation data processing method is provided. This embodiment is mainly illustrated by the method being applied to the terminal 110 or the server 120 in FIG. 1 described above.
  • the method for processing an expression animation data specifically includes the following steps:
  • Step 202 Determine a position of the face in the image to obtain an avatar model.
  • the images here include but are not limited to pictures, photos, movies, and the like. It may be a photo taken by a camera of the terminal, a picture taken through a terminal screen capture, or an image uploaded by an application that can upload an image, and the like.
  • the terminals herein include, but are not limited to, various personal computers, notebook computers, personal digital assistants, smart phones, tablets, portable wearable devices, and the like having image processing functions.
  • the avatar model is a model used to display a virtual animated image.
  • the so-called virtual animated image is an animated image designed by design software.
  • the virtual animated image can be, but is not limited to, the image of a puppy, the image of a kitten, the image of a mouse, and the like.
  • Specifically, the position of the face is determined in a photo taken by the terminal camera, a picture acquired by capturing the terminal screen, or an image uploaded through an application capable of uploading images, and an avatar model for displaying the virtual animated image is acquired.
  • Alternatively, the terminal sends the photo taken by the camera, the picture acquired through the terminal screen capture, or the image uploaded by the application to the server, and the server determines the specific position of the face in the image according to the face in the image, and further acquires the avatar model.
  • Step 204 Acquire current expression data according to the position of the face in the image and the three-dimensional facial model.
  • The three-dimensional facial model is used to acquire the current expression data of the behavior subject collected by the terminal's shooting and collecting device, and the current expression data is the expression data of the current facial expression change of the behavior subject collected by that device.
  • Since the facial expression of the behavior subject can be known from the face of the behavior subject, facial feature points are extracted from the face of the behavior subject in the image collected by the shooting and collecting device, and the three-dimensional facial model of the behavior subject is established according to the extracted facial feature points.
  • the three-dimensional facial model may specifically be, but not limited to, a three-dimensional face model, a three-dimensional animal facial model, and the like.
  • Specifically, facial feature point extraction is performed on the face at the specific position of the image. After the 3D facial model of the behavior subject is established according to the extracted facial feature points, the facial data of the current behavior subject is obtained from the 3D facial model, and the current expression data corresponding to the current behavior subject is obtained according to the facial data.
  • the facial feature points are extracted from the facial data on the three-dimensional facial model, and the current facial expression data is acquired according to the facial feature points.
  • the current expression data may be, but not limited to, expression data corresponding to the eye, expression data corresponding to the mouth, expression data corresponding to the nose, and the like.
  • Step 206 Acquire expression change data from current expression data.
  • The expression change data here is expression data in which the facial expression of the behavior subject changes. The expression change data may be, but is not limited to, expression data that has changed in comparison with the facial expression of the behavior subject in a historical frame image. For example, if the facial expression of the behavior subject in one frame is expressionless, meaning that the feature points of the facial expression have not changed, and the facial expression of the behavior subject in the next frame is a smile, then the feature points of the mouth of the behavior subject have changed, so the expression data corresponding to the mouth can be used as the expression change data.
  • Specifically, the facial expression data of the current behavior subject in the three-dimensional facial model may be compared with the facial expression data in the three-dimensional facial model corresponding to the face of the behavior subject in a historical frame image to obtain the expression change data of the current behavior subject; that is, the expression change data of the current behavior subject can be obtained by directly comparing the feature points corresponding to the facial expression data.
  • Step 208 Determine a target segmentation expression region that matches the expression change data, and the target segmentation expression region is selected from each segmentation expression region corresponding to the avatar model.
  • the segmented expression area is an expression area used in the avatar model to generate an expression movement change to generate an expression corresponding to the expression change data. For example, when the expression change data is a big laugh, since the laughter is caused by the change of the expression movement of the mouth, the mouth in the avatar model is the target segmentation expression area that matches the expression change data as a laugh.
  • As shown in FIG. 3, which is a schematic diagram showing the divided expression areas of an avatar model in one embodiment, the avatar model is a facial model of a virtual animated image and can be divided into multiple segmented expression areas according to certain rules; the segmented expression areas may be, but are not limited to, the two ears, the two eyes, and the mouth of the virtual animated image.
  • Specifically, the target segmentation expression area matching the expression change data is determined from the plurality of segmentation expression areas of the avatar model according to the expression change data.
  • Step 210 Acquire target basic avatar data matching the target segmentation expression area, and generate loaded expression data by combining the expression change data with the target basic avatar data.
  • the basic avatar data is a set of virtual animated image expression data that forms a basic expression corresponding to each divided expression area of the avatar model.
  • The basic avatar data may be, but is not limited to, the mouth expression data corresponding to the segmented expression area of the mouth and the eye expression data corresponding to the segmented expression area of the eye. Since the target segmentation expression region is obtained by matching the expression change data against the plurality of segmentation expression regions of the avatar model, the target basic avatar data is the basic avatar data obtained by matching the target segmentation expression region against the basic avatar data.
  • Specifically, the target basic avatar data is the basic avatar data corresponding to the target segmentation expression area. Because multiple expression changes can occur in the target segmentation expression area, and each expression change has a corresponding expression change coefficient, the loaded expression data can be generated by combining the expression change data with the corresponding expression change coefficients in the target basic avatar data.
  • The so-called loaded expression data is the expression data that is loaded directly into the segmented expression area to control the expression of the virtual animated image corresponding to the avatar model, and it corresponds to the expression in the three-dimensional facial model.
  • loading expression data can be, but is not limited to, smiling, laughing, blinking, and the like.
  • Step 212 Load the loaded expression data into the target segmentation expression area to update the expression of the virtual animation image corresponding to the avatar model.
  • the avatar model is a model used to display the virtual animation image.
  • the so-called virtual animation image is an animated image designed by design software.
  • the virtual animated image can be, but is not limited to, the image of a puppy, the image of a kitten, the image of a mouse, and the like.
  • Specifically, since the loaded expression data is generated by combining the expression change data with the corresponding expression change coefficients, loading the generated loaded expression data into the target segmentation area of the avatar model allows the virtual animated image in the avatar model to make an expression change corresponding to the current expression of the three-dimensional facial model; that is, the virtual animated image in the avatar model can produce the same expression as the behavior subject in the image captured by the capture device.
  • For example, if the expression of the behavior subject in the image captured by the shooting device is a big laugh, the mouth of the virtual animated image in the avatar model also makes a laughing expression, where the mouth is the target segmentation area in the avatar model corresponding to the loaded expression of a big laugh.
  • In the above expression animation data processing method, the current expression data of the behavior subject is obtained according to the three-dimensional facial model, the expression change data of the behavior subject is obtained from the current expression data, the matching target segmentation expression area is obtained from the segmented expression regions of the avatar model according to the expression change data, the target basic avatar data matched by the target segmentation expression area is acquired to generate the loaded expression data, and finally the loaded expression data is loaded into the target segmentation expression area to update the expression of the virtual animated image corresponding to the avatar model. Therefore, when the avatar model loads the expression, only the expression data corresponding to the part whose expression is updated is loaded, which reduces the amount of calculation for the virtual animated image and improves the efficiency of updating the expression of the virtual animated image.
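  • The per-region update flow described above can be summarized in the following minimal sketch. It uses plain dictionaries of expression coefficients as stand-ins for the face tracker, the three-dimensional facial model, and the avatar model; all names and the coefficient-to-region mapping are illustrative assumptions rather than the patent's actual implementation.

```python
# Minimal, self-contained sketch of the per-region expression update flow.
# The regions and coefficient names below are illustrative assumptions.

REGIONS = {"mouth": ["smile", "jaw_open"], "left_eye": ["blink_l"], "right_eye": ["blink_r"]}

def expression_change(current, previous, eps=1e-3):
    """Expression change data: coefficients that differ from the previous frame."""
    return {k: v for k, v in current.items() if abs(v - previous.get(k, 0.0)) > eps}

def target_regions(change):
    """Target segmentation expression regions matching the change data."""
    return {r for r, keys in REGIONS.items() if any(k in change for k in keys)}

def update_avatar(avatar, current, previous):
    change = expression_change(current, previous)
    for region in target_regions(change):
        # Only the matched region is recombined and reloaded.
        loaded = {k: change[k] for k in REGIONS[region] if k in change}
        avatar[region].update(loaded)
    return avatar

avatar = {region: {} for region in REGIONS}
prev = {"smile": 0.0, "blink_l": 0.0}
curr = {"smile": 0.8, "blink_l": 0.0}      # the subject starts to laugh
print(update_avatar(avatar, curr, prev))    # only the mouth region is updated
```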
  • Acquiring the avatar model includes: extracting facial feature points from the face in the image and acquiring a corresponding avatar model according to the facial feature points; or acquiring an avatar model set that includes a plurality of avatar models, obtaining an avatar model selection instruction, and obtaining the target avatar model from the avatar model set according to the avatar model selection instruction.
  • In order to make the virtual animated image more vivid, it is necessary to first obtain an avatar model for displaying a personalized virtual animated image.
  • the avatar model can be dynamically allocated by the feature points of the face in the image, or the matching avatar model, that is, the target avatar model, can be selected from the avatar model set according to the end user's needs or preferences.
  • Specifically, one way to obtain an avatar model for displaying a virtual animated image is for the server or the terminal to dynamically allocate the avatar model: after a photo taken by the terminal camera, a picture acquired through a terminal screen capture, or an image uploaded by an application capable of uploading images is obtained, the feature points of the faces in the image are extracted.
  • The facial feature points extracted from the faces of different characters in different images are also different. The corresponding avatar model is then obtained according to the facial feature points of the faces of the different characters, and the acquired avatar model is used to display the virtual animated image.
  • Another way is through an avatar model selection instruction, which is an instruction for selecting the avatar model. The terminal can obtain the avatar model set for the end user to select through a related application, and the matching avatar model is then selected from the avatar model set through controls in that application, so that the selected avatar model is used to display the virtual animated image.
  • the expression animation data processing method further includes:
  • Step 302 Determine a first motion part corresponding to the virtual animation image according to the expression change data.
  • The first moving part is associated with the second moving part and can drive the part corresponding to the second moving part to produce a corresponding movement.
  • The first moving part may be a moving part that controls a strong expression part in the avatar model. The strong expression part is opposite to the weak expression part: an expression change of the strong expression part may cause an expression change of the weak expression part. The strong expression part may be, but is not limited to, a part of the face of the virtual animated image corresponding to the avatar model, such as the eye that drives the eyeball movement, the mouth that drives the movement of the teeth, and the head that drives the movement of the ears. The weak expression part may be, but is not limited to, the eyeball, teeth, ears, and the like of the virtual animated image that are affected by a strong expression part. Since the expression change data is the expression data obtained by displaying the expression change of the current behavior subject's face according to the three-dimensional facial model, the motion part corresponding to the expression change data in the virtual animated image can be determined as the first motion part according to the expression change data. As shown in FIG. 4, which is a schematic diagram of the moving parts of the virtual animated image in one embodiment, the face of the virtual animated image may be a strong expression part in the avatar model, that is, the first moving part.
  • For example, if the expression change data obtained from the facial expression change of the current behavior subject in the three-dimensional facial model is laughing and blinking, the first moving parts corresponding to the laughing and the blinking can be determined according to the expression change data to be the mouth and the eye, respectively.
  • Step 304 Acquire a second motion part associated with the first motion part.
  • the second moving part here is a part associated with the first moving part and affected by the first moving part.
  • the second moving part is a moving part that controls a weak expression part in the avatar model.
  • The eyeballs, teeth, ears, and the like of the virtual animated image in FIG. 4 may be second motion parts. For example, if the first motion part is the face of the virtual animated image, the second motion parts associated with the face can be, but are not limited to, the eyeball, the ear, the lower teeth, and the tongue.
  • If the first moving part is the eye of the virtual animated image in the avatar model, the second moving part associated with the eye is the eyeball; if the first moving part is the mouth of the virtual animated image, the second moving parts associated with the mouth are the lower teeth and the tongue.
  • Step 306 Calculate motion state data corresponding to the first motion part according to the expression change data.
  • The expression change data is expression data reflecting the change of the expression movement of the current behavior subject's face in the three-dimensional facial model; therefore, the motion state data of the first motion part corresponding to the expression change data in the avatar model can be calculated according to the expression change data.
  • The so-called motion state data is the magnitude or change value of the motion change of a moving part.
  • the motion state data may be, but not limited to, an eye expression change coefficient and a mouth expression change coefficient.
  • Step 308 Determine bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part.
  • the bone control data is bone data that controls the motion of the second motion part.
  • the bone control data may be, but not limited to, Euler angles and the like.
  • the so-called Euler angle is an angle used to determine the rotation of the second motion portion, and may also be referred to as a rotation angle.
  • the second moving part is a moving part that is connected to the first moving part, the bone control data corresponding to the second moving part can be calculated according to the motion state data of the first moving part.
  • For example, if the first moving part is the eye, the motion state data of the eye is the eye expression change coefficient corresponding to the eye, and the second moving part associated with the eye is the eyeball; the Euler angle of the eyeball bone, that is, the bone control data corresponding to the second motion part, is calculated according to the eye expression change coefficient.
  • Step 310 Control bone motion corresponding to the second motion part according to the bone control data to update the expression of the virtual animation image corresponding to the avatar model.
  • As shown in FIG. 5, which is a schematic diagram of a bone for controlling a second moving part in one embodiment, the bone movement of the eyeball is controlled through the bone control data of the eyeball, the skeletal movement of the upper teeth and the tongue is controlled through their bone control data, and the skeletal movement of the ear is controlled through the bone control data of the ear.
  • the bone movement corresponding to the second motion part is controlled according to the bone control data of the second motion part, so that the virtual animation image in the avatar model can make an expression change corresponding to the current expression of the three-dimensional facial model.
  • For example, if the bone control data is the Euler angle of the eyeball (the second moving part), the eyeball bone movement can be controlled according to that Euler angle.
  • FIG. 6 shows a schematic diagram in one embodiment in which the ear bends when the head (the first moving part) is rotated.
  • FIG. 7 is a schematic view in one embodiment in which the tongue extends when the mouth (the first moving part) is opened.
  • The second moving parts associated with the first moving part, the mouth, are the lower teeth and the tongue; the expression change of the mouth determines that the skeletal movement of the second moving parts, the lower teeth and the tongue, is an extension.
  • Calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes:
  • Step 802 Calculate the yaw rate and the pitch rate corresponding to the first preset part according to the expression change data.
  • the first preset part is a part of the virtual animated image corresponding to the avatar model that changes the expression movement according to the yaw angle speed and the pitch angle speed.
  • The yaw angular velocity and the pitch angular velocity are elements of the Euler angle: the yaw angular velocity is the rotation value about the Y axis of a coordinate system whose origin is the first preset part (for example, the head), and the pitch angular velocity is the rotation value about the X axis of that coordinate system.
  • Specifically, since the first preset part in the avatar model moves to produce the expression corresponding to the expression change data, a coordinate system is established with the first preset part as the origin, and the pitch angular velocity of the X-axis rotation and the yaw angular velocity of the Y-axis rotation of the first preset part are calculated according to the expression change data.
  • the first preset portion may be, but not limited to, a head, and the yaw angular velocity and the pitch angular velocity corresponding to the head may be calculated according to the rotational speed of the head.
  • Step 804 Determine first bone control data corresponding to the second motion portion according to the pitch angular velocity, the preset maximum elevation angle threshold, and the first preset compensation value.
  • The preset maximum elevation angle threshold is a maximum threshold, pre-delivered by the cloud server, for controlling the X-axis rotation angle of the first preset part. Specifically, the first bone control data corresponding to the second motion part is calculated according to the pitch angular velocity of the X-axis rotation of the first preset part, the maximum threshold pre-delivered by the cloud server for controlling the X-axis rotation angle of the first preset part, and the preset first compensation value.
  • Step 806 Determine second bone control data corresponding to the second motion portion according to the yaw angular velocity, the preset maximum yaw angle threshold, and the second preset compensation value.
  • The preset maximum yaw angle threshold is a maximum threshold, pre-delivered by the cloud server to the terminal, for controlling the Y-axis rotation angle of the first preset part. Specifically, the second bone control data corresponding to the second motion part is calculated according to the yaw angular velocity of the Y-axis rotation of the first preset part, the maximum threshold pre-delivered by the cloud server for controlling the Y-axis rotation angle of the first preset part, and the preset second compensation value. The second bone control data is one of the rotation angles for controlling the bone motion corresponding to the second motion part, and may be, but is not limited to, the precession angle in the Euler angle.
  • Step 808 Determine bone control data corresponding to the second motion part according to the first bone control data and the second bone control data.
  • Since the first bone control data and the second bone control data are each used to control a rotation angle of the bone motion corresponding to the second motion part, the bone control data corresponding to the second motion part can be obtained from the calculated first bone control data and second bone control data. For example, if the first bone control data is a nutation angle and the second bone control data is a screw angle, the Euler angle for controlling the rotation of the second motion part can be calculated from the nutation angle and the screw angle.
  • In the formula for calculating the ear Euler angle, Ear eulerAngles is the Euler angle of the ear bone (the second movement part), V_p and V_y are the yaw angular velocity and the pitch angular velocity of the first movement part, respectively, and A and B are the preset compensation values.
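  • The exact formula is not reproduced in the text above; the following sketch assumes one plausible reading of steps 802 to 808, in which each angular velocity is scaled by its preset compensation value and clamped to the cloud-delivered maximum angle threshold. The function name and the clamp-and-scale form are illustrative assumptions only.

```python
# Hedged sketch of steps 802-808 for the ear (second motion part).
# Assumed form: angle = clamp(angular_velocity * compensation, -max_angle, +max_angle).

def clamp(value, low, high):
    return max(low, min(high, value))

def ear_euler_angles(v_pitch, v_yaw, max_pitch, max_yaw, comp_a, comp_b):
    """Return (pitch, yaw, roll) Euler angles, in degrees, for the ear bone."""
    first = clamp(v_pitch * comp_a, -max_pitch, max_pitch)   # first bone control data
    second = clamp(v_yaw * comp_b, -max_yaw, max_yaw)        # second bone control data
    return (first, second, 0.0)

# A fast head turn bends the ear according to the head's angular velocities.
print(ear_euler_angles(v_pitch=40.0, v_yaw=90.0, max_pitch=25.0, max_yaw=30.0,
                       comp_a=0.5, comp_b=0.5))   # -> (20.0, 30.0, 0.0)
```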
  • Calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes:
  • Step 902 Calculate an expression variation coefficient corresponding to the second preset part according to the expression change data.
  • the second preset part is a part of the virtual animation image corresponding to the avatar model that changes the expression movement to generate the same expression as the expression change data.
  • the second predetermined portion may be, but not limited to, a mouth.
  • the expression variation coefficient corresponding to the second preset portion may be calculated according to the change value of the expression movement change in the expression change data or the corresponding expression weight coefficient.
  • The expression change coefficient may vary according to the magnitude of the mouth opening. For example, if the expression change data is a big laugh, the mouth-opening amplitude of the second preset part is larger than the mouth-opening amplitude corresponding to a smile, so the amplitude of the mouth opening is the expression change coefficient corresponding to the mouth.
  • Step 904 Determine bone control data corresponding to the second motion part according to the expression change coefficient and the preset maximum elevation angle threshold.
  • the preset maximum pitch angle threshold is a maximum threshold that the cloud server pre-delivers to control the second preset portion to perform the X-axis rotation angle.
  • the expression change coefficient is a change value of the movement change of the second preset part.
  • the expression change coefficient may be the amplitude of the mouth opening.
  • Specifically, the bone control data corresponding to the second motion part, that is, the Euler angle of the second moving part, is obtained according to the change value of the motion of the second preset part and the maximum threshold pre-delivered by the cloud server for controlling the X-axis rotation angle of the second preset part.
  • In the formula for calculating the jaw Euler angle, Jaw eulerAngles is the Euler angle of the bone of the lower teeth and the tongue (the second movement part), and H_p is the maximum elevation angle threshold delivered by the cloud server.
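  • The formula itself is not reproduced in the text above; the sketch below assumes the simplest proportional reading, in which the jaw pitch is the mouth-opening expression change coefficient scaled by the maximum threshold H_p. The proportional form and the function name are illustrative assumptions.

```python
# Hedged sketch of steps 902-904: the lower-teeth/tongue (jaw) bone rotates about the
# X axis in proportion to the mouth-opening coefficient, capped by the cloud-delivered
# maximum pitch threshold H_p. The proportional form is an assumption.

def jaw_euler_angles(mouth_open_coeff, h_p):
    """mouth_open_coeff in [0, 1]; h_p is the maximum pitch angle threshold in degrees."""
    pitch = max(0.0, min(1.0, mouth_open_coeff)) * h_p
    return (pitch, 0.0, 0.0)   # (pitch, yaw, roll) of the jaw bone

print(jaw_euler_angles(0.8, h_p=20.0))   # a big laugh opens the jaw to 16 degrees
```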
  • Calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes:
  • Step 1002 Calculate an expression change coefficient corresponding to the third preset part according to the expression change data.
  • the third preset part is a part where the virtual animation image corresponding to the avatar model changes the expression movement according to the expression change data.
  • The third preset part may be, but is not limited to, the eye, and the eye expression change coefficient is calculated according to the change of the eye expression, where the eye expression change data is the expression change data.
  • changes in eye expression include but are not limited to blinking, closing eyes, looking to the left, looking to the right, looking up, looking down, and the like. Therefore, the eye expression change coefficient is the weight coefficient of the expression change corresponding to each eye expression change.
  • the expression change weight coefficient corresponding to the third preset part is calculated according to the expression change data of the third preset part.
  • For example, if the third preset part is the eye and the eye expression change is looking to the right, the weight coefficient corresponding to looking to the right is the expression change coefficient corresponding to the eye.
  • Step 1004 Calculate a pitch angle direction value and a yaw angle direction value corresponding to the third preset part according to the expression change data.
  • The pitch angle is the rotation about the X axis of a coordinate system whose origin is the third preset part, so the pitch angle direction value is the direction in which the third preset part rotates about the X axis; the yaw angle is the rotation about the Y axis of that coordinate system, so the yaw angle direction value is the direction in which the third preset part rotates about the Y axis.
  • Specifically, the pitch angle direction value and the yaw angle direction value corresponding to the third preset part are obtained according to the expression change data corresponding to the third preset part. For example, if the pitch angle direction is the positive direction, the corresponding pitch angle direction value is 1; if the yaw angle direction is the negative direction, the corresponding yaw angle direction value is -1.
  • Step 1006 Determine first bone control data corresponding to the third preset portion according to the expression change coefficient, the pitch angle direction value, and the preset maximum elevation angle threshold.
  • the expression change coefficient is an expression weight coefficient corresponding to the expression change of the third preset portion.
  • the third preset part may be, but not limited to, the eye part, and the expression change data is blinking, so the expression change coefficient corresponding to the eye is the expression weight coefficient corresponding to the blink.
  • Specifically, the first bone control data corresponding to the third preset part is calculated according to the change value in the expression change data, the pitch angle direction value, and the maximum threshold delivered by the cloud server for controlling the X-axis rotation angle of the third preset part. The first bone control data is one of the rotation angles for controlling the bone motion corresponding to the third preset part, and may be, but is not limited to, the nutation angle that constitutes the Euler angle.
  • Step 1008 Determine second bone control data corresponding to the third preset portion according to the expression change coefficient, the yaw angle direction value, and the preset maximum yaw angle threshold.
  • Specifically, the second bone control data corresponding to the third preset part is calculated according to the change value in the expression change data, the yaw angle direction value, and the maximum threshold delivered by the cloud server for controlling the Y-axis rotation angle of the third preset part. The second bone control data here is one of the rotation angles for controlling the bone motion corresponding to the third preset part, and may be, but is not limited to, the screw angle that constitutes the Euler angle.
  • Step 1010 Determine bone control data corresponding to the third preset part according to the first bone control data and the second bone control data.
  • The first bone control data and the second bone control data corresponding to the third preset part are each used to control a rotation angle of the bone motion corresponding to the second motion part, so the bone control data corresponding to the second motion part can be calculated from the first bone control data and the second bone control data. For example, if the first bone control data is a nutation angle and the second bone control data is a screw angle, the Euler angle for controlling the rotation of the second motion part can be calculated from the nutation angle and the screw angle.
  • In the formula for calculating the eyeball Euler angle, Eye eulerAngles is the Euler angle of the eyeball bone (the second motion part), S is the rotation direction value, H is the maximum pitch angle or yaw angle threshold delivered by the cloud server, and a_eye is the expression change coefficient of the eye (the first motion part).
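  • The formula is not reproduced in the text above; the sketch below assumes the product form angle = a_eye * S * H suggested by the surrounding description, applied separately to the pitch and yaw components. The function name and the product form are illustrative assumptions.

```python
# Hedged sketch of steps 1002-1010 for the eyeball (second motion part).
# Assumed form: angle = a_eye * direction * max_threshold for each axis.

def eye_euler_angles(a_eye, s_pitch, s_yaw, h_pitch, h_yaw):
    first = a_eye * s_pitch * h_pitch    # first bone control data (nutation / pitch)
    second = a_eye * s_yaw * h_yaw       # second bone control data (screw / yaw)
    return (first, second, 0.0)

# Per the direction example above: pitch direction +1, yaw direction -1.
print(eye_euler_angles(a_eye=0.6, s_pitch=1.0, s_yaw=-1.0, h_pitch=15.0, h_yaw=20.0))
```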
  • the expression animation data processing method further includes:
  • Step 1102 Acquire a reference point, determine a virtual space coordinate origin according to the reference point, and establish a virtual space according to the virtual space coordinate origin.
  • Step 1104 Acquire a relative position of the behavior subject with respect to the reference point.
  • Step 1106 Determine, according to the relative position, a target position of the virtual animation image corresponding to the behavior subject in the virtual space, and generate an initial virtual animation image corresponding to the behavior subject in the virtual space according to the target position.
  • The reference point here is an origin set for measurement; for example, the reference point may be, but is not limited to, the terminal itself.
  • Specifically, the reference point is obtained and used as the coordinate origin of the virtual space, the virtual space is established according to this coordinate origin, and the position of the behavior subject in the image captured by the shooting and collecting device relative to the reference point is acquired.
  • the relative position is the position of the subject with respect to the reference point.
  • the position of the virtual animation image corresponding to the behavior subject in the virtual space that is, the target position of the virtual animation image in the virtual space, may be determined according to the position of the acquired behavior subject relative to the reference point.
  • The initial avatar corresponding to the behavior subject is generated from the virtual animated image at the target position of the virtual space, and the initial avatar is displayed at the target position of the virtual space, as shown in FIG. 12, which is a schematic diagram of the virtual animated image displayed in the terminal.
  • the so-called initial avatar is the original appearance of the avatar.
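  • A minimal sketch of steps 1102 to 1106 follows, assuming the terminal is used as the reference point and the subject's offset from it is reused directly as the avatar's target position in the virtual space; both the data type and that direct reuse are illustrative assumptions.

```python
# Minimal sketch: the reference point (e.g., the terminal) is the virtual-space origin,
# and the avatar's target position is the subject's position relative to that origin.

from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def target_position(subject_world: Vec3, reference_world: Vec3) -> Vec3:
    """Relative position of the behavior subject with respect to the reference point."""
    return Vec3(subject_world.x - reference_world.x,
                subject_world.y - reference_world.y,
                subject_world.z - reference_world.z)

# The subject stands 0.5 m to the right of and 1.2 m in front of the terminal.
print(target_position(Vec3(0.5, 0.0, 1.2), Vec3(0.0, 0.0, 0.0)))
```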
  • the expression animation data processing method further includes:
  • Step 1302 Acquire voice data, and determine a corresponding current second motion part according to the voice data.
  • Step 1304 Acquire a skeletal animation corresponding to the current second motion part, and play a skeletal animation to update the expression of the virtual animation image corresponding to the avatar model.
  • the voice data is the voice data collected by the voice collection device of the terminal, and the voice data may be, but not limited to, voice data collected by the voice collection device in real time, or voice data recorded by using the related application software.
  • the current moving part is a weak expression part that matches the voice data, that is, the second moving part.
  • the second moving site can be, but is not limited to, the eyeball, the ear, the lower teeth, the tongue, and the like. Since each voice data has a second motion part corresponding to the preset, the matched weak expression part, that is, the current second motion part can be determined according to the voice data acquired by the terminal.
  • Since the cloud server delivers the skeletal animation corresponding to each second motion part to the terminal, the corresponding skeletal animation can be obtained according to the determined current second motion part. A skeletal animation animates the avatar model through the interconnected bone structure (the "skeleton") of the avatar model by changing the orientation and position of the bones. After the skeletal animation corresponding to the current second motion part is acquired, the skeletal animation is played, so that the virtual animated image in the avatar model makes the expression change corresponding to the collected or recorded voice data.
  • For example, if the recorded voice data is "dizzy", the current second motion parts corresponding to the voice data are determined to be the ear and the eyeball, and the skeletal animation in which the ear and the eyeball rotate clockwise simultaneously is played, so that the ear and the eyeball of the virtual animated image in the avatar model rotate clockwise at the same time.
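  • A minimal sketch of steps 1302 to 1304 follows; the keyword matching, the part mapping, and the clip names are illustrative assumptions standing in for the preset voice-to-part correspondence and the cloud-delivered skeletal animations.

```python
# Minimal sketch: map voice data to the current second motion parts, then to the
# skeletal animation clips to play. All mappings and names are illustrative.

VOICE_TO_PARTS = {"dizzy": ["ear", "eyeball"], "hello": ["ear"]}
PART_TO_CLIP = {"ear": "ear_rotate_clockwise", "eyeball": "eye_rotate_clockwise"}

def clips_for_voice(text: str):
    parts = []
    for keyword, mapped_parts in VOICE_TO_PARTS.items():
        if keyword in text.lower():
            parts.extend(mapped_parts)
    return [PART_TO_CLIP[part] for part in parts]   # skeletal animations to play

print(clips_for_voice("I feel dizzy"))   # ['ear_rotate_clockwise', 'eye_rotate_clockwise']
```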
  • determining a target segmentation expression region that matches the expression change data includes:
  • Step 1402 Determine a current moving part corresponding to the virtual animation image according to the expression change data.
  • The expression change data is the expression data of the facial expression change of the current behavior subject in the three-dimensional facial model. Since the virtual animated image of the avatar model needs to make the same expression as the expression change data, the motion part at which the virtual animated image in the avatar model makes that expression, that is, the current motion part, is determined according to the expression change data.
  • Step 1404 Acquire a preset plurality of segmentation expression regions corresponding to the avatar model.
  • Step 1406 Obtain a target segmentation expression region that matches the current motion portion from the preset plurality of segmentation expression regions.
  • the avatar model divides a plurality of segmentation expression regions according to a certain rule, wherein the segmentation expression region is an expression region in the avatar model for generating a motion change to generate an expression corresponding to the expression change data.
  • Specifically, the avatar model divided into a plurality of segmentation expression areas according to a certain rule is acquired. Since the current motion part is the part whose expression movement produces the same expression as the expression change data, the corresponding segmentation expression region, that is, the target segmentation expression region, is obtained by matching the current motion part against each segmentation expression region in the avatar model.
  • For example, if the segmented expression areas of the avatar model are the two ears, the two eyes, and the mouth, and the current motion part corresponding to the expression change data of laughter is the mouth, then the target segmentation expression area matched from the avatar model is the mouth of the avatar model.
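  • A minimal sketch of steps 1402 to 1406 follows; the coefficient-to-part mapping and the region names are illustrative assumptions used to show how the current motion part is matched against the preset segmentation expression regions.

```python
# Minimal sketch: derive the current motion part from the expression change data and
# match it against the avatar model's preset segmentation expression regions.

PART_OF_COEFFICIENT = {"smile": "mouth", "jaw_open": "mouth",
                       "blink_l": "left_eye", "blink_r": "right_eye"}
SEGMENTED_REGIONS = {"left_ear", "right_ear", "left_eye", "right_eye", "mouth"}

def target_segmented_regions(expression_change):
    parts = {PART_OF_COEFFICIENT[k] for k in expression_change if k in PART_OF_COEFFICIENT}
    return parts & SEGMENTED_REGIONS    # target segmentation expression regions

print(target_segmented_regions({"smile": 0.9}))   # laughter -> {'mouth'}
```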
  • the expression animation data processing method further includes:
  • Step 1502 Acquire a current segmentation expression region from each segmentation expression region corresponding to the avatar model.
  • Step 1504 Acquire a sub-base avatar model set corresponding to the current segmentation expression area.
  • The avatar model is divided into a plurality of segmentation expression regions according to a certain rule, and a segmentation expression region is used to generate an expression movement change so that the virtual animated image in the avatar model produces the same expression as the expression change data. The current segmentation expression region is therefore obtained from the segmentation expression regions corresponding to the avatar model: it is a segmentation expression region randomly selected from those regions to serve as the current segmentation expression region.
  • Each segmented expression region has a corresponding sub-basic avatar model set. The so-called sub-basic avatar model set is a set of common expression bases of pre-designed virtual animated images, where an expression base is the avatar model corresponding to an ordinary expression of the virtual animated image, as shown in FIG. 16, which shows the sub-basic avatar model set corresponding to each segmented expression region in one embodiment. Specifically, after the current segmentation expression region is randomly selected from the segmentation expression regions in the avatar model, all sub-basic avatar models corresponding to the current segmentation expression region are acquired. For example, if the current segmentation expression area is the mouth, all sub-basic avatar models corresponding to the mouth are acquired.
  • Step 1506 Perform multiple different nonlinear combinations of the sub-basic avatar models in the sub-basic avatar model set to generate corresponding sub-hybrid avatar models, which form the sub-hybrid avatar model set corresponding to the current segmented expression region; the sub-basic avatar model set thus corresponds to multiple sub-hybrid avatar models.
  • The so-called hybrid avatar model is a mixed expression base generated from the common expression bases in order to produce richer mixed expressions, where a mixed expression base is the avatar model corresponding to a mixed expression obtained by combining a plurality of ordinary expressions of the virtual animated image. The sub-hybrid avatar models obtained from the sub-basic avatar models make up the sub-hybrid avatar model set corresponding to the current segmented expression region.
  • Specifically, the nonlinear combination of each sub-basic avatar model set to generate the corresponding sub-hybrid avatar models is calculated according to Equation 1, in which B_i represents the i-th sub-hybrid avatar model and E_j represents the j-th sub-basic avatar model.
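  • Equation 1 itself is not reproduced in the text above. As a purely illustrative stand-in, the sketch below builds each sub-hybrid model from a pair of sub-basic models by multiplying their vertex offsets element-wise, which is one common way to form nonlinear (corrective) blend shapes; the actual nonlinear form used by the patent may differ.

```python
# Hedged sketch of Equation 1: sub-hybrid models B_i from nonlinear combinations of
# sub-basic models E_j. Meshes are lists of (x, y, z) vertex offsets from the neutral
# face; the pairwise element-wise product is an illustrative assumption.

from itertools import combinations

def nonlinear_combine(e_a, e_b):
    """One hypothetical nonlinear combination of two sub-basic models."""
    return [(ax * bx, ay * by, az * bz)
            for (ax, ay, az), (bx, by, bz) in zip(e_a, e_b)]

def sub_hybrid_set(sub_basic_models):
    """Generate one sub-hybrid model per pair of sub-basic models in a region."""
    return [nonlinear_combine(e_a, e_b) for e_a, e_b in combinations(sub_basic_models, 2)]

mouth_basics = [[(0.0, 0.2, 0.0)], [(0.1, 0.1, 0.0)], [(0.0, 0.0, 0.3)]]  # toy 1-vertex meshes
print(len(sub_hybrid_set(mouth_basics)))   # 3 pairwise sub-hybrid models
```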
  • Step 1508 Obtain a next segmentation expression region from each segmentation expression region as a current segmentation expression region, and return a step of acquiring a sub-basic avatar model set corresponding to the current segmentation expression region, until a sub-hybrid avatar corresponding to each segmentation expression region is obtained. Model set.
  • Step 1510 The sub-basic avatar model sets and the sub-hybrid avatar model sets corresponding to the segmentation expression regions together form the basic avatar data, and the target basic avatar data is selected from the basic avatar data.
  • Since the avatar model is divided into a plurality of segmentation expression regions according to a certain rule, the sub-hybrid avatar model set corresponding to each segmentation expression region in the avatar model needs to be calculated. Specifically, after the sub-hybrid avatar model set corresponding to the current segmentation expression region has been obtained, the next segmentation expression region is randomly selected from the segmentation expression regions in the avatar model as the current segmentation expression region, the step of acquiring the sub-basic avatar model set corresponding to the current segmentation expression region is returned to, and the sub-basic avatar models are nonlinearly combined to obtain the corresponding sub-hybrid avatar models.
  • After the sub-hybrid avatar model set corresponding to each segmentation region in the avatar model has been obtained, the sub-hybrid avatar model sets and the sub-basic avatar model sets corresponding to the segmentation regions together form the basic avatar data, against which the expression change data is matched to obtain the target basic avatar data; that is, the target basic avatar data is selected from the basic avatar data.
  • The target basic avatar data includes a plurality of target sub-basic avatar models and a plurality of target sub-hybrid avatar models, and generating the loaded expression data by combining the target basic avatar data according to the expression change data includes:
  • Step 1702 calculating, according to the expression change data, a combination coefficient corresponding to each target sub-basic avatar model and each target sub-hybrid avatar model.
  • The sub-basic avatar model is also divided into a plurality of segmentation expression regions according to a certain rule, where a segmentation expression region is an expression region in which motion changes occur. The change value of the motion change, or the expression weight coefficient, of each segmentation expression region in each sub-basic avatar model is used as the combination coefficient of that sub-basic avatar model; the combination coefficient may also be referred to as an expression change coefficient.
  • the sub-hybrid avatar model is obtained by nonlinear combination calculation of the corresponding sub-basic avatar model, so each sub-hybrid avatar model has a corresponding combination coefficient.
  • Specifically, the target segmentation expression region is determined according to the expression change data acquired from the three-dimensional facial model, and the combination coefficient corresponding to each target sub-basic avatar model and each target sub-hybrid avatar model is determined according to the change value of the expression movement occurring in the target segmentation expression region or the expression weight coefficient corresponding to that movement.
  • Step 1704 linearly combining the plurality of target sub-basic avatar models and the plurality of target sub-hybrid avatar models according to the combination coefficient to generate the loaded expression data.
• Specifically, the plurality of target sub-basic avatar models and the plurality of target sub-hybrid avatar models are linearly combined according to their corresponding combination coefficients to generate the loaded expression data corresponding to the expression change data, where the loaded expression data can be regarded as the same expression data as the current expression data of the behavior subject collected by the shooting collection device.
• In one embodiment, the plurality of target sub-basic avatar models and the plurality of target sub-hybrid avatar models may be linearly combined with the combination coefficients according to Formula 2 to generate the loaded expression data corresponding to the expression change data, where Formula 2 is:
• E_user = A_1·E_1 + A_2·E_2 + … + A_n·E_n + A_1·B_1 + A_2·B_2 + … + A_m·B_m (Formula 2)
• where E_user is the current expression data of the behavior subject collected by the shooting collection device, that is, the loaded expression data, E_i is a target sub-basic avatar model, and B_j is a target sub-hybrid avatar model.
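• A minimal sketch of this linear combination, assuming the target models are stored as floating-point vertex arrays of identical topology and that the combination coefficients have already been derived from the expression change data (the function name combine_expression is illustrative, not from the disclosure):

```python
import numpy as np

def combine_expression(basic_models, basic_coeffs, hybrid_models, hybrid_coeffs):
    """Apply Formula 2: E_user = sum_i A_i * E_i + sum_j A_j * B_j.
    Every model is a float (n_vertices, 3) array sharing the same topology."""
    e_user = np.zeros_like(basic_models[0])
    for a_i, e_i in zip(basic_coeffs, basic_models):
        e_user += a_i * e_i      # weighted target sub-basic avatar models
    for a_j, b_j in zip(hybrid_coeffs, hybrid_models):
        e_user += a_j * b_j      # weighted target sub-hybrid avatar models
    return e_user
```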
  • loading the loaded expression data into the target segmentation expression area includes:
  • Step 1802 Acquire a current vertex position set, where the current vertex position set is composed of a current vertex position corresponding to each target sub-base avatar model that generates the loaded expression data.
• Specifically, the topological structure of each target sub-basic avatar model is completely the same as that of the virtual animated image, and they have the same size in the same scale space. The so-called topological structure is the positional relationship between the mesh vertices of the sub-basic avatar model. The number of mesh vertices of different target sub-basic avatar models is the same, but the vertex positions of different target sub-basic avatar models may differ. For example, the vertex positions of the basic avatar model corresponding to a smile are different from the vertex positions of the basic avatar model corresponding to laughter. The target sub-basic avatar model here is the sub-basic avatar model, selected from the basic avatar data, that meets the requirements.
• Since the loaded expression data is generated by combining the target base avatar data according to the expression change data, the target base avatar data is selected from the basic avatar data, and the basic avatar data is composed of the sub-basic avatar model set and the sub-hybrid avatar model set corresponding to each segmentation expression region in the avatar model, the target sub-basic avatar model here includes but is not limited to the sub-basic avatar model and the sub-hybrid avatar model.
• Further, a vertex position is randomly selected from the vertex positions corresponding to each target sub-basic avatar model that generates the loaded expression data as the current vertex position, and the current vertex positions of the respective target sub-basic avatar models are composed into the current vertex position set.
  • Step 1804 determining a current target vertex position of the grid corresponding to the loaded expression data according to the current vertex position set.
  • Step 1806 Acquire a next set of vertex positions, and determine a next target vertex position of the grid corresponding to the loaded emoticon data according to the next vertex position set until determining the target vertex positions of the grid corresponding to the loaded emoticon data.
• Specifically, the current vertex position set is composed of a vertex position randomly selected as the current vertex position from the vertex positions corresponding to each target sub-basic avatar model that generates the loaded expression data, and the loaded expression data causes an expression movement change in a certain segmentation expression region of the avatar model, so the current target vertex position of the mesh corresponding to the loaded expression data is calculated according to the obtained current vertex position set.
• The current target vertex position of the mesh corresponding to the loaded expression data is calculated as shown in Formula 3:
• V_i = A_1·V_E1 + A_2·V_E2 + … + A_n·V_En + A_1·V_B1 + A_2·V_B2 + … + A_m·V_Bm (Formula 3)
• where V_i represents the i-th vertex, that is, the current target vertex position, and V_E1 represents the corresponding vertex in the target sub-basic avatar model E_1.
• Further, the next vertex position is randomly selected from the vertex positions corresponding to each target sub-basic avatar model that generates the loaded expression data, the next vertex position is taken as the current vertex position, the current vertex positions corresponding to the respective target sub-basic avatar models are composed into the next vertex position set, and the next target vertex position of the mesh corresponding to the loaded expression data is determined according to the next vertex position set, until the respective target vertex positions of the mesh corresponding to the loaded expression data are determined.
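• Because all target models share one topology, the per-vertex form of the combination can be evaluated by indexing the same vertex in every model; the sketch below is an illustration under that assumption, and the helper names target_vertex and mesh_vertices are invented.

```python
import numpy as np

def target_vertex(i, basic_models, basic_coeffs, hybrid_models, hybrid_coeffs):
    """Formula 3 for a single vertex: V_i is the coefficient-weighted sum of the
    i-th vertex of every target sub-basic model (V_Ei) and sub-hybrid model (V_Bj)."""
    v_i = np.zeros(3)
    for a, model in zip(basic_coeffs, basic_models):
        v_i += a * model[i]
    for a, model in zip(hybrid_coeffs, hybrid_models):
        v_i += a * model[i]
    return v_i

def mesh_vertices(n_vertices, basic_models, basic_coeffs, hybrid_models, hybrid_coeffs):
    """Repeat the per-vertex calculation until every target vertex position of the
    mesh corresponding to the loaded expression data has been determined."""
    return np.stack([
        target_vertex(i, basic_models, basic_coeffs, hybrid_models, hybrid_coeffs)
        for i in range(n_vertices)
    ])
```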
• In one embodiment, acquiring the target base avatar data that matches the target segmentation expression region and generating the loaded expression data by combining the target base avatar data according to the expression change data includes:
  • Step 1902 Acquire preset weight coefficients corresponding to each target expression.
  • Step 1904 Determine a generation order of the loaded expression data corresponding to each target expression according to the size relationship of the preset weight coefficients corresponding to the respective target expressions.
• Specifically, a weight coefficient reflects the relative importance of an index in an overall evaluation. Therefore, when the expression change data corresponds to a plurality of target expression updates, the corresponding preset weight coefficients are acquired for the respective target expressions. Since different expressions correspond to different weight coefficients, the generation order of the loaded expression data corresponding to each target expression, that is, the loading order, needs to be determined according to the magnitude relationship of the preset weight coefficients corresponding to the respective target expressions.
• FIG. 20 shows a schematic diagram of the principle of loading expression data in one embodiment. Specifically, if the expression change data corresponds to a plurality of target expression updates, the corresponding target segmentation expression regions are determined from the segmentation expression regions of the avatar model according to the respective target expressions, and the target base avatar data matching the respective target segmentation expression regions is acquired. Further, the preset weight coefficients corresponding to the respective target base avatar data are acquired, and the generation order of the loaded expression data corresponding to the respective target base avatar data is determined according to the magnitude relationship of these preset weight coefficients. That is, the target base avatar data with the larger weight coefficient is loaded first: its loaded expression data is generated and loaded first, and then the loaded expression data corresponding to the expression with the smaller weight coefficient, such as the smile, is generated and loaded.
• Step 1906 Loading the loaded expression data into the target segmentation expression region to update the expression of the virtual animated image corresponding to the avatar model includes: sequentially loading each piece of loaded expression data into the target segmentation expression region, according to the generation order of the loaded expression data corresponding to each target expression, to update the expression of the virtual animated image corresponding to the avatar model.
• Specifically, the generation order of the loaded expression data corresponding to each target expression is determined according to the magnitude relationship of the preset weight coefficients corresponding to the respective target expressions. Once the generation order of the loaded expression data corresponding to each target expression is determined, each piece of loaded expression data is sequentially loaded into its corresponding target segmentation expression region, so that the virtual animated image in the avatar model can make an expression change corresponding to the current expression of the three-dimensional face model; that is, the virtual animated image in the avatar model shows the same expression as the behavior subject in the image captured by the photographing collection device.
• For example, if the expression change data corresponds to a plurality of target expression updates, namely a smile and a blink, and the weight coefficient of the blink as the target expression is larger than the weight coefficient of the smile as the target expression, then the loaded expression data corresponding to the blink is first loaded into the segmentation expression region of the avatar model corresponding to the eyes, and the loaded expression data corresponding to the smile is then loaded into the segmentation expression region of the avatar model corresponding to the mouth.
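• As an illustration of this ordering rule, the sketch below sorts the target expressions by their preset weight coefficients and generates and loads the data with the larger coefficient first; the helper names generate_loaded_expression and load_into_region stand in for the generation and loading steps described above and are not terms from the disclosure.

```python
def load_in_weight_order(target_expressions, weight_coeffs,
                         generate_loaded_expression, load_into_region):
    """target_expressions: e.g. {"blink": "eye_region", "smile": "mouth_region"},
    mapping each target expression to its target segmentation expression region.
    weight_coeffs: the preset weight coefficient for each target expression."""
    # larger preset weight coefficient -> generated and loaded earlier
    ordered = sorted(target_expressions,
                     key=lambda expr: weight_coeffs[expr],
                     reverse=True)
    for expr in ordered:
        loaded_expression_data = generate_loaded_expression(expr)
        load_into_region(loaded_expression_data, target_expressions[expr])
    return ordered

# e.g. with weight_coeffs = {"blink": 0.8, "smile": 0.5} the blink data is loaded
# into the eye region before the smile data is loaded into the mouth region.
```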
• Further, the UV image is used to ensure that, after the loaded expression data is loaded, there is no crack between the divided expression regions of the avatar model caused by problems such as UV segmentation.
• The so-called UV segmentation determines the texture coordinates of the texture map, and the UV image can be used to determine how the texture of the avatar model is applied.
• The UV dividing lines are distributed in invisible parts of the avatar model, such as the back of the head.
  • acquiring expression change data from current expression data includes:
  • Step 2102 Perform feature point extraction on the current expression data to obtain a corresponding expression feature point.
  • Step 2104 Matching the expression feature point with the preset expression data set to determine the current update expression, and acquiring the expression change data corresponding to the currently updated expression.
• Specifically, the three-dimensional face model carries the facial expression data of the behavior subject in the image acquired by the photographing collection device, that is, the current expression data. Since some parts of the subject's face show expression changes while others do not, it is necessary to obtain from the current expression data of the behavior subject the expression change data, i.e., the data of the parts whose expression has changed. Specifically, feature points are extracted from the current expression data in the three-dimensional face model to obtain the corresponding expression feature points, and the currently updated expression is obtained by matching the extracted expression feature points against the preset expression data set. Further, the corresponding expression change data is acquired according to the currently updated expression. The expression data set can also be called an expression library.
• For example, suppose the current expression data is the facial expression data of the behavior subject, where the expression of the behavior subject has been updated to a smile. Specifically, feature point extraction is performed on the facial features to obtain the corresponding expression feature points. Further, by comparing these feature points with all the expressions in the expression library, the currently updated expression is determined to be a smile, and the expression change data corresponding to the smile is therefore acquired.
• The case in which the face of the behavior subject in the previous frame image is expressionless, that is, none of the facial parts show any expression change, and the facial expression of the behavior subject changes in the next frame image, is applicable to this embodiment.
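• One simple way such matching might be realised (purely illustrative; the nearest-template comparison, the distance threshold, and the structure of the expression library are assumptions) is shown below:

```python
import numpy as np

def match_expression(expression_feature_points, expression_library, max_distance=0.1):
    """expression_feature_points: (n_points, 2 or 3) array extracted from the
    current expression data.  expression_library: dict mapping an expression name
    (e.g. "smile") to a template feature-point array of the same shape.
    Returns the best-matching currently updated expression, or None if nothing
    in the library is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in expression_library.items():
        # mean per-point distance between extracted points and the template
        dist = np.mean(np.linalg.norm(expression_feature_points - template, axis=1))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None
```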
  • acquiring expression change data from current expression data includes:
  • Step 2202 Obtain historical expression data, perform feature point extraction on the historical expression data, and obtain corresponding historical expression feature points.
  • Step 2204 Perform feature point extraction on the current expression data to obtain a corresponding current expression feature point.
  • Step 2206 comparing the historical expression feature point with the current expression feature point, and obtaining corresponding expression change data according to the comparison result.
• For example, suppose the historical expression data is a big laugh and the current expression data is laughing together with a blink. Feature point extraction is performed on the historical expression data and the current expression data respectively to obtain the corresponding historical expression feature points and current expression feature points. By comparing the historical expression feature points with the current expression feature points, it is found that the feature points corresponding to the laugh in the current expression data have not changed, so it is determined from the comparison result that the expression change data of the behavior subject in the next frame image is the blink.
• The case in which the facial expression of the behavior subject has already changed in the previous frame image, part of that expression remains unchanged on the subject's face in the next frame image, and other parts undergo new expression movement changes, is applicable to this embodiment. For example, the expression of the behavior subject in the previous frame image changes to laughter, while the mouth of the behavior subject in the next frame image does not change and the face of the behavior subject keeps laughing.
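• A sketch of this comparison, again illustrative only: feature points whose displacement between the historical and current expression data exceeds a threshold are treated as the parts whose expression has changed, and only their data is returned as the expression change data.

```python
import numpy as np

def expression_change_data(historical_points, current_points, threshold=0.02):
    """historical_points / current_points: dicts mapping a facial part
    (e.g. "mouth", "left_eye") to its feature-point array."""
    changed = {}
    for part, hist in historical_points.items():
        curr = current_points[part]
        displacement = np.mean(np.linalg.norm(curr - hist, axis=1))
        if displacement > threshold:      # this part moved, keep its change data
            changed[part] = curr - hist
    return changed  # e.g. only the eye data when the laugh is unchanged and a blink occurs
```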
• The expression animation data processing method further includes: acquiring a corresponding first background image from the preset background images according to the expression change data, and loading the first background image into the virtual environment in which the virtual animated image corresponding to the avatar model is located; or acquiring voice data, acquiring a corresponding second background image from the preset background images according to the voice data, and loading the second background image into the virtual environment in which the virtual animated image corresponding to the avatar model is located.
• Specifically, the virtual environment in which the virtual animated image is located is rendered with different textures according to different backgrounds, giving a strong sense of reality.
  • the rendering of the virtual environment can be realized in two ways.
• One way is control through the expression change data. Specifically, after the expression change data is acquired, the corresponding first background image is obtained from the background images previously delivered by the cloud server according to the special expression data in the expression change data, and the acquired first background image is loaded into the virtual environment in which the virtual animated image corresponding to the avatar model is located.
• For example, if the expression change data corresponds to making a funny face, the corresponding first background image is matched, according to the expression change data, to the star-flashing background image among the background images pre-delivered by the cloud server, and the virtual environment in which the virtual animated image is located is rendered accordingly.
  • the other method is the voice data control mode.
• Specifically, the voice data is collected by the voice collection device of the terminal, and the background images pre-delivered by the cloud server can be triggered according to special words or phrases in the voice data; the second background image matched with the voice data is then loaded into the virtual environment in which the virtual animated image corresponding to the avatar model is located.
• For example, if the acquired voice data is "Happy New Year", the corresponding second background image is matched to the New Year theme among the background images pre-delivered by the cloud server according to "Happy New Year", and the virtual environment shows an animation of firecrackers.
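• The two trigger paths can be summarised by the sketch below, in which the lookup tables standing in for the background images pre-delivered by the cloud server, and the keyword matching, are invented for illustration.

```python
# Hypothetical tables of background images pre-delivered by the cloud server.
EXPRESSION_BACKGROUNDS = {"funny_face": "stars_flashing.png"}
VOICE_BACKGROUNDS = {"happy new year": "new_year_firecrackers.png"}

def pick_background(expression_change=None, voice_text=None):
    """Return the background image to load into the virtual environment:
    first try the special expression in the expression change data, then
    special words or phrases in the recognised voice data."""
    if expression_change and expression_change in EXPRESSION_BACKGROUNDS:
        return EXPRESSION_BACKGROUNDS[expression_change]      # first background image
    if voice_text:
        for phrase, image in VOICE_BACKGROUNDS.items():
            if phrase in voice_text.lower():
                return image                                  # second background image
    return None   # keep the current virtual environment unchanged

print(pick_background(voice_text="Happy New Year!"))   # -> new_year_firecrackers.png
```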
  • FIG. 23 is a schematic diagram showing a background image in a virtual environment in which a virtual animated image is placed in an embodiment.
  • an emoticon data processing method is provided. Specifically, the following steps are included:
  • step 2402 the location of the terminal is obtained, and the virtual space is established with the location of the terminal as the origin.
  • Step 2404 Determine, according to the relative position of the terminal in the real world, a target position corresponding to the virtual animation image in the virtual space and generate an initial virtual animation image at the target position.
  • Step 2406 After the face of the behavior subject is collected by the camera of the terminal, feature points are extracted from the face of the behavior subject in the image, and a three-dimensional face model is established according to the extracted facial feature points.
  • Step 2408 Acquire current expression data according to the expression data of the behavior subject in the three-dimensional face model.
• Step 2410 When the face of the behavior subject in the previous frame image is expressionless and the facial expression of the behavior subject in the next frame image undergoes an expression movement change, proceed to step 2410a; when part of the face of the behavior subject already shows an expression movement change in the previous frame image, that part keeps the same expression in the next frame image, and another part undergoes a new expression movement change, proceed to step 2410b.
• step 2410a feature point extraction is first performed on the current expression data, the extracted feature points are matched with the expression data in the expression library to determine the currently updated expression, and then the expression change data corresponding to the currently updated expression is obtained.
• Step 2410b first obtaining historical expression data, then extracting feature points from the historical expression data and the current expression data to obtain corresponding historical expression feature points and current expression feature points, and comparing the historical expression feature points with the current expression feature points to obtain the expression change data.
• Step 2412 First, determine the corresponding current motion part according to the expression change data, and then, according to the current motion part, obtain the target segmentation expression region that matches the current motion part from the avatar model, which is divided into a plurality of segmentation expression regions according to a certain rule.
  • Step 2414 Obtain target base avatar data that matches the target segmentation expression area, and generate the load expression data according to the expression change data combination target base avatar data.
• Step 2414a firstly, the combination coefficient of each target common expression base and each target mixed expression base in the target base avatar data is calculated according to the expression change data, and then the target common expression bases and the target mixed expression bases are linearly combined according to the combination coefficients to generate the loaded expression data.
• Step 2414b If the expression change data corresponds to multiple target expressions, first obtain the weight coefficient preset for each target expression, and determine the generation order of the loaded expression data corresponding to each target expression according to the magnitude relationship of the weight coefficients preset for the target expressions.
  • Step 2416 Load the loaded expression data into the target segmentation expression area to update the expression of the virtual animation image corresponding to the avatar model.
• Step 2416a firstly, the current vertex position set is formed from the vertex positions corresponding to each target common expression base and each target mixed expression base that generate the loaded expression data, and the current target vertex position of the mesh corresponding to the loaded expression data is determined according to the current vertex position set. The next vertex position set is then acquired and the next target vertex position of the mesh corresponding to the loaded expression data is determined according to it, until all the target vertex positions of the mesh corresponding to the loaded expression data are determined.
• Step 2416b if the expression change data corresponds to a plurality of target expressions, after the generation order of the loaded expression data corresponding to each target expression is determined, each piece of loaded expression data is sequentially loaded into the target segmentation expression region according to the generation order, so that the expression of the virtual animated image corresponding to the avatar model is updated.
  • Step 2418 Determine a strong expression part corresponding to the virtual animation image according to the expression change data.
  • Step 2420 Acquire a weak emoticon portion associated with the strong emoticon portion.
  • Step 2422 Calculate motion state data corresponding to the strong expression part according to the expression change data; and determine bone control data corresponding to the weak expression part according to the motion state data corresponding to the strong expression part.
• Step 2422a if the strong expression part is the head of the virtual animated image, the nutation angle in the Euler angles of the ear skeleton is calculated from the pitch angle of the head rotation, the corresponding compensation value required for the calculation, and the cloud-controlled maximum pitch angle threshold; the precession angle in the Euler angles of the ear skeleton is calculated from the yaw angle of the head rotation, the corresponding compensation value required for the calculation, and the cloud-controlled maximum yaw angle threshold; and the Euler angles of the ear skeleton are determined from the nutation angle and the precession angle.
• step 2422b if the strong expression part is the mouth of the virtual animated image, the Euler angles of the lower teeth and tongue bones are calculated from the cloud-controlled maximum pitch angle threshold and the expression coefficient corresponding to mouth opening in the expression change data.
• Step 2422c if the strong expression part is the eye of the virtual animated image, the nutation angle in the Euler angles of the eyeball skeleton is calculated from the cloud-controlled maximum pitch angle threshold, the rotation direction value, and the eye expression change coefficient in the expression change data; the precession angle in the Euler angles of the eyeball skeleton is calculated from the cloud-controlled maximum yaw angle threshold, the rotation direction value, and the eye expression change coefficient in the expression change data; and the Euler angles of the eyeball skeleton are determined from the nutation angle and the precession angle.
  • Step 2424 Control the bone motion corresponding to the weak expression part according to the bone control data to update the expression of the virtual animation image corresponding to the avatar model.
  • Step 2426 Acquire voice data, and determine a corresponding current weak expression part according to the voice data.
  • Step 2428 Acquire a skeletal animation corresponding to the current weak expression part, and play a skeletal animation to update the expression of the virtual animation image corresponding to the avatar model.
  • Step 2430 Acquire a current segmentation expression region from each segmentation expression region corresponding to the avatar model.
  • Step 2432 Acquire an ordinary expression base corresponding to the currently segmented expression area.
• Step 2434 performing multiple different nonlinear combinations on the avatar models corresponding to the common expression bases to generate the avatar models corresponding to a plurality of mixed expression bases, which form the mixed expression base corresponding to the current segmented expression region.
  • step 2436 the next segmented expression region is obtained from each segmented expression region as the current segmented expression region, and the step of acquiring the common expression base corresponding to the current segmented expression region is returned until the mixed expression base corresponding to each segmented expression region is obtained.
  • Step 2438 the common expression base and the mixed expression base corresponding to each divided expression area are combined to form an expression base, and the target expression data is selected from the expression data in the expression base.
  • Step 2440 Acquire a corresponding first background image from the preset background image according to the expression change data, and load the first background image into a virtual environment in which the virtual animation image corresponding to the avatar model is located.
  • Step 2442 Acquire voice data, obtain a corresponding second background image from the preset background image according to the voice data, and load the second background image into the virtual environment where the virtual animation image corresponding to the avatar model is located.
  • an emoticon data processing apparatus 2500 comprising:
  • the current expression data obtaining module 2502 is configured to determine a position of the face in the image, acquire an avatar model, and acquire current expression data according to the position of the face in the image and the three-dimensional facial model.
  • the expression update data acquisition module 2504 is configured to acquire expression change data from the current expression data.
  • the target segmentation expression region detecting module 2506 is configured to determine a target segmentation expression region that matches the expression change data, and the target segmentation expression region is selected from each segmentation expression region corresponding to the avatar model.
  • the target base avatar data obtaining module 2508 is configured to acquire target base avatar data that matches the target split expression area, and generate load expression data according to the expression change data combination target base avatar data.
  • the virtual animation image update module 2510 is configured to load the loaded expression data into the target segmentation expression area to update the expression of the virtual animation image corresponding to the avatar model.
• the expression animation data processing apparatus 2500 further includes: a first motion part detecting module 2602, a second motion part acquiring module 2604, a motion state data calculating module 2606, a skeleton control data detecting module 2608, and a bone motion control module 2610, wherein:
  • the first motion part detecting module 2602 is configured to determine, according to the expression change data, the first motion part corresponding to the virtual animation image.
  • the second motion part obtaining module 2604 is configured to acquire a second motion part associated with the first motion part.
  • the motion state data calculation module 2606 is configured to calculate motion state data corresponding to the first motion part according to the expression change data.
  • the skeletal control data detecting module 2608 is configured to determine skeletal control data corresponding to the second moving part according to the motion state data corresponding to the first moving part.
  • the skeletal motion control module 2610 is configured to control the skeletal motion corresponding to the second motion part according to the skeletal control data to update the expression of the virtual animation image corresponding to the avatar model.
• the motion state data calculation module is further configured to calculate, according to the expression change data, a yaw angular velocity and a pitch angular velocity corresponding to the first preset part; to determine the first bone control data corresponding to the second motion part according to the pitch angular velocity, the preset maximum pitch angle threshold, and the first preset compensation value; and to determine the second bone control data corresponding to the second motion part according to the yaw angular velocity, the preset maximum yaw angle threshold, and the second preset compensation value; the bone control data detecting module is further configured to determine the bone control data corresponding to the second motion part according to the first bone control data and the second bone control data.
• the motion state data calculation module is further configured to calculate, according to the expression change data, the expression change coefficient corresponding to the second preset part; the bone control data detection module is further configured to determine the bone control data corresponding to the second motion part according to the expression change coefficient and the preset maximum pitch angle threshold.
• the motion state data calculation module is further configured to calculate, according to the expression change data, the expression change coefficient corresponding to the third preset part, and to calculate the pitch angle direction value and the yaw angle direction value corresponding to the third preset part; to determine the first bone control data corresponding to the third preset part according to the expression change coefficient, the pitch angle direction value, and the preset maximum pitch angle threshold; and to determine the second bone control data corresponding to the third preset part according to the expression change coefficient, the yaw angle direction value, and the preset maximum yaw angle threshold; the bone control data detecting module is further configured to determine the bone control data corresponding to the third preset part according to the first bone control data and the second bone control data.
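• A simplified sketch of how such bone control data might be derived is given below; the clamping-and-compensation scheme is an assumption consistent with the description above (nutation angle from the pitch component, precession angle from the yaw component), and none of the names come from the disclosure.

```python
import math

def clamp(value, limit):
    """Clip an angle to the configured (e.g. cloud-delivered) maximum threshold."""
    return max(-limit, min(limit, value))

def ear_bone_euler(head_pitch, head_yaw, max_pitch, max_yaw,
                   pitch_compensation=0.0, yaw_compensation=0.0):
    """Derive ear-bone Euler angles (nutation, precession) so the weak expression
    part follows the head, the strong expression part, within the thresholds."""
    nutation = clamp(head_pitch + pitch_compensation, max_pitch)
    precession = clamp(head_yaw + yaw_compensation, max_yaw)
    return nutation, precession

def eyeball_bone_euler(eye_coeff, direction_pitch, direction_yaw, max_pitch, max_yaw):
    """Eye case: scale the maximum thresholds by the eye expression change
    coefficient and the rotation direction values."""
    nutation = eye_coeff * direction_pitch * max_pitch
    precession = eye_coeff * direction_yaw * max_yaw
    return nutation, precession

# e.g. the ears tilt with the head but never beyond the cloud-controlled limits:
print(ear_bone_euler(math.radians(25), math.radians(-40),
                     max_pitch=math.radians(20), max_yaw=math.radians(30)))
```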
  • the expression animation data processing apparatus 2500 further includes: a reference point acquisition module 2702, a relative position acquisition module 2704, and an initial virtual animation image generation module 2706, wherein:
  • the reference point obtaining module 2702 is configured to acquire a reference point, determine a virtual space coordinate origin according to the reference point, and establish a virtual space according to the virtual space coordinate origin.
  • the relative position obtaining module 2704 is configured to acquire a relative position of the behavior subject with respect to the reference point.
  • the initial virtual animation image generation module 2706 is configured to determine, according to the relative position, a target position of the virtual animation image corresponding to the behavior body in the virtual space, and generate an initial virtual animation image corresponding to the behavior body in the virtual space according to the target position.
  • the expression animation data processing apparatus further includes: a voice data acquisition module 2802, a skeleton animation acquisition module 2804, wherein:
  • the voice data acquiring module 2802 is configured to acquire voice data, and determine a corresponding current second motion part according to the voice data.
  • the skeletal animation obtaining module 2804 is configured to acquire a skeletal animation corresponding to the current second moving part, and play the skeletal animation to update the expression of the virtual animated image corresponding to the avatar model.
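• A minimal sketch of this voice-driven path; the keyword table, the motion parts, and the animation clip names are invented for illustration, and play_clip stands in for the engine-side playback call.

```python
# Hypothetical mapping from recognised voice keywords to a second motion part
# and the pre-made skeletal animation clip to play on it.
VOICE_TO_SKELETAL_ANIMATION = {
    "hello": ("ears", "ear_wiggle_clip"),
    "wow": ("head", "head_tilt_clip"),
}

def play_voice_triggered_animation(voice_text, play_clip):
    """Determine the current second motion part from the voice data and play its
    skeletal animation; play_clip(part, clip) is the caller-supplied playback call."""
    for keyword, (part, clip) in VOICE_TO_SKELETAL_ANIMATION.items():
        if keyword in voice_text.lower():
            play_clip(part, clip)
            return part, clip
    return None
```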
• the target segmentation expression region detecting module 2506 includes: a current motion part detecting unit 2506a, a segmentation expression region obtaining unit 2506b, and a target segmentation expression region matching unit 2506c, wherein:
  • the current moving part detecting unit 2506a is configured to determine a current moving part corresponding to the virtual animated image according to the expression changing data.
  • the segmentation expression area obtaining unit 2506b is configured to acquire a preset plurality of segmentation expression regions corresponding to the avatar model.
  • the target segmentation expression region matching unit 2506c is configured to acquire a target segmentation expression region that matches the current motion portion from the preset plurality of segmentation expression regions.
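• As an illustration of how the matching unit might resolve a match, a simple lookup from the current motion part to the preset segmentation expression regions suffices; the table below is invented for the example.

```python
# Hypothetical mapping from motion part to the preset segmentation expression regions.
REGION_BY_MOTION_PART = {
    "eye": "eye_region",
    "mouth": "mouth_region",
    "eyebrow": "brow_region",
}

def match_target_region(current_motion_parts):
    """Return the target segmentation expression region(s) matching the motion
    parts derived from the expression change data."""
    return [REGION_BY_MOTION_PART[p] for p in current_motion_parts
            if p in REGION_BY_MOTION_PART]

print(match_target_region(["eye", "mouth"]))   # -> ['eye_region', 'mouth_region']
```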
• the expression animation data processing apparatus 2500 further includes: a current segmentation expression region acquisition module 2902, a sub-basic avatar model set acquisition module 2904, a sub-hybrid avatar model set acquisition module 2906, and a basic avatar data generation module 2908, wherein:
  • the current split expression area obtaining module 2902 is configured to obtain a current split expression area from each divided expression area corresponding to the avatar model;
  • a sub-base avatar model set obtaining module 2904 configured to acquire a sub-basic avatar model set corresponding to the current segmented emoticon region
  • the sub-hybrid avatar model set obtaining module 2906 is configured to perform multiple different non-linear combinations on each sub-basic avatar model in the sub-basic avatar model set to generate corresponding sub-hybrid avatar models to form a current segmented emoticon region corresponding a set of sub-hybrid avatar models;
• the sub-basic avatar model set obtaining module 2904 is further configured to: obtain the next segmented emoticon region from each segmented emoticon region as the current segmented emoticon region, and return to the step of acquiring the sub-basic avatar model set corresponding to the current segmented emoticon region, until the sub-hybrid avatar model set corresponding to each segmented emoticon region is obtained;
• the basic avatar data generating module 2908 is configured to compose the sub-basic avatar model set and the sub-hybrid avatar model set corresponding to each segmentation expression region into basic avatar data, and the target base avatar data is selected from the basic avatar data.
• the target base avatar data obtaining module 2508 is further configured to calculate, according to the expression change data, a combination coefficient corresponding to each target sub-basic avatar model and each target sub-hybrid avatar model, and to linearly combine the plurality of target sub-basic avatar models and the plurality of target sub-hybrid avatar models according to the combination coefficients to generate the loaded expression data.
  • the virtual animated image updating module 2510 further includes: a vertex position set acquiring unit 2510a and a target vertex position obtaining unit 2510b, wherein:
  • the vertex position set obtaining unit 2510a is configured to acquire a current vertex position set, where the current vertex position set is composed of current vertex positions corresponding to the respective target sub-base avatar models that generate the loaded expression data.
  • the target vertex position obtaining unit 2510b determines the current target vertex position of the grid corresponding to the loaded emoticon data according to the current vertex position set; acquires the next vertex position set, and determines the next one of the grid corresponding to the loaded emoticon data according to the next vertex position set. The target vertex position until the respective target vertex positions of the mesh corresponding to the loaded expression data are determined.
  • the target base avatar data obtaining module 2508 further includes:
  • the preset weight coefficient obtaining unit 2508a is configured to acquire preset weight coefficients corresponding to the respective target expressions.
  • a generation order determining unit 2508b configured to determine a generation order of the loaded expression data corresponding to each target expression according to a size relationship of the preset weight coefficients corresponding to the respective target expressions;
  • the virtual animation image update module 2510 is further configured to sequentially load each loaded expression data into the target segmentation expression area according to the generation order of the loaded expression data corresponding to each target expression to update the virtual animation image corresponding to the avatar model. Expression.
  • the expression update data acquisition module 2504 is further configured to perform feature point extraction on the current expression data to obtain a corresponding expression feature point; match the expression feature point with the preset expression data set to determine the current update expression, and obtain Expression change data corresponding to the currently updated expression.
  • the expression update data acquisition module 2504 is further configured to acquire historical expression data, perform feature point extraction on the historical expression data, and obtain a corresponding historical expression feature point; perform feature point extraction on the current expression data to obtain a corresponding current The expression feature point is compared with the current expression feature point, and the corresponding expression change data is obtained according to the comparison result.
• the emoticon data processing apparatus is further configured to acquire a corresponding first background image from the preset background images according to the expression change data and load the first background image into the virtual environment in which the virtual animated image corresponding to the avatar model is located; or to acquire voice data, acquire a corresponding second background image from the preset background images according to the voice data, and load the second background image into the virtual environment in which the virtual animated image corresponding to the avatar model is located.
  • Figure 32 is a diagram showing the internal structure of a computer device in one embodiment.
  • the computer device may specifically be the terminal 110 in FIG.
• the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus.
  • the memory comprises a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device stores an operating system, and can also store computer readable instructions that, when executed by the processor, cause the processor to implement an expression animation data processing method.
  • the internal memory can also store computer readable instructions that, when executed by the processor, cause the processor to perform an emoticon data processing method.
  • the display screen of the computer device may be a liquid crystal display or an electronic ink display screen
• the input device of the computer device may be a touch layer covering the display screen, a button, trackball or touchpad provided on the computer device casing, or an external keyboard, trackpad or mouse.
  • FIG. 32 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation of the computer device to which the solution of the present application is applied.
• the specific computer device may include more or fewer components than those shown in the figures, or some components may be combined, or a different component arrangement may be used.
  • the emoticon data processing apparatus can be implemented in the form of a computer readable instruction that can be executed on a computer device as shown in FIG.
• the program modules constituting the emoticon data processing device may be stored in a memory of the computer device, such as the current expression data acquisition module, the expression update data acquisition module, the target segmentation expression region detection module, the target base avatar data obtaining module, and the virtual animation image update module shown in FIG.
  • the computer readable instructions formed by the various program modules cause the processor to perform the steps in the emoticon animation data processing method of various embodiments of the present application described in this specification.
  • the computer device shown in FIG. 32 can perform the step of acquiring current expression data according to the three-dimensional face model by the current expression data acquisition module in the expression animation data processing device shown in FIG.
  • the computer device may perform the step of acquiring expression change data from the current expression data through the expression update data acquisition module.
• a computer apparatus is provided, comprising a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the following steps: determining the position of the face in the image and acquiring the avatar model; obtaining the current expression data according to the position of the face in the image and the three-dimensional face model; acquiring the expression change data from the current expression data; determining the target segmentation expression region matching the expression change data, the target segmentation expression region being selected from the segmentation expression regions corresponding to the avatar model; acquiring the target base avatar data matching the target segmentation expression region, and generating the loaded expression data by combining the target base avatar data according to the expression change data; and loading the loaded expression data into the target segmentation expression region to update the expression of the virtual animated image corresponding to the avatar model.
  • the computer readable instructions further cause the processor to perform the steps of: determining a first motion portion corresponding to the virtual animation image based on the expression change data; acquiring a second motion portion associated with the first motion portion Calculating motion state data corresponding to the first motion part according to the expression change data; determining bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part; and controlling the bone corresponding to the second motion part according to the bone control data Exercise to update the expression of the virtual animated image corresponding to the avatar model.
• calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes: calculating a yaw angular velocity and a pitch angular velocity corresponding to the first preset part according to the expression change data; determining the first bone control data corresponding to the second motion part according to the pitch angular velocity, the preset maximum pitch angle threshold, and the first preset compensation value; determining the second bone control data corresponding to the second motion part according to the yaw angular velocity, the preset maximum yaw angle threshold, and the second preset compensation value; and determining the bone control data corresponding to the second motion part according to the first bone control data and the second bone control data.
• calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes: calculating the expression change coefficient corresponding to the second preset part according to the expression change data; and determining the bone control data corresponding to the second motion part according to the expression change coefficient and the preset maximum pitch angle threshold.
• calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes: calculating the expression change coefficient corresponding to the third preset part according to the expression change data; calculating the pitch angle direction value and the yaw angle direction value corresponding to the third preset part according to the expression change data; determining the first bone control data corresponding to the third preset part according to the expression change coefficient, the pitch angle direction value, and the preset maximum pitch angle threshold; determining the second bone control data corresponding to the third preset part according to the expression change coefficient, the yaw angle direction value, and the preset maximum yaw angle threshold; and determining the bone control data corresponding to the third preset part according to the first bone control data and the second bone control data.
  • the computer readable instructions further cause the processor to perform the steps of: acquiring a reference point, determining a virtual space coordinate origin according to the reference point, establishing a virtual space according to the virtual space coordinate origin; acquiring the behavior subject relative to the reference The relative position of the point; determining the target position of the virtual animation image corresponding to the behavior subject in the virtual space according to the relative position, and generating an initial virtual animation image corresponding to the behavior subject in the virtual space according to the target position.
  • the computer readable instructions further cause the processor to perform the steps of: acquiring voice data, determining a corresponding current second motion portion according to the voice data; acquiring a skeletal animation corresponding to the current second motion portion, The skeletal animation is played to update the expression of the virtual animated image corresponding to the avatar model.
• determining the target segmentation expression region that matches the expression change data includes: determining the current motion part corresponding to the virtual animated image according to the expression change data; acquiring the preset plurality of segmentation expression regions corresponding to the avatar model; and acquiring, from the preset plurality of segmentation expression regions, the target segmentation expression region that matches the current motion part.
  • the computer readable instructions further cause the processor to: obtain a current segmentation expression region from each of the segmentation expression regions corresponding to the avatar model; and acquire a sub-base avatar corresponding to the current segmentation expression region a set of models; a plurality of different non-linear combinations of sub-basic avatar models in the sub-basic avatar model set to generate corresponding sub-hybrid avatar models to form a sub-hybrid avatar model set corresponding to the current segmented expression region; Obtaining the next segmentation expression region in each segmentation expression region as the current segmentation expression region, and returning to obtain the sub-basic avatar model set corresponding to the current segmentation expression region, until a sub-hybrid avatar model set corresponding to each segmentation expression region is obtained;
  • the sub-basic avatar model set and the sub-hybrid avatar model set corresponding to each segmented emoticon region constitute basic avatar data, and the target avatar data is selected from the basic avatar data.
• the target base avatar data includes a plurality of target sub-basic avatar models and a plurality of target sub-hybrid avatar models, and generating the loaded expression data by combining the target base avatar data according to the expression change data includes: calculating, according to the expression change data, the combination coefficient of each target sub-basic avatar model and each target sub-hybrid avatar model; and linearly combining the plurality of target sub-basic avatar models and the plurality of target sub-hybrid avatar models according to the combination coefficients to generate the loaded expression data.
• loading the loaded expression data into the target segmentation expression region includes: acquiring a current vertex position set, where the current vertex position set is composed of the current vertex positions corresponding to the respective target sub-basic avatar models that generate the loaded expression data; determining, according to the current vertex position set, the current target vertex position of the mesh corresponding to the loaded expression data; and acquiring the next vertex position set and determining, according to the next vertex position set, the next target vertex position of the mesh corresponding to the loaded expression data, until the respective target vertex positions of the mesh corresponding to the loaded expression data are determined.
• acquiring the target base avatar data matching the target segmentation expression region and generating the loaded expression data by combining the target base avatar data according to the expression change data includes: acquiring the preset weight coefficient corresponding to each target expression, and determining the generation order of the loaded expression data corresponding to each target expression according to the magnitude relationship of the preset weight coefficients corresponding to the respective target expressions; and loading the loaded expression data into the target segmentation expression region to update the expression of the virtual animated image corresponding to the avatar model includes: sequentially loading each piece of loaded expression data into the target segmentation expression region, according to the generation order of the loaded expression data corresponding to each target expression, to update the expression of the virtual animated image corresponding to the avatar model.
  • acquiring the expression change data from the current expression data includes: performing feature point extraction on the current expression data to obtain a corresponding expression feature point; and matching the expression feature point with the preset expression data set to determine the current update expression. , obtaining expression change data corresponding to the currently updated expression.
  • acquiring the expression change data from the current expression data includes: acquiring historical expression data, extracting feature points from the historical expression data, and obtaining corresponding historical expression feature points; performing feature point extraction on the current expression data to obtain a correspondence The current expression feature point; comparing the historical expression feature point with the current expression feature point, and obtaining corresponding expression change data according to the comparison result.
• the computer readable instructions further cause the processor to perform the steps of: acquiring a corresponding first background image from the preset background images according to the expression change data, and loading the first background image into the virtual environment in which the virtual animated image corresponding to the avatar model is located; or acquiring voice data, acquiring a corresponding second background image from the preset background images according to the voice data, and loading the second background image into the virtual environment in which the virtual animated image corresponding to the avatar model is located.
• acquiring the avatar model includes: extracting facial feature points from the face in the image and acquiring a corresponding avatar model according to the facial feature points; or acquiring an avatar model set that includes a plurality of avatar models, obtaining an avatar model selection instruction, and obtaining the target avatar model from the avatar model set according to the avatar model selection instruction.
• a computer readable storage medium is provided, storing computer readable instructions that, when executed by a processor, cause the processor to perform the following steps: determining the position of the face in the image and acquiring the avatar model; obtaining the current expression data according to the position of the face in the image and the three-dimensional face model; acquiring the expression change data from the current expression data; determining the target segmentation expression region matching the expression change data, the target segmentation expression region being selected from the segmentation expression regions corresponding to the avatar model; acquiring the target base avatar data matching the target segmentation expression region, and generating the loaded expression data by combining the target base avatar data according to the expression change data; and loading the loaded expression data into the target segmentation expression region to update the expression of the virtual animated image corresponding to the avatar model.
  • the computer readable instructions further cause the processor to perform the steps of: determining a first motion portion corresponding to the virtual animation image based on the expression change data; acquiring a second motion portion associated with the first motion portion Calculating motion state data corresponding to the first motion part according to the expression change data; determining bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part; and controlling the bone corresponding to the second motion part according to the bone control data Exercise to update the expression of the virtual animated image corresponding to the avatar model.
• calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes: calculating a yaw angular velocity and a pitch angular velocity corresponding to the first preset part according to the expression change data; determining the first bone control data corresponding to the second motion part according to the pitch angular velocity, the preset maximum pitch angle threshold, and the first preset compensation value; determining the second bone control data corresponding to the second motion part according to the yaw angular velocity, the preset maximum yaw angle threshold, and the second preset compensation value; and determining the bone control data corresponding to the second motion part according to the first bone control data and the second bone control data.
• calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes: calculating the expression change coefficient corresponding to the second preset part according to the expression change data; and determining the bone control data corresponding to the second motion part according to the expression change coefficient and the preset maximum pitch angle threshold.
• calculating the motion state data corresponding to the first motion part according to the expression change data, and determining the bone control data corresponding to the second motion part according to the motion state data corresponding to the first motion part, includes: calculating the expression change coefficient corresponding to the third preset part according to the expression change data; calculating the pitch angle direction value and the yaw angle direction value corresponding to the third preset part according to the expression change data; determining the first bone control data corresponding to the third preset part according to the expression change coefficient, the pitch angle direction value, and the preset maximum pitch angle threshold; determining the second bone control data corresponding to the third preset part according to the expression change coefficient, the yaw angle direction value, and the preset maximum yaw angle threshold; and determining the bone control data corresponding to the third preset part according to the first bone control data and the second bone control data.
  • the computer readable instructions further cause the processor to perform the steps of: acquiring a reference point, determining a virtual space coordinate origin according to the reference point, and establishing a virtual space according to the virtual space coordinate origin; acquiring a relative position of the behavior subject with respect to the reference point; and determining, according to the relative position, a target position of the virtual animated image corresponding to the behavior subject in the virtual space, and generating an initial virtual animated image corresponding to the behavior subject in the virtual space according to the target position.
  • the computer readable instructions further cause the processor to perform the steps of: acquiring voice data and determining a corresponding current second motion portion according to the voice data; and acquiring a skeletal animation corresponding to the current second motion portion and playing the skeletal animation to update the expression of the virtual animated image corresponding to the avatar model.
  • determining the target segmentation expression region that matches the expression change data includes: determining a current motion portion of the virtual animated image according to the expression change data; acquiring a preset plurality of segmentation expression regions corresponding to the avatar model; and obtaining, from the preset plurality of segmentation expression regions, a target segmentation expression region that matches the current motion portion.
  • the computer readable instructions further cause the processor to perform the steps of: obtaining a current segmentation expression region from the segmentation expression regions corresponding to the avatar model; acquiring a sub-basic avatar model set corresponding to the current segmentation expression region; performing a plurality of different nonlinear combinations on the sub-basic avatar models in the sub-basic avatar model set to generate corresponding sub-hybrid avatar models, which form a sub-hybrid avatar model set corresponding to the current segmentation expression region; obtaining the next segmentation expression region as the current segmentation expression region and returning to the step of acquiring the sub-basic avatar model set corresponding to the current segmentation expression region, until a sub-hybrid avatar model set corresponding to each segmentation expression region is obtained; and combining the sub-basic avatar model sets and sub-hybrid avatar model sets corresponding to the segmentation expression regions into basic avatar data, the target base avatar data being selected from the basic avatar data.
  • the target base avatar data includes a plurality of target sub-basic avatar models and a plurality of target sub-hybrid avatar models, and generating the loaded expression data by combining the target base avatar data according to the expression change data includes: calculating, according to the expression change data, a combination coefficient for each target sub-basic avatar model and each target sub-hybrid avatar model; and linearly combining the plurality of target sub-basic avatar models and the plurality of target sub-hybrid avatar models according to the combination coefficients to generate the loaded expression data.
  • loading the loaded expression data into the target segmentation expression region includes: acquiring a current vertex position set composed of the current vertex positions corresponding to the target sub-basic avatar models used to generate the loaded expression data; determining, according to the current vertex position set, a current target vertex position of the mesh corresponding to the loaded expression data; and acquiring a next vertex position set and determining, according to the next vertex position set, a next target vertex position of the mesh corresponding to the loaded expression data, until each target vertex position of the mesh corresponding to the loaded expression data is determined.
  • when the expression change data corresponds to a plurality of target expression updates, acquiring the target base avatar data matching the target segmentation expression region and generating the loaded expression data by combining the target base avatar data according to the expression change data includes: acquiring a preset weight coefficient corresponding to each target expression, and determining a generation order of the loaded expression data corresponding to the target expressions according to the magnitudes of the preset weight coefficients; and loading the loaded expression data into the target segmentation expression region to update the expression of the virtual animated image corresponding to the avatar model includes: loading each piece of loaded expression data into the target segmentation expression region in turn, in the generation order of the loaded expression data corresponding to the target expressions, to update the expression of the virtual animated image corresponding to the avatar model.
  • acquiring the expression change data from the current expression data includes: performing feature point extraction on the current expression data to obtain corresponding expression feature points; and matching the expression feature points against a preset expression data set to determine the currently updated expression, and obtaining the expression change data corresponding to the currently updated expression.
  • acquiring the expression change data from the current expression data includes: acquiring historical expression data and performing feature point extraction on the historical expression data to obtain corresponding historical expression feature points; performing feature point extraction on the current expression data to obtain corresponding current expression feature points; and comparing the historical expression feature points with the current expression feature points and obtaining the corresponding expression change data according to the comparison result.
  • the computer readable instructions further cause the processor to perform the steps of: acquiring a corresponding first background image from preset background images according to the expression change data and loading the first background image into the virtual environment in which the virtual animated image corresponding to the avatar model is located; or acquiring voice data, acquiring a corresponding second background image from the preset background images according to the voice data, and loading the second background image into the virtual environment in which the virtual animated image corresponding to the avatar model is located.
  • acquiring the avatar model includes: extracting facial feature points from the face in the image and acquiring a corresponding avatar model according to the facial feature points; or acquiring an avatar model set that includes a plurality of avatar models, obtaining an avatar model selection instruction, and obtaining a target avatar model from the avatar model set according to the avatar model selection instruction.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

This application relates to an expression animation data processing method, a computer readable storage medium, and a computer device. The method includes: determining the position of a face in an image and acquiring an avatar model; acquiring current expression data according to the position of the face in the image and a three-dimensional face model; acquiring expression change data from the current expression data; determining a target segmentation expression region that matches the expression change data, the target segmentation expression region being selected from the segmentation expression regions corresponding to the avatar model; acquiring target base avatar data that matches the target segmentation expression region, and generating loaded expression data by combining the target base avatar data according to the expression change data; and loading the loaded expression data into the target segmentation expression region to update the expression of the virtual animated image corresponding to the avatar model. The solution provided in this application reduces the amount of expression data computation and thereby improves expression data processing efficiency.

Description

表情动画数据处理方法、计算机设备和存储介质
本申请要求于2018年02月09日提交中国专利局,申请号为201810136285X,申请名称为“表情动画数据处理方法、装置、计算机设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别是涉及一种表情动画数据处理方法、计算机可读存储介质和计算机设备。
背景技术
随着计算机技术的发展,出现了虚拟形象建模技术,在用户录制视频时,虚拟形象模型可根据视频画面中行为主体的表情形成同步相对应的表情。
然而,目前的传统方法在虚拟形象模型进行表情加载的时候,需将虚拟形象模型所需的全部表情数据都进行加载,由于加载了许多非必要的部分表情数据不仅导致虚拟动画形象的表情数据计算量大,而且加载过多的表情数据导致效率低下等问题。
发明内容
根据本申请提供的各种实施例,提供一种表情动画数据处理方法、计算机可读存储介质和计算机设备。
一种表情动画数据处理方法,该方法包括:
计算机设备确定人脸在图像中的位置,获取虚拟形象模型;
计算机设备根据人脸在图像中的位置和三维脸部模型获取当前表情数据;
计算机设备从当前表情数据获取表情变化数据;
计算机设备确定与表情变化数据匹配的目标分割表情区域,目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的;
计算机设备获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据;
计算机设备将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机可读指令,该处理器执行所述程序时实现以下步骤:
确定人脸在图像中的位置,获取虚拟形象模型;
根据人脸在图像中的位置和三维脸部模型获取当前表情数据;
从当前表情数据获取表情变化数据;
确定与表情变化数据匹配的目标分割表情区域,目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的;
获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据;
将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
一种计算机可读存储介质,其上存储有计算机可读指令,计算机可读指令被处理器执行时,使得处理器执行以下步骤:
确定人脸在图像中的位置,获取虚拟形象模型;
根据人脸在图像中的位置和三维脸部模型获取当前表情数据;
从当前表情数据获取表情变化数据;
确定与表情变化数据匹配的目标分割表情区域,目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的;
获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据;
将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟 动画形象的表情。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。图1为一个实施例中表情动画数据处理方法的应用环境图;
图2为一个实施例中表情动画数据处理方法的流程示意图;
图3为另一个实施例中表情动画数据处理方法的流程示意图;
图4为一个实施例中虚拟动画形象运动部位的示意图;
图5为一个实施例中用于控制第二运动部位的骨骼的示意图;
图6为一个实施例中第一运动部位为头部转动时耳朵弯曲的示意图;
图7为一个实施例中第一运动部位为嘴部进行张开时舌头伸出的示意图;
图8为一个实施例中确定骨骼控制数据的流程示意图;
图9为另一个实施例中确定骨骼控制数据的流程示意图;
图10为再一个实施例中确定骨骼控制数据的流程示意图;
图11为又一个实施例中表情动画数据处理方法的流程示意图;
图12为一个实施例中虚拟动画形象在终端显示的界面示意图;
图13为再一个实施例中表情动画数据处理方法的流程示意图;
图14为一个实施例中确定目标分割表情区域的流程示意图;
图15为一个实施例中表情动画数据处理方法的流程示意图;
图16为一个实施例中各个分割表情区域对应的子基本虚拟形象模型集;
图17为一个实施例中生成加载表情数据的流程示意图;
图18为一个实施例中将加载表情数据加载到目标分割表情区域的流程示意图;
图19为一个实施例中生成加载表情数据的流程示意图;
图20为一个实施例中根据权重加载表情数据的示意图;
图21为一个实施例中获取表情变化数据的流程示意图;
图22为另一个实施例中获取表情变化数据的流程示意图;
图23为一个实施例中虚拟动画形象所处虚拟环境下发背景图像的示意图;
图24为一个实施例中表情动画数据处理装置的结构框图;
图25为另一个实施例中表情动画数据处理装置的结构框图;
图26为又一个实施例中表情动画数据处理装置的结构框图;
图27为再一个实施例中表情动画数据处理装置的结构框图;
图28为一个实施例中目标分割表情区域检测模块的结构框图;
图29为另一个实施例中表情动画数据处理装置的结构框图;
图30为一个实施例中虚拟动画形象更新模块的结构框图;
图31为一个实施例中目标基础虚拟形象数据获取模块的结构框图;
图32为一个实施例中计算机设备的结构框图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
图1为一个实施例中表情动画数据处理方法的应用环境图。参照图1,该表情动画数据处理方法应用于表情动画数据处理系统。该表情动画数据处理系统包括终端110和服务器120。终端110通过拍摄采集装置采集到行为主体的脸部后,确定人脸在图像中的位置,获取虚拟形象模型,再根据终端中的三维脸部模型获取拍摄采集装置采集到的行为主体的当前表情数据,从当前表情数据中获取表情变化数据,根据表情变化数据确定匹配的目标分割表情更新区域。进一步获取与目标分割表情更新区域匹配的目标基础虚拟形象数据,根据目标基础虚拟形象数据生成加载表情数据,将加载表情数据加载到目标分割表情区域从而更新虚拟形象模型对应的虚拟动画形象的表情。
其中,终端110通过拍摄采集装置采集到行为主体的脸部后,可将采集到的行为主体的图像发送至服务器120,服务器根据内置的三维脸部模型获取图像中 的行为主体的当前表情数据,从当前表情数据中获取到表情变化数据,服务器根据表情变化数据确定匹配的目标分割表情区域。服务器再获取与目标分割表情更新区域匹配的目标基础虚拟形象数据,可根据目标基础虚拟形象数据生成加载表情数据,将加载表情数据发送至终端,终端再将加载表情数据加载到目标分割表情区域从而更新虚拟形象模型对应的虚拟动画形象的表情。终端110和服务器120通过网络连接。终端110具体可以是台式终端或移动终端,移动终端具体可以手机、平板电脑、笔记本电脑等中的至少一种。服务器120可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
如图2所示,在一个实施例中,提供了一种表情动画数据处理方法。本实施例主要以该方法应用于上述图1中的终端110或服务器120来举例说明。参照图2,该表情动画数据处理方法具体包括如下步骤:
步骤202,确定人脸在图像中的位置,获取虚拟形象模型。
其中,这里图像包括但不限于图片、照片、影片等。可以是通过终端的相机拍摄的照片、通过终端截屏获取的图片或者是通过可上传图像的应用程序上传的图像等等。这里的终端包括但不限于是各种具有图像处理功能的个人计算机、笔记本电脑、个人数字助理、智能手机、平板电脑、便携式可穿戴式设备等。虚拟形象模型是用来展示虚拟动画形象的模型,所谓虚拟动画形象是虚拟动画形象是通过设计软件设计出来的动画形象。比如虚拟动画形象可以是但不限于小狗的形象、小猫的形象、老鼠的形象等等。
具体地,确定通过终端的相机拍摄的照片、通过终端截屏获取的图片或者通过可上传图像的应用程序上传的图像中人脸的具体位置,再获取用来展示虚拟动画形象的虚拟形象模型。或者终端将通过相机拍摄的照片、通过终端截屏获取的图片或者通过可上传图像的应用程序上传的图像发送至服务器,服务器根据图像中的人脸确定人脸在图像中的具体位置,进一步再获取用来展示虚拟动画形象的虚拟形象模型。
步骤204,根据人脸在图像中的位置和三维脸部模型获取当前表情数据。
其中,三维脸部模型是用来获取终端拍摄采集装置采集到的行为主体的当前脸部表情数据的模型,而当前表情数据是终端拍摄采集装置采集到的行为主 体当前的脸部表情变化的表情数据。其中,由于行为主体的脸部是行为主体情感传递最重要、最直接的载体,因此可从行为主体的脸部得知行为主体的脸部表情,通过拍摄采集装置采集到的图像中的行为主体的脸部,对图像中的行为主体的脸部进行脸部特征点提取,根据提取的脸部特征点建立行为主体的三维脸部模型。比如,三维脸部模型具体可以是但不限于三维人脸模型、三维动物脸部模型等等。
具体地,在确定通过终端的相机拍摄的照片、通过终端截屏获取的图片或者通过可上传图像的应用程序上传的图像中人脸在图像中的具体位置后,对图像具体位置上的人脸进行脸部特征点提取,根据提取的脸部特征点建立行为主体的三维脸部模型后,从三维脸部模型得到当前行为主体的脸部数据,根据脸部数据得到当前行为主体对应的当前表情数据,如对三维脸部模型上的脸部数据进行脸部特征点提取,根据脸部特征点获取当前表情数据。当前表情数据可以是但不限于眼部对应的表情数据、嘴部对应的表情数据、鼻子对应的表情数据等等。
步骤206,从当前表情数据获取表情变化数据。
其中,这里的表情变化数据是行为主体脸部发生表情变化的表情数据,其中表情变化数据可以是但不限于与历史帧图像中行为主体的脸部表情相比较发生变化的表情数据,比如,上一帧图像行为主体的脸部表情为面无表情,所谓面无表情表示脸部表情的特征点并未发生任何改变,当下一帧图像中的行为主体的脸部表情为微笑时,说明下一帧图像中的行为主体的嘴部的特征点发生了改变,因此可将嘴部对应的表情数据作为表情变化数据。
具体地,在根据三维脸部获取到当前行为主体的脸部表情数据后,可将三维脸部模型中的当前行为主体的脸部表情数据与历史帧图像中行为主体的脸部对应的三维脸部模型中的脸部表情数据进行比较,从而得到当前行为主体的表情变化数据。在一个实施例中,可通过直接比较脸部表情数据对应的特征点得到当前行为主体的表情变化数据。
步骤208,确定与表情变化数据匹配的目标分割表情区域,目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的。
其中,分割表情区域是虚拟形象模型中用来发生表情运动变化从而产生与 表情变化数据对应表情的表情区域。比如,当表情变化数据为大笑时,由于大笑是嘴部发生表情运动变化产生的,因此虚拟形象模型中的嘴部是与表情变化数据为大笑匹配的目标分割表情区域。如图3所示,图3示出一个实施例中虚拟形象模型的各个分割表情区域的示意图,虚拟形象模型是虚拟动画形象的脸部模型,其中虚拟形象模型可根据一定的规则划分为多个分割表情区域,比如分割表情区域可以是但不限于虚拟动画形象的两只耳朵、两个眼部、一个嘴部等。
具体地,在根据三维脸部模型中对应的当前行为主体脸部发生表情运动变化得到表情变化数据后,根据表情变化数据从虚拟形象模型多个分割表情区域中确定与表情变化数据匹配的目标分割表情区域。
步骤210,获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据。
其中,基础虚拟形象数据是形成虚拟形象模型的各个分割表情区域对应的基础表情的虚拟动画形象表情数据的集合。比如,基础虚拟形象数据可以是但不限于分割表情区域为嘴部对应的嘴部表情的嘴部表情数据,分割表情区域为眼部对应的眼部表情的眼部表情数据等等。由于目标分割表情区域是根据表情变化数据从虚拟形象模型多个分割表情区域中匹配得到的,因此目标基础虚拟形象数据是根据目标分割表情区域从基础虚拟形象数据中匹配得到的基础虚拟形象数据。其中,目标基础虚拟形象数据是与目标分割表情区域对应的基础虚拟形象数据,因为目标分割表情区域存在多种表情变化,且各种表情变化都有对应的表情变化系数,因此可根据表情变化数据组合目标基础虚拟形象数据中与表情变化数据对应的表情变化系数生成加载表情数据。所谓加载表情数据用于直接加载到分割表情区域控制虚拟形象模型对应的虚拟动画形象的表情的变化,是与三维脸部模型中的表情相对应的表情数据。比如,加载表情数据可以是但不限于微笑、大笑、睁眼等等。
步骤212,将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
其中,虚拟形象模型是用来展示虚拟动画形象的模型,所谓虚拟动画形象是通过设计软件设计出来的动画形象。比如虚拟动画形象可以是但不限于小狗的 形象、小猫的形象、老鼠的形象等等。其中,加载表情数据是根据表情变化数据与对应的表情变化系数组合生成的。因此将生成的加载表情数据加载到虚拟形象模型的目标分割区域中,使得虚拟形象模型中的虚拟动画形象能够做出与三维脸部模型当前的表情对应的表情变化,即虚拟形象模型中的虚拟动画形象能够产生与拍摄采集装置采集到的图像中的行为主体的表情相同。比如,拍摄采集装置采集到的图像中的行为主体的表情为大笑,因此虚拟形象模型中的虚拟动画形象的嘴部也会做出大笑的表情。其中,嘴部为虚拟形象模型中与加载表情为大笑对应的目标分割区域。
本实施例中,根据三维脸部模型获取行为主体的当前表情数据,从当前表情数据中获取行为主体的表情变化数据,根据表情变化数据从虚拟形象模型的各个分割表情区域中获取匹配的目标分割表情区域,进一步获取目标分割表情区域匹配的目标基础虚拟形象数据从而生成加载表情数据,最后将加载表情数据加载到目标分割表情区域使得虚拟形象模型对应的虚拟动画形象的表情进行更新。因此在虚拟形象模型进行加载表情时,只加载表情更新部分对应的表情数据从而减少虚拟动画形象的计算量,提高虚拟动画形象表情更新的效率。在一个实施例中,获取虚拟形象模型,包括:对图像中的人脸进行人脸特征点提取,根据人脸特征点获取对应的虚拟形象模型;或获取虚拟形象模型集合,该虚拟形象模型集合包括多个虚拟形象模型,获取虚拟形象模型选择指令,根据虚拟形象模型选择指令从虚拟形象模型集合中获取目标虚拟形象模型。
本实施例中,为了使虚拟动画形象更加生动形象,需先获取用来展示个性化虚拟动画形象的虚拟形象模型。其中,可通过图像中的人脸的特征点进行动态分配虚拟形象模型,或者根据终端用户的需求或者喜爱等等从虚拟形象模型集合中选取匹配的虚拟形象模型,即目标虚拟形象模型。比如,获取用来展示虚拟动画形象的虚拟形象模型的其中一种方式为服务器或者终端动态分配虚拟形象模型的方式,具体地,在通过终端的相机拍摄的照片、通过终端截屏获取的图片或者通过可上传图像的应用程序上传的图像后,对图像中人物的脸部进行特征点提取,因为不同人物的脸部特征不同,因此可对不同图像中的不同人物的脸部进行特征点提取得到的人脸特征点也是不同的。进一步根据不同人物的脸部的人 脸特征点获取对应的虚拟形象模型,从而将获取到的虚拟形象模型来展示虚拟动画形象模型。
另一种方式是可根据用户的需求或者喜爱自行从虚拟形象模型集合中选取匹配的虚拟形象模型的方式,具体地,虚拟形象模型获取指令是用来选择虚拟形象模型的指令,可通过终端的相关应用程序获取虚拟形象模型集合供终端用户选择,再通过相关应用程序中的控件从虚拟形象模型集合中选取匹配的虚拟形象模型,从而将选取得到的虚拟形象模型用来展示虚拟动画形象模型。
在另一个实施例中,如图3所示,在图2的基础上该表情动画数据处理方法还包括:
步骤302,根据表情变化数据确定虚拟动画形象对应的第一运动部位。
其中,第一运动部位是与第二运动部位相关联的,可控制第二运动部位对应的骨骼产生相应运动的部位。如第一运动部位可以是控制虚拟形象模型中强表情部分的运动部位,强表情部分相对于弱表情部分,其中强表情部分的表情变化会引起弱表情部分的表情变化,其中强表情部分可以是但不限于虚拟形象模型对应的虚拟动画形象的脸部,如影响眼球运动的眼部,影响牙齿运动的嘴部,影响耳朵运动的头部等。弱表情部分可以是受到强表情部分影响而变化的包括但不限于虚拟动画形象的眼睛、牙齿、耳朵等等。由于表情变化数据是根据三维脸部模型展示出当前行为主体脸部发生表情变化获得的表情数据,因此可根据表情变化数据确定虚拟形象动画中发生与表情变化数据对应的运动变化部位为第一运动部位。如图4所示,图4示出一个实施例中虚拟形象运动部位的示意图,虚拟动画形象的脸部可以是虚拟形象模型中的强表情部分,即第一运动部位。比如,根据三维脸部模型中的当前行为主体脸部发生表情变化获得的表情变化数据为大笑和睁眼,可根据表情变化数据为大笑和睁眼确定虚拟动画形象中发生大笑和睁眼的第一运动部位分别对应为嘴部和眼部。
步骤304,获取与第一运动部位相关联的第二运动部位。
其中,这里的第二运动部位是与第一运动部位相关联的,受到第一运动部位影响的部位。如第二运动部位是控制虚拟形象模型中弱表情部分的运动部位。如图4所示,图4中的虚拟动画形象的眼睛、牙齿、耳朵等可以为第二运动部位, 如第一运动部位为虚拟动画形象的脸部,因此与脸部相关联的第二运动部位可以是但不限于眼球、耳朵、下牙与舌头等。具体地,若第一运动部位为虚拟形象中的虚拟动画形象的眼部,则与第一运动部位为眼部互相连接的第二运动部位为眼球。同样地,若第一运动部位为虚拟形象中的虚拟动画形象的嘴部,则与第一运动部位为嘴部互相连接的第二运动部位为下牙与舌头。
步骤306,根据表情变化数据计算得到第一运动部位对应的运动状态数据。
其中,表情变化数据是根据三维脸部模型中的当前行为主体脸部发生表情运动变化的表情数据,因此可根据表情变化数据确定虚拟形象模型中发生与表情变化数据对应的表情运动变化部位的第一运动部位,且运动部位可发生各种运动变化。因此可根据表情变化数据计算得到与第一运动部位对应的运动状态数据。所谓运动状态数据是运动部位发生各种运动变化的幅度变化值或者运动变化的变化值,比如运动状态数据可以是但不限于眼部表情变化系数、嘴部表情变化系数。
步骤308,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据。
其中,骨骼控制数据是控制第二运动部位运动的骨骼数据。比如,骨骼控制数据可以是但不限于欧拉角等等,所谓欧拉角是用来确定第二运动部位转动的角度,也可称为旋转角度。具体地,由于第二运动部位是与第一运动部位互相连接的运动部位,因此可根据第一运动部位的运动状态数据计算得到第二运动部位对应的骨骼控制数据。例如,第一运动部位为眼部,眼部的运动状态数据为眼部对应的眼部表情变化系数,与第一运动部位为眼部的第二运动部位为眼球,因此可根据眼部表情变化系数计算得到眼球骨骼的欧拉角,即第二运动部位对应的骨骼控制数据。
步骤310,根据骨骼控制数据控制第二运动部位对应的骨骼运动,以更新虚拟形象模型对应的虚拟动画形象的表情。
其中,由于骨骼控制数据是用来控制第二运动部位对应的骨骼运动,比如骨骼运动可以是但不限于旋转、转动、向左转动、向右转动等等。如图5所示,图5示出一个实施例中用于控制第二运动部位的骨骼的示意图,例如通过眼球的骨 骼控制数据控制眼球对应的骨骼运动,或通过上牙与舌头的骨骼控制数据控制上牙与舌头对应的骨骼运动,又或者通过耳朵的骨骼控制数据控制耳朵对应的骨骼运动。
具体地,根据得到第二运动部位的骨骼控制数据控制第二运动部位对应的骨骼运动,使得虚拟形象模型中的虚拟动画形象能够做出与三维脸部模型当前的表情对应的表情变化。如:骨骼控制数据为第二运动部位为眼球的欧拉角,眼球骨骼可根据欧拉角控制眼球骨骼做出对应的眼球骨骼运动。如图6所示,图6示出一个实施例中第一运动部位为头部转动时耳朵弯曲的示意图,当第一运动部位为头部进行转动时,与第一运动部位相关联的第二运动部位为耳朵通过第一运动部位为头部的速度从而确定第二运动部位为耳朵的骨骼运动为向内弯曲。或者又如图7所示,图7示出一个实施例中第一运动部位为嘴部进行张开时舌头伸出的示意图,当第一运动部位为嘴部进行张开并吐舌头时,与第一运动部位相关联的第二运动部位为下牙与舌头通过第一运动部位为嘴部的表情变化确定第二运动部位为下牙与舌头的谷歌运动为伸出。
在一个实施例中,如图8所示,若第一运动部位为第一预设部位,根据表情变化数据计算得到第一运动部位对应的运动状态数据,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:
步骤802,根据表情变化数据计算得到第一预设部位对应的偏航角速度和俯仰角速度。
其中,第一预设部位是虚拟形象模型对应的虚拟动画形象中根据偏航角速度和俯仰角速度发生表情运动变化的部位。其中,偏航角速度和俯仰角速度是欧拉角中的其中一个组成元素,偏航角速度是以头部为原点,建立坐标系以Y轴旋转的旋转值。同样地,俯仰角速度是以头部为原点,建立坐标系以X轴旋转的旋转值。具体地,虚拟形象模型中的第一预设部位发生运动变化从而产生与表情变化数据对应的表情,因此以第一预设部位为原点建立坐标系,根据表情变化数据计算得到第一预设部位进行X轴旋转的俯仰角速度和第一预设部位进行Y轴旋转的偏航角速度。例如,第一预设部位可以是但不限于头部,可根据头部的转动速度计算得到头部对应的偏航角速度和俯仰角速度。
步骤804,根据俯仰角速度、预设最大俯仰角阈值和第一预设补偿值确定第二运动部位对应的第一骨骼控制数据。
其中,预设最大俯仰角阈值是云服务器预先下发用来控制第一预设部位进行X轴旋转角度的最大阈值。具体地,根据第一预设部位进行X轴旋转的俯仰角速度、云服务器预先下发用来控制第一预设部位进行X轴旋转角度的最大阈值和预先设定的第一预设补偿值计算得到第二运动部位对应的第一骨骼控制数据,所谓第一骨骼控制数据是用来控制第二运动部位对应的骨骼运动的旋转角度之一,第一骨骼控制数据可以是但不限于欧拉角中的章动角。
步骤806,根据偏航角速度、预设最大偏航角阈值和与第二预设补偿值确定第二运动部位对应的第二骨骼控制数据。
同样地,预设最大偏航角阈值是云服务器预先下发至终端用来控制第一预设部位进行Y轴旋转角度的最大阈值。具体地,根据第一预设部位进行Y轴旋转的偏航角速度、云服务器预先下发用来控制第一预设部位进行Y轴旋转角度的最大阈值和预先设定的第二预设补偿值计算得到第二运动部位对应的第二骨骼控制数据,所谓第二骨骼控制数据是用来控制第二运动部位对应的骨骼运动的旋转角度之一,第二骨骼控制数据可以是但不限于欧拉角中的旋进角。
步骤808,根据第一骨骼控制数据和第二骨骼控制数据确定第二运动部位对应的骨骼控制数据。
其中,第一骨骼控制数据和第二骨骼控制数据分别是用来控制第二运动部位对应的骨骼运动的旋转角度之一,因此可根据计算得到的第一骨骼控制数据和第二骨骼控制数据计算得到第二运动部位对应的骨骼控制数据。比如,第一骨骼控制数据是章动角,第二骨骼控制数据是旋进角,那么可根据章动角和旋进角计算得到用于控制第二运动部位旋转的欧拉角。如,Ear eulerAngles为第二运动部位为耳朵的骨骼控制数据为欧拉角,V p、V y分别为第一运动部位为头部转动的偏航角速度、俯仰角速度,A、B分别为计算所需的补偿值,H为云服务器控制的阈值,Ear eulerAngles=(Hp×V p+A,Hy×V y+B,0)。
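The ear-bone case above can be sketched in a few lines of code. This is a minimal illustration only: it assumes the head's pitch and yaw angular velocities, the cloud-issued maximum pitch/yaw thresholds, and the two preset compensation values are already available as plain numbers, and the function and argument names are invented for the sketch rather than taken from the original implementation.

```python
def ear_euler_angles(v_pitch, v_yaw, h_pitch_max, h_yaw_max, a_comp, b_comp):
    """Sketch of Ear_eulerAngles = (H_p*V_p + A, H_y*V_y + B, 0).

    v_pitch, v_yaw         -- pitch / yaw angular velocity of the head (first motion portion)
    h_pitch_max, h_yaw_max -- cloud-issued maximum pitch / yaw thresholds
    a_comp, b_comp         -- preset compensation values
    """
    pitch = h_pitch_max * v_pitch + a_comp   # first bone control value (nutation-like term)
    yaw = h_yaw_max * v_yaw + b_comp         # second bone control value (precession-like term)
    return (pitch, yaw, 0.0)                 # roll stays at 0, as in the formula


# Example: a slow head turn with 30-degree / 25-degree thresholds (all values invented).
print(ear_euler_angles(0.2, -0.1, 30.0, 25.0, 1.5, -0.5))
```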
在一个实施例中,如图9所示,若第一运动部位为第二预设部位,根据表情变化数据计算得到第一运动部位对应的运动状态数据,根据第一运动部位对应 的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:
步骤902,根据表情变化数据计算得到第二预设部位对应的表情变化系数。
其中,第二预设部位是虚拟形象模型对应的虚拟动画形象中发生表情运动变化从而产生与表情变化数据相同的表情的部位。比如第二预设部位可以是但不限于嘴部。具体地,可根据表情变化数据中表情运动变化的变化值或者对应的表情权重系数计算得到与第二预设部位对应的表情变化系数。其中表情变化系数可根据张嘴的幅度大小变化而变化。例如,表情变化数据为大笑,那么第二预设部位为嘴部的张嘴幅度会比微笑对应的张嘴幅度要大些,因此大小的嘴部张嘴的幅度为嘴部对应的表情变化系数。
步骤904,根据表情变化系数和预设最大俯仰角阈值确定第二运动部位对应的骨骼控制数据。
同样地,预设最大俯仰角阈值是云服务器预先下发用来控制第二预设部位进行X轴旋转角度的最大阈值。其中表情变化系数是第二预设部位进行运动变化的变化值,比如第二预设部位是嘴部时,则表情变化系数可以是嘴部进行张开的幅度。具体地,根据第二预设部位进行运动变化的变化值和云服务器预先下发用来控制第二预设部位进行X轴旋转角度的最大阈值计算得到第二运动部位对应的骨骼控制速度,即第二运动部位的欧拉角。如,Jaw eulerAngles为第二运动部位为下牙与舌头骨骼的骨骼控制数据为欧拉角,H p为云服务器控制的最大俯仰角阈值,A openMouth为第一运动部位为嘴部的表情变化系数为张嘴表情系数。Jaw eulerAngles=(H p×A openMouth,0,0)。
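The jaw/tongue case reduces to scaling a single pitch component. The sketch below assumes the mouth-open expression coefficient is a normalized value in [0, 1] and the maximum pitch threshold is given in degrees; both are illustrative assumptions.

```python
def jaw_euler_angles(a_open_mouth, h_pitch_max):
    """Sketch of Jaw_eulerAngles = (H_p * A_openMouth, 0, 0)."""
    # Only the pitch component of the lower-teeth/tongue bone is driven;
    # yaw and roll stay at zero.
    return (h_pitch_max * a_open_mouth, 0.0, 0.0)


# Example: mouth 60% open with a 20-degree maximum pitch threshold (invented values).
print(jaw_euler_angles(0.6, 20.0))  # -> (12.0, 0.0, 0.0)
```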
在一个实施例中,如图10所示,若第一运动部位为第三预设部位,根据表情变化数据计算得到第一运动部位对应的运动状态数据,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:
步骤1002,根据表情变化数据计算得到第三预设部位对应的表情变化系数。
其中,第三预设部位是虚拟形象模型对应的虚拟动画形象根据表情变化数据发生表情运动变化的部位。第三预设部位可以是但不限于眼部,根据眼部表情变化计算得到眼部表情变化系数,眼部表情变化数据即表情变化数据。其中眼部表情的变化包括但不限于睁眼、闭眼、向左看、向右看、向上看、向下看等。因 此眼部表情变化系数是各个眼部表情变化对应的表情变化权重系数。具体地,根据第三预设部位的表情变化数据计算得到第三预设部位对应的表情变化权重系数。例如,第三预设部位为眼部,眼部表情变化为向右看,则眼部表情变化为向右看对应的权重系数为眼部对应的表情变化系数。
步骤1004,根据表情变化数据计算得到第三预设部位对应的俯仰角方向值和偏航角方向值。
其中,俯仰角是以第三预设部位为原点,建立坐标系以X轴旋转的旋转值,因此俯仰角方向值是第三预设部位进行X轴方向旋转的方向值。同样地,偏航角是第三预设部位为原点,建立坐标系以Y轴旋转的旋转值,则偏航角方向值是第三预设部位进行Y轴方向旋转的方向值。具体地,根据第三预设部位对应的表情变化数据得到第三预设部位对应的俯仰角方向值和偏航角方向值。比如,设定俯仰角方向值为正方向,正方向对应的俯仰角方向值为1,那么偏航角方向值为负方向,负方向对应的偏航角方向值为-1。
步骤1006,根据表情变化系数、俯仰角方向值和预设最大俯仰角阈值确定第三预设部位对应的第一骨骼控制数据。
其中,这里的表情变化系数是第三预设部位发生表情变化对应的表情权重系数。比如第三预设部位可以是但不限于眼部,表情变化数据为睁眼,因此眼部对应的表情变化系数为睁眼对应的表情权重系数。具体地,根据第三预设部位发生表情变化数据的变化值、俯仰角方向值和云服务器预先下发用来控制第三预设部位进行X轴旋转角度的最大阈值计算得到第三预设部位对应的第一骨骼控制数据。所谓第一骨骼控制数据是用来控制第三预设部位对应的骨骼运动的旋转角度之一,其中第一骨骼控制数据可以是但不限于组成欧拉角的章动角。
步骤1008,根据表情变化系数、偏航角方向值和预设最大偏航角阈值确定第三预设部位对应的第二骨骼控制数据。
具体地,根据第三预设部位发生表情变化数据的变化值、偏航角方向值和云服务器预先下发用来控制第三预设部位进行Y轴旋转角度的最大阈值计算得到第三预设部位对应的第二骨骼控制数据。这里的第二骨骼控制数据是用来控制第三预设部位对应的骨骼运动的旋转角度之一,其中第二骨骼控制数据可以是 但不限于组成欧拉角的旋进角。
步骤1010,根据第一骨骼控制数据和第二骨骼控制数据确定第三预设部位对应的骨骼控制数据。
同样地,第三预设部位对应的第一骨骼控制数据和第二骨骼控制数据分别是用来控制第二运动部位对应的骨骼运动旋转角度之一,因此可根据计算得到的第一骨骼控制数据和第二骨骼控制数据计算得到第二运动部位对应的骨骼控制数据。比如,第一骨骼控制数据是章动角,第二骨骼控制数据是旋进角,那么可根据章动角和旋进角计算得到用于控制第二运动部位旋转的欧拉角。如,Eye eulerAngles为第二运动部位为眼球骨骼的欧拉角,S为计算旋转的方向值,H为云服务器控制的最大俯仰角与偏航角阈值,A eye为第一运动部位为眼部的表情变化系数为眼部表情变化系数,则Eye eulerAngles=(S p×H p×A eye,S y×H y×A eye,0)
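The eyeball case combines the expression change coefficient with direction values and the maximum angle thresholds. The sketch below follows the formula directly; the argument names and the example values are assumptions for illustration.

```python
def eye_euler_angles(a_eye, s_pitch, s_yaw, h_pitch_max, h_yaw_max):
    """Sketch of Eye_eulerAngles = (S_p*H_p*A_eye, S_y*H_y*A_eye, 0).

    a_eye                  -- eye expression change coefficient (e.g. weight of "look right")
    s_pitch, s_yaw         -- pitch / yaw direction values, +1 or -1
    h_pitch_max, h_yaw_max -- cloud-issued maximum pitch / yaw thresholds
    """
    return (s_pitch * h_pitch_max * a_eye, s_yaw * h_yaw_max * a_eye, 0.0)


# Example: looking to one side with coefficient 0.8 and 15/20-degree limits (invented).
print(eye_euler_angles(0.8, 1, -1, 15.0, 20.0))
```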
在一个实施例中,如图11所示,在图2的基础上该表情动画数据处理方法还包括:
步骤1102,获取参照点,根据参照点确定虚拟空间坐标原点,根据虚拟空间坐标原点建立虚拟空间。
步骤1104,获取行为主体相对于参照点的相对位置。
步骤1106,根据相对位置确定行为主体对应的虚拟动画形象在虚拟空间的目标位置,根据目标位置在虚拟空间生成行为主体对应的初始虚拟动画形象。
其中,这里的参照点是进行测量时设定的一个原点,比如参照点可以是但不限于终端为参照点。具体地,获取参照点,以参照点为虚拟空间的坐标原点,根据确定的虚拟空间的坐标原点建立虚拟空间,获取拍摄采集装置采集到图像中的行为主体相对于参照点的相对位置。所谓相对位置是相对于参照点而言行为主体的位置。进一步地,可根据获取到行为主体相对于参照点的位置确定与行为主体对应的虚拟动画形象在虚拟空间的位置,即虚拟动画形象在虚拟空间的目标位置。进一步可根据虚拟动画形象在虚拟空间的目标位置获取与行为主体对应的初始虚拟形象,将初始虚拟形象在虚拟空间的目标位置上进行展示,如图12所示,图12示出虚拟动画形象在终端显示的示意图。其中所谓初始虚拟形象是 虚拟形象最初的模样。
在一个实施例中,如图13所示,该表情动画数据处理方法还包括:
步骤1302,获取语音数据,根据语音数据确定对应的当前第二运动部位。
步骤1304,获取与当前第二运动部位对应的骨骼动画,播放骨骼动画,以更新虚拟形象模型对应的虚拟动画形象的表情。
其中,语音数据是通过终端的语音采集装置采集到的语音数据,语音数据可以是但不限于语音采集装置实时采集的语音数据、或使用相关应用软件录制的语音数据等等。其中当前运动部位是与语音数据匹配的弱表情部分,即第二运动部位。如上所示,第二运动部位可以是但不限于眼球、耳朵、下牙与舌头等。由于各个语音数据都有预先设定对应的第二运动部位,因此可根据终端获取到的语音数据确定匹配的弱表情部分,即当前第二运动部位。
进一步地,由于云服务器将与各个第二运动部位对应的骨骼动画下发至终端,因此可根据确定的当前第二运动部位获取对应的骨骼动画,所谓骨骼动画是虚拟形象模型中具有互相连接的“骨骼”组成的骨架结构,通过改变骨骼的朝向和位置来为虚拟形象模型生成动画。在获取到与当前第二运动部位对应的骨骼动画后,对获取到骨骼动画进行播放,使得虚拟形象模型中的虚拟动画形象能够做出与通过语音来记录的语音数据以及通过语音来传输的语音数据对应的表情变化。例如,通过语音记录的语音数据为:“晕了”,确定与该语音数据对应的当前第二运动部位为耳朵和眼球,因此获取耳朵和眼球对应的骨骼动画为同时顺时针转动。此时虚拟形象模型中的虚拟形象的耳朵和眼球会同时顺时针转动。
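The voice-triggered branch can be pictured as a lookup from recognized phrases to a weak-expression part and a pre-made skeletal animation. Everything in the table below (phrases, part names, animation names) is made up for illustration; in the described method the actual mapping and animations are issued by the cloud server in advance.

```python
# Hypothetical mapping: trigger phrase -> (second motion portion, skeletal animation to play)
VOICE_TRIGGERS = {
    "晕了": ("ears_and_eyeballs", "spin_clockwise"),
    "新年快乐": ("ears", "shake"),
}


def animation_for_voice(voice_text):
    """Return (second motion portion, skeletal animation name) for a recognized phrase."""
    for phrase, (part, animation) in VOICE_TRIGGERS.items():
        if phrase in voice_text:
            return part, animation
    return None  # no trigger phrase -> no extra skeletal animation


print(animation_for_voice("我都晕了"))  # -> ('ears_and_eyeballs', 'spin_clockwise')
```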
在一个实施例中,如图14所示,确定与表情变化数据匹配的目标分割表情区域,包括:
步骤1402,根据表情变化数据确定虚拟动画形象对应的当前运动部位。
如上所示,由于表情变化数据是三维脸部模型中的当前行为主体脸部发生表情运动变化的表情数据,且虚拟形象模型的虚拟动画形象需要做出与表情变化数据相同的表情,因此根据三维脸部模型中的当前行为主体脸部发生表情变化的表情变化数据确定虚拟形象模型中的虚拟动画形象做出与表情变化数据相同的表情的运动部位,即当前运动部位。
步骤1404,获取虚拟形象模型对应的预设多个分割表情区域。
步骤1406,从预设多个分割表情区域中获取与当前运动部位匹配的目标分割表情区域。
如上所示,虚拟形象模型按照一定的规则划分了多个分割表情区域,其中分割表情区域是虚拟形象模型中用来发生运动变化从而产生与表情变化数据对应表情的表情区域。具体地,在根据表情变化数据确定虚拟动画形象对应的当前运动部位后,获取预先按照一定的规则划分多个分割表情区域的虚拟形象模型,由于分割表情区域能够发生表情运动变化从而产生与表情变化数据相同的表情,因此根据获取到的当前运动部位从虚拟形象模型中各个分割表情区域匹配得到对应的分割表情区域,即目标分割表情区域。例如,虚拟形象模型的分割表情区域为两只耳朵、两个眼部、一个嘴部,表情变化数据为大笑对应的当前运动部位为嘴部,因此从虚拟形象模型中获取与嘴部匹配的目标分割表情区域为虚拟形象模型中嘴部。
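Matching the target segmentation expression region can be thought of as two lookups: expression change -> motion portion -> segmentation expression region. The keys below are assumptions for a small sketch, not an exhaustive table.

```python
CHANGE_TO_PART = {"laugh": "mouth", "smile": "mouth", "open_eyes": "eyes"}
PART_TO_REGION = {"mouth": "region_mouth", "eyes": "region_eyes", "ears": "region_ears"}


def target_regions(expression_changes):
    """Map expression change labels to the segmentation expression regions to update."""
    parts = {CHANGE_TO_PART[c] for c in expression_changes if c in CHANGE_TO_PART}
    return sorted(PART_TO_REGION[p] for p in parts)


print(target_regions(["laugh", "open_eyes"]))  # -> ['region_eyes', 'region_mouth']
```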
在一个实施例中,如图15所示,在图2的基础上该表情动画数据处理方法还包括:
步骤1502,从虚拟形象模型对应的各个分割表情区域中获取当前分割表情区域。
步骤1504,获取当前分割表情区域对应的子基本虚拟形象模型集。
如上所述,虚拟形象模型是按照一定的规则划分为多个分割表情区域,而分割表情区域是用于发生表情运动变化从而使得虚拟形象模型中虚拟动画形象产生与表情变化数据相同的表情,因此从虚拟形象模型对应的各个分割表情区域中获取各个分割表情区域当前分割表情区域。所谓当前分割表情区域是虚拟形象模型中的各个分割表情区域中随机选取一个分割表情区域作为当前分割表情区域。
进一步地,各个分割表情区域都有对应的子基本虚拟形象模型集,所谓子基本虚拟形象模型集是预先设计好的一些虚拟动画形象的普通表情基,其中,表情基是虚拟动画形象的普通表情对应的虚拟形象模型集合。如图16所示,图16示出一个实施例中各个分割表情区域对应的子基本虚拟形象模型集。具体地,在随 机从虚拟形象模型中的各个分割表情区域中选取出当前分割表情区域后,获取与当前分割表情区域对应的所有子基本虚拟形象模型集。如:当前分割表情区域为嘴部,因此获取与嘴部对应的所有子基本虚拟形象模型集合。
步骤1506,对子基本虚拟形象模型集中的各个子基本虚拟形象模型进行多次不同的非线性组合生成对应的多个子混合虚拟形象模型,组成当前分割表情区域对应的子混合虚拟形象模型集。
具体地,在获取与当前分割表情区域对应的所有子基本虚拟形象模型集后,对所有子基本虚拟形象模型集中的各个子基本虚拟形象模型进行多次不同的非线性组合生成与各个子基本虚拟形象模型对应的多个子混合虚拟形象模型。所谓混合虚拟形象模型是在普通表情的基础上生成丰富的混合表情对应的混合表情基,混合表情基是虚拟动画形象的若干个普通表情进行非线性组合得到混合表情对应虚拟形象模型的集合。进一步地,将各个子基本虚拟形象模型对应的子混合虚拟形象模型组成当前分割表情区域对应的子混合虚拟形象模型集。其中,各个子基本虚拟形象模型集进行非线性组合计算生成对应的子混合虚拟形象模型的计算公式如公式1:
B_i = A_1E_1 × A_2E_2 × … × A_iE_i　　公式1
其中，B_i表示第i个子混合虚拟形象模型，E_j表示第j个子基本虚拟形象模型。
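公式1 can be read as building a sub-hybrid model from a product of weighted sub-basic models. The sketch below treats every model as an array of vertex data and uses an element-wise product, which is only one possible reading of the nonlinear combination; the shapes and weights are invented for the example.

```python
import numpy as np


def hybrid_model(weights, basis_models):
    """Sketch of 公式1: B_i = A_1E_1 × A_2E_2 × … × A_iE_i.

    weights      -- combination coefficients A_1..A_i
    basis_models -- sub-basic avatar models E_1..E_i, each an (n_vertices, 3) array
    """
    result = np.ones_like(np.asarray(basis_models[0], dtype=float))
    for a, e in zip(weights, basis_models):
        result = result * (a * np.asarray(e, dtype=float))  # element-wise nonlinear combination
    return result
```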
步骤1508,从各个分割表情区域中获取下一个分割表情区域作为当前分割表情区域,返回获取当前分割表情区域对应的子基本虚拟形象模型集的步骤,直到得到各个分割表情区域对应的子混合虚拟形象模型集。
步骤1510,将各个分割表情区域对应的子基本虚拟形象模型集和子混合虚拟形象模型集组成基础虚拟形象数据,目标基础虚拟形象数据是从基础虚拟形象数据中选取得到的。
其中,虚拟形象模型按照一定的规则划分为多个分割表情区域,需将虚拟形象模型中的各个分割表情区域对应的子混合虚拟形象模型集都计算得到。具体地,在从虚拟形象模型中的各个分割表情区域中随机选取下一个分割表情区域 作为当前分割表情区域,得到当前分割表情区域对应的子混合虚拟形象模型集后,需再从虚拟形象模型中的各个分割表情区域中随机选取一个分割表情区域再作为当前分割表情区域,返回获取当前分割表情区域对应的子基本虚拟形象模型集的步骤,将子基本虚拟形象模型集进行非线性组合得到对应子混合虚拟形象模型集。如此循环计算得到虚拟形象模型中各个分割区域对应的子混合虚拟形象模型集,进一步地将各个分割区域对应的子混合虚拟形象模型集合和子基本虚拟形象模型集合组成基础虚拟形象数据供表情变化数据匹配得到目标基础虚拟形象数据,即目标基础虚拟形象数据是从基础虚拟形象数据中选取得到的。
在一个实施例中,如图17所示,目标基础虚拟形象数据包括多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据,包括:
步骤1702,根据表情变化数据计算得到各个目标子基本虚拟形象模型和各个目标子混合虚拟形象模型对应的组合系数。
其中,子基本虚拟形象模型也是按照一定的规则划分为多个分割表情区域的虚拟形象模型,分割表情区域是发生运动变化的表情区域,将根据各个子基本虚拟形象模型中各个分割表情区域发生表情运动变化的变化值或者表情权重系数作为各个子基本虚拟形象模型的组合系数,组合系数又可称为表情变化系数。同样地,子混合虚拟形象模型是对应的子基本虚拟形象模型进行非线性组合计算得到的,因此各个子混合虚拟形象模型都有对应的组合系数。具体地,根据三维脸部模型中获取到的表情变化数据确定目标分割表情区域,根据目标表情分割区域中发生表情运动变化的变化值或者发生表情运动变化对应的表情权重系数确定各个目标子基本虚拟形象模型对应的组合系数和各个目标子混合虚拟形象模型对应的组合系数。
步骤1704,根据组合系数将多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型进行线性组合生成加载表情数据。
具体地,在根据表情变化数据计算得到各个目标子基本虚拟形象模型和各个目标子混合虚拟形象模型对应的组合系数后,根据对应的组合系数将多个目 标子基本虚拟形象模型和多个子混合虚拟形象模型进行线性组合生成与表情变化数据对应的加载表情数据,其中加载表情数据又可称为与拍摄采集装置采集到的行为主体的当前表情数据相同的表情数据。具体地,组合系数将多个目标子基本虚拟形象模型和多个子混合虚拟形象模型可根据公式2进行线性组合生成与表情变化数据对应的加载表情数据,其中公式2为:
E_user = A_1E_1 + A_2E_2 + … + A_nE_n + A_1B_1 + A_2B_2 + … + A_mB_m　　公式2
其中，E_user是拍摄采集装置采集到的行为主体当前的表情数据，即加载表情数据，E_i是子基本虚拟形象模型，B_j是子混合虚拟形象模型。
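公式2 is a plain weighted sum over the target sub-basic and sub-hybrid models. A minimal sketch, assuming every model is stored as an array of the same shape:

```python
import numpy as np


def loaded_expression(basic_coeffs, basic_models, hybrid_coeffs, hybrid_models):
    """Sketch of 公式2: E_user = sum(A_i * E_i) + sum(A_j * B_j)."""
    e_user = np.zeros_like(np.asarray(basic_models[0], dtype=float))
    for a, e in zip(basic_coeffs, basic_models):
        e_user += a * np.asarray(e, dtype=float)      # target sub-basic contributions
    for a, b in zip(hybrid_coeffs, hybrid_models):
        e_user += a * np.asarray(b, dtype=float)      # target sub-hybrid contributions
    return e_user
```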
在一个实施例中,如图18所示,将加载表情数据加载到目标分割表情区域,包括:
步骤1802,获取当前顶点位置集合,当前顶点位置集合由生成加载表情数据的各个目标子基础虚拟形象模型对应的当前顶点位置组成。
其中,由于各个目标子基础虚拟形象模型与虚拟动画形象的虚拟形象模型的拓扑结构完全一致,且处于同一个尺度空间下大小一样。所谓拓扑结构是组成子基础虚拟形象模型的顶点之间的位置关系。其中,不同的目标子基础虚拟形象模型的网格顶点数一样,但是不同的目标子基础虚拟形象模型的顶点位置可能不同。例如,微笑对应的基础虚拟形象模型的顶点位置和大笑对应的基础虚拟形象模型的顶点位置是不同的。这里的目标子基础虚拟形象模型是由基础虚拟形象数据中选取符合要求的子基础虚拟形象模型。
具体地,加载表情数据是根据表情变化数据组合目标基础虚拟形象数据生成的,而目标基础虚拟形象数据是从基础虚拟形象数据中选取得到的,基础虚拟形象数据是虚拟形象模型中各个分割表情区域对应的子基本虚拟形象模型集和子混合虚拟形象模型集组成的,因此目标子基础虚拟形象模型包括但不限于子基本虚拟形象模型和子混合虚拟形象模型。进一步地,从生成加载表情数据的各个目标子基础虚拟形象对应的顶点位置中随机选取一个顶点位置作为当前顶点位置,将各个目标子基础虚拟形象中的当前顶点位置组成当前顶点位置集合。
步骤1804,根据当前顶点位置集合确定加载表情数据对应的网格的当前目 标顶点位置。
步骤1806,获取下一个顶点位置集合,根据下一个顶点位置集合确定加载表情数据对应的网格的下一个目标顶点位置,直至确定加载表情数据对应的网格的各个目标顶点位置。
其中,由于当前顶点位置集合是从生成加载表情数据的各个目标子基础虚拟形象对应的顶点位置中随机选取一个顶点位置作为当前顶点位置,将各个目标子基础虚拟形象中的当前顶点位置组成的,且加载表情数据是虚拟形象模型中某一分割表情区域产生表情运动变化产生的,因此根据获取到的当前顶点位置集合计算得到加载表情数据对应的网格的当前目标顶点位置。其中,根据当前顶点位置集合计算得到加载表情数据对应的网格的当前目标顶点位置的如公式3所示:
V_i = A_1V_E1 + A_2V_E2 + … + A_nV_En + A_1V_B1 + A_2V_B2 + … + A_mV_Bm　　公式3
其中，V_i表示第i个顶点，即当前目标顶点位置，V_E1表示在目标子基础虚拟形象模型E_1中相对应的顶点。
进一步地,在根据当前顶点位置集合得到加载表情数据对应的网格的当前目标顶点位置后,需再从生成加载表情数据的各个目标子基础虚拟形象对应的顶点位置中随机选取下一个顶点位置,将下一个顶点位置作为当前顶点位置,将由生成加载表情数据的各个目标子基础虚拟形象模型对应的当前顶点位置组成下一个顶点位置集合,根据下一个顶点位置集合确定加载表情数据对应的网格的下一个目标顶点位置,直至确定加载表情数据对应的网格的各个目标顶点位置。
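Because all target sub-base avatar models share the same topology, 公式3 can also be evaluated for every vertex at once instead of one target vertex position at a time. A small vectorized sketch, with the array shapes as stated assumptions:

```python
import numpy as np


def blended_vertex_positions(basic_coeffs, basic_vertices, hybrid_coeffs, hybrid_vertices):
    """Sketch of 公式3 for all vertices of the loaded-expression mesh.

    basic_vertices / hybrid_vertices -- arrays of shape (n_models, n_vertices, 3),
    one block of vertex positions per target sub-base / sub-hybrid avatar model.
    Vertex k of every model corresponds to vertex k of the output mesh.
    """
    a = np.asarray(basic_coeffs, dtype=float)[:, None, None]
    b = np.asarray(hybrid_coeffs, dtype=float)[:, None, None]
    return (a * np.asarray(basic_vertices, dtype=float)).sum(axis=0) + \
           (b * np.asarray(hybrid_vertices, dtype=float)).sum(axis=0)
```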
在一个实施例中,如图19所示,当表情变化数据对应多个目标表情更新时,获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据,包括:
步骤1902,获取各个目标表情对应的预设权重系数。
步骤1904,根据各个目标表情对应的预设权重系数的大小关系,确定各个目标表情对应的加载表情数据的生成顺序。
其中,为了减少虚拟形象的表情数据计算量,因此预先给各个表情设定不同 的权重系数,所谓权重系数针对某一指标而言,该指标在整体评价中的相对重要程度。因此当表情变化数据对应多个目标表情更新时,根据各个目标表情获取对应的预先设定的权重系数。由于不同的表情对应不同的权重系数,因此需根据各个目标表情对应的预先设定的权重系数的大小关系确定各个目标表情对应的加载表情数据的生成顺序,即加载顺序。
如图20所示,图20示出一个实施例中加载表情数据的原理示意图。具体地,若表情变化数据对应多个目标表情更新时,根据各个目标表情从虚拟形象模型对应的各个分割表情区域中确定对应的目标分割表情区域,获取与各个目标分割表情区域匹配的目标基础虚拟形象数据。进一步地,获取各个目标基础虚拟形象数据对应的预先设定权重系数,根据各个目标基础虚拟形象数据对应的预先设定的权重系数的大小关系,确定各个目标基础虚拟形象数据对应的加载表情的生成顺序,即权重系数越大先加载对应的目标基础虚拟形象数。比如,当表情变化数据对应多个目标表情更新为微笑、睁眼时,由于目标表情为睁眼的权重系数比目标表情为微笑的权重系数大时,在生成加载数据时应先加载睁眼对应的加载数据,再生成微笑对应的加载数据。
步骤1906,将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情,包括:按照各个目标表情对应的加载表情数据的生成顺序,依次将各个加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
如图20所示,具体地,在根据各个目标表情对应的预设权重系数的大小关系,确定各个目标表情对应的加载表情数据的生成顺序后,按照各个目标表情对应的加载表情数据的生成顺序依次将各个加载表情数据加载到与加载表情数据对应的目标分割表情区域中,使得虚拟形象模型中的虚拟动画形象能够做出与三维脸部模型当前的表情对应的表情变化,即虚拟形象模型中的虚拟动画形象与拍摄采集装置采集到的图像中的行为主体的表情相同。例如,表情变化数据对应多个目标表情更新为微笑、睁眼时,由于目标表情为睁眼的权重系数比目标表情为微笑的权重系数大时,因此先将睁眼对应的加载表情数据加载到虚拟形象模型中的分割表情区域为眼部中,再将微笑对应的加载表情数据加载到虚拟形 象模型中的分割表情区域为嘴部中。
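The weight-driven ordering amounts to sorting the target expressions by their preset weight coefficients before generating and loading their expression data. A minimal sketch with invented weights:

```python
def load_order(target_expressions, preset_weights):
    """Sort target expressions by preset weight coefficient, largest first."""
    return sorted(target_expressions, key=lambda e: preset_weights[e], reverse=True)


# Example weights are invented: open_eyes outweighs smile, so it is generated first.
print(load_order(["smile", "open_eyes"], {"open_eyes": 0.8, "smile": 0.5}))
# -> ['open_eyes', 'smile']
```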
进一步地,在依次将各个加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情时,由于确保能够保证虚拟动画形象的光滑度时,采用UV分割保证虚拟形象模型在加载表情数据时不会因为UV的分割等问题导致虚拟形象模型的各个分割表情区域之间有裂缝。所谓UV分割就是确定贴图的纹理坐标,可通过UV分割确定虚拟形象模型的贴图具体如何贴。其中,UV分割线分布在虚拟形象模型中不可见的部分,如后脑勺、头顶等。
在一个实施例中,如图21所示,从当前表情数据获取表情变化数据,包括:
步骤2102,对当前表情数据进行特征点提取,得到对应的表情特征点。
步骤2104,将表情特征点与预设表情数据集合进行匹配以确定当前更新表情,获取与当前更新表情对应的表情变化数据。
如上所示,由于三维脸部模型有通过拍摄采集装置采集到的图像中的行为主体的脸部表情数据,即当前表情数据。由于行为主体的脸部有些表情并没有发生任何表情变化,因此需从行为主体当前表情数据中获取发生表情变化的表情变化数据。具体地,对三维脸部模型中的当前表情数据进行脸部特征点提取,得到对应的表情特征点,根据提取到的表情特征点从预先设定的表情数据集合中进行匹配得到当前更新表情。进一步地,根据当前更新表情获取对应的表情变化数据。其中,表情数据集合又可称为表情库。例如,当前表情数据为行为主体的五官表情数据,其中,行为主体的表情更新为微笑。具体地,先对五官表情进行特征点提取,得到五官表情对应的表情特征点。进一步地,将五官表情对应的特征点与表情库中所有表情进行比较,得到当前更新表情为微笑,因此获取与微笑对应的表情变化数据。
应当说明,本实施例是针对前一帧图像中行为主体的脸部表情为面无表情,即五官都为无任何表情变化的部位,而下一帧图像中的行为主体的脸部产生表情变化才适用于本实施例。
在一个实施例中,如图22所示,从当前表情数据获取表情变化数据,包括:
步骤2202,获取历史表情数据,对历史表情数据进行特征点提取,得到对应的历史表情特征点。
步骤2204,对当前表情数据进行特征点提取,得到对应的当前表情特征点。
步骤2206,将历史表情特征点与当前表情特征点进行比较,根据比较结果得到对应的表情变化数据。
其中,由于前一帧图像中行为主体的脸部表情产生表情变化,而下一帧图像中的行为主体的脸部中前一帧图像的部分表情一直保持不变,其他表情可能产生变化,因此获取前一帧图像中的行为主体的历史表情数据,对历史表情数据进行特征点提取,得到对应的历史表情特征点。进一步地,对下一帧图像中的行为主体的当前表情数据进行特征点提取,得到对应的当前表情特征点。再将历史表情特征点与当前表情特征点进行比较从而得到对应的表情变化数据。例如,若历史表情数据为大笑,当前表情数据为大笑和睁眼,分别对历史表情数据和当前表情数据进行特征点提取,得到对应的历史表情特征点和当前表情特征点。将历史表情特征点和当前表情特征点进行比较,得到当前表情数据中的大笑对应的特征点一直未发生改变,因此根据比较结果确定下一帧图像中的行为主体的表情变化数据为睁眼。
应当说明,本实施例是针对前一帧图像中行为主体的脸部表情产生表情变化,而下一帧图像中的行为主体的脸部中上一帧图像的部分表情一直保持不变,其他表情产生变化才适用于本实施例。如,前一帧图像中行为主体的表情变化为大笑,而下一帧图像中的行为主体的嘴部未发生任何改变,行为主体的脸部一直在保持大笑。
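Comparing historical and current feature points reduces to measuring how far each facial feature point moved between the two frames. A minimal sketch, where the point arrays and the tolerance are assumptions:

```python
import numpy as np


def changed_feature_points(history_points, current_points, tolerance=1e-3):
    """Return indices of feature points that moved between the previous and current frame.

    history_points / current_points -- (n_points, 2) arrays of feature point coordinates.
    """
    displacement = np.linalg.norm(
        np.asarray(current_points, dtype=float) - np.asarray(history_points, dtype=float),
        axis=1,
    )
    return np.flatnonzero(displacement > tolerance)
```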
在一个实施例中,该表情动画数据处理方法还包括:根据表情变化数据从预设背景图像中获取对应的第一背景图像,将第一背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中;或获取语音数据,根据语音数据从预设背景图像中获取对应的第二背景图像,将第二背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
本实施例中,为了进一步对虚拟动画形象所处的虚拟环境进行渲染,在进行渲染时虚拟动画形象所处的虚拟环境会根据不同的背景渲染出不同的质感,具有较强的真实感。其中,可通过两种方式实现对虚拟环境的渲染。其中一种方式为通过表情变化数据控制方式,具体地,在获取到表情变化数据后,根据表情变 化数据中特殊表情数据从云服务器预先下发的背景图像中获取对应的第一背景图像,将获取到的第一背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。如,表情变化数据为做鬼脸,根据表情变化数据从云服务器预先下发的背景图像中匹配对应的第一背景图像为星星闪动背景图像,从而给虚拟动画形象所处的虚拟环境进行渲染。
另一种方式为通过语音数据控制方式,具体地,语音数据是通过终端语音采集装置采集到的语音数据,可根据语音数据中特殊词语或者特殊话语触发从云服务器预先下发的背景图像中获取与语音数据匹配的第二背景图像,将获取到的第二背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。例如,获取到的语音数据为新年快乐,根据新年快乐从云服务器预先下发的背景图像中匹配对应的第二背景图像为新年主题,虚拟环境会有鞭炮对应的动画。如图23所示,图23示出一个实施例中虚拟动画形象所处虚拟环境下发背景图像的示意图。当通过以上两种方式从云服务器下发的背景图像中选取目标背景图像,将目标背景图像加载到虚拟动画形象所处的虚拟环境中。
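Both background-selection branches can be pictured as simple trigger tables: one keyed by special expressions, one keyed by phrases in the voice data. Every entry below is invented for illustration; in the described method the background images are issued by the cloud server in advance.

```python
EXPRESSION_BACKGROUNDS = {"funny_face": "stars_twinkle.png"}
VOICE_BACKGROUNDS = {"新年快乐": "new_year_theme.png"}


def pick_background(expression_change=None, voice_text=None):
    """Return a background image for an expression trigger or a voice trigger, if any."""
    if expression_change in EXPRESSION_BACKGROUNDS:
        return EXPRESSION_BACKGROUNDS[expression_change]
    if voice_text:
        for phrase, background in VOICE_BACKGROUNDS.items():
            if phrase in voice_text:
                return background
    return None  # keep the current background


print(pick_background(voice_text="祝大家新年快乐"))  # -> 'new_year_theme.png'
```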
在一个具体的实施例中,提供了一种表情动画数据处理方法。具体包括如下步骤:
步骤2402,获取终端的位置,以终端的位置为原点建立虚拟空间。
步骤2404,根据终端在真实世界的相对位置确定虚拟动画形象在虚拟空间对应的目标位置并在目标位置生成初始虚拟动画形象。
步骤2406,通过终端的摄像头采集到行为主体的脸部后,对图像中的行为主体的脸部进行特征点提取,根据提取的脸部特征点建立三维人脸模型。
步骤2408,根据三维人脸模型中的行为主体的表情数据中获取当前表情数据。
步骤2410,当上一帧图像中的行为主体的脸部表情为面无表情,而下一帧图像中的行为主体的脸部表情发生了表情运动变化时,则进入步骤2410a;当上一帧图像中的行为主体的脸部中的某部位表情为表情运动变化,而下一帧图像中的行为主体的某部位表情一直保持同样的表情运动变化,其他部位表情发生新的表情运动变化时,则进入步骤2410b。
步骤2410a,先对表情变化数据进行特征点,将提取到的特征点与表情库中的表情数据进行匹配确定当前更新表情,再获取当前更新表情对应的表情变化数据。
步骤2410b,先获取历史表情数据,再对历史表情数据和当前表情数据进行特征点提取得到对应的历史表情特征点和当前表情特征点,将历史表情特征点和当前表情特征点进行比较得到表情变化数据。
步骤2412,先根据表情变化数据确定对应的当前运动部位,再根据当前运动部位从按照一定的规则划分多个分割区域的虚拟形象模型中获取与当前运动部位匹配的目标表情分割表情区域。
步骤2414,获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据。
步骤2414a,先根据表情变化数据计算得到目标基础虚拟形象数据中的各个目标普通表情基和各个目标混合表情基对应的组合系数,再根据组合系数将各个目标普通表情基和各个目标混合表情基进行线性组合生成加载表情数据。
步骤2414b,若表情变化数据有多个目标表情,先获取各个目标表情预先设定的权重系数,根据各个目标表情预先设定的权重系数的大小关系,确定各个目标表情对应的加载表情的生成顺序。
步骤2416,将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
步骤2416a,先根据加载表情数据对应的各个目标普通表情基和各个目标混合表情基对应的顶点位置组成当前顶点位置集合,再根据当前顶点位置集合确定加载表情数据对应的网格的当前目标顶点位置,再获取下一个顶点位置集合,根据下一个顶点位置集合确定加载表情数据对应的网格的下一个目标顶点位置,直到确定加载表情数据对应的网格的各个目标顶点位置。
步骤2416b,若表情变化数据为多个目标表情时,在确定各个目标表情对应的加载表情数据的生成顺序后,按照生成顺序依次将各个加载表情数据加载到目标分割表情区域,使得虚拟形象模型对应的虚拟动画形象的表情进行更新。
步骤2418,根据表情变化数据确定虚拟动画形象对应的强表情部分。
步骤2420,获取与强表情部分相关联的弱表情部分。
步骤2422,根据表情变化数据计算得到强表情部分对应的运动状态数据;根据强表情部分对应的运动状态数据确定弱表情部分对应的骨骼控制数据。
步骤2422a,若强表情部分为虚拟动画形象的头部时,通过头部转动的俯仰角速度以及对应的计算所需的补偿值、云端控制最大的俯仰角阈值计算得到耳朵骨骼的欧拉角中的章动角,通过头部转动的偏航角以及对应的计算所需的补偿值、云端控制最大的偏航角阈值计算得到耳朵骨骼的欧拉角中的旋进角,由章动角和旋进角确定耳朵骨骼的欧拉角。
步骤2422b,若强表情部分为虚拟动画形象的嘴部时,通过云端控制的最大俯仰角阈值和表情变化数据中的张嘴对应的表情系数计算得到下牙与舌头骨骼的欧拉角。
步骤2422c,若强表情部分为虚拟动画形象的眼部时,通过云端控制的最大俯仰角阈值、旋转的方向值和表情变化数据中眼部表情变化系数计算得到眼球骨骼的欧拉角中的章动角,通过云端控制的最大偏航角阈值、旋转的方向值和表情变化数据中眼部表情变化系数计算得到眼部骨骼的欧拉角中的旋进角,由章动角和旋进角确定眼球骨骼的欧拉角。
步骤2424,根据骨骼控制数据控制弱表情部分对应的骨骼运动,以更新虚拟形象模型对应的虚拟动画形象的表情。
步骤2426,获取语音数据,根据语音数据确定对应的当前弱表情部分。
步骤2428,获取与当前弱表情部分对应的骨骼动画,播放骨骼动画,以更新虚拟形象模型对应的虚拟动画形象的表情。
步骤2430,从虚拟形象模型对应的各个分割表情区域中获取当前分割表情区域。
步骤2432,获取当前分割表情区域对应的普通表情基。
步骤2434,对普通表情基中的各个普通表情基对应的虚拟形象模型进行多次不同的非线性组合生成对应的多个混合表情基对应的虚拟形象模型,组成当前分割表情区域对应的混合表情基。
步骤2436,从各个分割表情区域中获取下一个分割表情区域作为当前分割 表情区域,返回获取当前分割表情区域对应的普通表情基的步骤,直到得到各个分割表情区域对应的混合表情基。
步骤2438,将各个分割表情区域对应的普通表情基和混合表情基组成对应的表情基,目标表情数据是从表情基中表情数据中选取得到的。
步骤2440,根据表情变化数据从预设背景图像中获取对应的第一背景图像,将第一背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
步骤2442,获取语音数据,根据语音数据从预设背景图像中获取对应的第二背景图像,将第二背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
上述表情动画数据处理方法的所有流程示意图。应该理解的是,虽然流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,流程图中至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
如图24所示,在一个实施例中,提供了一种表情动画数据处理装置2500,该装置包括:
当前表情数据获取模块2502,用于确定人脸在图像中的位置,获取虚拟形象模型,根据人脸在图像中的位置和三维脸部模型获取当前表情数据。
表情更新数据获取模块2504,用于从当前表情数据获取表情变化数据。
目标分割表情区域检测模块2506,用于确定与表情变化数据匹配的目标分割表情区域,目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的。
目标基础虚拟形象数据获取模块2508,用于获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据。
虚拟动画形象更新模块2510,用于将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
如图25所述,在一个实施例中,该表情动画数据处理装置2500还包括:第一运动部位检测模块2602,第二运动部位获取模块2604,运动状态数据计算模块2606,骨骼控制数据检测模块2608,骨骼运动控制模块2610,其中:
第一运动部位检测模块2602,用于根据表情变化数据确定虚拟动画形象对应的第一运动部位。
第二运动部位获取模块2604,用于获取与第一运动部位相关联的第二运动部位。
运动状态数据计算模块2606,用于根据表情变化数据计算得到第一运动部位对应的运动状态数据。
骨骼控制数据检测模块2608,用于根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据。
骨骼运动控制模块2610,用于根据骨骼控制数据控制第二运动部位对应的骨骼运动,以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,若第一运动部位为第一预设部位,运动状态数据计算模块还用于根据表情变化数据计算得到第一预设部位对应的偏航角速度和俯仰角速度;根据俯仰角速度、预设最大俯仰角阈值和第一预设补偿值确定第二运动部位对应的第一骨骼控制数据;根据偏航角速度、预设最大偏航角阈值和与第二预设补偿值确定第二运动部位对应的第二骨骼控制数据;骨骼控制数据检测模块,还用于根据第一骨骼控制数据和第二骨骼控制数据确定第二运动部位对应的骨骼控制数据。
在一个实施例中,若第一运动部位为第二预设部位,运动状态数据计算模块还用于根据表情变化数据计算得到第二预设部位对应的表情变化系数;骨骼控制数据检测模块,还用于根据表情变化系数和预设最大俯仰角阈值确定第二运动部位对应的骨骼控制数据。
在一个实施例中,若第一运动部位为第三预设部位,运动状态数据计算模块还用于根据表情变化数据计算得到第三预设部位对应的表情变化系数;根据表 情变化数据计算得到第三预设部位对应的俯仰角方向值和偏航角方向值;根据表情变化系数、俯仰角方向值和预设最大俯仰角阈值确定第三预设部位对应的第一骨骼控制数据;根据表情变化系数、偏航角方向值和预设最大偏航角阈值确定第三预设部位对应的第二骨骼控制数据;骨骼控制数据检测模块,还用于根据第一骨骼控制数据和第二骨骼控制数据确定第三预设部位对应的骨骼控制数据。
如图26所示,在一个实施例中,该表情动画数据处理装置2500还包括:参照点获取模块2702,相对位置获取模块2704,初始虚拟动画形象生成模块2706,其中:
参照点获取模块2702,用于获取参照点,根据参照点确定虚拟空间坐标原点,根据虚拟空间坐标原点建立虚拟空间。
相对位置获取模块2704,用于获取行为主体相对于参照点的相对位置。
初始虚拟动画形象生成模块2706,用于根据相对位置确定行为主体对应的虚拟动画形象在虚拟空间的目标位置,根据目标位置在虚拟空间生成行为主体对应的初始虚拟动画形象。
如图27所示,在一个实施例中,该表情动画数据处理装置还包括:语音数据获取模块2802,骨骼动画获取模块2804,其中:
语音数据获取模块2802,用于获取语音数据,根据语音数据确定对应的当前第二运动部位。
骨骼动画获取模块2804,用于获取与当前第二运动部位对应的骨骼动画,播放骨骼动画,以更新虚拟形象模型对应的虚拟动画形象的表情。
如图28所示,在一个实施例中,目标分割表情区域检测模块2506包括:当前运动部位检测单元2506a,分割表情区域获取单元2506b,目标分割表情区域匹配单元2506c,其中:
当前运动部位检测单元2506a,用于根据表情变化数据确定虚拟动画形象对应的当前运动部位。
分割表情区域获取单元2506b,用于获取虚拟形象模型对应的预设多个分割表情区域。
目标分割表情区域匹配单元2506c,用于从预设多个分割表情区域中获取与 当前运动部位匹配的目标分割表情区域。
如图29所示,在一个实施例中,该表情动画数据处理装置2500还包括:当前分割表情区域获取模块2902,子基本虚拟形象模型集获取模块2904,子混合虚拟形象模型集获取模块2906,基础虚拟形象数据生成模块2908,其中:
当前分割表情区域获取模块2902,用于从虚拟形象模型对应的各个分割表情区域中获取当前分割表情区域;
子基本虚拟形象模型集获取模块2904,用于获取当前分割表情区域对应的子基本虚拟形象模型集;
子混合虚拟形象模型集获取模块2906,用于对子基本虚拟形象模型集中的各个子基本虚拟形象模型进行多次不同的非线性组合生成对应的多个子混合虚拟形象模型,组成当前分割表情区域对应的子混合虚拟形象模型集;
子基本虚拟形象模型集获取模块2904还用于从各个分割表情区域中获取下一个分割表情区域作为当前分割表情区域,返回获取当前分割表情区域对应的子基本虚拟形象模型集的步骤,直到得到各个分割表情区域对应的子混合虚拟形象模型集;
基础虚拟形象数据生成模块2908,用于将各个分割表情区域对应的子基本虚拟形象模型集和子混合虚拟形象模型集组成基础虚拟形象数据,目标基础虚拟形象数据是从基础虚拟形象数据中选取得到的。
在一个实施例中,目标基础虚拟形象数据获取模块2508还用于根据表情变化数据计算得到各个目标子基本虚拟形象模型和各个目标子混合虚拟形象模型对应的组合系数;根据组合系数将多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型进行线性组合生成加载表情数据。
如图30所示,在一个实施例中,虚拟动画形象更新模块2510还包括:顶点位置集合获取单元2510a和目标顶点位置获取单元2510b,其中:
顶点位置集合获取单元2510a,用于获取当前顶点位置集合,当前顶点位置集合由生成加载表情数据的各个目标子基础虚拟形象模型对应的当前顶点位置组成。
目标顶点位置获取单元2510b,根据当前顶点位置集合确定加载表情数据对 应的网格的当前目标顶点位置;获取下一个顶点位置集合,根据下一个顶点位置集合确定加载表情数据对应的网格的下一个目标顶点位置,直至确定加载表情数据对应的网格的各个目标顶点位置。
如图31所示,在一个实施例中,目标基础虚拟形象数据获取模块2508还包括:
预设权重系数获取单元2508a,用于获取各个目标表情对应的预设权重系数。
生成顺序确定单元2508b,用于根据各个目标表情对应的预设权重系数的大小关系,确定各个目标表情对应的加载表情数据的生成顺序;
在本实施例中,虚拟动画形象更新模块2510还用于按照各个目标表情对应的加载表情数据的生成顺序,依次将各个加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,表情更新数据获取模块2504还用于对当前表情数据进行特征点提取,得到对应的表情特征点;将表情特征点与预设表情数据集合进行匹配以确定当前更新表情,获取与当前更新表情对应的表情变化数据。
在一个实施例中,表情更新数据获取模块2504还用于获取历史表情数据,对历史表情数据进行特征点提取,得到对应的历史表情特征点;对当前表情数据进行特征点提取,得到对应的当前表情特征点;将历史表情特征点与当前表情特征点进行比较,根据比较结果得到对应的表情变化数据。
在一个实施例中,该表情动画数据处理装置还用于根据表情变化数据从预设背景图像中获取对应的第一背景图像,将第一背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中;或获取语音数据,根据语音数据从预设背景图像中获取对应的第二背景图像,将第二背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
图32示出了一个实施例中计算机设备的内部结构图。该计算机设备具体可以是图1中的终端110。如图32所示,该计算机设备包括该计算机设备包括通过系统总线连接的处理器、存储器、网络接口、输入装置和显示屏。其中,存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机可读指令,该计算机可读指令被处理器执行时, 可使得处理器实现表情动画数据处理方法。该内存储器中也可储存有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行表情动画数据处理方法。计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏,计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图32中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,本申请提供的表情动画数据处理装置可以实现为一种计算机可读指令的形式,计算机可读指令可在如图32所示的计算机设备上运行。计算机设备的存储器中可存储组成该表情动画数据处理装置的各个程序模块,比如,图24所示的当前表情数据获取模块、表情更新数据获取模块、目标分割表情区域检测模块、目标基础虚拟形象数据获取模块和虚拟动画形象更新模块。各个程序模块构成的计算机可读指令使得处理器执行本说明书中描述的本申请各个实施例的表情动画数据处理方法中的步骤。
例如,图32所示的计算机设备可以通过如图24所示的表情动画数据处理装置中的当前表情数据获取模块执行根据三维脸部模型获取当前表情数据的步骤。计算机设备可通过表情更新数据获取模块执行从当前表情数据获取表情变化数据的步骤。
在一个实施例中,提出了一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行以下步骤:确定人脸在图像中的位置,获取虚拟形象模型;根据人脸在图像中的位置和三维脸部模型获取当前表情数据;从当前表情数据获取表情变化数据;确定与表情变化数据匹配的目标分割表情区域,目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的;获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据;将加载表情数据加载到目标分割表情区域以更新虚拟 形象模型对应的虚拟动画形象的表情。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:根据表情变化数据确定虚拟动画形象对应的第一运动部位;获取与第一运动部位相关联的第二运动部位;根据表情变化数据计算得到第一运动部位对应的运动状态数据;根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据;根据骨骼控制数据控制第二运动部位对应的骨骼运动,以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,若第一运动部位为第一预设部位,根据表情变化数据计算得到第一运动部位对应的运动状态数据,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:根据表情变化数据计算得到第一预设部位对应的偏航角速度和俯仰角速度;根据俯仰角速度、预设最大俯仰角阈值和第一预设补偿值确定第二运动部位对应的第一骨骼控制数据;根据偏航角速度、预设最大偏航角阈值和与第二预设补偿值确定第二运动部位对应的第二骨骼控制数据;根据第一骨骼控制数据和第二骨骼控制数据确定第二运动部位对应的骨骼控制数据。
在一个实施例中,若第一运动部位为第二预设部位,根据表情变化数据计算得到第一运动部位对应的运动状态数据,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:根据表情变化数据计算得到第二预设部位对应的表情变化系数;根据表情变化系数和预设最大俯仰角阈值确定第二运动部位对应的骨骼控制数据。
在一个实施例中,若第一运动部位为第三预设部位,根据表情变化数据计算得到第一运动部位对应的运动状态数据,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:根据表情变化数据计算得到第三预设部位对应的表情变化系数;根据表情变化数据计算得到第三预设部位对应的俯仰角方向值和偏航角方向值;根据表情变化系数、俯仰角方向值和预设最大俯仰角阈值确定第三预设部位对应的第一骨骼控制数据;根据表情变化系数、偏航角方向值和预设最大偏航角阈值确定第三预设部位对应的第二骨骼控制数据;根据第一骨骼控制数据和第二骨骼控制数据确定第三预设部位对应的骨骼 控制数据。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:获取参照点,根据参照点确定虚拟空间坐标原点,根据虚拟空间坐标原点建立虚拟空间;获取行为主体相对于参照点的相对位置;根据相对位置确定行为主体对应的虚拟动画形象在虚拟空间的目标位置,根据目标位置在虚拟空间生成行为主体对应的初始虚拟动画形象。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:获取语音数据,根据语音数据确定对应的当前第二运动部位;获取与当前第二运动部位对应的骨骼动画,播放骨骼动画,以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,确定与表情变化数据匹配的目标分割表情区域,包括:根据表情变化数据确定虚拟动画形象对应的当前运动部位;获取虚拟形象模型对应的预设多个分割表情区域;从预设多个分割表情区域中获取与当前运动部位匹配的目标分割表情区域。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:从虚拟形象模型对应的各个分割表情区域中获取当前分割表情区域;获取当前分割表情区域对应的子基本虚拟形象模型集;对子基本虚拟形象模型集中的各个子基本虚拟形象模型进行多次不同的非线性组合生成对应的多个子混合虚拟形象模型,组成当前分割表情区域对应的子混合虚拟形象模型集;从各个分割表情区域中获取下一个分割表情区域作为当前分割表情区域,返回获取当前分割表情区域对应的子基本虚拟形象模型集的步骤,直到得到各个分割表情区域对应的子混合虚拟形象模型集;将各个分割表情区域对应的子基本虚拟形象模型集和子混合虚拟形象模型集组成基础虚拟形象数据,目标基础虚拟形象数据是从基础虚拟形象数据中选取得到的。
在一个实施例中,目标基础虚拟形象数据包括多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据,包括:根据表情变化数据计算得到各个目标子基本虚拟形象模型和各个目标子混合虚拟形象模型对应的组合系数;根据组合系数将多 个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型进行线性组合生成加载表情数据。
在一个实施例中,将加载表情数据加载到目标分割表情区域,包括:获取当前顶点位置集合,当前顶点位置集合由生成加载表情数据的各个目标子基础虚拟形象模型对应的当前顶点位置组成;根据当前顶点位置集合确定加载表情数据对应的网格的当前目标顶点位置;获取下一个顶点位置集合,根据下一个顶点位置集合确定加载表情数据对应的网格的下一个目标顶点位置,直至确定加载表情数据对应的网格的各个目标顶点位置。
在一个实施例中,当表情变化数据对应多个目标表情更新时,获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据,包括:获取各个目标表情对应的预设权重系数;根据各个目标表情对应的预设权重系数的大小关系,确定各个目标表情对应的加载表情数据的生成顺序;将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情,包括:按照各个目标表情对应的加载表情数据的生成顺序,依次将各个加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,从当前表情数据获取表情变化数据,包括:对当前表情数据进行特征点提取,得到对应的表情特征点;将表情特征点与预设表情数据集合进行匹配以确定当前更新表情,获取与当前更新表情对应的表情变化数据。
在一个实施例中,从当前表情数据获取表情变化数据,包括:获取历史表情数据,对历史表情数据进行特征点提取,得到对应的历史表情特征点;对当前表情数据进行特征点提取,得到对应的当前表情特征点;将历史表情特征点与当前表情特征点进行比较,根据比较结果得到对应的表情变化数据。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:根据表情变化数据从预设背景图像中获取对应的第一背景图像,将第一背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中;或获取语音数据,根据语音数据从预设背景图像中获取对应的第二背景图像,将第二背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
在一个实施例中,获取虚拟形象模型,包括:对图像中的人脸进行人脸特征点提取,根据人脸特征点获取对应的虚拟形象模型;或获取虚拟形象模型集合,该虚拟形象模型集合包括多个虚拟形象模型,获取虚拟形象模型选择指令,根据虚拟形象模型选择指令从虚拟形象模型集合中获取目标虚拟形象模型。
在一个实施例中,提出了一种计算机可读存储介质,存储有计算机可读指令,所述计算机可读指令被处理器执行时,使得所述处理器执行以下步骤:确定人脸在图像中的位置,获取虚拟形象模型;根据人脸在图像中的位置和三维脸部模型获取当前表情数据;从当前表情数据获取表情变化数据;确定与表情变化数据匹配的目标分割表情区域,目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的;获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据;将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:根据表情变化数据确定虚拟动画形象对应的第一运动部位;获取与第一运动部位相关联的第二运动部位;根据表情变化数据计算得到第一运动部位对应的运动状态数据;根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据;根据骨骼控制数据控制第二运动部位对应的骨骼运动,以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,若第一运动部位为第一预设部位,根据表情变化数据计算得到第一运动部位对应的运动状态数据,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:根据表情变化数据计算得到第一预设部位对应的偏航角速度和俯仰角速度;根据俯仰角速度、预设最大俯仰角阈值和第一预设补偿值确定第二运动部位对应的第一骨骼控制数据;根据偏航角速度、预设最大偏航角阈值和与第二预设补偿值确定第二运动部位对应的第二骨骼控制数据;根据第一骨骼控制数据和第二骨骼控制数据确定第二运动部位对应的骨骼控制数据。
在一个实施例中,若第一运动部位为第二预设部位,根据表情变化数据计算 得到第一运动部位对应的运动状态数据,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:根据表情变化数据计算得到第二预设部位对应的表情变化系数;根据表情变化系数和预设最大俯仰角阈值确定第二运动部位对应的骨骼控制数据。
在一个实施例中,若第一运动部位为第三预设部位,根据表情变化数据计算得到第一运动部位对应的运动状态数据,根据第一运动部位对应的运动状态数据确定第二运动部位对应的骨骼控制数据,包括:根据表情变化数据计算得到第三预设部位对应的表情变化系数;根据表情变化数据计算得到第三预设部位对应的俯仰角方向值和偏航角方向值;根据表情变化系数、俯仰角方向值和预设最大俯仰角阈值确定第三预设部位对应的第一骨骼控制数据;根据表情变化系数、偏航角方向值和预设最大偏航角阈值确定第三预设部位对应的第二骨骼控制数据;根据第一骨骼控制数据和第二骨骼控制数据确定第三预设部位对应的骨骼控制数据。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:获取参照点,根据参照点确定虚拟空间坐标原点,根据虚拟空间坐标原点建立虚拟空间;获取行为主体相对于参照点的相对位置;根据相对位置确定行为主体对应的虚拟动画形象在虚拟空间的目标位置,根据目标位置在虚拟空间生成行为主体对应的初始虚拟动画形象。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:获取语音数据,根据语音数据确定对应的当前第二运动部位;获取与当前第二运动部位对应的骨骼动画,播放骨骼动画,以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,确定与表情变化数据匹配的目标分割表情区域,包括:根据表情变化数据确定虚拟动画形象对应的当前运动部位;获取虚拟形象模型对应的预设多个分割表情区域;从预设多个分割表情区域中获取与当前运动部位匹配的目标分割表情区域。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:从虚拟形象模型对应的各个分割表情区域中获取当前分割表情区域;获取当前分 割表情区域对应的子基本虚拟形象模型集;对子基本虚拟形象模型集中的各个子基本虚拟形象模型进行多次不同的非线性组合生成对应的多个子混合虚拟形象模型,组成当前分割表情区域对应的子混合虚拟形象模型集;从各个分割表情区域中获取下一个分割表情区域作为当前分割表情区域,返回获取当前分割表情区域对应的子基本虚拟形象模型集的步骤,直到得到各个分割表情区域对应的子混合虚拟形象模型集;将各个分割表情区域对应的子基本虚拟形象模型集和子混合虚拟形象模型集组成基础虚拟形象数据,目标基础虚拟形象数据是从基础虚拟形象数据中选取得到的。
在一个实施例中,目标基础虚拟形象数据包括多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据,包括:根据表情变化数据计算得到各个目标子基本虚拟形象模型和各个目标子混合虚拟形象模型对应的组合系数;根据组合系数将多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型进行线性组合生成加载表情数据。
在一个实施例中,将加载表情数据加载到目标分割表情区域的步骤,包括:获取当前顶点位置集合,当前顶点位置集合由生成加载表情数据的各个目标子基础虚拟形象模型对应的当前顶点位置组成;根据当前顶点位置集合确定加载表情数据对应的网格的当前目标顶点位置;获取下一个顶点位置集合,根据下一个顶点位置集合确定加载表情数据对应的网格的下一个目标顶点位置,直至确定加载表情数据对应的网格的各个目标顶点位置。
在一个实施例中,当表情变化数据对应多个目标表情更新时,获取与目标分割表情区域匹配的目标基础虚拟形象数据,根据表情变化数据组合目标基础虚拟形象数据生成加载表情数据,包括:获取各个目标表情对应的预设权重系数;根据各个目标表情对应的预设权重系数的大小关系,确定各个目标表情对应的加载表情数据的生成顺序;将加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情,包括:按照各个目标表情对应的加载表情数据的生成顺序,依次将各个加载表情数据加载到目标分割表情区域以更新虚拟形象模型对应的虚拟动画形象的表情。
在一个实施例中,从当前表情数据获取表情变化数据,包括:对当前表情数据进行特征点提取,得到对应的表情特征点;将表情特征点与预设表情数据集合进行匹配以确定当前更新表情,获取与当前更新表情对应的表情变化数据。
在一个实施例中,从当前表情数据获取表情变化数据,包括:获取历史表情数据,对历史表情数据进行特征点提取,得到对应的历史表情特征点;对当前表情数据进行特征点提取,得到对应的当前表情特征点;将历史表情特征点与当前表情特征点进行比较,根据比较结果得到对应的表情变化数据。
在一个实施例中,所述计算机可读指令还使得所述处理器执行如下步骤:根据表情变化数据从预设背景图像中获取对应的第一背景图像,将第一背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中;或获取语音数据,根据语音数据从预设背景图像中获取对应的第二背景图像,将第二背景图像加载至虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
在一个实施例中,获取虚拟形象模型,包括:对图像中的人脸进行人脸特征点提取,根据人脸特征点获取对应的虚拟形象模型;或获取虚拟形象模型集合,该虚拟形象模型集合包括多个虚拟形象模型,获取虚拟形象模型选择指令,根据虚拟形象模型选择指令从虚拟形象模型集合中获取目标虚拟形象模型。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM) 等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (54)

  1. 一种表情动画数据处理方法,包括:
    计算机设备确定人脸在图像中的位置,获取虚拟形象模型;
    所述计算机设备根据所述人脸在图像中的位置和三维脸部模型获取当前表情数据;
    所述计算机设备从所述当前表情数据获取表情变化数据;
    所述计算机设备确定与所述表情变化数据匹配的目标分割表情区域,所述目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的;
    所述计算机设备获取与所述目标分割表情区域匹配的目标基础虚拟形象数据,根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据;
    所述计算机设备将所述加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述计算机设备根据所述表情变化数据确定所述虚拟动画形象对应的第一运动部位;
    所述计算机设备获取与所述第一运动部位相关联的第二运动部位;
    所述计算机设备根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据;
    所述计算机设备根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据;
    所述计算机设备根据所述骨骼控制数据控制所述第二运动部位对应的骨骼运动,以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  3. 根据权利要求2所述的方法,其特征在于,若所述第一运动部位为第一预设部位,所述计算机设备根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,所述计算机设备根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    所述计算机设备根据所述表情变化数据计算得到所述第一预设部位对应的 偏航角速度和俯仰角速度;根据所述俯仰角速度、预设最大俯仰角阈值和第一预设补偿值确定所述第二运动部位对应的第一骨骼控制数据;
    所述计算机设备根据所述偏航角速度、预设最大偏航角阈值和与第二预设补偿值确定所述第二运动部位对应的第二骨骼控制数据;
    所述计算机设备根据所述第一骨骼控制数据和所述第二骨骼控制数据确定所述第二运动部位对应的骨骼控制数据。
  4. 根据权利要求2所述的方法,其特征在于,若所述第一运动部位为第二预设部位,所述计算机设备根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,所述计算机设备根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    所述计算机设备根据所述表情变化数据计算得到所述第二预设部位对应的表情变化系数;
    所述计算机设备根据所述表情变化系数和预设最大俯仰角阈值确定所述第二运动部位对应的骨骼控制数据。
  5. 根据权利要求2所述的方法,其特征在于,若所述第一运动部位为第三预设部位,所述计算机设备根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,所述计算机设备根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    所述计算机设备根据所述表情变化数据计算得到所述第三预设部位对应的表情变化系数;
    所述计算机设备根据所述表情变化数据计算得到所述第三预设部位对应的俯仰角方向值和偏航角方向值;
    所述计算机设备根据所述表情变化系数、所述俯仰角方向值和预设最大俯仰角阈值确定所述第三预设部位对应的第一骨骼控制数据;
    所述计算机设备根据所述表情变化系数、所述偏航角方向值和预设最大偏航角阈值确定所述第三预设部位对应的第二骨骼控制数据;
    所述计算机设备根据所述第一骨骼控制数据和所述第二骨骼控制数据确定所述第三预设部位对应的骨骼控制数据。
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述计算机设备获取参照点,根据所述参照点确定虚拟空间坐标原点,根据所述虚拟空间坐标原点建立虚拟空间;
    所述计算机设备获取行为主体相对于所述参照点的相对位置;
    所述计算机设备根据所述相对位置确定所述行为主体对应的虚拟动画形象在所述虚拟空间的目标位置,根据所述目标位置在所述虚拟空间生成所述行为主体对应的初始虚拟动画形象。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述计算机设备获取语音数据,根据所述语音数据确定对应的当前第二运动部位;
    所述计算机设备获取与所述当前第二运动部位对应的骨骼动画,播放所述骨骼动画,以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  8. 根据权利要求1所述的方法,其特征在于,所述计算机设备确定与所述表情变化数据匹配的目标分割表情区域,包括:
    所述计算机设备根据所述表情变化数据确定所述虚拟动画形象对应的当前运动部位;
    所述计算机设备获取所述虚拟形象模型对应的预设多个分割表情区域;
    所述计算机设备从所述预设多个分割表情区域中获取与所述当前运动部位匹配的目标分割表情区域。
  9. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述计算机设备从所述虚拟形象模型对应的各个分割表情区域中获取当前分割表情区域;
    所述计算机设备获取所述当前分割表情区域对应的子基本虚拟形象模型集;
    所述计算机设备对所述子基本虚拟形象模型集中的各个子基本虚拟形象模型进行多次不同的非线性组合生成对应的多个子混合虚拟形象模型,组成所述当前分割表情区域对应的子混合虚拟形象模型集;
    所述计算机设备从所述各个分割表情区域中获取下一个分割表情区域作为所述当前分割表情区域,返回所述获取所述当前分割表情区域对应的子基本虚 拟形象模型集的步骤,直到得到所述各个分割表情区域对应的子混合虚拟形象模型集;
    所述计算机设备将所述各个分割表情区域对应的子基本虚拟形象模型集和子混合虚拟形象模型集组成基础虚拟形象数据,所述目标基础虚拟形象数据是从所述基础虚拟形象数据中选取得到的。
  10. 根据权利要求1所述的方法,其特征在于,所述目标基础虚拟形象数据包括多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型,所述计算机设备根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据,包括:
    所述计算机设备根据所述表情变化数据计算得到各个目标子基本虚拟形象模型和各个目标子混合虚拟形象模型对应的组合系数;
    所述计算机设备根据所述组合系数将所述多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型进行线性组合生成所述加载表情数据。
  11. 根据权利要求1所述的方法,其特征在于,所述计算机设备将所述加载表情数据加载到所述目标分割表情区域的步骤,包括:
    所述计算机设备获取当前顶点位置集合,所述当前顶点位置集合由生成所述加载表情数据的各个目标子基础虚拟形象模型对应的当前顶点位置组成;
    所述计算机设备根据所述当前顶点位置集合确定所述加载表情数据对应的网格的当前目标顶点位置;
    所述计算机设备获取下一个顶点位置集合,根据所述下一个顶点位置集合确定所述加载表情数据对应的网格的下一个目标顶点位置,直至确定所述加载表情数据对应的网格的各个目标顶点位置。
  12. 根据权利要求1所述的方法,其特征在于,当所述表情变化数据对应多个目标表情更新时,所述计算机设备获取与所述目标分割表情区域匹配的目标基础虚拟形象数据,根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据,包括:
    所述计算机设备获取各个所述目标表情对应的预设权重系数;
    所述计算机设备根据各个目标表情对应的预设权重系数的大小关系,确定 各个所述目标表情对应的加载表情数据的生成顺序;
    所述计算机设备将所述加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情,包括:
    所述计算机设备按照各个所述目标表情对应的加载表情数据的生成顺序,依次将各个加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  13. 根据权利要求1所述的方法,其特征在于,所述计算机设备从所述当前表情数据获取表情变化数据,包括:
    所述计算机设备对所述当前表情数据进行特征点提取,得到对应的表情特征点;
    所述计算机设备将所述表情特征点与预设表情数据集合进行匹配以确定当前更新表情,获取与所述当前更新表情对应的表情变化数据。
  14. 根据权利要求1所述的方法,其特征在于,所述计算机设备从所述当前表情数据获取表情变化数据,包括:
    所述计算机设备获取历史表情数据,对所述历史表情数据进行特征点提取,得到对应的历史表情特征点;
    所述计算机设备对所述当前表情数据进行特征点提取,得到对应的当前表情特征点;
    所述计算机设备将所述历史表情特征点与所述当前表情特征点进行比较,根据比较结果得到对应的表情变化数据。
  15. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述计算机设备根据所述表情变化数据从预设背景图像中获取对应的第一背景图像,将所述第一背景图像加载至所述虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
  16. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述计算机设备获取语音数据,根据所述语音数据从预设背景图像中获取对应的第二背景图像,将所述第二背景图像加载至所述虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
  17. 根据权利要求1所述的方法,其特征在于,所述计算机设备获取虚拟形象模型,包括:
    所述计算机设备对所述图像中的人脸进行人脸特征点提取,根据所述人脸特征点获取对应的虚拟形象模型。
  18. 根据权利要求1所述的方法,其特征在于,所述计算机设备获取虚拟形象模型,包括:
    所述计算机设备获取虚拟形象模型集合,所述虚拟形象模型集合包括多个虚拟形象模型;
    所述计算机设备获取虚拟形象模型选择指令,根据所述虚拟形象模型选择指令从所述虚拟形象模型集合中获取目标虚拟形象模型。
  19. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行如下步骤:确定人脸在图像中的位置,获取虚拟形象模型;
    根据所述人脸在图像中的位置和三维脸部模型获取当前表情数据;
    从所述当前表情数据获取表情变化数据;
    确定与所述表情变化数据匹配的目标分割表情区域,所述目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的;
    获取与所述目标分割表情区域匹配的目标基础虚拟形象数据,根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据;
    将所述加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  20. 根据权利要求19所述的计算机设备,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    根据所述表情变化数据确定所述虚拟动画形象对应的第一运动部位;
    获取与所述第一运动部位相关联的第二运动部位;
    根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据;
    根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据;
    根据所述骨骼控制数据控制所述第二运动部位对应的骨骼运动,以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  21. 根据权利要求20所述的计算机设备,其特征在于,若所述第一运动部位为第一预设部位,所述根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    根据所述表情变化数据计算得到所述第一预设部位对应的偏航角速度和俯仰角速度;根据所述俯仰角速度、预设最大俯仰角阈值和第一预设补偿值确定所述第二运动部位对应的第一骨骼控制数据;
    根据所述偏航角速度、预设最大偏航角阈值和与第二预设补偿值确定所述第二运动部位对应的第二骨骼控制数据;
    根据所述第一骨骼控制数据和所述第二骨骼控制数据确定所述第二运动部位对应的骨骼控制数据。
  22. 根据权利要求20所述的计算机设备,其特征在于,若所述第一运动部位为第二预设部位,所述根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    根据所述表情变化数据计算得到所述第二预设部位对应的表情变化系数;
    根据所述表情变化系数和预设最大俯仰角阈值确定所述第二运动部位对应的骨骼控制数据。
  23. 根据权利要求20所述的计算机设备,其特征在于,若所述第一运动部位为第三预设部位,所述根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    根据所述表情变化数据计算得到所述第三预设部位对应的表情变化系数;
    根据所述表情变化数据计算得到所述第三预设部位对应的俯仰角方向值和 偏航角方向值;
    根据所述表情变化系数、所述俯仰角方向值和预设最大俯仰角阈值确定所述第三预设部位对应的第一骨骼控制数据;
    根据所述表情变化系数、所述偏航角方向值和预设最大偏航角阈值确定所述第三预设部位对应的第二骨骼控制数据;
    根据所述第一骨骼控制数据和所述第二骨骼控制数据确定所述第三预设部位对应的骨骼控制数据。
  24. 根据权利要求19所述的计算机设备,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    获取参照点,根据所述参照点确定虚拟空间坐标原点,根据所述虚拟空间坐标原点建立虚拟空间;
    获取行为主体相对于所述参照点的相对位置;
    根据所述相对位置确定所述行为主体对应的虚拟动画形象在所述虚拟空间的目标位置,根据所述目标位置在所述虚拟空间生成所述行为主体对应的初始虚拟动画形象。
  25. 根据权利要求19所述的计算机设备,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    获取语音数据,根据所述语音数据确定对应的当前第二运动部位;
    获取与所述当前第二运动部位对应的骨骼动画,播放所述骨骼动画,以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  26. 根据权利要求19所述的计算机设备,其特征在于,所述确定与所述表情变化数据匹配的目标分割表情区域,包括:
    根据所述表情变化数据确定所述虚拟动画形象对应的当前运动部位;
    获取所述虚拟形象模型对应的预设多个分割表情区域;
    从所述预设多个分割表情区域中获取与所述当前运动部位匹配的目标分割表情区域。
  27. 根据权利要求19所述的计算机设备,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    从所述虚拟形象模型对应的各个分割表情区域中获取当前分割表情区域;
    获取所述当前分割表情区域对应的子基本虚拟形象模型集;
    对所述子基本虚拟形象模型集中的各个子基本虚拟形象模型进行多次不同的非线性组合生成对应的多个子混合虚拟形象模型,组成所述当前分割表情区域对应的子混合虚拟形象模型集;
    从所述各个分割表情区域中获取下一个分割表情区域作为所述当前分割表情区域,返回所述获取所述当前分割表情区域对应的子基本虚拟形象模型集的步骤,直到得到所述各个分割表情区域对应的子混合虚拟形象模型集;
    将所述各个分割表情区域对应的子基本虚拟形象模型集和子混合虚拟形象模型集组成基础虚拟形象数据,所述目标基础虚拟形象数据是从所述基础虚拟形象数据中选取得到的。
  28. 根据权利要求19所述的计算机设备,其特征在于,所述目标基础虚拟形象数据包括多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型,所述根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据,包括:
    根据所述表情变化数据计算得到各个目标子基本虚拟形象模型和各个目标子混合虚拟形象模型对应的组合系数;
    根据所述组合系数将所述多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型进行线性组合生成所述加载表情数据。
  29. 根据权利要求19所述的计算机设备,其特征在于,所述将所述加载表情数据加载到所述目标分割表情区域的步骤,包括:
    获取当前顶点位置集合,所述当前顶点位置集合由生成所述加载表情数据的各个目标子基础虚拟形象模型对应的当前顶点位置组成;
    根据所述当前顶点位置集合确定所述加载表情数据对应的网格的当前目标顶点位置;
    获取下一个顶点位置集合,根据所述下一个顶点位置集合确定所述加载表情数据对应的网格的下一个目标顶点位置,直至确定所述加载表情数据对应的网格的各个目标顶点位置。
  30. 根据权利要求19所述的计算机设备,其特征在于,当所述表情变化数据对应多个目标表情更新时,所述获取与所述目标分割表情区域匹配的目标基础虚拟形象数据,根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据,包括:
    获取各个所述目标表情对应的预设权重系数;
    根据各个目标表情对应的预设权重系数的大小关系,确定各个所述目标表情对应的加载表情数据的生成顺序;
    所述将所述加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情,包括:
    按照各个所述目标表情对应的加载表情数据的生成顺序,依次将各个加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  31. 根据权利要求19所述的计算机设备,其特征在于,所述从所述当前表情数据获取表情变化数据,包括:
    对所述当前表情数据进行特征点提取,得到对应的表情特征点;
    将所述表情特征点与预设表情数据集合进行匹配以确定当前更新表情,获取与所述当前更新表情对应的表情变化数据。
  32. 根据权利要求19所述的计算机设备,其特征在于,所述从所述当前表情数据获取表情变化数据,包括:
    获取历史表情数据,对所述历史表情数据进行特征点提取,得到对应的历史表情特征点;
    对所述当前表情数据进行特征点提取,得到对应的当前表情特征点;
    将所述历史表情特征点与所述当前表情特征点进行比较,根据比较结果得到对应的表情变化数据。
  33. 根据权利要求19所述的计算机设备,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    根据所述表情变化数据从预设背景图像中获取对应的第一背景图像,将所述第一背景图像加载至所述虚拟形象模型对应的虚拟动画形象所处的虚拟环境 中。
  34. 根据权利要求19所述的计算机设备,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    获取语音数据,根据所述语音数据从预设背景图像中获取对应的第二背景图像,将所述第二背景图像加载至所述虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
  35. 根据权利要求19所述的计算机设备,其特征在于,所述获取虚拟形象模型,包括:
    对所述图像中的人脸进行人脸特征点提取,根据所述人脸特征点获取对应的虚拟形象模型。
  36. 根据权利要求19所述的计算机设备,其特征在于,所述获取虚拟形象模型,包括:
    获取虚拟形象模型集合,所述虚拟形象模型集合包括多个虚拟形象模型;
    获取虚拟形象模型选择指令,根据所述虚拟形象模型选择指令从所述虚拟形象模型集合中获取目标虚拟形象模型。
  37. 一个或多个存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如下步骤:
    确定人脸在图像中的位置,获取虚拟形象模型;
    根据所述人脸在图像中的位置和三维脸部模型获取当前表情数据;
    从所述当前表情数据获取表情变化数据;
    确定与所述表情变化数据匹配的目标分割表情区域,所述目标分割表情区域是从虚拟形象模型对应的各个分割表情区域中选取得到的;
    获取与所述目标分割表情区域匹配的目标基础虚拟形象数据,根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据;
    将所述加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  38. 根据权利要求37所述的存储介质,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    根据所述表情变化数据确定所述虚拟动画形象对应的第一运动部位;
    获取与所述第一运动部位相关联的第二运动部位;
    根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据;
    根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据;
    根据所述骨骼控制数据控制所述第二运动部位对应的骨骼运动,以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  39. 根据权利要求38所述的存储介质,其特征在于,若所述第一运动部位为第一预设部位,所述根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    根据所述表情变化数据计算得到所述第一预设部位对应的偏航角速度和俯仰角速度;根据所述俯仰角速度、预设最大俯仰角阈值和第一预设补偿值确定所述第二运动部位对应的第一骨骼控制数据;
    根据所述偏航角速度、预设最大偏航角阈值和与第二预设补偿值确定所述第二运动部位对应的第二骨骼控制数据;
    根据所述第一骨骼控制数据和所述第二骨骼控制数据确定所述第二运动部位对应的骨骼控制数据。
  40. 根据权利要求38所述的存储介质,其特征在于,若所述第一运动部位为第二预设部位,所述根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    根据所述表情变化数据计算得到所述第二预设部位对应的表情变化系数;
    根据所述表情变化系数和预设最大俯仰角阈值确定所述第二运动部位对应的骨骼控制数据。
  41. 根据权利要求38所述的存储介质,其特征在于,若所述第一运动部位 为第三预设部位,所述根据所述表情变化数据计算得到所述第一运动部位对应的运动状态数据,根据所述第一运动部位对应的运动状态数据确定所述第二运动部位对应的骨骼控制数据,包括:
    根据所述表情变化数据计算得到所述第三预设部位对应的表情变化系数;
    根据所述表情变化数据计算得到所述第三预设部位对应的俯仰角方向值和偏航角方向值;
    根据所述表情变化系数、所述俯仰角方向值和预设最大俯仰角阈值确定所述第三预设部位对应的第一骨骼控制数据;
    根据所述表情变化系数、所述偏航角方向值和预设最大偏航角阈值确定所述第三预设部位对应的第二骨骼控制数据;
    根据所述第一骨骼控制数据和所述第二骨骼控制数据确定所述第三预设部位对应的骨骼控制数据。
  42. 根据权利要求37所述的存储介质,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    获取参照点,根据所述参照点确定虚拟空间坐标原点,根据所述虚拟空间坐标原点建立虚拟空间;
    获取行为主体相对于所述参照点的相对位置;
    根据所述相对位置确定所述行为主体对应的虚拟动画形象在所述虚拟空间的目标位置,根据所述目标位置在所述虚拟空间生成所述行为主体对应的初始虚拟动画形象。
  43. 根据权利要求37所述的存储介质,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    获取语音数据,根据所述语音数据确定对应的当前第二运动部位;
    获取与所述当前第二运动部位对应的骨骼动画,播放所述骨骼动画,以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  44. 根据权利要求37所述的存储介质,其特征在于,所述确定与所述表情变化数据匹配的目标分割表情区域,包括:
    根据所述表情变化数据确定所述虚拟动画形象对应的当前运动部位;
    获取所述虚拟形象模型对应的预设多个分割表情区域;
    从所述预设多个分割表情区域中获取与所述当前运动部位匹配的目标分割表情区域。
  45. 根据权利要求37所述的存储介质,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    从所述虚拟形象模型对应的各个分割表情区域中获取当前分割表情区域;
    获取所述当前分割表情区域对应的子基本虚拟形象模型集;
    对所述子基本虚拟形象模型集中的各个子基本虚拟形象模型进行多次不同的非线性组合生成对应的多个子混合虚拟形象模型,组成所述当前分割表情区域对应的子混合虚拟形象模型集;
    从所述各个分割表情区域中获取下一个分割表情区域作为所述当前分割表情区域,返回所述获取所述当前分割表情区域对应的子基本虚拟形象模型集的步骤,直到得到所述各个分割表情区域对应的子混合虚拟形象模型集;
    将所述各个分割表情区域对应的子基本虚拟形象模型集和子混合虚拟形象模型集组成基础虚拟形象数据,所述目标基础虚拟形象数据是从所述基础虚拟形象数据中选取得到的。
  46. 根据权利要求37所述的存储介质,其特征在于,所述目标基础虚拟形象数据包括多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型,所述根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据,包括:
    根据所述表情变化数据计算得到各个目标子基本虚拟形象模型和各个目标子混合虚拟形象模型对应的组合系数;
    根据所述组合系数将所述多个目标子基本虚拟形象模型和多个目标子混合虚拟形象模型进行线性组合生成所述加载表情数据。
  47. 根据权利要求37所述的存储介质,其特征在于,所述将所述加载表情数据加载到所述目标分割表情区域的步骤,包括:
    获取当前顶点位置集合,所述当前顶点位置集合由生成所述加载表情数据的各个目标子基础虚拟形象模型对应的当前顶点位置组成;
    根据所述当前顶点位置集合确定所述加载表情数据对应的网格的当前目标顶点位置;
    获取下一个顶点位置集合,根据所述下一个顶点位置集合确定所述加载表情数据对应的网格的下一个目标顶点位置,直至确定所述加载表情数据对应的网格的各个目标顶点位置。
  48. 根据权利要求37所述的存储介质,其特征在于,当所述表情变化数据对应多个目标表情更新时,所述获取与所述目标分割表情区域匹配的目标基础虚拟形象数据,根据所述表情变化数据组合所述目标基础虚拟形象数据生成加载表情数据,包括:
    获取各个所述目标表情对应的预设权重系数;
    根据各个目标表情对应的预设权重系数的大小关系,确定各个所述目标表情对应的加载表情数据的生成顺序;
    所述将所述加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情,包括:
    按照各个所述目标表情对应的加载表情数据的生成顺序,依次将各个加载表情数据加载到所述目标分割表情区域以更新所述虚拟形象模型对应的虚拟动画形象的表情。
  49. 根据权利要求37所述的存储介质,其特征在于,所述从所述当前表情数据获取表情变化数据,包括:
    对所述当前表情数据进行特征点提取,得到对应的表情特征点;
    将所述表情特征点与预设表情数据集合进行匹配以确定当前更新表情,获取与所述当前更新表情对应的表情变化数据。
  50. 根据权利要求37所述的存储介质,其特征在于,所述从所述当前表情数据获取表情变化数据,包括:
    获取历史表情数据,对所述历史表情数据进行特征点提取,得到对应的历史表情特征点;
    对所述当前表情数据进行特征点提取,得到对应的当前表情特征点;
    将所述历史表情特征点与所述当前表情特征点进行比较,根据比较结果得 到对应的表情变化数据。
  51. 根据权利要求37所述的存储介质,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:根据所述表情变化数据从预设背景图像中获取对应的第一背景图像,将所述第一背景图像加载至所述虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
  52. 根据权利要求37所述的存储介质,其特征在于,所述计算机可读指令还使得所述处理器执行如下步骤:
    获取语音数据,根据所述语音数据从预设背景图像中获取对应的第二背景图像,将所述第二背景图像加载至所述虚拟形象模型对应的虚拟动画形象所处的虚拟环境中。
  53. 根据权利要求37所述的存储介质,其特征在于,所述获取虚拟形象模型,包括:
    对所述图像中的人脸进行人脸特征点提取,根据所述人脸特征点获取对应的虚拟形象模型。
  54. 根据权利要求37所述的存储介质,其特征在于,所述获取虚拟形象模型,包括:
    获取虚拟形象模型集合,所述虚拟形象模型集合包括多个虚拟形象模型;
    获取虚拟形象模型选择指令,根据所述虚拟形象模型选择指令从所述虚拟形象模型集合中获取目标虚拟形象模型。
PCT/CN2019/071336 2018-02-09 2019-01-11 表情动画数据处理方法、计算机设备和存储介质 WO2019154013A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19751218.9A EP3751521A4 (en) 2018-02-09 2019-01-11 EXPRESSION ANIMATION DATA PROCESSING PROCESS, COMPUTER DEVICE AND STORAGE MEDIA
US16/895,912 US11270488B2 (en) 2018-02-09 2020-06-08 Expression animation data processing method, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810136285.X 2018-02-09
CN201810136285.XA CN110135226B (zh) 2018-02-09 2018-02-09 表情动画数据处理方法、装置、计算机设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/895,912 Continuation US11270488B2 (en) 2018-02-09 2020-06-08 Expression animation data processing method, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2019154013A1 true WO2019154013A1 (zh) 2019-08-15

Family

ID=67549240

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/071336 WO2019154013A1 (zh) 2018-02-09 2019-01-11 表情动画数据处理方法、计算机设备和存储介质

Country Status (4)

Country Link
US (1) US11270488B2 (zh)
EP (1) EP3751521A4 (zh)
CN (1) CN110135226B (zh)
WO (1) WO2019154013A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325846A (zh) * 2020-02-13 2020-06-23 腾讯科技(深圳)有限公司 表情基确定方法、虚拟形象驱动方法、装置及介质
CN112149599A (zh) * 2020-09-29 2020-12-29 网易(杭州)网络有限公司 表情追踪方法、装置、存储介质和电子设备
CN115661310A (zh) * 2022-12-22 2023-01-31 海马云(天津)信息技术有限公司 虚拟数字人表情逼近方法、装置、存储介质、电子设备

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11087520B2 (en) * 2018-09-19 2021-08-10 XRSpace CO., LTD. Avatar facial expression generating system and method of avatar facial expression generation for facial model
KR102664688B1 (ko) * 2019-02-19 2024-05-10 삼성전자 주식회사 가상 캐릭터 기반 촬영 모드를 제공하는 전자 장치 및 이의 동작 방법
US10902618B2 (en) * 2019-06-14 2021-01-26 Electronic Arts Inc. Universal body movement translation and character rendering system
CN110570499B (zh) * 2019-09-09 2023-08-15 珠海金山数字网络科技有限公司 一种表情生成方法、装置、计算设备及存储介质
CN110717974B (zh) * 2019-09-27 2023-06-09 腾讯数码(天津)有限公司 展示状态信息的控制方法、装置、电子设备和存储介质
US20220375258A1 (en) * 2019-10-29 2022-11-24 Guangzhou Huya Technology Co., Ltd Image processing method and apparatus, device and storage medium
CN110766777B (zh) * 2019-10-31 2023-09-29 北京字节跳动网络技术有限公司 虚拟形象的生成方法、装置、电子设备及存储介质
CN110782515A (zh) * 2019-10-31 2020-02-11 北京字节跳动网络技术有限公司 虚拟形象的生成方法、装置、电子设备及存储介质
US11504625B2 (en) 2020-02-14 2022-11-22 Electronic Arts Inc. Color blindness diagnostic system
US11636391B2 (en) * 2020-03-26 2023-04-25 International Business Machines Corporation Automatic combinatoric feature generation for enhanced machine learning
US11648480B2 (en) 2020-04-06 2023-05-16 Electronic Arts Inc. Enhanced pose generation based on generative modeling
US11232621B2 (en) 2020-04-06 2022-01-25 Electronic Arts Inc. Enhanced animation generation based on conditional modeling
CN111260754B (zh) * 2020-04-27 2020-08-07 腾讯科技(深圳)有限公司 人脸图像编辑方法、装置和存储介质
US11361491B2 (en) 2020-07-03 2022-06-14 Wipro Limited System and method of generating facial expression of a user for virtual environment
CN112419485B (zh) * 2020-11-25 2023-11-24 北京市商汤科技开发有限公司 一种人脸重建方法、装置、计算机设备及存储介质
CN112328459A (zh) * 2020-12-16 2021-02-05 四川酷赛科技有限公司 一种信息动态提醒方法、终端设备及存储介质
CN112634416B (zh) * 2020-12-23 2023-07-28 北京达佳互联信息技术有限公司 虚拟形象模型的生成方法、装置、电子设备及存储介质
CN113066155A (zh) * 2021-03-23 2021-07-02 华强方特(深圳)动漫有限公司 一种3d表情处理方法及装置
CN113192164A (zh) * 2021-05-12 2021-07-30 广州虎牙科技有限公司 虚拟形象随动控制方法、装置、电子设备和可读存储介质
CN113449590B (zh) * 2021-05-14 2022-10-28 网易(杭州)网络有限公司 说话视频生成方法及装置
US11887232B2 (en) 2021-06-10 2024-01-30 Electronic Arts Inc. Enhanced system for generation of facial models and animation
CN113470149B (zh) * 2021-06-30 2022-05-06 完美世界(北京)软件科技发展有限公司 表情模型的生成方法及装置、存储介质、计算机设备
CN113780141A (zh) * 2021-08-31 2021-12-10 Oook(北京)教育科技有限责任公司 一种对弈模型的构建方法和装置
CN114170651A (zh) * 2021-11-17 2022-03-11 北京紫晶光电设备有限公司 表情识别方法、装置、设备及计算机存储介质
CN115526966B (zh) * 2022-10-12 2023-06-30 广州鬼谷八荒信息科技有限公司 一种用调度五官部件实现虚拟人物表情展现的方法
CN116258800A (zh) * 2022-11-25 2023-06-13 北京字跳网络技术有限公司 一种表情驱动方法、装置、设备及介质
CN117115361B (zh) * 2023-10-19 2024-01-19 北京蔚领时代科技有限公司 一种3d写实人像面部表情绑定自动迁移方法及装置
CN117152382A (zh) * 2023-10-30 2023-12-01 海马云(天津)信息技术有限公司 虚拟数字人面部表情计算方法和装置

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8694899B2 (en) * 2010-06-01 2014-04-08 Apple Inc. Avatars reflecting user states
US9177410B2 (en) * 2013-08-09 2015-11-03 Ayla Mandel System and method for creating avatars or animated sequences using human body features extracted from a still image
US9508197B2 (en) * 2013-11-01 2016-11-29 Microsoft Technology Licensing, Llc Generating an avatar from real time image data
US9947139B2 (en) * 2014-06-20 2018-04-17 Sony Interactive Entertainment America Llc Method and apparatus for providing hybrid reality environment
CN107431635B (zh) * 2015-03-27 2021-10-08 英特尔公司 化身面部表情和/或语音驱动的动画化
CN106204698A (zh) * 2015-05-06 2016-12-07 北京蓝犀时空科技有限公司 为自由组合创作的虚拟形象生成及使用表情的方法和系统
CN107180445B (zh) * 2016-03-10 2019-12-10 腾讯科技(深圳)有限公司 一种动画模型的表情控制方法和装置
US10275941B2 (en) * 2016-11-01 2019-04-30 Dg Holdings, Inc. Multi-layered depth and volume preservation of stacked meshes
KR102439054B1 (ko) * 2017-05-16 2022-09-02 애플 인크. 이모지 레코딩 및 전송
US10636192B1 (en) * 2017-06-30 2020-04-28 Facebook Technologies, Llc Generating a graphical representation of a face of a user wearing a head mounted display
CN107657651B (zh) * 2017-08-28 2019-06-07 腾讯科技(上海)有限公司 表情动画生成方法和装置、存储介质及电子装置
US10430642B2 (en) * 2017-12-07 2019-10-01 Apple Inc. Generating animated three-dimensional models from captured images
US10375313B1 (en) * 2018-05-07 2019-08-06 Apple Inc. Creative camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102479388A (zh) * 2010-11-22 2012-05-30 北京盛开互动科技有限公司 基于人脸跟踪和分析的表情互动方法
US20130016124A1 (en) * 2011-07-14 2013-01-17 Samsung Electronics Co., Ltd. Method, apparatus, and system for processing virtual world
CN103198508A (zh) * 2013-04-07 2013-07-10 河北工业大学 人脸表情动画生成方法
CN103942822A (zh) * 2014-04-11 2014-07-23 浙江大学 一种基于单视频摄像机的面部特征点跟踪和人脸动画方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3751521A4

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325846A (zh) * 2020-02-13 2020-06-23 腾讯科技(深圳)有限公司 表情基确定方法、虚拟形象驱动方法、装置及介质
CN112149599A (zh) * 2020-09-29 2020-12-29 网易(杭州)网络有限公司 表情追踪方法、装置、存储介质和电子设备
CN112149599B (zh) * 2020-09-29 2024-03-08 网易(杭州)网络有限公司 表情追踪方法、装置、存储介质和电子设备
CN115661310A (zh) * 2022-12-22 2023-01-31 海马云(天津)信息技术有限公司 虚拟数字人表情逼近方法、装置、存储介质、电子设备

Also Published As

Publication number Publication date
EP3751521A4 (en) 2021-11-24
US20200302668A1 (en) 2020-09-24
CN110135226A (zh) 2019-08-16
CN110135226B (zh) 2023-04-07
US11270488B2 (en) 2022-03-08
EP3751521A1 (en) 2020-12-16

Similar Documents

Publication Publication Date Title
WO2019154013A1 (zh) 表情动画数据处理方法、计算机设备和存储介质
US12045925B2 (en) Computing images of head mounted display wearer
JP6934887B2 (ja) 単眼カメラを用いたリアルタイム3d捕捉およびライブフィードバックのための方法およびシステム
US9697635B2 (en) Generating an avatar from real time image data
EP3798801A1 (en) Image processing method and apparatus, storage medium, and computer device
WO2022205760A1 (zh) 三维人体重建方法、装置、设备及存储介质
JP7182919B2 (ja) 映像処理方法、コンピュータプログラムおよび記録媒体
CN107944420B (zh) 人脸图像的光照处理方法和装置
CN108875539B (zh) 表情匹配方法、装置和系统及存储介质
CN110163063B (zh) 表情处理方法、装置、计算机可读存储介质和计算机设备
US10818078B2 (en) Reconstruction and detection of occluded portions of 3D human body model using depth data from single viewpoint
WO2022051460A1 (en) 3d asset generation from 2d images
US20220319231A1 (en) Facial synthesis for head turns in augmented reality content
KR102250163B1 (ko) 딥러닝 기술을 이용하여 비디오 영상을 3d 비디오 영상으로 변환하는 방법 및 장치
CN115769260A (zh) 基于光度测量的3d对象建模
CN114202615A (zh) 人脸表情的重建方法、装置、设备和存储介质
WO2024093763A1 (zh) 全景图像处理方法、装置、计算机设备、介质和程序产品
CN114026524A (zh) 利用纹理操作的动画化人脸
CN115550563A (zh) 视频处理方法、装置、计算机设备和存储介质
CN114419253A (zh) 一种卡通人脸的构建、直播方法及相关装置
CN108846897B (zh) 三维模型表面材质模拟方法、装置、存储介质及电子设备
TWI792845B (zh) 追蹤臉部表情的動畫產生方法及其神經網路訓練方法
US20240062425A1 (en) Automatic Colorization of Grayscale Stereo Images
Condegni et al. A Digital Human System with Realistic Facial Expressions for Friendly Human-Machine Interaction
CN116029948A (zh) 图像处理方法、装置、电子设备和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19751218

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019751218

Country of ref document: EP

Effective date: 20200909