CN111105494A - Method and system for generating three-dimensional dynamic head portrait - Google Patents

Method and system for generating three-dimensional dynamic head portrait

Info

Publication number
CN111105494A
Authority
CN
China
Prior art keywords
dimensional
head portrait
information
user
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911419494.6A
Other languages
Chinese (zh)
Other versions
CN111105494B (en)
Inventor
于树雷
王仕超
姜小勇
王萌
Current Assignee
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Great Wall Motor Co Ltd filed Critical Great Wall Motor Co Ltd
Priority to CN201911419494.6A priority Critical patent/CN111105494B/en
Publication of CN111105494A publication Critical patent/CN111105494A/en
Application granted granted Critical
Publication of CN111105494B publication Critical patent/CN111105494B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2200/24 Indexing scheme involving graphical user interfaces [GUIs]

Abstract

The invention provides a method and a system for generating a three-dimensional dynamic head portrait, applied to a terminal comprising a three-dimensional camera device. The method comprises the following steps: acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user; acquiring second three-dimensional head portrait information of the user, and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and displaying the three-dimensional dynamic head portrait in a preset interactive interface. By collecting the second three-dimensional head portrait information of the user, the expression and limb dynamic information of the user over a period of time can be obtained continuously, and a three-dimensional dynamic head portrait representing the dynamic information of the user can then be generated from the second three-dimensional head portrait information and the initial three-dimensional head portrait. Users can therefore communicate through the three-dimensional dynamic head portraits displayed in the preset interactive interface, which improves the interactivity among users.

Description

Method and system for generating three-dimensional dynamic head portrait
Technical Field
The invention relates to the technical field of electronics, in particular to a method and a system for generating a three-dimensional dynamic head portrait.
Background
With the development of intelligent terminal technology, intelligent terminal devices have become indispensable tools in people's lives, and more and more people use them for operations such as social interaction and position sharing.
When performing social interaction, position sharing and other operations on existing intelligent terminal devices, a user can set a head portrait representing himself in the social interaction or position sharing interface according to his preference. The head portrait can be a static picture selected by the user or, if the intelligent terminal device supports a three-dimensional head portrait mode, a three-dimensional head portrait selected from a three-dimensional head portrait database of the device. Thus, when the user performs social interaction or position sharing with friends, not only information such as the user name or nickname but also the head portrait information of the user can be displayed, which increases the interest of social interaction or position sharing among friends.
However, in the current scheme, the head portrait set on the intelligent terminal device by the user is only a fixed static picture or a preset fixed three-dimensional head portrait, and the head portrait is used only to identify the user. When information needs to be transferred between users, a user can communicate with a target user only by sending voice information or text information, so the interactivity between users is poor and the user experience is reduced.
Disclosure of Invention
In view of this, the present invention aims to provide a method and a system for generating a three-dimensional dynamic head portrait, so as to solve the problems in the prior art that the head portrait set on an intelligent terminal device by a user can only be a fixed static picture or a preset fixed three-dimensional head portrait, so that users cannot exchange information through the head portrait, interactivity between users is poor, and the user experience is low.
To achieve the above object, the technical solution of the present invention is realized as follows:
a method for generating a three-dimensional dynamic head portrait is applied to a terminal comprising a three-dimensional camera device, and comprises the following steps:
when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information;
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait;
and displaying the three-dimensional dynamic head portrait in a preset interactive interface.
Further, the second three-dimensional head portrait information comprises a posture feature;
the step of acquiring, by the three-dimensional camera device, second three-dimensional avatar information of the user, and generating the three-dimensional dynamic avatar according to the second three-dimensional avatar information and the initial three-dimensional avatar includes:
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device;
under the condition that it is detected that the change value of the posture feature is greater than or equal to a preset threshold value, amplifying the posture feature to generate target three-dimensional head portrait information;
and generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
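The threshold-gated amplification step above can be sketched as follows. The blendshape-style feature representation (coefficients in [0, 1]), the threshold value and the gain are assumptions made for illustration; the claim does not fix a concrete representation:

```python
# Hypothetical sketch of the claimed amplification step: posture features
# are modelled as blendshape-style coefficients in [0, 1]; when a feature's
# change relative to the previous frame reaches a preset threshold, it is
# amplified (exaggerated) before the dynamic head portrait is generated.

THRESHOLD = 0.15   # preset threshold for the change value (assumed)
GAIN = 1.5         # amplification factor (assumed)

def amplify_posture_features(prev: dict, curr: dict) -> dict:
    """Return target head-portrait information with amplified features."""
    target = {}
    for name, value in curr.items():
        change = abs(value - prev.get(name, 0.0))
        if change >= THRESHOLD:
            # Exaggerate the feature so the expression reads clearly on the
            # small head portrait, clamped to the valid range.
            target[name] = min(1.0, value * GAIN)
        else:
            target[name] = value
    return target

prev_frame = {"mouth_open": 0.10, "brow_raise": 0.20}
curr_frame = {"mouth_open": 0.60, "brow_raise": 0.22}
print(amplify_posture_features(prev_frame, curr_frame))
```

Features whose change stays below the threshold pass through unchanged, so small sensor noise is not exaggerated.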
Further, the posture features comprise expression posture features and/or limb posture features.
Further, the preset interaction interface comprises a position sharing interface;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
acquiring real-time position information of the terminal, and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and displaying the three-dimensional dynamic head portrait at the real-time position.
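One way to realize the display-at-real-time-position step is to map the terminal's latitude/longitude into pixel coordinates of the map widget. The Web-Mercator projection and the helper name below are assumptions for illustration, not part of the claim:

```python
# Hypothetical sketch: placing the dynamic head portrait in a position-
# sharing view. Real-time latitude/longitude from the terminal is mapped
# to world-pixel coordinates with the standard Web-Mercator projection.
import math

def latlon_to_pixel(lat, lon, zoom, tile_size=256):
    """Web-Mercator world-pixel coordinates for a lat/lon at a zoom level."""
    scale = tile_size * (2 ** zoom)
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

# Anchor the head portrait sprite at the terminal's real-time position:
x, y = latlon_to_pixel(39.9042, 116.4074, zoom=12)  # illustrative location
```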
Further, the preset interaction interface comprises a social software interface;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
the method comprises the steps of obtaining chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic avatar at a position corresponding to the chat text information or the chat voice information.
Further, the terminal is a vehicle-mounted multimedia host;
after the step of displaying the three-dimensional dynamic head portrait in the preset interactive interface, the method further comprises:
under the condition that first three-dimensional head portrait information of a target user is acquired from an automobile remote server, generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user;
receiving second three-dimensional head portrait information of the target user, and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
Further, after the step of performing amplification processing on the posture feature to generate the target three-dimensional head portrait information when it is detected that the variation value of the posture feature is greater than or equal to a preset threshold, the method further includes:
determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the pre-stored corresponding relation among three-dimensional head portrait information, expression information, text information and voice information;
and displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
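The pre-stored correspondence among head portrait information, expression information, text information and voice information can be illustrated as a simple lookup table; all keys, entries and file names below are hypothetical:

```python
# Hypothetical sketch of the pre-stored correspondence table: a recognized
# (amplified) posture pattern is matched to expression, text and voice
# entries, which are then displayed alongside the dynamic head portrait.
CORRESPONDENCE = {
    "smile_big": {"expression": "laughing", "text": "Ha-ha!", "voice": "laugh.wav"},
    "nod":       {"expression": "agreeing", "text": "OK!",    "voice": "ok.wav"},
}

def lookup_targets(avatar_pattern: str):
    """Return (expression, text, voice) for a recognized pattern, or None."""
    entry = CORRESPONDENCE.get(avatar_pattern)
    if entry is None:
        return None
    return entry["expression"], entry["text"], entry["voice"]

print(lookup_targets("nod"))
```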
Further, after the step of acquiring, by the three-dimensional camera device, first three-dimensional avatar information of a user and generating an initial three-dimensional avatar of the user according to the first three-dimensional avatar information when the acquisition instruction is received, the method further includes:
displaying the initial three-dimensional head portrait;
under the condition that modification operation information aiming at the initial three-dimensional head portrait is received, generating a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and the modification operation information;
the step of acquiring, by the three-dimensional camera device, second three-dimensional avatar information of the user, and generating the three-dimensional dynamic avatar according to the second three-dimensional avatar information and the initial three-dimensional avatar includes:
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
and displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
Further, the step of acquiring, by the three-dimensional camera device, first three-dimensional avatar information of a user when the acquisition instruction is received, and generating an initial three-dimensional avatar of the user according to the first three-dimensional avatar information includes:
when the acquisition instruction is received, generating acquisition prompt information aiming at the first three-dimensional head portrait information and displaying the acquisition prompt information;
and acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
A system for generating a three-dimensional dynamic head portrait, applied to a terminal including a three-dimensional camera device, the system comprising:
the first generation module is used for acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device when receiving an acquisition instruction, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information;
the second generation module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait;
and the first display module is used for displaying the three-dimensional dynamic head portrait in a preset interactive interface.
Further, the second three-dimensional head portrait information comprises a posture feature;
the second generation module includes:
the first obtaining submodule is used for obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device;
the first generation submodule is used for amplifying the posture feature to generate target three-dimensional head portrait information under the condition that it is detected that the change value of the posture feature is greater than or equal to a preset threshold value;
and the second generation submodule is used for generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
Further, the posture features comprise expression posture features and/or limb posture features.
Further, the preset interaction interface comprises a position sharing interface;
the first display module, comprising:
the second obtaining submodule is used for obtaining real-time position information of the terminal and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and the first display sub-module is used for displaying the three-dimensional dynamic head portrait at the real-time position.
Further, the preset interaction interface comprises a social software interface;
the first display module, comprising:
and the second display sub-module is used for acquiring chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic avatar at a position corresponding to the chat text information or the chat voice information.
Further, the terminal is a vehicle-mounted multimedia host;
the system further comprises:
the third generation module is used for generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user under the condition that the first three-dimensional head portrait information of the target user is acquired from the automobile remote server;
the fourth generation module is used for receiving the second three-dimensional head portrait information of the target user and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and the second display module is used for displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
Further, the system further comprises:
the determining module is used for determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the pre-stored corresponding relation among three-dimensional head portrait information, expression information, text information and voice information;
and the third display module is used for displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
Further, the system further comprises:
the display module is used for displaying the initial three-dimensional head portrait;
a fifth generating module, configured to generate a target initial three-dimensional avatar according to the initial three-dimensional avatar and modification operation information when modification operation information for the initial three-dimensional avatar is received;
the second generation module includes:
the third generation submodule is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the first display module, comprising:
and the third display sub-module is used for displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
Further, the first generating module includes:
the fourth generation submodule is used for generating and displaying acquisition prompt information aiming at the first three-dimensional head portrait information when the acquisition instruction is received;
and the fifth generation submodule is used for acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
Compared with the prior art, the method and the system for generating the three-dimensional dynamic head portrait have the following advantages:
the invention provides a method and a system for generating a three-dimensional dynamic head portrait, which are applied to a terminal comprising a three-dimensional camera device and comprise the following steps: when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through a three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information; acquiring second three-dimensional head portrait information of a user through a three-dimensional camera device, and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and displaying the three-dimensional dynamic head portrait in a preset interactive interface. According to the method and the device, the terminal comprising the three-dimensional camera device can generate the initial three-dimensional head portrait of the user according to the acquired first three-dimensional head portrait information, acquire the second three-dimensional head portrait information of the user, can continuously acquire the expression and limb dynamic information of the user within a period of time, and further can generate the three-dimensional dynamic head portrait capable of representing the expression and limb dynamic information of the user according to the second three-dimensional head portrait information of the user and the initial three-dimensional head portrait, so that when the user and friends are in social interaction or position sharing, the three-dimensional dynamic head portrait displayed in a preset interaction interface can be used for communication, the interactivity among the users is improved, and the use experience degree of the user is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart illustrating steps of a method for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of another method for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating interaction steps of a method for generating a three-dimensional dynamic avatar according to an embodiment of the present invention;
fig. 4 is a flow chart of video data transmission according to an embodiment of the present invention;
fig. 5 is a block diagram of a system for generating a three-dimensional dynamic avatar according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The present invention will be described in detail below with reference to the embodiments and the attached drawings.
Referring to fig. 1, a flowchart illustrating steps of a method for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention is shown, where the method is applied to a terminal including a three-dimensional camera device.
Step 101, when receiving an acquisition instruction, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
In the step, under the condition that the terminal receives an acquisition instruction for acquiring the first three-dimensional head portrait information, the first three-dimensional head portrait information of the user is acquired through the three-dimensional camera device, and a three-dimensional head portrait representing the appearance characteristics of the user is generated as an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
In the embodiment of the invention, the terminal can be an intelligent mobile terminal such as a mobile phone, a notebook computer, a tablet computer and a vehicle-mounted computer.
The three-dimensional (3D) camera device may be a 3D camera installed in the terminal. The 3D camera can detect distance information of the shooting space, that is, accurately determine the distance between each point in an image and the camera. By combining this distance with the plane coordinates of the point in the two-dimensional image, the three-dimensional space coordinates of each point can be obtained, thereby obtaining three-dimensional image information of the photographed object; after image processing, a three-dimensional image of the photographed object can be obtained.
Specifically, the 3D camera may be a 3D structured-light camera. A structured-light camera uses structured-light equipment to simultaneously acquire color, infrared and depth pictures of a scene, detects and analyzes faces in the scene to form a 3D face image, and creates a face model with facial depth information, so it can achieve better recognition speed, recognition accuracy and security.
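The depth-to-3D-coordinate computation described above can be sketched with the standard pinhole back-projection. The intrinsic parameters (fx, fy, cx, cy) below are illustrative values, not taken from the patent:

```python
# Hypothetical sketch of how a depth camera's per-pixel distance yields 3D
# coordinates: with the pinhole model, a pixel (u, v) with depth d is
# back-projected using the camera intrinsics (focal lengths fx, fy and
# principal point cx, cy). The intrinsic values below are illustrative.

def backproject(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Camera-space (X, Y, Z) for pixel (u, v) at the given depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# A pixel at the principal point maps straight onto the optical axis:
print(backproject(320, 240, 0.8))  # → (0.0, 0.0, 0.8)
```

Applying this to every pixel of a depth frame yields the 3D point cloud from which the face model with depth information is built.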
In the embodiment of the present invention, the terminal may provide a 3D dynamic head portrait mode option. If this option is turned on, the head portrait displayed in the preset interactive interface of the terminal is a 3D dynamic head portrait. Correspondingly, the acquisition instruction may be an acquisition instruction for the first three-dimensional head portrait information, generated by the terminal after detecting that the user has turned on the 3D dynamic head portrait mode option.
It should be noted that the initial three-dimensional head portrait of the user may also be a specific three-dimensional image selected by the user from a preset three-dimensional head portrait resource database. This database stores a three-dimensional head portrait representing the appearance characteristics of the user as well as three-dimensional head portraits of various cartoon characters or animals; the head portrait representing the appearance characteristics of the user may be generated in advance by any three-dimensional camera device and stored in the database. The user may select a three-dimensional image according to personal preference as the initial three-dimensional head portrait used to generate the 3D dynamic head portrait, thereby improving the user experience.
And 102, acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait.
In this step, the terminal has already generated a three-dimensional static head portrait representing the appearance characteristics of the user, that is, the initial three-dimensional head portrait of the user. It may then continue to obtain second three-dimensional head portrait information of the user through the three-dimensional camera device, where the second three-dimensional head portrait information represents dynamic information of the user.
Further, an image processing module in the terminal performs image processing on the second three-dimensional avatar information based on the initial three-dimensional avatar of the user, so that a three-dimensional dynamic avatar of the user can be generated.
It should be noted that, because the three-dimensional dynamic head portrait is generated from information representing the user's dynamics, it changes correspondingly with the user's actions. The head portrait of the user is therefore no longer a fixed static picture or a preset fixed three-dimensional head portrait, but a three-dimensional dynamic head portrait that represents the user's expression and limb dynamic information. When multiple users communicate, a user can thus convey information to a target user through changes of expression and limb actions, without inputting text information or voice information into the terminal, which increases the interest and safety of the communication process.
For example, if a user inputs text information or voice information into a terminal to communicate with a target user while driving a vehicle, driving safety may be affected; in this case, with the 3D dynamic head portrait mode turned on, the user can communicate with the target user through expressions and body movements.
Optionally, the second three-dimensional head portrait information of the user may be acquired continuously at a preset time interval. The preset time interval may be a fixed value, such as 0.1 millisecond or 0.5 millisecond, determined according to the network environment of the terminal: if the network environment is good, a short interval is set so that the generated three-dimensional dynamic head portrait contains more dynamic information of the user's expression and limbs; if the network environment is poor, a longer interval is set so that the generated three-dimensional dynamic head portrait is displayed more smoothly in the interface.
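The interval selection described above can be sketched as follows; the bandwidth thresholds below are assumptions, while the 0.1 and 0.5 millisecond interval values come from the text:

```python
# Hypothetical sketch of choosing the acquisition interval from the
# network environment: a good network gets a short interval (more
# expression/limb detail), a poor one a longer interval so the dynamic
# head portrait displays smoothly. Bandwidth thresholds are assumed.

def choose_interval_ms(bandwidth_kbps: float) -> float:
    """Pick a fixed acquisition interval (ms) from measured bandwidth."""
    if bandwidth_kbps >= 5000:
        return 0.1   # good network: dense sampling (value from the text)
    if bandwidth_kbps >= 1000:
        return 0.5   # medium network (value from the text)
    return 2.0       # poor network: sparse sampling (assumed value)

print(choose_interval_ms(8000))  # → 0.1
```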
And 103, displaying the three-dimensional dynamic head portrait in a preset interactive interface.
In this step, the three-dimensional dynamic avatar of the user generated in step 102 is displayed in a preset interactive interface.
In summary, the method for generating a three-dimensional dynamic head portrait according to the embodiment of the present invention is applied to a terminal comprising a three-dimensional camera device and comprises: when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information; acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and displaying the three-dimensional dynamic head portrait in a preset interactive interface. In the embodiment of the present invention, the terminal can generate the initial three-dimensional head portrait of the user according to the acquired first three-dimensional head portrait information and, by acquiring the second three-dimensional head portrait information, continuously obtain the expression and limb dynamic information of the user over a period of time. A three-dimensional dynamic head portrait representing this information can then be generated from the second three-dimensional head portrait information and the initial three-dimensional head portrait, so that when the user performs social interaction or position sharing with friends, communication can take place through the three-dimensional dynamic head portrait displayed in the preset interactive interface, which improves the interactivity among users and the user experience.
Referring to fig. 2, a flowchart illustrating steps of another method for generating a three-dimensional dynamic avatar according to an embodiment of the present invention is shown.
Step 201, when the acquisition instruction is received, generating and displaying acquisition prompt information aiming at the first three-dimensional head portrait information.
In this step, when the terminal receives an acquisition instruction for acquiring first three-dimensional avatar information, acquisition prompt information for the first three-dimensional avatar information is generated and displayed.
For example, if the terminal is a vehicle-mounted multimedia host, an initial 3D avatar recording button is provided in the display screen of the host. When the host detects that the user has tapped this button, it generates an acquisition instruction. According to that instruction, it then generates text prompt information guiding the user through recording the initial 3D avatar and displays it on the screen, or generates voice prompt information and plays it through the in-vehicle audio device, so that the user can record the initial 3D avatar accurately by following the prompts. The prompt information may be, for example, "please sit upright and look straight ahead", "please turn your head left and right", "thank you for your cooperation, face entry is complete", or "face entry failed, please re-enter".
Step 202, acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
This step may specifically refer to step 101, which is not described herein again.
After step 202, step 203 may be performed, or step 207 may be performed.
Step 203, displaying the initial three-dimensional head portrait.
In this step, the initial three-dimensional avatar generated in step 202 is displayed on the display interface of the terminal.
Step 204, in a case where modification operation information for the initial three-dimensional head portrait is received, generating a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and the modification operation information.
In this step, a modification option for the initial three-dimensional avatar may be provided in the interface of the terminal that displays the initial three-dimensional avatar. Through this option, the user may modify the initial three-dimensional avatar according to personal preference, thereby generating modification operation information for the initial three-dimensional avatar.
Further, the terminal generates a target initial three-dimensional head portrait according with personal preference of the user according to the initial three-dimensional head portrait and the modification operation information under the condition that the modification operation information of the user aiming at the initial three-dimensional head portrait is received.
Step 205, acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait.
In this step, the terminal has already generated a three-dimensional static avatar that represents the user's appearance and conforms to the user's personal preferences, that is, the user's target initial three-dimensional avatar. The terminal can then continuously acquire second three-dimensional avatar information of the user through the three-dimensional camera device, and this second three-dimensional avatar information represents the user's dynamics.
Further, an image processing module in the terminal performs image processing on the second three-dimensional avatar information based on the target initial three-dimensional avatar of the user, so that the target three-dimensional dynamic avatar of the user can be generated.
It should be noted that, because the target three-dimensional dynamic avatar is generated from information representing the user's dynamics, it changes correspondingly as the user moves. The user's avatar is therefore no longer a fixed static picture or a preset fixed three-dimensional model, but a three-dimensional dynamic avatar that conveys the user's expressions and body movements. When multiple users communicate, a user can thus express the information to be conveyed to a target user through changes in expression and body movement, without typing text or recording voice in the terminal, which increases both the interest and the safety of the communication process.
And step 206, displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
In this step, the three-dimensional dynamic avatar of the user generated in step 205 is displayed in a preset interactive interface.
Step 207, acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait.
Optionally, the second three-dimensional avatar information includes a pose feature, and the pose feature includes an expression pose feature and/or a limb pose feature.
The pose feature can be a relation between human-body appearance and configuration obtained with a pose estimation method. Specifically, the method divides the human body into several parts, builds an appearance model for each part and a positional-relation model between parts, defines an energy function that scores how well a candidate location in the image matches the part template, and determines the most likely body configuration by minimizing the deformation, thereby obtaining a portrait model of the person in the image.
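The part-based matching described above can be written in the commonly used pictorial-structures form. The patent does not give an explicit formula, so the symbols here are illustrative:

```latex
E(l_1,\dots,l_n) \;=\; \sum_{i=1}^{n} m_i(l_i) \;+\; \sum_{(i,j)\in\mathcal{E}} d_{ij}(l_i, l_j)
```

Here $l_i$ is the image location of body part $i$, $m_i(l_i)$ measures how poorly the appearance model of part $i$ matches the image at $l_i$, and $d_{ij}$ penalizes deformation between connected parts $(i,j)$ in the kinematic tree $\mathcal{E}$; the most likely body configuration is the assignment of locations minimizing $E$.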
The gesture features may include expression gesture features and body gesture features, that is, dynamic information of facial expressions and body movements of a person in an image needs to be detected, so as to determine facial expression changes and body movements of the person. For example, by acquiring the expression posture characteristics of the user through the 3D camera device, whether the user is laughing, crying, depressed or the like at the moment can be determined, and by acquiring the limb posture characteristics of the user, whether the user expresses an approved posture through a nodding motion or expresses a negative posture through a shaking motion at the moment can be determined.
Optionally, step 207 may specifically include:
substep 2071, obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device.
In this step, the terminal has already generated a three-dimensional static avatar representing the user's appearance, that is, the user's initial three-dimensional avatar. The terminal can then continue to acquire second three-dimensional avatar information of the user through the three-dimensional camera device, and this second three-dimensional avatar information represents the user's dynamics.
Optionally, the second three-dimensional avatar information of the user may be acquired continuously at a preset time interval. The preset interval may be a fixed value, such as 0.1 millisecond or 0.5 millisecond, determined according to the network environment of the terminal: if the network environment is good, the interval is set short, so that the generated three-dimensional dynamic avatar contains more of the user's expression and body-movement dynamics; if the network environment is poor, the interval is set long, so that the generated three-dimensional dynamic avatar is displayed more smoothly in the interface.
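The interval selection described above can be sketched as a small helper. The bandwidth threshold and the mapping from network quality to interval are assumptions; only the 0.1 ms and 0.5 ms example values come from the text:

```python
def choose_capture_interval(bandwidth_mbps, good_threshold_mbps=10.0,
                            short_interval_ms=0.1, long_interval_ms=0.5):
    """Pick the interval for continuously acquiring second avatar info.

    A good network tolerates a short interval (more expression and body
    dynamics captured); a poor one gets a long interval so the rendered
    avatar stays smooth. The 10 Mbps threshold is a hypothetical value.
    """
    if bandwidth_mbps >= good_threshold_mbps:
        return short_interval_ms
    return long_interval_ms
```

In practice the terminal would re-evaluate this whenever its network conditions change, rather than fixing the interval once at startup.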
Substep 2072, performing amplification processing on the posture feature to generate target three-dimensional head portrait information when detecting that the change value of the posture feature is greater than or equal to a preset threshold value.
In this step, the pose feature in the newly acquired second three-dimensional avatar information is compared with the pose feature acquired at the previous acquisition time to determine the change value of the current pose feature, that is, the degree to which the user's expression or body movement has changed since the previous acquisition. If the change value is greater than or equal to a preset threshold, the change is relatively large. So that the three-dimensional avatar information generated from the second three-dimensional avatar information represents this change more clearly, the pose feature in the second three-dimensional avatar information is amplified to obtain the target three-dimensional avatar information. The expression or body-movement change of the avatar in the resulting target three-dimensional dynamic avatar remains consistent with the user's actual change, but is rendered in a more exaggerated way. A target user communicating with the user can therefore clearly and unambiguously perceive the change by observing the target three-dimensional dynamic avatar, and understand the user's current emotion or the intention the user wants to express.
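Sub-step 2072 can be sketched as follows. The threshold, the gain factor, and the representation of a pose as a numeric feature vector are all illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def amplify_pose(prev_pose, curr_pose, threshold=0.2, gain=1.5):
    """Exaggerate a pose change when it meets or exceeds the threshold.

    prev_pose / curr_pose: numeric pose feature vectors from two
    successive acquisition times. A large delta is scaled by `gain`
    so the rendered avatar's expression or limb change is more
    pronounced than the user's actual change; small changes pass
    through unmodified.
    """
    delta = curr_pose - prev_pose
    if np.linalg.norm(delta) >= threshold:
        return prev_pose + gain * delta  # amplified target pose feature
    return curr_pose                     # below threshold: unchanged
```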
Substep 2073, generating the three-dimensional dynamic avatar according to the target three-dimensional avatar information and the initial three-dimensional avatar.
In this step, the image processing module in the terminal performs image processing on the target three-dimensional avatar information based on the initial three-dimensional avatar of the user, so that the target three-dimensional dynamic avatar of the user can be generated.
Optionally, in another implementation manner, step 207 may further include:
Substep 2074, determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to a pre-stored correspondence among three-dimensional head portrait information, expression information, text information and voice information.
When the terminal detects that the change value of the pose feature is greater than or equal to the preset threshold, it amplifies the pose feature to generate the target three-dimensional avatar information. The terminal can then determine the target expression information, target text information and target voice information corresponding to the target three-dimensional avatar information by querying the pre-stored correspondence among three-dimensional avatar information, expression information, text information and voice information.
Specifically, when a large change in the user's expression or body movement is detected, the pose feature in the second three-dimensional avatar information is amplified to obtain the target three-dimensional avatar information, so that the generated avatar information represents the change more clearly; the target three-dimensional avatar information is thus more exaggerated than the user's actual expression or movement.
Meanwhile, to express the detected change more intuitively, when a large change in the user's expression or body movement is detected, the terminal can query the correspondence and, according to the target three-dimensional avatar information, determine target expression information, target text information and target voice information that represent the change at that moment. A target user communicating with the user can then receive the expression or body-movement change more clearly and unambiguously through this expression, text and voice information, and understand the user's current emotion or intended meaning.
Substep 2075, displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
In this step, the target expression information, target text information and target voice information generated in substep 2074, which represent the change in the user's expression or body movement, are displayed in the preset interactive interface.
For example, if it is detected that the user is laughing heartily, the generated target expression information is a laughing expression, the target text information is "haha", and the target voice information is a voice clip containing loud laughter.
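The pre-stored correspondence of sub-step 2074 can be pictured as a simple mapping. The keys and the concrete expression, text and voice values below are illustrative; only the laughing example ("haha") comes from the text:

```python
# Hypothetical correspondence table: recognized avatar-info label
# -> the expression / text / voice content to push to the interface.
CORRESPONDENCE = {
    "big_laugh":  {"expression": ":D",  "text": "haha", "voice": "laugh.wav"},
    "nod":        {"expression": "+1",  "text": "OK",   "voice": "ok.wav"},
    "head_shake": {"expression": "-1",  "text": "no",   "voice": "no.wav"},
}

def lookup_push_content(avatar_label):
    """Return the expression/text/voice triple for a recognized label,
    or None when the label has no stored correspondence."""
    return CORRESPONDENCE.get(avatar_label)
```

A production system would key this table on the actual target three-dimensional avatar information (or a classifier output derived from it) rather than on string labels.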
Step 208, displaying the three-dimensional dynamic head portrait in a preset interactive interface.
In this step, the three-dimensional dynamic avatar of the user generated in step 207 is displayed in a preset interactive interface.
Optionally, the preset interaction interface includes a location sharing interface, and step 208 may specifically include:
substep 2081, obtaining real-time position information of the terminal, and determining the real-time position of the terminal in the position sharing interface according to the real-time position information.
In this step, the terminal may determine real-time position coordinates of the terminal through a Global Positioning System (GPS), and determine a real-time position of the terminal in a map displayed on a position sharing interface in combination with the position sharing interface displayed in the terminal.
Substep 2082, at the real-time location, presenting the three-dimensional dynamic avatar.
In this step, the terminal displays the three-dimensional dynamic avatar generated in step 207 at the real-time location, determined in substep 2081, on the map displayed in the location sharing interface.
Therefore, when a user shares a location or travels as a team with friends through the terminal, the navigation interface in the terminal is no longer a plain map with picture avatars pinned to a few coordinate points. It instead shows 3D dynamic avatars that move along with the users' actions, accompanied by various expression effects, so that the user can interact with friends in real time during location sharing and team travel and observe friends' movements and emotional changes on the navigation interface. Although the user and the friends are not riding in the same vehicle, the experience resembles traveling together, making the journey more enjoyable.
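Placing an avatar at the terminal's GPS position (sub-steps 2081 and 2082) requires projecting latitude and longitude to interface coordinates. A minimal sketch using an equirectangular projection follows; the projection choice is an assumption, since the patent does not specify one:

```python
def latlon_to_pixel(lat_deg, lon_deg, map_width, map_height):
    """Project a GPS fix to pixel coordinates on a world-map image
    using a plain equirectangular projection (assumed, illustrative).

    lon -180..180 maps to x 0..map_width; lat 90..-90 maps to
    y 0..map_height (y grows downward, as in screen coordinates).
    """
    x = (lon_deg + 180.0) / 360.0 * map_width
    y = (90.0 - lat_deg) / 180.0 * map_height
    return x, y
```

A real navigation interface would instead use the map SDK's own projection (typically Web Mercator) and anchor the avatar sprite at the returned point.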
Optionally, the preset interaction interface includes a social software interface, and step 208 may specifically include:
substep 2083, obtaining chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic avatar at a position corresponding to the chat text information or the chat voice information.
In this step, the user inputs the chat text information or chat voice information to be sent through the chat information input module of the terminal. When the chat text information or chat voice information is obtained, the terminal displays it in the social software interface and displays the three-dimensional dynamic avatar determined in step 207 at the position corresponding to that chat text or voice information.
Therefore, when a user chats socially with friends through the terminal, the chat interface no longer shows only chat text or voice messages next to a static avatar set by the user. It instead shows a 3D dynamic avatar that moves along with the user's actions, accompanied by various expression effects, so that the user can interact with friends in real time through the 3D dynamic avatar while chatting.
Step 209, in a case where the first three-dimensional avatar information of the target user is acquired from the automobile remote server, generating an initial three-dimensional avatar of the target user according to the first three-dimensional avatar information of the target user.
Optionally, the terminal is a vehicle-mounted multimedia host (HUT). In step 208, the three-dimensional dynamic avatar of the user may be displayed in a multimedia interface of the HUT. Meanwhile, the HUT may further obtain the first three-dimensional avatar information of the target user from a remote automotive server (Telematics Service Provider, TSP), and generate the initial three-dimensional avatar of the target user according to that information.
Specifically, the first three-dimensional avatar information of the target user can be acquired by another three-dimensional camera device and uploaded to the TSP cloud. The TSP cloud can transmit this information to the HUT through a dedicated data transmission channel (Access Point Name, APN), and the HUT performs image processing on it to generate a three-dimensional avatar representing the target user's appearance, which serves as the initial three-dimensional avatar of the target user.
Step 210, receiving the second three-dimensional avatar information of the target user, and generating a three-dimensional dynamic avatar of the target user according to the second three-dimensional avatar information of the target user and the initial three-dimensional avatar of the target user.
In this step, the terminal has generated a three-dimensional static avatar representing the appearance characteristics of the target user, that is, an initial three-dimensional avatar of the target user, and further, continuously obtains second three-dimensional avatar information of the target user, where the second three-dimensional avatar information of the target user may represent dynamic information of the target user.
Further, an image processing module in the terminal performs image processing on the second three-dimensional avatar information of the target user based on the initial three-dimensional avatar of the target user, so that a three-dimensional dynamic avatar of the target user can be generated.
Optionally, the second three-dimensional avatar information of the target user may be acquired continuously at a preset time interval. The preset interval may be a fixed value, such as 0.1 millisecond or 0.5 millisecond, determined according to the network environment of the terminal: if the network environment is good, the interval is set short, so that the generated three-dimensional dynamic avatar of the target user contains more expression and body-movement dynamics; if the network environment is poor, the interval is set long, so that the avatar is displayed more smoothly in the interface.
Step 211, displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
In this step, the three-dimensional dynamic avatar of the target user generated in step 210 is displayed in a preset interactive interface.
Therefore, the user can clearly and definitely receive the expression change or the limb action change of the target user by observing the three-dimensional dynamic head portrait corresponding to the target user in the preset interactive interface of the terminal, and know the current emotion of the target user or the intention which the target user wants to express.
In summary, the method for generating a three-dimensional dynamic avatar according to the embodiments of the present invention is applied to a terminal including a three-dimensional camera device and includes: when an acquisition instruction is received, acquiring first three-dimensional avatar information of a user through the three-dimensional camera device, and generating an initial three-dimensional avatar of the user according to the first three-dimensional avatar information; acquiring second three-dimensional avatar information of the user through the three-dimensional camera device, and generating a three-dimensional dynamic avatar according to the second three-dimensional avatar information and the initial three-dimensional avatar; and displaying the three-dimensional dynamic avatar in a preset interactive interface. In the embodiment of the invention, the terminal generates the initial three-dimensional avatar of the user from the acquired first three-dimensional avatar information. By acquiring the second three-dimensional avatar information, it continuously collects the user's expression and body-movement dynamics over a period of time, and from this information and the initial three-dimensional avatar it generates a three-dimensional dynamic avatar that represents those expressions and movements. When the user interacts socially or shares a location with a friend, the two can therefore communicate through the three-dimensional dynamic avatars displayed in the preset interactive interface, which improves interaction between users and the overall user experience.
On the basis of the above embodiment, the embodiment of the present invention further provides a method for generating a three-dimensional dynamic avatar.
Referring to fig. 3, a flowchart illustrating the interaction steps of a method for generating a three-dimensional dynamic avatar according to an embodiment of the present invention is shown. The terminal includes a 3D camera and a vehicle HUT, and the preset interaction interface is a position sharing interface or a team travel interface displayed in the HUT display interface during vehicle navigation. The method specifically includes:
step 301, starting the vehicle, and waking up a Controller Area Network (CAN) Network.
In this step, if the vehicle is started, the entire vehicle CAN network is automatically woken up, and a wake-up signal is transmitted to the vehicle HUT system, so that the vehicle HUT system executes step 309, and at the same time, the 3D camera executes step 302.
Step 302, the 3D camera is started and the system and software are loaded.
After a signal of vehicle starting is received, the 3D camera is started, and relevant systems and software for shooting the face information of the user and generating a 3D model of the face of the user and 3D dynamic head portrait information are loaded.
Step 303, whether the face function is enabled.
The 3D camera system determines whether the face function is enabled. If so, step 304 is executed; otherwise, the ending step is executed, terminating the process of capturing the user's face information and generating the 3D face model and 3D dynamic avatar information through the 3D camera.
Step 304, whether a head is detected.
In this step, the 3D camera system detects whether the head of the user is detected by the image capturing system, if so, step 305 is executed, and if not, step 304 is repeatedly executed to continuously detect whether the head is detected.
Step 305, recognizing the face.
The 3D camera system starts a face recognition process after detecting the head of the user.
Step 306, whether the identification is successful.
In this step, the 3D camera system determines whether the face recognition process is successful, if so, step 307 is executed, and if not, step 305 is repeatedly executed.
Step 307, establishing a 3D face model.
In this step, the 3D camera may establish a 3D model representing the facial features of the user through the facial information of the user acquired in step 305.
Specifically, the 3D camera can measure distance in the shooting space and determine precisely how far each point in the image is from the camera. Combining this depth with a point's plane coordinates in the two-dimensional image yields the three-dimensional spatial coordinates of every point, which gives the three-dimensional image information of the photographed object; after image processing, a three-dimensional image of the object is obtained.
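The depth-to-3D computation described above corresponds to back-projecting each pixel through a pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) are assumed properties of the 3D camera, not values given in the patent:

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into 3D camera
    coordinates via the pinhole model.

    fx, fy: focal lengths in pixels; (cx, cy): principal point.
    Returns (x, y, z) in the same length unit as `depth`.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every pixel of the depth image produces the point cloud from which the 3D face model of step 307 can be reconstructed.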
Step 308, acquiring 3D dynamic head portrait information.
In this step, the 3D camera continuously captures the user's face information, obtains the corresponding video data, and sends the video data to the HUT.
After step 308, step 315 may be performed, as may step 312.
Referring to fig. 4, a flow chart of transmitting video data according to an embodiment of the present invention is shown, where the flow chart may include:
a1, the 3D camera notifies the HUT requesting to send video data.
In this step, after the 3D camera acquires the corresponding video data, an instruction requesting to send the video data is first sent to the HUT to notify the HUT that the 3D camera has acquired the video data, and may send the video data to the HUT.
A2, the HUT informs the 3D camera that it can start sending video data over ethernet.
In this step, after receiving the instruction requesting to transmit video data sent by the 3D camera, the HUT may generate and return an instruction allowing the 3D camera to transmit video data, notifying the 3D camera that it may start transmitting video data over the Ethernet network.
A3, the 3D camera sends video data to HUT through Ethernet.
In this step, after receiving the instruction from the HUT allowing it to send video data, the 3D camera may transmit the video data to the HUT over Ethernet using the SOME/IP protocol.
After step A3, if the 3D camera has finished transmitting the video data, step A6 is performed; if it has not finished, step A4 is performed.
A4, the 3D camera informs the HUT that video data is being sent.
In this step, while the 3D camera is still transmitting video data, it notifies the HUT that video data is being sent.
A5, HUT displays video.
In this step, since the 3D camera is transmitting video data, the HUT can display the received video information in its display screen.
A6, the 3D camera informs the HUT that the video transmission is finished.
In this step, when the 3D camera has finished sending the video data, it notifies the HUT that the video transmission is finished.
A7, HUT finishes the video display.
In this step, since the 3D camera has already transmitted the video data, the HUT ends the video display.
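The A1 to A7 exchange above can be summarized as a small simulation of the message sequence. The message strings are illustrative labels only; the patent specifies just that the data travels over Ethernet using SOME/IP, not the actual service or payload format:

```python
def run_handshake(num_chunks):
    """Simulate the A1-A7 message sequence between the 3D camera and
    the HUT, returning the ordered message log for `num_chunks` video
    data chunks."""
    log = [
        "A1 camera->HUT: request to send video data",
        "A2 HUT->camera: start sending over Ethernet",
    ]
    for i in range(num_chunks):
        log.append("A3 camera->HUT: video chunk %d (SOME/IP)" % i)
        log.append("A4 camera->HUT: still sending")
        log.append("A5 HUT: display received video")
    log.append("A6 camera->HUT: video sending finished")
    log.append("A7 HUT: end video display")
    return log
```

Modeling the flow this way makes the control dependency explicit: the camera never emits A3 before the HUT's A2 grant, and the HUT only stops displaying after the camera's A6 notification.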
In step 309, the HUT is started and the navigation system and software are loaded.
In this step, when the vehicle is started, the vehicle HUT also starts. After the HUT system has started, if it detects that the user has enabled the navigation function, it loads the navigation system and the software related to the navigation function.
In step 310, the receive data function is enabled.
Step 311, whether the 3D dynamic avatar mode is turned on.
In this step, it is detected whether the 3D dynamic avatar mode is turned on. If it is detected that the user has turned on the 3D dynamic avatar mode through the display screen of the HUT system, step 312 is executed; if the user has not turned it on, step 318 is executed.
Step 312, the image video display function is started.
In this step, since it is detected that the user has opened the 3D dynamic head portrait mode through the display screen of the HUT system, the image video display function is started, and video data corresponding to the face information of the user continuously collected by the 3D camera in step 308 is received.
Step 313, the Digital Signal Processing (DSP) module processes the 3D dynamic avatar information.
In this step, the DSP module in the HUT performs image processing on the received video data based on the face 3D model of the user established in step 307, so that a 3D dynamic avatar of the user can be generated.
Step 314, the HUT display screen displays the 3D dynamic avatar.
In this step, the HUT presents the generated 3D dynamic avatar on the display screen of the HUT.
Step 315, whether a special expression is detected.
In this step, the 3D camera analyzes the collected 3D dynamic avatar information and determines whether a special expression is detected. If so, step 316 is executed; if not, step 308 is repeated.
Step 316, head portrait amplification processing.
In this step, the special expression detected in step 315 is amplified, and the amplified 3D dynamic avatar information is sent to the HUT system. The DSP module in the HUT performs image processing on the received amplified information based on the user's 3D face model established in step 307, so that a 3D dynamic avatar of the user with the amplified special expression can be generated and displayed on the HUT display screen.
In step 317, the HUT displays the push expression.
In this step, after displaying the 3D dynamic avatar of the user with the amplified special expression on the HUT display screen, the HUT can also query a preset database for the push expression corresponding to that avatar, and display the push expression on the HUT display screen.
Step 318, 3D avatar navigation.
In this step, if it was detected in step 311 that the user has not enabled the 3D dynamic avatar mode, 3D avatar navigation is performed, and the face 3D model established in step 307 is displayed on the display screen of the HUT as the navigation avatar.
Step 319, the 3D dynamic avatar information of the target user in the navigation software is transmitted to the vehicle HUT through the APN channel.
In this step, the TSP server may further transmit the 3D dynamic avatar information of the target user in the navigation software to the vehicle HUT through the APN channel. The DSP module in the HUT then performs image processing on the received 3D dynamic avatar information of the target user, so as to generate the 3D dynamic avatar of the target user and display it on the display screen of the HUT.
On the basis of the above embodiment, the embodiment of the present invention further provides a system for generating a three-dimensional dynamic head portrait, which is applied to a terminal including a three-dimensional camera device.
Referring to fig. 5, a structural block diagram of a system for generating a three-dimensional dynamic avatar according to an embodiment of the present invention is shown; the system may specifically include the following modules:
the first generating module 401 is configured to, when receiving a collecting instruction, acquire first three-dimensional head portrait information of a user through the three-dimensional camera device, and generate an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
Optionally, the first generating module 401 includes:
the fourth generation submodule is used for generating and displaying acquisition prompt information for the first three-dimensional head portrait information when the acquisition instruction is received;
and the fifth generation submodule is used for acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
A second generating module 402, configured to obtain, by the three-dimensional camera device, second three-dimensional avatar information of the user, and generate the three-dimensional dynamic avatar according to the second three-dimensional avatar information and the initial three-dimensional avatar.
Optionally, the second three-dimensional avatar information includes a pose feature, and the second generating module 402 includes:
the first obtaining submodule is used for obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device;
the first generation submodule is used for amplifying the pose feature to generate target three-dimensional head portrait information under the condition that the change value of the pose feature is detected to be greater than or equal to a preset threshold value;
and the second generation submodule is used for generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
Optionally, the pose features include expressive pose features and/or limb pose features.
The first display module 403 is configured to display the three-dimensional dynamic avatar in a preset interactive interface.
Optionally, the preset interaction interface includes a location sharing interface, and the first display module 403 includes:
the second obtaining submodule is used for obtaining real-time position information of the terminal and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and the first display sub-module is used for displaying the three-dimensional dynamic head portrait at the real-time position.
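A minimal sketch of this location-sharing display, under the assumption that the interface maps the terminal's latitude/longitude into pixel coordinates with a simple equirectangular projection; the projection and all names here are illustrative, not the claimed implementation.

```python
def to_screen(lat, lon, width, height):
    """Map a latitude/longitude pair onto pixel coordinates of the
    position-sharing interface (simple equirectangular projection)."""
    x = (lon + 180.0) / 360.0 * width
    y = (90.0 - lat) / 180.0 * height
    return round(x), round(y)

def place_avatar(avatar_id, lat, lon, width=800, height=400):
    """Return a draw instruction anchoring the three-dimensional dynamic
    avatar at the terminal's real-time position on the shared map."""
    x, y = to_screen(lat, lon, width, height)
    return {"avatar": avatar_id, "x": x, "y": y}
```

As the terminal's real-time position updates, `place_avatar` would be re-invoked so the dynamic avatar tracks the moving vehicle in the sharing interface.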
Optionally, the preset interaction interface includes a social software interface, and the first display module 403 includes:
and the second display sub-module is used for acquiring chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic head portrait at a position corresponding to the chat text information or the chat voice information.
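The social-software display can be sketched as follows: each chat entry (text or voice) carries the sender's dynamic avatar at the corresponding position. The message model is an illustrative assumption, not the patented data format.

```python
def render_message(sender_avatar, text=None, voice=None):
    """Attach the sender's dynamic avatar to one chat entry; exactly one of
    text/voice is expected, mirroring the 'text or voice' alternative above."""
    if (text is None) == (voice is None):
        raise ValueError("provide exactly one of text or voice")
    if text is not None:
        body = {"kind": "text", "content": text}
    else:
        body = {"kind": "voice", "content": voice}
    return {"avatar": sender_avatar, "message": body}
```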
Optionally, the terminal is a vehicle-mounted multimedia host, and the system further includes:
the third generation module is used for generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user under the condition that the first three-dimensional head portrait information of the target user is acquired from the automobile remote server;
the fourth generation module is used for receiving the second three-dimensional head portrait information of the target user and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and the second display module is used for displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
Optionally, the system further includes:
the determining module is used for determining target expression information, target character information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the prestored three-dimensional head portrait information, expression information, character information and voice information;
and the third display module is used for displaying the target expression information, the target character information and the target voice information in the preset interactive interface.
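The determining module's lookup can be sketched as a pre-stored correspondence table from recognized target head portrait information to expression, text, and voice entries; the keys and values below are invented placeholders.

```python
# Hypothetical pre-stored correspondence table (placeholder contents).
CORRESPONDENCE = {
    "big_smile": {"expression": "laughing_sticker", "text": "Haha!", "voice": "laugh.wav"},
    "surprise":  {"expression": "shock_sticker",    "text": "Wow!",  "voice": "wow.wav"},
}

def lookup_push_content(avatar_key):
    """Return (target expression, target text, target voice) for the target
    head portrait information, or None when no correspondence is stored."""
    entry = CORRESPONDENCE.get(avatar_key)
    if entry is None:
        return None
    return entry["expression"], entry["text"], entry["voice"]
```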
Optionally, the system further includes:
the display module is used for displaying the initial three-dimensional head portrait;
a fifth generating module, configured to generate a target initial three-dimensional avatar according to the initial three-dimensional avatar and modification operation information when modification operation information for the initial three-dimensional avatar is received;
the second generating module 402, including:
the third generation submodule is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the first display module 403 includes:
and the third display sub-module is used for displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
In summary, the system for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention is applied to a terminal including a three-dimensional camera device and operates as follows: when an acquisition instruction is received, first three-dimensional head portrait information of a user is acquired through the three-dimensional camera device, and an initial three-dimensional head portrait of the user is generated according to the first three-dimensional head portrait information; second three-dimensional head portrait information of the user is acquired through the three-dimensional camera device, and the three-dimensional dynamic head portrait is generated according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and the three-dimensional dynamic head portrait is displayed in a preset interactive interface. In the embodiment of the invention, the terminal including the three-dimensional camera device can generate the initial three-dimensional head portrait of the user from the collected first three-dimensional head portrait information, and can then acquire the second three-dimensional head portrait information of the user, continuously collecting the user's expression and body movements over a period of time. From the second three-dimensional head portrait information and the initial three-dimensional head portrait, a three-dimensional dynamic head portrait representing the user's expression and body movements is generated. Therefore, when the user socializes or shares a location with friends, the users can communicate through the three-dimensional dynamic head portraits displayed in the preset interactive interface, which improves both the interaction among users and the user experience.
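The summarized flow can be sketched as a tiny pipeline, under the simplifying assumption that a head portrait is just a dictionary of features; all names are illustrative, not the patented implementation.

```python
class AvatarPipeline:
    """Sketch: first capture -> initial avatar; each later capture is
    combined with that baseline to yield one dynamic-avatar frame."""

    def __init__(self):
        self.initial = None

    def build_initial(self, first_info):
        """First three-dimensional head portrait information -> initial avatar."""
        self.initial = dict(first_info)
        return self.initial

    def dynamic_frame(self, second_info):
        """Second head portrait information + initial avatar -> dynamic frame."""
        if self.initial is None:
            raise RuntimeError("initial avatar not built yet")
        frame = dict(self.initial)
        frame.update(second_info)  # live pose/expression overrides the baseline
        return frame
```

Displaying the sequence of frames returned by `dynamic_frame` in the preset interactive interface corresponds to the final display step.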
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (18)

1. A method for generating a three-dimensional dynamic head portrait, which is applied to a terminal comprising a three-dimensional camera device, is characterized by comprising the following steps:
when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information;
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait;
and displaying the three-dimensional dynamic head portrait in a preset interactive interface.
2. The method of claim 1, wherein the second three-dimensional avatar information includes a pose feature;
the step of acquiring, by the three-dimensional camera device, second three-dimensional avatar information of the user, and generating the three-dimensional dynamic avatar according to the second three-dimensional avatar information and the initial three-dimensional avatar includes:
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device;
under the condition that the change value of the pose feature is detected to be greater than or equal to a preset threshold value, carrying out amplification processing on the pose feature to generate target three-dimensional head portrait information;
and generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
3. The method of claim 2, wherein the pose features comprise expressive pose features and/or limb pose features.
4. The method of claim 1, wherein the predetermined interactive interface comprises a location sharing interface;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
acquiring real-time position information of the terminal, and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and displaying the three-dimensional dynamic head portrait at the real-time position.
5. The method of claim 1, wherein the predetermined interactive interface comprises a social software interface;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
the method comprises the steps of obtaining chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic avatar at a position corresponding to the chat text information or the chat voice information.
6. The method according to claim 1, wherein the terminal is an in-vehicle multimedia host;
after the step of displaying the three-dimensional dynamic head portrait in the preset interactive interface, the method further comprises:
under the condition that first three-dimensional head portrait information of a target user is acquired from an automobile remote server, generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user;
receiving second three-dimensional head portrait information of the target user, and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
7. The method according to claim 2, wherein after the step of performing amplification processing on the pose feature to generate target three-dimensional head portrait information when it is detected that the change value of the pose feature is greater than or equal to a preset threshold, the method further comprises:
determining target expression information, target character information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the pre-stored three-dimensional head portrait information, expression information, character information and voice information;
and displaying the target expression information, the target character information and the target voice information in the preset interactive interface.
8. The method according to claim 1, wherein after the step of acquiring first three-dimensional avatar information of a user by the three-dimensional camera upon receiving an acquisition instruction, and generating an initial three-dimensional avatar of the user according to the first three-dimensional avatar information, the method further comprises:
displaying the initial three-dimensional head portrait;
under the condition that modification operation information for the initial three-dimensional head portrait is received, generating a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and the modification operation information;
the step of acquiring, by the three-dimensional camera device, second three-dimensional avatar information of the user, and generating the three-dimensional dynamic avatar according to the second three-dimensional avatar information and the initial three-dimensional avatar includes:
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
and displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
9. The method according to claim 1, wherein the step of acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device when receiving the acquisition instruction, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information comprises:
when the acquisition instruction is received, generating acquisition prompt information for the first three-dimensional head portrait information and displaying the acquisition prompt information;
and acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
10. A system for generating a three-dimensional dynamic head portrait, applied to a terminal including a three-dimensional camera device, the system comprising:
the first generation module is used for acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device when receiving an acquisition instruction, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information;
the second generation module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait;
and the first display module is used for displaying the three-dimensional dynamic head portrait in a preset interactive interface.
11. The system of claim 10, wherein the second three-dimensional avatar information includes a pose feature;
the second generation module includes:
the first obtaining submodule is used for obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device;
the first generation submodule is used for amplifying the pose feature to generate target three-dimensional head portrait information under the condition that the change value of the pose feature is detected to be greater than or equal to a preset threshold value;
and the second generation submodule is used for generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
12. The system of claim 11, wherein the pose features comprise expressive pose features and/or limb pose features.
13. The system of claim 10, wherein the predetermined interactive interface comprises a location sharing interface;
the first display module, comprising:
the second obtaining submodule is used for obtaining real-time position information of the terminal and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and the first display sub-module is used for displaying the three-dimensional dynamic head portrait at the real-time position.
14. The system of claim 10, wherein the predetermined interactive interface comprises a social software interface;
the first display module, comprising:
and the second display sub-module is used for acquiring chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic head portrait at a position corresponding to the chat text information or the chat voice information.
15. The system according to claim 10, wherein the terminal is an in-vehicle multimedia host;
the system further comprises:
the third generation module is used for generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user under the condition that the first three-dimensional head portrait information of the target user is acquired from the automobile remote server;
the fourth generation module is used for receiving the second three-dimensional head portrait information of the target user and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and the second display module is used for displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
16. The system of claim 11, further comprising:
the determining module is used for determining target expression information, target character information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the prestored three-dimensional head portrait information, expression information, character information and voice information;
and the third display module is used for displaying the target expression information, the target character information and the target voice information in the preset interactive interface.
17. The system of claim 10, further comprising:
the display module is used for displaying the initial three-dimensional head portrait;
a fifth generating module, configured to generate a target initial three-dimensional avatar according to the initial three-dimensional avatar and modification operation information when modification operation information for the initial three-dimensional avatar is received;
the second generation module includes:
the third generation submodule is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the first display module, comprising:
and the third display sub-module is used for displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
18. The system of claim 10, wherein the first generation module comprises:
the fourth generation submodule is used for generating and displaying acquisition prompt information for the first three-dimensional head portrait information when the acquisition instruction is received;
and the fifth generation submodule is used for acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
CN201911419494.6A 2019-12-31 2019-12-31 Three-dimensional dynamic head portrait generation method and system Active CN111105494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911419494.6A CN111105494B (en) 2019-12-31 2019-12-31 Three-dimensional dynamic head portrait generation method and system

Publications (2)

Publication Number Publication Date
CN111105494A true CN111105494A (en) 2020-05-05
CN111105494B CN111105494B (en) 2023-10-24

Family

ID=70425948


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763636A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Method for tracing position and pose of 3D human face in video sequence
CN103886632A (en) * 2014-01-06 2014-06-25 宇龙计算机通信科技(深圳)有限公司 Method for generating user expression head portrait and communication terminal
CN105704419A (en) * 2014-11-27 2016-06-22 程超 Method for human-human interaction based on adjustable template profile photos
WO2017092196A1 (en) * 2015-12-01 2017-06-08 深圳奥比中光科技有限公司 Method and apparatus for generating three-dimensional animation
CN107480614A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Motion management method, apparatus and terminal device
CN107705341A (en) * 2016-08-08 2018-02-16 创奇思科研有限公司 The method and its device of user's expression head portrait generation
CN109151540A (en) * 2017-06-28 2019-01-04 武汉斗鱼网络科技有限公司 The interaction processing method and device of video image
CN109671141A (en) * 2018-11-21 2019-04-23 深圳市腾讯信息技术有限公司 The rendering method and device of image, storage medium, electronic device
CN110099159A (en) * 2018-01-29 2019-08-06 优酷网络技术(北京)有限公司 A kind of methods of exhibiting and client of chat interface
CN110298319A (en) * 2019-07-01 2019-10-01 北京字节跳动网络技术有限公司 Image composition method and device
CN110520901A (en) * 2017-05-16 2019-11-29 苹果公司 Emoticon is recorded and is sent



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant