CN111105494B - Three-dimensional dynamic head portrait generation method and system - Google Patents


Publication number
CN111105494B
Authority
CN (China)
Prior art keywords
head portrait, dimensional, information, user, target
Legal status
Active (the status listed is an assumption, not a legal conclusion)
Application number
CN201911419494.6A
Other languages
Chinese (zh)
Other versions
CN111105494A
Inventor
于树雷
王仕超
姜小勇
王萌
Current Assignee
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Application filed by Great Wall Motor Co Ltd
Priority to CN201911419494.6A
Publication of CN111105494A
Application granted
Publication of CN111105494B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2200/24 Indexing scheme involving graphical user interfaces [GUIs]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a system for generating a three-dimensional dynamic head portrait, applied to a terminal that includes a three-dimensional camera device. The method comprises the following steps: acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device and generating an initial three-dimensional head portrait of the user; acquiring second three-dimensional head portrait information of the user and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and displaying the three-dimensional dynamic head portrait in a preset interactive interface. By collecting the second three-dimensional head portrait information, the terminal can continuously capture the user's expression and limb dynamics over a period of time, and can therefore generate, from the second three-dimensional head portrait information and the initial three-dimensional head portrait, a three-dimensional dynamic head portrait that represents this dynamic information. Users can then communicate through the three-dimensional dynamic head portraits displayed in the preset interactive interface, which improves interaction between users.

Description

Three-dimensional dynamic head portrait generation method and system
Technical Field
The invention relates to the technical field of electronics, in particular to a method and a system for generating a three-dimensional dynamic head portrait.
Background
With the development of intelligent terminal technology, intelligent terminal devices have become indispensable tools in daily life, and more and more people use them for operations such as social interaction and position sharing.
When performing social interaction, position sharing, and similar operations on existing intelligent terminal devices, a user can set a head portrait representing himself or herself in the social or position-sharing interface according to personal preference. The head portrait may be a static picture chosen by the user or, if the device supports a three-dimensional head portrait mode, a three-dimensional head portrait selected from the device's three-dimensional head portrait database. When the user then interacts socially or shares a position with friends, the interface can display not only the user's name or nickname but also the user's head portrait, adding interest to the interaction.
However, in the current scheme, the head portrait a user sets on the intelligent terminal device is only a fixed static picture or a preset fixed three-dimensional head portrait, and serves merely to identify the user. When users need to exchange information, they can communicate with a target user only by sending voice or text messages, so interactivity between users is poor and the user experience suffers.
Disclosure of Invention
In view of the above, the present invention aims to provide a method and a system for generating a three-dimensional dynamic head portrait, so as to solve the prior-art problems that the head portrait a user sets on an intelligent terminal device can only be a fixed static picture or a preset fixed three-dimensional head portrait, that users therefore cannot exchange information through their head portraits, that interactivity between users is poor, and that the user experience is low.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
a method for generating a three-dimensional dynamic head portrait, which is applied to a terminal comprising a three-dimensional camera device, the method comprising:
when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information;
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait;
and displaying the three-dimensional dynamic head portrait in a preset interactive interface.
Further, the second three-dimensional head portrait information includes gesture features;
the step of obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait comprises the following steps:
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device;
amplifying the gesture features under the condition that the change value of the gesture features is detected to be larger than or equal to a preset threshold value, and generating target three-dimensional head portrait information;
and generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
Further, the gesture features include expression gesture features and/or limb gesture features.
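As an illustration of the amplification step above, the following sketch exaggerates only those gesture-feature changes whose magnitude reaches a preset threshold, leaving smaller changes untouched. The feature encoding, threshold value, and gain factor are all illustrative assumptions; the patent does not specify them.

```python
from typing import List

# Illustrative constants; the patent does not give concrete values.
CHANGE_THRESHOLD = 0.2  # minimum feature change that triggers amplification
GAIN = 1.5              # amplification factor applied to qualifying changes

def amplify_gesture(baseline: List[float], current: List[float]) -> List[float]:
    """Return target gesture features: changes >= threshold are exaggerated."""
    target = []
    for base, cur in zip(baseline, current):
        delta = cur - base
        if abs(delta) >= CHANGE_THRESHOLD:
            target.append(base + GAIN * delta)  # amplify the detected change
        else:
            target.append(cur)                  # keep small changes unchanged
    return target
```

The amplified feature vector would then serve as the "target three-dimensional head portrait information" from which the dynamic head portrait is generated.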
Further, the preset interactive interface comprises a position sharing interface;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
acquiring real-time position information of the terminal, and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and displaying the three-dimensional dynamic head portrait at the real-time position.
Further, the preset interactive interface comprises a social software interface;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
and acquiring chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic head portrait at a position corresponding to the chat text information or the chat voice information.
Further, the terminal is a vehicle-mounted multimedia host;
after the step of displaying the three-dimensional dynamic head portrait in the preset interactive interface, the method further comprises:
under the condition that first three-dimensional head portrait information of a target user is obtained from an automobile remote server, generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user;
receiving second three-dimensional head portrait information of the target user, and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
Further, after the step of generating the target three-dimensional head portrait information by amplifying the gesture feature when the detected change value of the gesture feature is greater than or equal to the preset threshold, the method further includes:
determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the pre-stored three-dimensional head portrait information, expression information, text information and voice information;
and displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
Further, after the step of acquiring the first three-dimensional head portrait information of the user through the three-dimensional camera device and generating the initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information when the acquisition instruction is received, the method further includes:
displaying the initial three-dimensional head portrait;
generating a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and the modification operation information under the condition that the modification operation information aiming at the initial three-dimensional head portrait is received;
the step of obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait comprises the following steps:
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
and displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
Further, when receiving the acquisition instruction, the step of acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information includes:
when the acquisition instruction is received, generating acquisition prompt information aiming at the first three-dimensional head portrait information and displaying the acquisition prompt information;
acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
A system for generating a three-dimensional dynamic head portrait, applied to a terminal including a three-dimensional image pickup device, the system comprising:
the first generation module is used for acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device when receiving an acquisition instruction, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information;
the second generation module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait;
the first display module is used for displaying the three-dimensional dynamic head portrait in a preset interactive interface.
Further, the second three-dimensional head portrait information includes gesture features;
the second generation module includes:
the first acquisition sub-module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device;
the first generation sub-module is used for amplifying the gesture features to generate target three-dimensional head portrait information under the condition that the change value of the gesture features is detected to be larger than or equal to a preset threshold value;
and the second generation sub-module is used for generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
Further, the gesture features include expression gesture features and/or limb gesture features.
Further, the preset interactive interface comprises a position sharing interface;
the first display module includes:
the second acquisition sub-module is used for acquiring real-time position information of the terminal and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and the first display sub-module is used for displaying the three-dimensional dynamic head portrait at the real-time position.
Further, the preset interactive interface comprises a social software interface;
the first display module includes:
the second display sub-module is used for acquiring the chat text information or the chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic head portrait at the position corresponding to the chat text information or the chat voice information.
Further, the terminal is a vehicle-mounted multimedia host;
the system further comprises:
the third generation module is used for generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user under the condition that the first three-dimensional head portrait information of the target user is acquired from the automobile remote server;
the fourth generation module is used for receiving the second three-dimensional head portrait information of the target user and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and the second display module is used for displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
Further, the system further comprises:
the determining module is used for determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the pre-stored three-dimensional head portrait information, the pre-stored expression information, the pre-stored text information and the pre-stored voice information;
and the third display module is used for displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
Further, the system further comprises:
the display module is used for displaying the initial three-dimensional head portrait;
a fifth generation module, configured to generate a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and modification operation information when the modification operation information for the initial three-dimensional head portrait is received;
the second generation module includes:
the third generation sub-module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the first display module includes:
and the third display sub-module is used for displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
Further, the first generating module includes:
the fourth generation sub-module is used for generating and displaying acquisition prompt information aiming at the first three-dimensional head portrait information when the acquisition instruction is received;
and a fifth generation sub-module, configured to acquire first three-dimensional head portrait information of the user through the three-dimensional imaging device, and generate an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
Compared with the prior art, the method and the system for generating the three-dimensional dynamic head portrait have the following advantages:
the invention provides a method and a system for generating a three-dimensional dynamic head portrait, which are applied to a terminal comprising a three-dimensional camera device, and comprise the following steps: when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through a three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information; acquiring second three-dimensional head portrait information of a user through a three-dimensional camera device, and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and displaying the three-dimensional dynamic head portrait in a preset interactive interface. According to the terminal comprising the three-dimensional camera device, an initial three-dimensional head portrait of a user can be generated according to the acquired first three-dimensional head portrait information, the second three-dimensional head portrait information of the user can be acquired, the expression and limb dynamic information of the user in a period of time can be continuously acquired, and further, a three-dimensional dynamic head portrait capable of representing the expression and limb dynamic information of the user can be generated according to the second three-dimensional head portrait information of the user and the initial three-dimensional head portrait, so that when the user performs social interaction or position sharing with friends, the three-dimensional dynamic head portrait displayed in a preset interaction interface can be used for communicating, the interaction among the users is improved, and the use experience of the user is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention. In the drawings:
fig. 1 is a flow chart of steps of a method for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of another method for generating a three-dimensional dynamic avatar according to an embodiment of the present invention;
FIG. 3 is a flowchart of the interactive steps of a method for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention;
fig. 4 is a flowchart of a video data transmission according to an embodiment of the present invention;
fig. 5 is a block diagram of a system for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention will be described in detail below with reference to the drawings in connection with embodiments.
Referring to fig. 1, a flowchart of steps of a method for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention is shown, where the method is applied to a terminal including a three-dimensional image capturing device.
Step 101, when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
In this step, when the terminal receives an acquisition instruction for collecting first three-dimensional head portrait information, it acquires the first three-dimensional head portrait information of the user through the three-dimensional camera device and generates, from that information, a three-dimensional head portrait representing the user's appearance characteristics, which serves as the user's initial three-dimensional head portrait.
In the embodiment of the invention, the terminal may be an intelligent mobile terminal such as a mobile phone, notebook computer, tablet computer, or vehicle-mounted computer.
The three-dimensional (3D) camera device may be a 3D camera installed in the terminal. A 3D camera can detect distance information in the shooting space, that is, the exact distance between each point in the image and the camera; combining this distance with each point's plane coordinates in the two-dimensional image yields the three-dimensional space coordinates of every point in the image, and thereby the three-dimensional image information of the photographed object. After image processing, a three-dimensional image of the photographed object is obtained.
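The back-projection just described, recovering a point's three-dimensional camera-space coordinates from its plane coordinates and its measured distance, can be sketched with a standard pinhole camera model. The intrinsic parameter values below (fx, fy, cx, cy) are invented examples, not values from the patent.

```python
def pixel_to_3d(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) at the given depth into camera-space coordinates.

    fx, fy are focal lengths in pixels and (cx, cy) the principal point;
    the defaults are illustrative assumptions, not calibrated values.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every pixel of a depth frame yields the point cloud from which the three-dimensional head portrait can be reconstructed.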
Specifically, the 3D camera may be a 3D structured-light camera. A 3D structured-light camera uses structured-light equipment to simultaneously obtain color, infrared, and depth pictures of a scene, detects and analyzes faces in the scene to form a 3D face image, and creates a face model carrying face depth information; it can therefore achieve good recognition speed, recognition accuracy, and security.
In the embodiment of the present invention, a 3D dynamic head portrait mode option may be provided in the terminal. If this option is switched on, the head portrait displayed on the preset interactive interface is a 3D dynamic head portrait, and the corresponding acquisition instruction may be an instruction to collect the first three-dimensional head portrait information, generated once the terminal detects that the user has switched the option on.
It should be noted that the initial three-dimensional head portrait of the user may also be a specific three-dimensional image selected by the user from a preset three-dimensional head portrait resource database. This database stores three-dimensional head portraits representing users' appearance characteristics as well as three-dimensional head portraits of various cartoon characters and animals; the head portraits representing a user's appearance may be generated in advance by any three-dimensional camera device and stored in the database. The user can select any of these images, according to personal preference, as the initial three-dimensional head portrait used to generate his or her 3D dynamic head portrait, which improves the user experience.
Step 102, obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait.
In this step, the terminal has already generated a three-dimensional static head portrait characterizing the user's appearance, that is, the user's initial three-dimensional head portrait. The terminal may then continuously acquire second three-dimensional head portrait information of the user through the three-dimensional camera device; this second information characterizes the user's dynamic information.
Further, an image-processing module in the terminal processes the second three-dimensional head portrait information on the basis of the user's initial three-dimensional head portrait, thereby generating the user's three-dimensional dynamic head portrait.
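The patent does not specify the image-processing algorithm used here. One conventional way to drive a static base mesh with per-frame tracking data is blendshape-style vertex offsetting; the sketch below, with invented vertex data, is only an illustration of that idea.

```python
def animate(base_vertices, deltas, weight):
    """Offset each base vertex by weight * delta (a single expression blendshape).

    base_vertices: vertices of the initial (static) head portrait mesh.
    deltas: per-vertex displacement of one expression shape.
    weight: per-frame coefficient derived from the tracked second information.
    """
    return [
        (bx + weight * dx, by + weight * dy, bz + weight * dz)
        for (bx, by, bz), (dx, dy, dz) in zip(base_vertices, deltas)
    ]
```

Per frame, a weight derived from the second head portrait information would be applied to the initial mesh, so the displayed head portrait follows the user's movements.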
It should be noted that, because the three-dimensional dynamic head portrait is generated from information representing the user's movements, it can follow the user's actions and change accordingly. The user's head portrait is therefore no longer a fixed static picture or a preset fixed three-dimensional head portrait, but a three-dimensional dynamic head portrait that represents the user's expressions and limb movements. When several users communicate, each head portrait changes with its user's actions, so a user can convey information to a target user through changes in expression and limb motion without entering text or voice information on the terminal, which increases both the interest and the safety of the communication process.
For example, communicating with a target user by entering text or voice information on a terminal while driving compromises driving safety; when the terminal is in 3D dynamic head portrait mode, the user can instead communicate with the target user through expressions and limb motions.
Optionally, the second three-dimensional head portrait information of the user may be collected continuously at a preset interval. The preset interval may be a fixed value determined by the terminal's network environment, for example 0.1 ms or 0.5 ms: on a good network the interval is set smaller, so that the generated three-dimensional dynamic head portrait contains more of the user's expression and limb dynamics; on a poor network the interval is set larger, so that the generated three-dimensional dynamic head portrait displays more smoothly in the interface.
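The interval selection described above can be sketched as a simple mapping from measured network quality to a capture interval. The bandwidth cutoffs below are illustrative assumptions; the 0.1 ms and 0.5 ms endpoints follow the examples given in the text, and the 0.3 ms middle step is invented.

```python
def capture_interval_ms(bandwidth_mbps: float) -> float:
    """Pick a capture interval from network quality (cutoffs are assumptions)."""
    if bandwidth_mbps >= 10.0:  # good network: capture more often
        return 0.1
    if bandwidth_mbps >= 2.0:   # moderate network
        return 0.3
    return 0.5                  # poor network: capture less often
```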
Step 103, displaying the three-dimensional dynamic head portrait in a preset interactive interface.
In this step, the three-dimensional dynamic head portrait of the user generated in step 102 is displayed in a preset interactive interface.
In summary, the method for generating a three-dimensional dynamic head portrait provided by the embodiment of the present invention is applied to a terminal that includes a three-dimensional camera device and comprises: when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information; acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and displaying the three-dimensional dynamic head portrait in a preset interactive interface. The terminal can generate the user's initial three-dimensional head portrait from the collected first three-dimensional head portrait information and, by collecting second three-dimensional head portrait information, continuously capture the user's expression and limb dynamics over a period of time. It can then generate, from the second three-dimensional head portrait information and the initial three-dimensional head portrait, a three-dimensional dynamic head portrait representing those dynamics, so that when the user interacts socially or shares a position with friends, communication can take place through the three-dimensional dynamic head portraits displayed in the preset interactive interface, improving both interactivity between users and the user experience.
Referring to fig. 2, a flowchart of steps of another method for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention is shown.
Step 201, when the acquisition instruction is received, generating and displaying acquisition prompt information for the first three-dimensional head portrait information.
In this step, when the terminal receives an acquisition instruction for acquiring the first three-dimensional head portrait information, acquisition prompt information for the first three-dimensional head portrait information is generated and displayed.
For example, if the terminal is a vehicle-mounted multimedia host, an initial 3D head portrait recording button is provided in its display screen. When the vehicle-mounted multimedia host detects that the user has clicked this button, an acquisition instruction is generated. According to the acquisition instruction, text prompt information prompting the user to record the initial 3D head portrait is then generated and displayed in the display screen of the vehicle-mounted multimedia host; voice prompt information for prompting the user to record the initial 3D head portrait may also be generated and played through the vehicle-mounted speaker device, so that the user can accurately record the initial 3D head portrait according to the prompt information. The prompt information may be, for example, "please sit facing straight ahead", "please rotate your head left and right", "thank you for your cooperation, face recording is complete", or "face recording failed, please record again".
Step 202, acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
This step may refer to step 101, and will not be described herein.
After step 202, step 203 may be performed, or step 207 may be performed.
Step 203, displaying the initial three-dimensional head portrait.
In this step, the initial three-dimensional head portrait generated in step 202 is displayed on the display interface of the terminal.
Step 204, when modification operation information for the initial three-dimensional head portrait is received, generating a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and the modification operation information.
In this step, a modification option for the initial three-dimensional head portrait may be provided in the interface of the terminal that displays the initial three-dimensional head portrait. Through this option the user can modify the initial three-dimensional head portrait according to personal preference, thereby generating modification operation information for the initial three-dimensional head portrait.
Further, when the terminal receives the user's modification operation information for the initial three-dimensional head portrait, it generates a target initial three-dimensional head portrait that conforms to the user's personal preferences according to the initial three-dimensional head portrait and the modification operation information.
Step 205, obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait.
In this step, the terminal has generated a three-dimensional static head portrait characterizing the appearance characteristics of the user and conforming to the personal preferences of the user, that is, the target initial three-dimensional head portrait of the user, and further, the terminal can continuously acquire the second three-dimensional head portrait information of the user through the three-dimensional camera device, and the second three-dimensional head portrait information can characterize the dynamic information of the user.
Further, the image processing module in the terminal performs image processing on the second three-dimensional head portrait information based on the target initial three-dimensional head portrait of the user, so that the target three-dimensional dynamic head portrait of the user can be generated.
It should be noted that, because the target three-dimensional dynamic head portrait is generated from dynamic information representing the user, it changes to follow the user's actions. The user's head portrait is therefore no longer a fixed still picture or a preset fixed three-dimensional head portrait, but a three-dimensional dynamic head portrait that represents the user's expression and limb dynamic information. When several users are communicating, each user's head portrait changes along with that user's actions, so a user can convey information to a target user through changes of expression and limb actions without entering text information or voice information into the terminal, which increases both the interest and the safety of the communication process.
Step 206, displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
In this step, the three-dimensional dynamic head portrait of the user generated in step 205 is displayed in a preset interactive interface.
Step 207, obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait.
Optionally, the second three-dimensional head portrait information includes gesture features, and the gesture features include expression gesture features and/or limb gesture features.
The gesture features may be the relationship between the appearance and configuration of the human body obtained by a pose estimation method. Specifically, the method divides the human body into several parts, establishes an appearance model for each part and a positional relationship model between the parts, and proposes an energy function for evaluating how well a candidate position in the image matches the part template. By minimizing the deformation degree, the most probable human body configuration is determined, thereby obtaining a portrait model of the person in the image.
The gesture features may include expression gesture features and limb gesture features; that is, the dynamic information of the person's facial expressions and limb actions in the image needs to be detected in order to determine the person's facial expression changes and limb actions. For example, by acquiring the user's expression gesture features through the 3D camera device, it can be determined whether the user is laughing, crying or depressed at that moment, and by acquiring the user's limb gesture features, it can be determined whether the user is expressing agreement through a nodding action or a negative attitude through a head-shaking action.
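The part-based pose estimation described above (an appearance model per part plus positional relationship models between connected parts, matched by minimizing an energy function) can be sketched as follows; the part names, candidate locations and cost scores are illustrative assumptions, not a real detector.

```python
# Minimal sketch of part-based ("pictorial structures") matching:
# each part has an appearance cost at a candidate location, plus a
# deformation cost between connected parts. Scores are stand-ins.

def appearance_cost(part, location, scores):
    """Cost of placing `part` at `location` (lower = better match)."""
    return scores[part].get(location, 10.0)

def deformation_cost(loc_a, loc_b, expected_offset):
    """Penalty for deviating from the expected relative position."""
    dx = (loc_b[0] - loc_a[0]) - expected_offset[0]
    dy = (loc_b[1] - loc_a[1]) - expected_offset[1]
    return dx * dx + dy * dy

def best_configuration(scores, candidates, expected_offset):
    """Exhaustively minimise total energy for a two-part chain."""
    best, best_energy = None, float("inf")
    for head in candidates["head"]:
        for torso in candidates["torso"]:
            energy = (appearance_cost("head", head, scores)
                      + appearance_cost("torso", torso, scores)
                      + deformation_cost(head, torso, expected_offset))
            if energy < best_energy:
                best, best_energy = (head, torso), energy
    return best, best_energy
```

Real pose estimators exploit the tree structure of the body model to avoid this exhaustive search, but the energy being minimized has the same two-term form.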
Optionally, step 207 may specifically include:
Sub-step 2071, acquiring the second three-dimensional head portrait information of the user through the three-dimensional camera device.
In this step, the terminal has generated a three-dimensional static head portrait characterizing the appearance of the user, that is, an initial three-dimensional head portrait of the user, and further, the terminal may continuously acquire second three-dimensional head portrait information of the user through the three-dimensional image capturing device, where the second three-dimensional head portrait information may characterize dynamic information of the user.
Optionally, the second three-dimensional head portrait information of the user may be acquired through the three-dimensional camera device continuously, once every preset time period. The preset time period may be a fixed value determined according to the network environment of the terminal, for example 0.1 ms or 0.5 ms: if the network environment of the terminal is good, the preset time period is set smaller, so that the generated three-dimensional dynamic head portrait contains more dynamic information about the user's expressions and limbs; if the network environment of the terminal is poor, the preset time period is set larger, so that the generated three-dimensional dynamic head portrait is displayed more smoothly in the interface.
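The interval-based acquisition described above can be sketched as follows; the bandwidth threshold and the mapping from network quality to interval are assumptions for illustration, with the two interval values taken from the example in the text.

```python
# Sketch: pick the capture interval from network quality, then lay out
# the acquisition times for the second head-portrait frames.
# Threshold and interval values are illustrative assumptions.

def capture_interval(bandwidth_mbps, good_threshold=10.0,
                     fast_interval=0.1, slow_interval=0.5):
    """Shorter interval on a good network (richer dynamics),
    longer interval on a poor one (smoother playback)."""
    return fast_interval if bandwidth_mbps >= good_threshold else slow_interval

def capture_schedule(start, frames, interval):
    """Timestamps at which successive frames are acquired."""
    return [start + i * interval for i in range(frames)]
```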
Sub-step 2072, when the change value of the gesture feature is detected to be greater than or equal to a preset threshold value, amplifying the gesture feature to generate target three-dimensional head portrait information.
In this step, the gesture feature in the acquired second three-dimensional head portrait information can be detected and compared with the gesture feature obtained at the previous acquisition time, so as to determine the change value of the current gesture feature, that is, the degree to which the user's expression or limb motion has changed from the previous acquisition time to the current time. If the change value is greater than or equal to the preset threshold value, the user's expression or limb motion has changed considerably. So that the three-dimensional head portrait information generated from the second three-dimensional head portrait information represents this change more obviously, the gesture feature in the second three-dimensional head portrait information is amplified to obtain the target three-dimensional head portrait information. This ensures that the expression or limb motion change of the three-dimensional head portrait in the target three-dimensional dynamic head portrait remains consistent with the user's change while being more exaggerated than the user's corresponding change, so that a target user communicating with the user can clearly receive the user's expression or limb motion change, and thereby the user's emotion or intended meaning, by observing the target three-dimensional dynamic head portrait.
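Sub-step 2072 as described above (amplifying a gesture feature only when its frame-to-frame change reaches the threshold) can be sketched as follows; the feature vectors, the threshold and the gain value are illustrative assumptions.

```python
# Sketch of sub-step 2072: exaggerate the change of a gesture feature
# vector when its magnitude crosses the preset threshold; otherwise
# pass the feature through unchanged. Threshold/gain are assumptions.

def amplify_gesture(prev, curr, threshold=0.3, gain=1.5):
    """Return the target feature vector; amplified only when the
    change magnitude is >= threshold."""
    delta = [c - p for p, c in zip(prev, curr)]
    magnitude = sum(d * d for d in delta) ** 0.5
    if magnitude >= threshold:
        # Exaggerate the change relative to the previous frame.
        return [p + gain * d for p, d in zip(prev, delta)]
    return list(curr)
```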
Sub-step 2073, generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
In this step, the image processing module in the terminal performs image processing on the target three-dimensional head portrait information based on the initial three-dimensional head portrait of the user, so that a target three-dimensional dynamic head portrait of the user can be generated.
Optionally, in another implementation, step 207 may further include:
sub-step 2074, determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the pre-stored three-dimensional head portrait information, expression information, text information and voice information.
In this step, after the terminal detects that the change value of the gesture feature is greater than or equal to the preset threshold value, amplifies the gesture feature, and generates the target three-dimensional head portrait information, it can determine the target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the pre-stored correspondence among three-dimensional head portrait information, expression information, text information and voice information.
Specifically, when a large degree of expression or limb motion change of the user is detected, the gesture features in the second three-dimensional head portrait information are amplified so that the generated three-dimensional head portrait information represents the change more obviously, yielding target three-dimensional head portrait information whose expression or limb motion change is more exaggerated than the user's corresponding motion.
Meanwhile, in order to express the detected expression or limb motion change of the user more intuitively, the terminal can query the pre-stored correspondence among three-dimensional head portrait information, expression information, text information and voice information, and determine, from the target three-dimensional head portrait information, the target expression information, target text information and target voice information that represent the user's expression or limb motion change at that moment, so that the other party can receive the user's expression or limb motion change, or the intention the user wants to express, more clearly and definitely.
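The pre-stored correspondence described above can be sketched as a simple lookup table; the state keys and the voice file names are illustrative assumptions, while the "haha" text and laughter voice follow the example given in the text.

```python
# Sketch of sub-step 2074: a pre-stored table mapping a recognised
# head-portrait state to expression, text and voice payloads.
# Keys and file names are illustrative assumptions.

CORRESPONDENCE = {
    "big_laugh": {"expression": "laughing_face",
                  "text": "haha",
                  "voice": "laughter.wav"},
    "nod":       {"expression": "agree_face",
                  "text": "OK",
                  "voice": "agree.wav"},
}

def lookup_outputs(head_portrait_state):
    """Return (expression, text, voice) for a detected state, or None."""
    entry = CORRESPONDENCE.get(head_portrait_state)
    if entry is None:
        return None
    return entry["expression"], entry["text"], entry["voice"]
```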
Sub-step 2075, displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
In this step, the target expression information, target text information and target voice information that represent the user's expression change or limb motion change, determined in sub-step 2074, are displayed in the preset interactive interface.
For example, if it is detected that the user is laughing heartily at this time, the generated target expression information is a laughing expression, the target text information is "haha", and the target voice information is a piece of voice containing laughter.
Step 208, displaying the three-dimensional dynamic head portrait in a preset interactive interface.
In this step, the three-dimensional dynamic head portrait of the user generated in step 207 is displayed in a preset interactive interface.
Optionally, the preset interactive interface includes a location sharing interface, and step 208 may specifically include:
Sub-step 2081, obtaining real-time location information of the terminal, and determining the real-time location of the terminal in the location sharing interface according to the real-time location information.
In this step, the terminal can determine the real-time position coordinates where the terminal is located by means of a global positioning system (Global Positioning System, GPS), and determine the real-time position of the terminal in the map displayed on the position sharing interface in combination with the position sharing interface displayed on the terminal.
Sub-step 2082, displaying the three-dimensional dynamic head portrait at the real-time location.
In this step, the three-dimensional dynamic head portrait determined in step 207 is presented at the real-time position of the terminal determined in sub-step 2081 in the map displayed by the position sharing interface.
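Sub-steps 2081 and 2082 can be sketched as mapping a GPS fix into the pixel space of the map viewport where the head portrait is then drawn; the simple linear (equirectangular) mapping and the viewport fields are illustrative assumptions.

```python
# Sketch of sub-steps 2081/2082: convert a GPS fix into pixel
# coordinates on the map shown by the position-sharing interface.
# Viewport structure and the linear mapping are assumptions.

def gps_to_map_pixel(lat, lon, viewport):
    """Map (lat, lon) into the viewport's pixel space
    (origin at the top-left corner, y growing downwards)."""
    x = (lon - viewport["lon_min"]) / (viewport["lon_max"] - viewport["lon_min"])
    y = (viewport["lat_max"] - lat) / (viewport["lat_max"] - viewport["lat_min"])
    return (round(x * viewport["width"]), round(y * viewport["height"]))
```

A production map widget would use the map projection of its tile source rather than this linear mapping, but the idea of anchoring the dynamic head portrait at the computed pixel is the same.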
Therefore, when a user shares a location with friends or travels in a team with them through the terminal, the navigation interface in the terminal is no longer an abbreviated map with several coordinate points and picture head portraits, but shows 3D dynamic head portraits that move along with the users' actions. During location sharing and team travel, the user can interact with friends in real time and observe their actions and emotional changes in the navigation interface, so that although the user is not physically traveling with the friends, the effect is that of traveling together, making the trip full of fun.
Optionally, the preset interactive interface includes a social software interface, and step 208 may specifically include:
sub-step 2083, obtaining chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic head portrait at a position corresponding to the chat text information or the chat voice information.
In this step, the user inputs the chat text information or chat voice information to be sent through the chat information input module of the terminal. When the chat text information or chat voice information is obtained, it is displayed in the social software interface of the terminal, and the three-dimensional dynamic head portrait determined in step 207 is displayed at the position corresponding to the chat text information or chat voice information.
Therefore, when a user chats socially with friends through the terminal, the chat interface in the terminal no longer shows only chat text or voice information alongside a static head portrait set by the user, but a 3D dynamic head portrait that moves along with the user's actions. At the same time, when chatting with friends, the user can interact with them in real time through the 3D dynamic head portrait accompanied by various expression special effects.
Step 209, when the first three-dimensional head portrait information of the target user is obtained from the remote server of the automobile, generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user.
Optionally, the terminal is a vehicle-mounted multimedia Host (HUT), in step 208, in a multimedia interface of the HUT, a three-dimensional dynamic avatar of the user may be displayed, and at the same time, the HUT may further obtain first three-dimensional avatar information of the target user from a remote server (Telematics Service Provider, TSP) of the automobile, and generate an initial three-dimensional avatar of the target user according to the first three-dimensional avatar information of the target user.
Specifically, the first three-dimensional head portrait information of the target user can be acquired through another three-dimensional camera device and uploaded to the TSP cloud. The TSP cloud can transmit the first three-dimensional head portrait information of the target user to the HUT through a data transmission channel (Access Point Name, APN), and the HUT performs image processing on this head portrait information, thereby generating a three-dimensional head portrait representing the appearance characteristics of the target user, which serves as the initial three-dimensional head portrait of the target user.
Step 210, receiving second three-dimensional head portrait information of the target user, and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user.
In this step, the terminal has generated a three-dimensional static head portrait characterizing the appearance of the target user, that is, an initial three-dimensional head portrait of the target user, and further, continuously acquires second three-dimensional head portrait information of the target user, where the second three-dimensional head portrait information of the target user may characterize dynamic information of the target user.
Further, the image processing module in the terminal performs image processing on the second three-dimensional head portrait information of the target user based on the initial three-dimensional head portrait of the target user, so that a three-dimensional dynamic head portrait of the target user can be generated.
Optionally, the second three-dimensional head portrait information of the target user may be received continuously, once every preset time period. The preset time period may be a fixed value determined according to the network environment of the terminal, for example 0.1 ms or 0.5 ms: if the network environment of the terminal is good, the preset time period is set smaller, so that the generated three-dimensional dynamic head portrait of the target user contains more dynamic information about expressions and limbs; if the network environment of the terminal is poor, the preset time period is set larger, so that the generated three-dimensional dynamic head portrait of the target user is displayed more smoothly in the interface.
Step 211, displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
In this step, the three-dimensional dynamic head portrait of the target user generated in step 210 is displayed in a preset interactive interface.
Therefore, by observing the three-dimensional dynamic head portrait corresponding to the target user in the preset interactive interface of the terminal, the user can clearly and definitely receive the target user's expression or limb motion changes and learn the target user's emotion at that moment or the intention the target user wants to express.
In summary, the method for generating a three-dimensional dynamic head portrait provided by the embodiment of the present invention is applied to a terminal including a three-dimensional camera device, and includes: when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information; acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating a three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and displaying the three-dimensional dynamic head portrait in a preset interactive interface. The terminal including the three-dimensional camera device can thus generate an initial three-dimensional head portrait of the user according to the acquired first three-dimensional head portrait information, and, by continuously acquiring second three-dimensional head portrait information, capture the user's expression and limb dynamic information over a period of time. From the second three-dimensional head portrait information and the initial three-dimensional head portrait it can then generate a three-dimensional dynamic head portrait that represents the user's expression and limb dynamic information, so that when the user socializes or shares a location with friends, communication can take place through the three-dimensional dynamic head portrait displayed in the preset interactive interface, which improves the interactivity among users and the user experience.
On the basis of the embodiment, the embodiment of the invention also provides a method for generating the three-dimensional dynamic head portrait.
Referring to fig. 3, a flowchart of interaction steps of a method for generating a three-dimensional dynamic head portrait according to an embodiment of the present invention is shown, where the terminal includes a 3D camera and a vehicle HUT, and when the preset interaction interface is a position sharing interface or a team trip interface displayed in a HUT display interface during navigation of the vehicle, the method specifically includes:
in step 301, the vehicle starts, and the whole vehicle controller area network (Controller Area Network, CAN) wakes up.
In this step, if the vehicle starts, the whole vehicle CAN network is automatically awakened, and a wake-up signal is sent to the vehicle HUT system, so that the vehicle HUT system executes step 309, and at the same time, the 3D camera executes step 302.
Step 302, the 3D camera is started, and the system and software are loaded.
After receiving the signal of starting the vehicle, the 3D camera is started, and relevant systems and software for shooting the face information of the user and generating a 3D model of the face of the user and 3D dynamic head portrait information are loaded.
Step 303, whether the face function is enabled.
The 3D camera system determines whether the face function is enabled, if yes, step 304 is executed, if not, an ending step is executed, and the process of shooting the face information of the user through the 3D camera and generating a 3D model of the face of the user and 3D dynamic head portrait information is ended.
Step 304, whether a head is detected.
In this step, the 3D camera system detects whether the head of the user is detected by the image acquisition system; if so, step 305 is executed, and if not, step 304 is repeated to continue detecting the head.
Step 305, face recognition.
The 3D camera system starts the face recognition process after detecting the head of the user.
Step 306, whether the identification was successful.
In this step, the 3D camera system determines whether the face recognition process is successful, if so, step 307 is executed, and if not, step 305 is repeatedly executed.
Step 307, building a face 3D model.
In this step, the 3D camera may establish a 3D model characterizing the face features of the user through the face information of the user acquired in step 305.
Specifically, the 3D camera can detect the distance information of the shooting space and accurately determine the distance between each point in the image and the camera. Adding the plane coordinates of each point in the two-dimensional image then yields the three-dimensional space coordinates of every point in the image, giving the three-dimensional image information of the shot object; after image processing, the three-dimensional image of the shot object is obtained.
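The depth-plus-plane-coordinates reconstruction described above can be sketched with a pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) are illustrative assumptions.

```python
# Sketch: recover a point's 3D camera-frame coordinates from its 2D
# pixel coordinates (u, v) and its measured depth, using a pinhole
# camera model. Intrinsics below are illustrative assumptions.

def back_project(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Recover camera-frame (X, Y, Z) from pixel (u, v) and depth Z.
    fx/fy are focal lengths in pixels; (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every pixel of a depth frame yields the point cloud from which the face 3D model is built.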
Step 308, collecting 3D dynamic head portrait information.
In this step, the 3D camera may continuously collect face information of the user, obtain corresponding video data, and send the video data to the HUT.
After step 308, step 315 may be performed, or step 312 may be performed.
Referring to fig. 4, a flowchart of sending video data according to an embodiment of the present invention may include:
a1, the 3D camera informs the HUT to request to send video data.
In this step, after the 3D camera collects the corresponding video data, an instruction requesting to transmit the video data is first sent to the HUT to inform the HUT that the 3D camera has collected the video data, and may be transmitted to the HUT.
A2, the HUT informs the 3D camera that it can start sending video data through Ethernet.
In this step, after receiving the instruction requesting to transmit video data sent by the 3D camera, the HUT may generate and return an instruction allowing the 3D camera to transmit video data, to notify the 3D camera that it may start transmitting video data over Ethernet.
A3, the 3D camera sends video data to the HUT through the Ethernet.
In this step, after receiving the instruction sent by the HUT allowing video data to be transmitted, the 3D camera may transmit the video data to the HUT over Ethernet using the SOME/IP protocol.
After step A3, if the 3D camera has finished transmitting the video data, step A6 is executed, and if the 3D camera has not finished transmitting the video data, step A4 is executed.
A4, the 3D camera notifies the HUT that video data is being sent.
In this step, when the 3D camera is transmitting video data, the 3D camera notifies the HUT that video data is being transmitted.
A5, HUT displays the video.
In this step, since the 3D camera is transmitting video data, the HUT may display the received video information in its display screen.
A6, the 3D camera informs the HUT of the end of video transmission.
In this step, if the 3D camera has completed transmitting video data, the 3D camera notifies the HUT that video transmission is completed.
And A7, the HUT finishes video display.
In this step, the HUT ends the video display since the 3D camera has already transmitted the video data.
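The A1 to A7 notification flow above can be sketched as a small state machine; the message names are illustrative assumptions, and plain in-memory calls stand in for the SOME/IP-over-Ethernet transport.

```python
# Sketch of the A1-A7 flow: the camera requests permission, the HUT
# allows it, frames are sent with "sending" notifications, and an
# end notification stops the display. Message names are assumptions.

class HUT:
    def __init__(self):
        self.displaying = False
        self.frames = []

    def on_message(self, msg, payload=None):
        if msg == "REQUEST_SEND":        # A1 -> A2: allow transmission
            return "ALLOW_SEND"
        if msg == "SENDING":             # A4 -> A5: show incoming video
            self.displaying = True
            self.frames.append(payload)
        if msg == "SEND_END":            # A6 -> A7: end the display
            self.displaying = False
        return None

def camera_send(hut, frames):
    """Drive the A1-A7 sequence from the camera side."""
    if hut.on_message("REQUEST_SEND") != "ALLOW_SEND":
        return False
    for frame in frames:                 # A3/A4: transmit frame by frame
        hut.on_message("SENDING", frame)
    hut.on_message("SEND_END")
    return True
```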
In step 309, the HUT is started and the navigation system and software are loaded.
In this step, if the vehicle is started, the vehicle HUT also performs a start operation, and after the HUT system is started, if the navigation function is started by the user, the navigation system and software related to the navigation function are loaded.
Step 310, the receive data function is turned on.
Step 311, whether the 3D dynamic head portrait mode is turned on.
In this step, it is detected whether the 3D dynamic head portrait mode is turned on. If it is detected that the user has turned on the 3D dynamic head portrait mode through the display screen of the HUT system, step 312 is executed; if it is detected that the user has not turned on the 3D dynamic head portrait mode, step 318 is executed.
Step 312, the image video presentation function is initiated.
In this step, as it is detected that the user opens the 3D dynamic head portrait mode through the display screen of the HUT system, the image video display function is started, and video data corresponding to the face information of the user continuously collected by the 3D camera in step 308 is received.
In step 313, the digital signal processing (Digital Signal Processing, DSP) module processes the 3D dynamic head portrait information.
In this step, the DSP module in the HUT performs image processing on the received video data based on the face 3D model of the user established in step 307, so that a 3D dynamic head portrait of the user can be generated.
In step 314, the HUT display screen presents the 3D dynamic head portrait.
In this step, the HUT presents the generated 3D dynamic head portrait in the display screen of the HUT.
Step 315, whether a special expression is detected.
In this step, the 3D camera analyzes the collected 3D dynamic head portrait information, and determines whether a special expression is detected, if so, step 316 is executed, and if not, step 308 is repeatedly executed.
Step 316, head portrait enlargement processing.
In this step, the special expression detected in step 315 is amplified, and the amplified 3D dynamic head portrait information is sent to the HUT system. The DSP module in the HUT performs image processing on the received amplified 3D dynamic head portrait information based on the face 3D model of the user established in step 307, so that a 3D dynamic head portrait of the user with the amplified special expression can be generated and displayed in the display screen of the HUT.
In step 317, the HUT shows the push expression.
In this step, after the 3D dynamic head portrait of the user with the enlarged special expression is displayed in the display screen of the HUT, the HUT may further query a preset database for a push expression corresponding to the 3D dynamic head portrait with the special expression, and display the push expression in the display screen of the HUT.
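The preset-database query might be sketched as a lookup table keyed by the label of the detected special expression. The labels and entries below are hypothetical placeholders, not contents the patent prescribes.

```python
# Hedged sketch of step 317: query a preset table of push expressions
# (stickers/emoticons to suggest) by the detected expression label.
PUSH_EXPRESSIONS = {
    "laugh": "big-laugh sticker",
    "surprise": "shocked-face sticker",
}

def query_push_expression(label):
    return PUSH_EXPRESSIONS.get(label)  # None when no entry matches

print(query_push_expression("laugh"))    # big-laugh sticker
print(query_push_expression("neutral"))  # None
```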
Step 318, 3D head portrait navigation.
If it is detected in step 311 that the user has not turned on the 3D dynamic head portrait mode, 3D head portrait navigation is performed: the face 3D model established in step 307 is displayed on the display screen of the HUT as the navigation head portrait.
Step 319, transmitting the 3D dynamic head portrait information of the target user in the navigation software to the vehicle HUT through the APN channel.
In this step, the TSP server may also transmit the 3D dynamic head portrait information of the target user in the navigation software to the vehicle HUT through the APN channel. The HUT performs image processing on the received 3D dynamic head portrait information of the target user to generate a 3D dynamic head portrait of the target user and display it in the display screen of the HUT.
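The transfer of the target user's head portrait information over the APN channel could be sketched as a serialize/deserialize pair on the two endpoints. The JSON message schema and field names here are assumptions; the patent does not specify a wire format.

```python
import json

# Hedged sketch of step 319: the TSP server encodes per-frame head portrait
# information for the target user, and the vehicle HUT decodes it on receipt.
def encode_avatar_frame(user_id, weights):
    return json.dumps({"user": user_id, "weights": weights}).encode("utf-8")

def decode_avatar_frame(payload):
    msg = json.loads(payload.decode("utf-8"))
    return msg["user"], msg["weights"]

payload = encode_avatar_frame("target-user-1", [0.5, 0.25])
print(decode_avatar_frame(payload))  # ('target-user-1', [0.5, 0.25])
```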
On the basis of the foregoing embodiments, an embodiment of the present invention further provides a three-dimensional dynamic head portrait generation system, which is applied to a terminal comprising a three-dimensional camera device.
Referring to fig. 5, which shows a structural block diagram of a three-dimensional dynamic head portrait generating system according to an embodiment of the present invention, the system may specifically include the following modules:
the first generating module 401 is configured to obtain, when receiving an acquisition instruction, first three-dimensional head portrait information of a user through the three-dimensional image capturing device, and generate an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
Optionally, the first generating module 401 includes:
the fourth generation sub-module is used for generating and displaying acquisition prompt information aiming at the first three-dimensional head portrait information when the acquisition instruction is received;
And a fifth generation sub-module, configured to acquire first three-dimensional head portrait information of the user through the three-dimensional imaging device, and generate an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
The second generating module 402 is configured to obtain, by using the three-dimensional imaging device, second three-dimensional head portrait information of the user, and generate the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait.
Optionally, the second three-dimensional head portrait information includes gesture features, and the second generating module 402 includes:
the first acquisition sub-module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device;
the first generation sub-module is used for amplifying the gesture features to generate target three-dimensional head portrait information under the condition that the change value of the gesture features is detected to be larger than or equal to a preset threshold value;
and the second generation sub-module is used for generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
Optionally, the gesture features include expression gesture features and/or limb gesture features.
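Taken together, the sub-modules of the second generating module can be sketched as one small pipeline: compare the change value of the gesture features with a preset threshold, amplify when the threshold is met, then combine the result with the initial head portrait. The numeric threshold, gain, and data representation are illustrative assumptions.

```python
# Hedged sketch of the second generating module (402) and its sub-modules.
def generate_dynamic_avatar(initial_avatar, features, baseline,
                            threshold=0.6, gain=1.5):
    # first acquisition sub-module supplies `features`; here we only combine
    change = max(abs(f - b) for f, b in zip(features, baseline))
    if change >= threshold:
        # first generation sub-module: amplify the gesture features
        features = [min(1.0, f * gain) for f in features]
    # second generation sub-module: drive the initial head portrait
    return {"avatar": initial_avatar, "features": features}

out = generate_dynamic_avatar("face-model", [0.8, 0.5], [0.0, 0.0])
print(out["features"])  # [1.0, 0.75]
```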
The first display module 403 is configured to display the three-dimensional dynamic head portrait in a preset interactive interface.
Optionally, the preset interactive interface includes a location sharing interface, and the first display module 403 includes:
the second acquisition sub-module is used for acquiring real-time position information of the terminal and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and the first display sub-module is used for displaying the three-dimensional dynamic head portrait at the real-time position.
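The location-sharing sub-modules might be sketched as follows; the interface object and its method are stand-ins for illustration, not a real map API.

```python
# Hedged sketch of the second acquisition and first display sub-modules:
# resolve the terminal's real-time position and draw the head portrait there.
class SharingInterface:
    def __init__(self):
        self.markers = {}

    def show_avatar(self, avatar, lat, lon):
        # first display sub-module: place the avatar at the real-time position
        self.markers[(lat, lon)] = avatar

ui = SharingInterface()
ui.show_avatar("user-3d-avatar", 39.9042, 116.4074)  # position fix from the terminal
print(ui.markers)  # {(39.9042, 116.4074): 'user-3d-avatar'}
```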
Optionally, the preset interactive interface includes a social software interface, and the first display module 403 includes:
the second display sub-module is used for acquiring the chat text information or the chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic head portrait at the position corresponding to the chat text information or the chat voice information.
Optionally, the terminal is a vehicle-mounted multimedia host, and the system further includes:
the third generation module is used for generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user under the condition that the first three-dimensional head portrait information of the target user is acquired from the automobile remote server;
The fourth generation module is used for receiving the second three-dimensional head portrait information of the target user and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and the second display module is used for displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
Optionally, the system further comprises:
the determining module is used for determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the pre-stored three-dimensional head portrait information, the pre-stored expression information, the pre-stored text information and the pre-stored voice information;
and the third display module is used for displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
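The determining and third display modules rely on a pre-stored correspondence among three-dimensional head portrait information, expression information, text information, and voice information, which can be sketched as a table lookup. All keys and entries below are hypothetical.

```python
# Hedged sketch of the determining module: map target head portrait
# information to its corresponding expression, text, and voice entries.
CORRESPONDENCE = {
    "laugh": ("laughing emoticon", "Haha!", "laugh.wav"),
    "surprise": ("shocked emoticon", "Wow!", "wow.wav"),
}

def determine_targets(avatar_key):
    expression, text, voice = CORRESPONDENCE[avatar_key]
    return expression, text, voice

print(determine_targets("surprise"))  # ('shocked emoticon', 'Wow!', 'wow.wav')
```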
Optionally, the system further comprises:
the display module is used for displaying the initial three-dimensional head portrait;
a fifth generation module, configured to generate a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and modification operation information when the modification operation information for the initial three-dimensional head portrait is received;
The second generating module 402 includes:
the third generation sub-module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the first display module 403 includes:
and the third display sub-module is used for displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
In summary, the three-dimensional dynamic head portrait generation system provided by the embodiment of the present invention is applied to a terminal comprising a three-dimensional image capturing device and operates as follows: when an acquisition instruction is received, first three-dimensional head portrait information of a user is acquired through the three-dimensional camera device, and an initial three-dimensional head portrait of the user is generated according to the first three-dimensional head portrait information; second three-dimensional head portrait information of the user is acquired through the three-dimensional camera device, and the three-dimensional dynamic head portrait is generated according to the second three-dimensional head portrait information and the initial three-dimensional head portrait; and the three-dimensional dynamic head portrait is displayed in a preset interactive interface. Because the terminal comprises the three-dimensional camera device, it can generate an initial three-dimensional head portrait of the user from the acquired first three-dimensional head portrait information, and can continuously acquire the user's expression and limb dynamics over a period of time as the second three-dimensional head portrait information. A three-dimensional dynamic head portrait representing the user's expression and limb dynamics can then be generated from the second three-dimensional head portrait information and the initial three-dimensional head portrait. When the user socializes or shares a location with friends, communication can therefore take place through the three-dimensional dynamic head portrait displayed in the preset interactive interface, which improves interactivity between users and the user experience.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (14)

1. A method for generating a three-dimensional dynamic head portrait, which is applied to a terminal comprising a three-dimensional camera device, characterized in that the method comprises the following steps:
when an acquisition instruction is received, acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information;
Acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait;
displaying the three-dimensional dynamic head portrait in a preset interactive interface;
the second three-dimensional head portrait information includes gesture features;
the gesture features comprise expression gesture features and/or limb gesture features;
the step of obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait comprises the following steps:
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device;
amplifying the gesture features under the condition that the change value of the gesture features is detected to be larger than or equal to a preset threshold value, and generating target three-dimensional head portrait information;
and generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait.
2. The method of claim 1, wherein the preset interactive interface comprises a location sharing interface;
The step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
acquiring real-time position information of the terminal, and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
at the real-time location, the three-dimensional dynamic head portrait is shown.
3. The method of claim 1, wherein the preset interactive interface comprises a social software interface;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
and acquiring chat text information or chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic head portrait at a position corresponding to the chat text information or the chat voice information.
4. The method of claim 1, wherein the terminal is a vehicle-mounted multimedia host;
after the step of displaying the three-dimensional dynamic head portrait in the preset interactive interface, the method further comprises:
under the condition that first three-dimensional head portrait information of a target user is obtained from an automobile remote server, generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user;
Receiving second three-dimensional head portrait information of the target user, and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
5. The method according to claim 1, wherein after the step of amplifying the gesture features to generate the target three-dimensional head portrait information in the case where it is detected that the change value of the gesture features is greater than or equal to the preset threshold value, the method further includes:
determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the pre-stored three-dimensional head portrait information, expression information, text information and voice information;
and displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
6. The method according to claim 1, wherein after the step of acquiring first three-dimensional head portrait information of a user by the three-dimensional imaging device and generating an initial three-dimensional head portrait of the user from the first three-dimensional head portrait information when the acquisition instruction is received, the method further comprises:
Displaying the initial three-dimensional head portrait;
generating a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and the modification operation information under the condition that the modification operation information aiming at the initial three-dimensional head portrait is received;
the step of obtaining second three-dimensional head portrait information of the user through the three-dimensional camera device and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait comprises the following steps:
acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device, and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the step of displaying the three-dimensional dynamic head portrait in a preset interactive interface comprises the following steps:
and displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
7. The method according to claim 1, wherein the step of acquiring first three-dimensional head portrait information of a user by the three-dimensional image capturing device and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information when an acquisition instruction is received includes:
When the acquisition instruction is received, generating acquisition prompt information aiming at the first three-dimensional head portrait information and displaying the acquisition prompt information;
acquiring first three-dimensional head portrait information of the user through the three-dimensional camera device, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
8. A system for generating a three-dimensional dynamic head portrait, applied to a terminal comprising a three-dimensional image pickup device, characterized in that the system comprises:
the first generation module is used for acquiring first three-dimensional head portrait information of a user through the three-dimensional camera device when receiving an acquisition instruction, and generating an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information;
the second generation module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating the three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the initial three-dimensional head portrait;
the first display module is used for displaying the three-dimensional dynamic head portrait in a preset interactive interface;
the second generation module includes:
the first acquisition sub-module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device;
The second three-dimensional head portrait information includes gesture features;
the first generation sub-module is used for amplifying the gesture features to generate target three-dimensional head portrait information under the condition that the change value of the gesture features is detected to be larger than or equal to a preset threshold value;
the second generation submodule is used for generating the three-dimensional dynamic head portrait according to the target three-dimensional head portrait information and the initial three-dimensional head portrait;
the gesture features include expression gesture features and/or limb gesture features.
9. The system of claim 8, wherein the preset interactive interface comprises a location sharing interface;
the first display module includes:
the second acquisition sub-module is used for acquiring real-time position information of the terminal and determining the real-time position of the terminal in the position sharing interface according to the real-time position information;
and the first display sub-module is used for displaying the three-dimensional dynamic head portrait at the real-time position.
10. The system of claim 8, wherein the preset interactive interface comprises a social software interface;
the first display module includes:
the second display sub-module is used for acquiring the chat text information or the chat voice information, displaying the chat text information or the chat voice information in the social software interface, and displaying the three-dimensional dynamic head portrait at the position corresponding to the chat text information or the chat voice information.
11. The system of claim 8, wherein the terminal is a vehicle-mounted multimedia host;
the system further comprises:
the third generation module is used for generating an initial three-dimensional head portrait of the target user according to the first three-dimensional head portrait information of the target user under the condition that the first three-dimensional head portrait information of the target user is acquired from the automobile remote server;
the fourth generation module is used for receiving the second three-dimensional head portrait information of the target user and generating a three-dimensional dynamic head portrait of the target user according to the second three-dimensional head portrait information of the target user and the initial three-dimensional head portrait of the target user;
and the second display module is used for displaying the three-dimensional dynamic head portrait of the target user in the preset interactive interface.
12. The system of claim 8, wherein the system further comprises:
the determining module is used for determining target expression information, target text information and target voice information corresponding to the target three-dimensional head portrait information according to the corresponding relation among the pre-stored three-dimensional head portrait information, the pre-stored expression information, the pre-stored text information and the pre-stored voice information;
and the third display module is used for displaying the target expression information, the target text information and the target voice information in the preset interactive interface.
13. The system of claim 8, wherein the system further comprises:
the display module is used for displaying the initial three-dimensional head portrait;
a fifth generation module, configured to generate a target initial three-dimensional head portrait according to the initial three-dimensional head portrait and modification operation information when the modification operation information for the initial three-dimensional head portrait is received;
the second generation module includes:
the third generation sub-module is used for acquiring second three-dimensional head portrait information of the user through the three-dimensional camera device and generating a target three-dimensional dynamic head portrait according to the second three-dimensional head portrait information and the target initial three-dimensional head portrait;
the first display module includes:
and the third display sub-module is used for displaying the target three-dimensional dynamic head portrait in the preset interactive interface.
14. The system of claim 8, wherein the first generation module comprises:
the fourth generation sub-module is used for generating and displaying acquisition prompt information aiming at the first three-dimensional head portrait information when the acquisition instruction is received;
and a fifth generation sub-module, configured to acquire first three-dimensional head portrait information of the user through the three-dimensional imaging device, and generate an initial three-dimensional head portrait of the user according to the first three-dimensional head portrait information.
CN201911419494.6A 2019-12-31 2019-12-31 Three-dimensional dynamic head portrait generation method and system Active CN111105494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911419494.6A CN111105494B (en) 2019-12-31 2019-12-31 Three-dimensional dynamic head portrait generation method and system


Publications (2)

Publication Number Publication Date
CN111105494A CN111105494A (en) 2020-05-05
CN111105494B true CN111105494B (en) 2023-10-24

Family

ID=70425948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911419494.6A Active CN111105494B (en) 2019-12-31 2019-12-31 Three-dimensional dynamic head portrait generation method and system

Country Status (1)

Country Link
CN (1) CN111105494B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763636A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Method for tracing position and pose of 3D human face in video sequence
CN103886632A (en) * 2014-01-06 2014-06-25 宇龙计算机通信科技(深圳)有限公司 Method for generating user expression head portrait and communication terminal
CN105704419A (en) * 2014-11-27 2016-06-22 程超 Method for human-human interaction based on adjustable template profile photos
WO2017092196A1 (en) * 2015-12-01 2017-06-08 深圳奥比中光科技有限公司 Method and apparatus for generating three-dimensional animation
CN107480614A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Motion management method, apparatus and terminal device
CN107705341A (en) * 2016-08-08 2018-02-16 创奇思科研有限公司 The method and its device of user's expression head portrait generation
CN109151540A (en) * 2017-06-28 2019-01-04 武汉斗鱼网络科技有限公司 The interaction processing method and device of video image
CN109671141A (en) * 2018-11-21 2019-04-23 深圳市腾讯信息技术有限公司 The rendering method and device of image, storage medium, electronic device
CN110099159A (en) * 2018-01-29 2019-08-06 优酷网络技术(北京)有限公司 A kind of methods of exhibiting and client of chat interface
CN110298319A (en) * 2019-07-01 2019-10-01 北京字节跳动网络技术有限公司 Image composition method and device
CN110520901A (en) * 2017-05-16 2019-11-29 苹果公司 Emoticon is recorded and is sent


Also Published As

Publication number Publication date
CN111105494A (en) 2020-05-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant