CN111445561B - Virtual object processing method, device, equipment and storage medium - Google Patents

Virtual object processing method, device, equipment and storage medium

Info

Publication number
CN111445561B
Authority
CN
China
Prior art keywords
human body
model
skin
target user
mannequin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010220090.0A
Other languages
Chinese (zh)
Other versions
CN111445561A (en)
Inventor
张晓东
李士岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010220090.0A priority Critical patent/CN111445561B/en
Publication of CN111445561A publication Critical patent/CN111445561A/en
Application granted granted Critical
Publication of CN111445561B publication Critical patent/CN111445561B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/26 Government or public services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Geometry (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual object processing method, device, equipment and storage medium, relating to the technical field of computer vision. The method comprises the following steps: establishing a three-dimensional first human body model based on a human body reference model; binding the first human body model with a human skeleton model to generate a second human body model; and reconstructing the skin of the second human body model based on an image of the skin of the human body reference model. The technical solution of the embodiments of the application can truly reflect the detailed skin information of the human body reference model and avoid the uncanny valley phenomenon.

Description

Virtual object processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular, to a method, an apparatus, a device, and a storage medium for processing a virtual object.
Background
With the development of computer vision technology, interacting with users through virtual objects such as digital humans is becoming increasingly common, and how to generate vivid and natural virtual character objects has become a focus of attention.
In one technical solution, scan data is obtained by three-dimensionally scanning a human body reference model, and a virtual character model is created based on the scan data. However, in this solution, on the one hand, a virtual character model obtained from scan data alone cannot support skeletal animation of the model; on the other hand, due to illumination, shadow, and other factors, the three-dimensional scanning result can hardly reflect the detailed information of the skin of the human body reference model, and the uncanny valley phenomenon easily occurs.
Therefore, how to implement skeletal animation of the virtual character model while avoiding the "uncanny valley" phenomenon has become an urgent technical problem.
Disclosure of Invention
The embodiments of the present application provide a virtual object processing method, device, equipment and storage medium, which are used to solve the problem of how to implement skeletal animation of a virtual character model while avoiding the uncanny valley phenomenon.
In a first aspect, the present application provides a method for processing a virtual object, including:
establishing a three-dimensional first human body model based on the human body reference model;
binding the first human body model with a human skeleton model to generate a second human body model;
reconstructing the skin of the second human body model based on an image of the skin of the human body reference model.
In a second aspect, the present application provides a processing apparatus for a virtual object, including:
the first model generation module is used for establishing a three-dimensional first human body model based on the human body reference model;
the second model generation module is used for binding the first human body model and the human skeleton model to generate a second human body model;
and the skin reconstruction module is used for reconstructing the skin of the second human body model based on the image of the skin of the human body reference model.
In a third aspect, the present application provides an electronic device comprising: at least one processor, memory, and an interface to communicate with other electronic devices; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of processing a virtual object of any of the first aspects.
In a fourth aspect, the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the virtual object processing method of any one of the first aspects.
One embodiment of the above application has the following advantages or benefits: on the one hand, a three-dimensional first human body model is established based on a human body reference model, and the first human body model is bound with a human skeleton model, so that skeletal animation of the human body model can be realized; on the other hand, the skin of the second human body model is reconstructed based on an image of the skin of the human body reference model, so that the detailed information of the skin of the human body reference model can be truly reflected and the uncanny valley phenomenon is avoided.
Other effects of the above alternative will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
Fig. 1 is a schematic diagram of an application scenario of a virtual object processing method according to some embodiments of the present application.
Fig. 2 is a flow chart illustrating a method for processing a virtual object according to some embodiments of the present application.
Fig. 3 is a flow chart of a processing method of a virtual object according to other embodiments of the present application.
Fig. 4 is a schematic flow chart of interaction with a second human body model provided according to some embodiments of the present application.
Fig. 5 is a schematic flow chart of interaction with a second human body model provided according to still other embodiments of the present application.
Fig. 6 is a schematic block diagram of a virtual object processing apparatus provided in accordance with some embodiments of the present application.
Fig. 7 is a schematic block diagram of a second model generation module provided in accordance with some embodiments of the application.
Fig. 8 is a schematic block diagram of a virtual object processing apparatus according to further embodiments of the present application.
Fig. 9 is a schematic block diagram of a processing apparatus for virtual objects according to further embodiments of the present application.
Fig. 10 is a block diagram of an electronic device for implementing a method for processing a virtual object according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the current technical solution for generating a virtual human, a human body reference model is three-dimensionally scanned to obtain scan data, and a virtual human model is built based on the scan data. However, in this solution, on the one hand, a virtual character model obtained from scan data alone cannot support skeletal animation of the model; on the other hand, the three-dimensional scanning result can hardly reflect the detailed information of the skin of the reference model due to illumination, shadow, and the like, and is prone to the "uncanny valley" phenomenon.
Based on the above, the basic idea of the present application is: binding the human body model with a human skeleton model, and refining the bound human body model based on high-definition images of the skin of the human body reference model. According to the technical solutions in the example embodiments of the present application, on the one hand, a three-dimensional first human body model is established based on a human body reference model, and the first human body model is bound with a human skeleton model, so that skeletal animation of the human body model can be realized; on the other hand, the skin of the second human body model is reconstructed based on an image of the skin of the human body reference model, so that the detailed information of the skin of the human body reference model can be truly reflected and the uncanny valley phenomenon is avoided.
The terms and the like referred to in the present application are explained below:
Uncanny valley phenomenon: as a robot becomes more similar to a human in appearance and motion, people respond to it with increasing affinity, but only up to a point; beyond that point, even slight remaining differences become conspicuous and jarring, and the robot as a whole appears stiff and eerie.
Human body reference model: a real person, or a model of a real person, used as the reference for generating a virtual human body model.
Facial deformer: a deformer for driving the facial muscles of a human body model to generate facial expressions.
Subsurface scattering: a physically based rendering technique for simulating translucent materials such as skin, jade, and milk.
Digital human: the product of digital character technology and artificial intelligence technology. Digital character technologies such as portrait modeling and motion capture give the digital human a vivid and natural appearance, while artificial intelligence technologies such as speech recognition, natural language understanding, and dialogue give the digital human the ability to perceive, understand, and express.
The method for processing the virtual object provided by the application is described below through a specific embodiment.
Fig. 1 is a schematic diagram of an application scenario of a virtual object processing method according to some embodiments of the present application. Referring to fig. 1, the application scenario includes an image acquisition device 110 and an image processing device 120. The image acquisition device 110 is configured to acquire scan data of the human body reference model 130; for example, the human body reference model 130 may be scanned by the image acquisition device 110 to obtain the scan data, which is then sent to the image processing device 120. The image acquisition device 110 may be an optical scanner or a camera. The image processing device 120 is configured to build a three-dimensional first human body model from the scan data, bind the first human body model with a human skeleton model to generate a second human body model, and reconstruct the skin of the second human body model based on an image of the skin of the human body reference model. For example, image processing software such as Maya may be installed on the image processing device 120 and used to process the scan data to generate the first human body model.
The image processing device 120 may be a desktop or laptop computer, or another suitable general-purpose computing device such as a cloud computing device; the present application does not particularly limit this.
A virtual object processing method according to an exemplary embodiment of the present application is described below with reference to fig. 2, in conjunction with the application scenario of fig. 1. It should be noted that the above application scenario is shown only to facilitate understanding of the spirit and principles of the present application; the embodiments of the present application are not limited in this respect. Rather, the embodiments of the present application may be applied to any applicable scenario.
Fig. 2 is a flow chart illustrating a virtual object processing method according to some embodiments of the present application. The method includes steps S210 to S230 and may be applied to a virtual object processing apparatus, such as the image processing device 120 of fig. 1. The method of this exemplary embodiment is described in detail below with reference to the accompanying drawings.
Referring to fig. 2, in step S210, a three-dimensional first human body model is built based on a human body reference model.
In an example embodiment, scan data of a human body reference model is acquired, and a three-dimensional first human body model is built from the scan data. The scan data can be acquired as follows: the human body reference model is scanned in all directions by a scanner or by a 360-degree photogrammetry enclosure built from single-lens reflex cameras, yielding the scan data of the human body reference model. Further, the scan data includes human body depth image data, and the three-dimensional first human body model is reconstructed from the scan data through a surface reconstruction operation.
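As an illustration of what such a surface reconstruction step could look like in practice, here is a minimal sketch using the open-source Open3D library; the choice of Open3D, the file names, and the parameters are assumptions for illustration, not part of the patent.

```python
# Minimal sketch of surface reconstruction from scan data (assumed tooling: Open3D).
# File names and parameters are illustrative.
import open3d as o3d

# Load the point cloud produced by the scanning rig (hypothetical file name).
pcd = o3d.io.read_point_cloud("reference_model_scan.ply")

# Poisson surface reconstruction needs consistently oriented normals.
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(30)

# Reconstruct a watertight triangle mesh: the three-dimensional "first human body model".
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("first_human_body_model.obj", mesh)
```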
In step S220, the first human body model is bound with the human skeleton model to generate a second human body model.
In an example embodiment, a human skeleton model is obtained, the bones of each part of the human skeleton model are matched with the corresponding parts of the first human body model, and an association relationship between the vertices of the first human body model and the corresponding bones in the human skeleton model is established. For example, the parts of the first human body model are matched with the bones of the human skeleton model according to their respective names, and the association relationship is established based on the matching result; this association relationship can be regarded as the skinning information of the first human body model.
Further, because the first human body model may not match the human skeleton model in size, in some embodiments the bones of the human skeleton model are scaled so that they match the sizes of the corresponding parts of the first human body model, as the sketch below illustrates.
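A minimal sketch of the name-based matching and scaling just described, under assumed data structures; every name and field here is illustrative rather than taken from the patent.

```python
# Illustrative name-based bone matching and scaling; data structures are assumptions.
from dataclasses import dataclass, field

@dataclass
class Bone:
    name: str
    length: float
    vertex_ids: list[int] = field(default_factory=list)  # associated model vertices

def bind_skeleton(part_vertices: dict[str, list[int]],
                  part_lengths: dict[str, float],
                  skeleton: list[Bone]) -> list[Bone]:
    """Match bones to parts of the first human body model by name, scale each bone
    to the part's size, and record the vertex-bone association (skinning information)."""
    for bone in skeleton:
        if bone.name not in part_vertices:
            continue  # no part of the first human body model matches this bone name
        bone.length = part_lengths[bone.name]       # scale bone to the part's size
        bone.vertex_ids = part_vertices[bone.name]  # associate vertices with the bone
    return skeleton

# Usage: match a "forearm" bone against a model whose forearm part is 0.26 units long.
skeleton = bind_skeleton({"forearm": [10, 11, 12]}, {"forearm": 0.26},
                         [Bone("forearm", length=1.0)])
print(skeleton[0])  # Bone(name='forearm', length=0.26, vertex_ids=[10, 11, 12])
```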
In step S230, the skin of the second human body model is reconstructed based on the image of the skin of the human body reference model.
In an example embodiment, a high-definition image of the skin of the human body reference model is acquired, maps of the skin of the second human body model are determined from this image, and the skin of the second human body model is reconstructed from the determined maps. The maps of the skin of the second human body model may include one or more of a diffuse map, a specular map, and a normal map.
It should be noted that the skin maps in the embodiments of the present application are not limited to those mentioned above; for example, the maps of the second human body model may also include a roughness map or a mask map, which also falls within the scope of the present application.
According to the technical solution in the example embodiment of fig. 2, on the one hand, a three-dimensional first human body model is built based on a human body reference model and bound with a human skeleton model, so that skeletal animation of the human body model can be realized; on the other hand, the skin of the second human body model is reconstructed based on the image of the skin of the human body reference model, so that the detailed information of the skin of the human body reference model can be truly reflected and the uncanny valley phenomenon is avoided.
Further, in some embodiments, the generated second human body model is bound with a facial deformer for driving the facial movements of the second human body model. By binding the second human body model with the facial deformer, various facial expressions of the second human body model can be produced through the deformer, making its facial expressions more vivid and natural.
Further, in order to make the clothing of the second human body model more vivid and natural, in an example embodiment, a clothing map and clothing material of the human body reference model are determined based on an image of the clothing of the human body reference model, and the clothing of the second human body model is reconstructed based on that clothing map and clothing material.
Further, in some embodiments, the first human body model is bound with a human skeleton model and a muscle model to generate the second human body model. By introducing the muscle model into the second human body model, its motions and expressions can be controlled more precisely, making them more natural and realistic.
Fig. 3 is a flow chart of a processing method of a virtual object according to other embodiments of the present application.
Referring to fig. 3, in step S310, a three-dimensional first human body model is built based on a human body reference model.
Since the implementation process and effect of step S310 are similar to those of step S210, the description thereof will not be repeated here.
In step S320, an association relationship between the vertices of the first human body model and the corresponding bones in the human skeleton model is established.
In an example embodiment, the parts of the first human body model are matched with the bones of the human skeleton model according to their respective names, and the association relationship between the vertices of the first human body model and the corresponding bones in the human skeleton model is established based on the matching result; this association relationship can be regarded as the skinning information of the first human body model.
In step S330, based on the association relationship, the bone weights of the bones corresponding to the vertices of the first human body model are adjusted to generate a second human body model.
In some example embodiments, a vertex in the first human body model may be associated with several bones, each carrying a weight; after the first human body model is rendered, the final position of the vertex is the weighted average of the positions the associated bones drive it to. Therefore, in order to make the first human body model better match the human skeleton model, the bone weights at the vertices of the first human body model may also be adjusted, so that the motions of the second human body model can be controlled better once it is generated.
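The weighted average described above corresponds to what is commonly called linear blend skinning; the following NumPy sketch illustrates it (an illustration only, not the patent's implementation).

```python
import numpy as np

def skin_vertex(rest_pos: np.ndarray,
                bone_transforms: list[np.ndarray],
                weights: list[float]) -> np.ndarray:
    """Linear blend skinning for one vertex.

    rest_pos        : (3,) vertex position in the bind pose.
    bone_transforms : 4x4 matrices mapping bind pose to current pose,
                      one per bone influencing this vertex.
    weights         : bone weights for this vertex; they should sum to 1.
    """
    v = np.append(rest_pos, 1.0)            # homogeneous coordinates
    blended = np.zeros(4)
    for M, w in zip(bone_transforms, weights):
        blended += w * (M @ v)              # weighted average over bone transforms
    return blended[:3]

# Example: a vertex pulled half-and-half by an identity bone and a translated bone.
identity = np.eye(4)
shifted = np.eye(4); shifted[0, 3] = 2.0    # translate 2 units along x
print(skin_vertex(np.array([0.0, 1.0, 0.0]), [identity, shifted], [0.5, 0.5]))
# -> [1. 1. 0.]
```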
In step S340, the skin of the second human body model is reconstructed based on the image of the skin of the human body reference model.
In an example embodiment, a skin map and skin material of the human body reference model are determined based on the image of the skin of the human body reference model, and the skin of the second human body model is reconstructed based on that skin map and skin material. The maps of the skin of the second human body model may include one or more of a diffuse map, a specular map, and a normal map. Determining the skin map and skin material from the image of the skin of the human body reference model, and reconstructing the skin of the second human body model from them, allows the details of the skin to be reflected more accurately, so that the generated skin is more vivid and natural.
It should be noted that the skin maps in the embodiments of the present application are not limited to those mentioned above; for example, the maps of the second human body model may also include a roughness map or a mask map, which also falls within the scope of the present application.
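For illustration, the set of maps determined for the skin of the second human body model might be organized as follows; this is a sketch under assumed names, with the optional roughness and mask maps following the note above.

```python
# Illustrative container for skin maps; field and file names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SkinMaterial:
    """Maps and material parameters derived from the reference model's skin image."""
    diffuse_map: str                     # albedo/diffuse color texture
    specular_map: Optional[str] = None   # highlight intensity texture
    normal_map: Optional[str] = None     # surface detail (pores, wrinkles)
    roughness_map: Optional[str] = None  # optional, per the embodiment
    mask_map: Optional[str] = None       # optional, per the embodiment

skin = SkinMaterial(
    diffuse_map="skin_diffuse.png",
    specular_map="skin_specular.png",
    normal_map="skin_normal.png",
)
```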
According to the technical solution in the example embodiment of fig. 3, by adjusting the bone weights of the model, the motion of the generated human body model can be controlled more precisely, making it more vivid and natural and further avoiding the uncanny valley phenomenon.
Further, in an example embodiment, the skin of the second human body model may include a surface oil layer, an epidermis layer, and a dermis layer. The surface oil layer is used to simulate the specular reflection of skin, while the epidermis layer and the dermis layer each serve as a contribution layer for subsurface scattering. Reconstructing the skin of the second human body model based on the skin map and skin material of the human body reference model then includes: determining the skin maps and matching materials corresponding to the surface oil layer, the epidermis layer, and the dermis layer, and reconstructing the skin of the second human body model layer by layer.
According to the technical solution of this embodiment, dividing the skin of the second human body model into three layers and building it layer by layer allows the details of the skin to be reflected more accurately, so that the generated skin is more vivid and natural and the uncanny valley phenomenon is further avoided.
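A minimal sketch of the layer-by-layer structure described above; the three layer names follow the embodiment, while the data structures and map file names are assumptions.

```python
# Illustrative layered-skin description; only the layer names come from the text.
from dataclasses import dataclass, field

@dataclass
class SkinLayer:
    name: str
    maps: dict[str, str]   # e.g. {"diffuse": "...", "specular": "..."}
    subsurface: bool       # whether this layer contributes to subsurface scattering

@dataclass
class LayeredSkin:
    layers: list[SkinLayer] = field(default_factory=list)

    def add(self, layer: SkinLayer) -> None:
        self.layers.append(layer)

skin = LayeredSkin()
skin.add(SkinLayer("surface_oil", {"specular": "oil_spec.png"}, subsurface=False))
skin.add(SkinLayer("epidermis",   {"diffuse": "epidermis.png"}, subsurface=True))
skin.add(SkinLayer("dermis",      {"diffuse": "dermis.png"},    subsurface=True))
```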
Further, in order to reduce processing overhead without affecting the realism of the second human body model, in some embodiments the method further comprises, before generating the second human body model: reducing one or more of the face count, the material complexity, and the number of projection computations of the bound first human body model. For example, these may be reduced in accordance with the computing capability of the image processing device.
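As one concrete way such a face-count reduction could be performed, here is a sketch using Open3D's quadric decimation; the tool choice, file names, and reduction ratio are assumptions.

```python
# Illustrative face-count reduction via quadric decimation (assumed tooling: Open3D).
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("first_human_body_model.obj")
target = len(mesh.triangles) // 4  # e.g. keep one quarter of the faces
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
o3d.io.write_triangle_mesh("first_human_body_model_lowpoly.obj", simplified)
```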
Furthermore, in some embodiments, reconstructing the skin of the second human body model includes reconstructing it through three-dimensional structure reconstruction combined with subsurface-scattering material operations. Combining subsurface-scattering material operations with the three-dimensional structure reconstruction makes the generated skin more natural and realistic.
Fig. 4 is a flow chart of interaction with a second human body model provided according to some embodiments of the present application.
Referring to fig. 4, in step S410, voice information uttered by a target user is acquired.
In an example embodiment, the surrounding audio is monitored by a listening device such as a microphone, and when voice information uttered by the target user is detected, that voice information is acquired. The target user may be a user who wakes the second human body model with a wake word, or a user standing directly in front of the second human body model.
In step S420, corresponding keywords and/or emotion information are parsed from the voice information of the target user.
In an example embodiment, corresponding text information is parsed from the voice information of the target user, and corresponding keywords are extracted from the text. Further, the emotion information of the target user is determined from the text together with attributes of the voice such as pitch and speech rate. For example, in an exhibition-hall scenario, from the user's utterance asking where Exhibition Hall No. 3 is, keywords such as "No. 3" and "exhibition hall" are extracted, and the user's intent is determined to be asking for directions. Further, from the polite phrasing and the pitch and speech rate of the utterance, the user's emotion is determined to be relaxed. The user's emotion may also include other applicable emotions, such as an anxious emotion, a happy emotion, or a hesitant emotion, which also falls within the scope of the present application.
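A toy sketch of this keyword and emotion parsing; the keyword list, thresholds, and emotion labels are all assumptions for illustration.

```python
# Minimal sketch of keyword and emotion extraction from recognized speech.
KEYWORDS = {"No. 3", "exhibition hall", "waiting room", "train number", "thank you"}

def parse_utterance(text: str, pitch: float, speech_rate: float) -> tuple[list[str], str]:
    """Return (keywords, emotion) for one utterance.

    pitch and speech_rate are normalized to the speaker's baseline (1.0 = typical).
    """
    keywords = [k for k in KEYWORDS if k.lower() in text.lower()]
    # Very rough emotion heuristic: polite phrasing + calm delivery -> relaxed.
    if "please" in text.lower() and pitch < 1.2 and speech_rate < 1.2:
        emotion = "relaxed"
    elif pitch > 1.4 or speech_rate > 1.4:
        emotion = "anxious"
    else:
        emotion = "neutral"
    return keywords, emotion

print(parse_utterance("Excuse me, where is exhibition hall No. 3, please?", 1.0, 1.0))
# -> (['No. 3', 'exhibition hall'], 'relaxed')
```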
In step S430, based on the parsed keywords and/or emotion information, and in combination with the context information of the current interaction, the second human body model is driven to perform a feedback operation toward the target user, where the feedback operation includes one or more of voice feedback, action feedback, and expression feedback.
In an example embodiment, the content of the feedback operation is determined based on the parsed keywords and/or emotion information together with the context information of the current interaction, and the second human body model is driven accordingly; the context information may be the content of the previous round of interaction or the topic of the current interaction. For example, if the parsed keyword is "thank you", whether the current round of interaction has ended is determined from the context information; if it has, the second human body model responds to the target user with a smiling nod, and if it has not, it prompts the user by asking whether there is anything else. While the user is speaking continuously, the second human body model is made to show that it is listening by nodding every few seconds or uttering a brief acknowledgment.
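The feedback decision could be sketched as a small rule-based dispatch like the one below; the rules and action names are illustrative assumptions, not the patent's logic.

```python
# Illustrative feedback dispatch combining keywords/emotion with interaction context.
def drive_feedback(keywords: list[str], emotion: str, context: dict) -> dict:
    """Choose voice/action/expression feedback for the second human body model."""
    if "thank you" in keywords:
        if context.get("round_finished", False):
            return {"expression": "smile", "action": "nod"}   # close the round
        return {"voice": "Is there anything else I can help with?"}
    if context.get("user_still_speaking", False):
        return {"action": "nod", "voice": "mm-hmm"}           # active listening
    if emotion == "anxious":
        return {"voice": "One moment, let me check that for you.",
                "expression": "attentive"}
    return {"voice": "Could you tell me a bit more?"}
```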
According to the technical solution in the example embodiment of fig. 4, on the one hand, corresponding keywords and/or emotion information are obtained from the voice information uttered by the target user, so that the intent and emotion of the target user can be determined accurately; on the other hand, based on the keywords and/or emotion information, combined with the context information of the current interaction, the target user receives feedback through the second human body model, so that services can be provided in a human-to-human interaction manner and the user's service experience is improved.
Fig. 5 is a schematic flow chart of interaction with a second human body model provided according to still other embodiments of the present application.
Referring to fig. 5, in step S510, image information of a target user is acquired.
In an example embodiment, an image of a target user is acquired by an image acquisition device, such as a camera.
In step S520, the current state of the target user is determined according to the image information of the target user, where the current state includes one or more of the target user's actions, expressions, and environment.
In an example embodiment, one or more of action information, expression information, and surrounding-environment information of the target user are extracted from the image of the target user, and the target user's current action, current expression, and current environment are determined accordingly. For example, if the current action information indicates that the user is carrying luggage, the current action of the target user is determined to include a luggage-carrying action; if the current expression information contains an anxious expression, the current expression of the target user is determined to be anxious; and if the surrounding-environment information contains companion information, the current environment of the target user is determined to include companions.
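A minimal sketch of deriving the current state from image-based detections, assuming upstream vision models supply the raw detections; all field names are illustrative.

```python
# Sketch of deriving the user's current state from assumed detection inputs.
from dataclasses import dataclass, field

@dataclass
class UserState:
    actions: list[str] = field(default_factory=list)
    expression: str = "neutral"
    environment: list[str] = field(default_factory=list)

def determine_state(detections: dict) -> UserState:
    state = UserState()
    if detections.get("luggage"):
        state.actions.append("carrying_luggage")
    if detections.get("expression") == "anxious":
        state.expression = "anxious"
    if detections.get("companions", 0) > 0:
        state.environment.append("with_companions")
    return state

print(determine_state({"luggage": True, "expression": "anxious", "companions": 2}))
```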
It should be noted that the current state of the target user may also include other applicable state information, such as session information of the current or a previous session, or personal information of the target user, which also falls within the scope of the present application.
In step S530, the second human body model is driven to provide feedback to the target user according to the current state and/or voice information of the target user.
In an example embodiment, the target user is responded to according to the current state of the target user and/or the voice information of the target user. Taking a railway-station scenario as an example, if the current state of the target user is carrying luggage and the voice information contains keywords such as "train number", "waiting room", and "which", the intent of the target user is determined to be going to the waiting room, and the location of the waiting room corresponding to the train number is fed back to the target user. Taking a reception scenario as an example, if the target user has just arrived at the entrance of the reception hall, the second human body model is driven to say a welcome greeting while smiling and bowing slightly.
Further, in an example embodiment, a feedback operation is performed toward the target user by the second human body model, such as a digital human, according to the current state of the target user, the feedback operation including one or more of voice feedback, action feedback, and expression feedback. For example, when it is determined from the current state that the user is walking in front of the device, the eyes of the second human body model, such as a digital human, look at the user, but do not stare at the user continuously: from time to time they glance elsewhere at random. If less than a predetermined time, for example 3 seconds, has passed since the last glance, the probability of another random glance is 0; if the time since the last glance is within a predetermined range, for example 3 to 10 seconds, the probability of a random glance increases linearly or nonlinearly from 0 to 100%. Because the digital human gives feedback to the target user in this way, services can be provided in a human-to-human interaction manner, improving the user's service experience.
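The glance schedule described above (zero probability within 3 seconds of the last glance, then a ramp from 0 to 100% between 3 and 10 seconds) can be written directly; the linear variant is sketched below, with the shaping function being the only assumption.

```python
# Sketch of the glance-probability schedule described above (linear variant).
def glance_probability(seconds_since_last_glance: float,
                       min_gap: float = 3.0,
                       max_gap: float = 10.0) -> float:
    """Probability that the digital human glances away from the user."""
    if seconds_since_last_glance < min_gap:
        return 0.0                                  # too soon: keep eye contact
    if seconds_since_last_glance >= max_gap:
        return 1.0                                  # long enough: always glance
    # Linear ramp from 0 to 1 between min_gap and max_gap.
    return (seconds_since_last_glance - min_gap) / (max_gap - min_gap)

assert glance_probability(2.0) == 0.0
assert glance_probability(6.5) == 0.5
assert glance_probability(12.0) == 1.0
```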
According to the technical solution in the example embodiment of fig. 5, feedback is given according to the current state and/or voice information of the target user, that is, according to an understanding of the target user's current state, so that efficient, natural interaction resembling that between people can be realized.
Fig. 6 is a schematic block diagram of a virtual object processing apparatus provided in accordance with some embodiments of the present application. Referring to fig. 6, the processing apparatus 600 of the virtual object includes:
a first model generation module 610 for building a three-dimensional first human model based on a human reference model;
a second model generating module 620, configured to bind the first human body model with the human skeleton model to generate a second human body model;
a skin reconstruction module 630, configured to reconstruct the skin of the second human body model based on the image of the skin of the human body reference model.
According to the technical solution in the exemplary embodiment of fig. 6, on the one hand, a three-dimensional first human body model is built based on a human body reference model and bound with a human skeleton model, so that skeletal animation of the human body model can be realized; on the other hand, the skin of the second human body model is reconstructed based on the image of the skin of the human body reference model, so that the detailed information of the skin of the human body reference model can be truly reflected and the uncanny valley phenomenon is avoided.
Fig. 7 is a schematic block diagram of a second model generation module provided in accordance with some embodiments of the present application. Referring to fig. 7, in some embodiments of the present application, the second model generating module 620 includes:
an association establishing unit 710, configured to establish an association relationship between the vertices of the first human body model and the corresponding bones in the human skeleton model;
and a weight adjustment unit 720, configured to adjust the bone weights of the bones corresponding to the vertices of the first human body model based on the association relationship, to generate the second human body model.
In some embodiments of the application, the apparatus 600 further comprises:
and a facial deformer binding module, configured to bind the second human body model with a facial deformer, wherein the facial deformer is used to drive the facial movements of the second human body model.
In some embodiments of the present application, the skin reconstruction module 630 is specifically configured to:
determine a skin map and skin material of the human body reference model based on the image of the skin of the human body reference model;
reconstruct the skin of the second human body model based on the skin map and skin material of the human body reference model.
In some embodiments of the application, the apparatus 600 further comprises:
a clothing determining module, configured to determine a clothing map and clothing material of the human body reference model based on the image of the clothing of the human body reference model;
and a clothing reconstruction module, configured to reconstruct the clothing of the second human body model based on the clothing map and clothing material of the human body reference model.
In some embodiments of the present application, the skin reconstruction module 630 is specifically further configured to:
reconstruct the skin of the second human body model through three-dimensional structure reconstruction and subsurface-scattering material operations.
In some embodiments of the application, the apparatus 600 further comprises:
and a reduction module, configured to reduce, before the second human body model is generated, one or more of the face count, material complexity, and number of projection computations of the bound first human body model.
Fig. 8 is a schematic block diagram of a virtual object processing apparatus according to further embodiments of the present application. Referring to fig. 8, in some embodiments of the application, the apparatus 600 further comprises:
a voice information acquisition module 810, configured to acquire voice information uttered by a target user;
a voice parsing module 820, configured to parse corresponding keywords and/or emotion information from the voice information;
and a first feedback module 830, configured to drive the second human body model to perform a feedback operation toward the target user based on the keywords and/or emotion information in combination with the context information of the current interaction, where the feedback operation includes one or more of voice feedback, action feedback, and expression feedback.
Fig. 9 is a schematic block diagram of a processing apparatus for virtual objects according to further embodiments of the present application. Referring to fig. 9, the apparatus 600 further includes:
an image information acquisition module 910, configured to acquire image information of a target user;
a state determining module 920, configured to determine the current state of the target user according to the image information, where the current state includes one or more of the target user's actions, expressions, and environment;
and a second feedback module 930, configured to drive the second human body model to provide feedback to the target user according to the current state and/or voice information of the target user.
The processing device for virtual objects provided in the foregoing embodiments is configured to implement the technical scheme of the processing method for virtual objects in any one of the foregoing method embodiments, and the implementation principle and the technical effect are similar, and are not described herein again.
It should be noted that the division of the modules of the apparatus provided in the above embodiments is merely a division of logical functions; in practice they may be integrated, in whole or in part, into one physical entity, or may be physically separate. These modules may all be implemented in the form of software called by a processing element, or all in hardware; alternatively, some modules may be implemented as software called by a processing element and the others in hardware. For example, any of the above modules may be a separately established processing element, may be integrated in a chip of the above apparatus, or may be stored in the memory of the above apparatus in the form of program code to be called by a processing element of the apparatus to execute the module's functions. The implementation of the other modules is similar. In addition, all or part of these modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, the steps of the above method or the above modules may be completed by integrated logic circuits of hardware in the processor element or by instructions in the form of software.
Fig. 10 is a block diagram of an electronic device for implementing a virtual object processing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit the implementations of the application described and/or claimed herein.
As shown in fig. 10, the electronic device includes one or more processors 1010, a memory 1020, and interfaces for connecting the components, including high-speed and low-speed interfaces, as well as an interface for communicating with other electronic devices. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as required. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a graphical user interface (Graphical User Interface, GUI) on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). One processor 1010 is illustrated in fig. 10.
The memory 1020 is a non-transitory computer-readable storage medium provided by the present application. The memory stores instructions executable by at least one processor, so that the at least one processor executes the virtual object processing method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the method provided by the present application.
The memory 1020 serves as a non-transitory computer readable storage medium storing non-transitory software programs, non-transitory computer executable programs, and modules, such as corresponding program instructions/modules in the virtual object processing method according to the embodiment of the present application. The processor 1010 executes various functional applications and data processing of the server by executing non-transitory software programs, instructions, and modules stored in the memory 1020, that is, implements the processing method of the virtual object corresponding to any execution subject in the above method embodiments.
The memory 1020 may include a program storage area and a data storage area. The program storage area may store an operating system and application programs required by at least one function; the data storage area may store data used by the data processing platform, data held in a secure isolation area, and the like. In addition, the memory 1020 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 1020 may optionally include memory located remotely from the processor 1010, which may be connected to the data processing electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Furthermore, the electronic device may further include: an input device 1030 and an output device 1040. The processor 1010, memory 1020, input device 1030, and output device 1040 may be connected by a bus 1050 or otherwise, for example in fig. 10.
The input device 1030 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the data processing electronic device, such as a touch screen, keypad, mouse, trackpad, touchpad, pointer stick, one or more mouse buttons, trackball, joystick, and the like. The output means 1040 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (Liquid Crystal Display, LCD), a light emitting diode (Light Emitting Diode, LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (Programmable Logic device, PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a cathode ray tube or LCD monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area network (Local Area Network, LAN), wide area network (Wide Area Network, WAN) and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Further, the present application also provides a non-transitory computer-readable storage medium storing computer instructions which, when executed by a processor, implement the technical solution provided by any one of the foregoing method embodiments.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (16)

1. A method for processing a virtual object, comprising:
establishing a three-dimensional first human body model according to the scanning data of the human body reference model;
obtaining a human skeleton model;
scaling the bones of the human skeleton model so that the bones of the human skeleton model match the sizes of the corresponding parts of the first human body model;
establishing an association relationship between the vertex of the first human body model and a corresponding bone in the human body bone model;
based on the association relation, adjusting the bone weight of bones corresponding to the vertexes of the first human body model to generate a second human body model;
reconstructing the skin of the second human body model based on the image of the skin of the human body reference model;
determining a garment map and garment materials of the human body reference model based on the image of the garment of the human body reference model;
reconstructing the clothing of the second human body model based on the clothing map and clothing material of the human body reference model.
2. The method according to claim 1, wherein the method further comprises:
binding the second human body model with a facial deformer, wherein the facial deformer is used for driving facial actions of the second human body model.
3. The method of claim 1, wherein reconstructing the skin of the second human body model based on the image of the skin of the human body reference model comprises:
determining a skin map and skin material of the human body reference model based on the image of the skin of the human body reference model;
reconstructing the skin of the second human body model based on the skin map and skin material of the human body reference model.
4. The method of claim 1, wherein reconstructing the skin of the second human body model comprises:
reconstructing the skin of the second human body model through three-dimensional structure reconstruction and subsurface-scattering material operations.
5. The method of claim 1, wherein prior to the generating the second mannequin, the method further comprises:
and reducing one or more of the face count, material complexity and number of projection computations of the bound first human body model.
6. The method according to claim 1, wherein the method further comprises:
acquiring voice information sent by a target user;
analyzing corresponding keywords and/or emotion information from the voice information;
and driving the second human body model to perform a feedback operation on the target user based on the keywords and/or the emotion information in combination with the context information of the current interaction, wherein the feedback operation comprises one or more of voice feedback, action feedback and expression feedback.
7. The method according to claim 1, wherein the method further comprises:
acquiring image information of a target user;
determining a current state of the target user according to the image information, wherein the current state comprises one or more of actions, expressions and environments of the target user;
and driving the second human body model to provide feedback to the target user according to the current state and/or the voice information of the target user.
8. A processing apparatus for a virtual object, comprising:
the first model generation module is used for establishing a three-dimensional first human body model according to the scanning data of the human body reference model;
the second model generation module is used for binding the first human body model and the human skeleton model to generate a second human body model;
a skin reconstruction module for reconstructing the skin of the second human body model based on the image of the skin of the human body reference model;
the second model generation module includes:
the acquisition module is used for acquiring a human skeleton model;
the association establishing unit is used for scaling the bones of the human skeleton model so that the bones of the human skeleton model match the sizes of the corresponding parts of the first human body model, and for establishing an association relationship between the vertices of the first human body model and the corresponding bones in the human skeleton model;
the weight adjustment module is used for adjusting the bone weights of the bones corresponding to the vertices of the first human body model based on the association relationship, to generate the second human body model;
a clothing determining module for determining clothing map and clothing material of the human body reference model based on the image of the clothing of the human body reference model;
and the clothing reconstruction module is used for reconstructing clothing of the second human body model based on the clothing map and the clothing materials of the human body reference model.
9. The apparatus according to claim 8, wherein the apparatus further comprises:
a facial deformer binding module, used for binding the second human body model with a facial deformer, wherein the facial deformer is used for driving facial actions of the second human body model.
10. The apparatus according to claim 8, wherein the skin reconstruction module is specifically used for:
determining a skin map and a skin material of the human body reference model based on the image of the skin of the human body reference model;
reconstructing the skin of the second human body model based on the skin map and the skin material of the human body reference model.
11. The apparatus according to claim 10, wherein the skin reconstruction module is further used for:
reconstructing the skin of the second human body model through three-dimensional structure reconstruction and a subsurface scattering material operation.
12. The apparatus according to claim 8, wherein the apparatus further comprises:
a reduction module, used for reducing one or more of the face count, the material complexity, and the number of projection calculations of the bound first human body model before the second human body model is generated.
13. The apparatus of claim 8, wherein the apparatus further comprises:
a voice information acquisition module, used for acquiring voice information uttered by a target user;
a voice parsing module, used for parsing corresponding keywords and/or emotion information from the voice information;
and a first feedback module, used for driving the second human body model to perform a feedback operation for the target user based on the keywords and/or the emotion information in combination with context information of the current interaction, wherein the feedback operation comprises one or more of voice feedback, action feedback, and expression feedback.
14. The apparatus of claim 8, wherein the apparatus further comprises:
an image information acquisition module, used for acquiring image information of a target user;
a state determination module, used for determining a current state of the target user according to the image information, wherein the current state comprises one or more of an action, an expression, and an environment of the target user;
and a second feedback module, used for driving the second human body model to feed back to the target user according to the current state and/or voice information of the target user.
15. An electronic device, comprising: at least one processor, a memory, and an interface for communicating with other electronic devices, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the virtual object processing method according to any one of claims 1 to 7.
16. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the virtual object processing method according to any one of claims 1 to 7.
CN202010220090.0A 2020-03-25 2020-03-25 Virtual object processing method, device, equipment and storage medium Active CN111445561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010220090.0A CN111445561B (en) 2020-03-25 2020-03-25 Virtual object processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111445561A CN111445561A (en) 2020-07-24
CN111445561B (en) 2023-11-17

Family

ID=71654743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010220090.0A Active CN111445561B (en) 2020-03-25 2020-03-25 Virtual object processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111445561B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951360B (en) * 2020-08-14 2023-06-23 腾讯科技(深圳)有限公司 Animation model processing method and device, electronic equipment and readable storage medium
CN112509099B (en) * 2020-11-30 2024-02-06 北京百度网讯科技有限公司 Avatar driving method, apparatus, device and storage medium
CN112652057B (en) * 2020-12-30 2024-05-07 北京百度网讯科技有限公司 Method, device, equipment and storage medium for generating human body three-dimensional model
CN113409430B (en) 2021-06-01 2023-06-23 北京百度网讯科技有限公司 Drivable three-dimensional character generation method, drivable three-dimensional character generation device, electronic equipment and storage medium
CN115529500A (en) * 2022-09-20 2022-12-27 中国电信股份有限公司 Method and device for generating dynamic image

Patent Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2028065A1 (en) * 1989-11-15 1991-05-16 George S. Allen Method and Apparatus for Imaging the Anatomy
CN101082901A (en) * 2006-06-01 2007-12-05 上海戏剧学院 Virtual rehearsing system
TW200818056A (en) * 2006-10-12 2008-04-16 Nat Univ Tsing Hua Drivable simulation model combining curvature profile and skeleton and method of producing the same
CN101271589A (en) * 2007-03-22 2008-09-24 中国科学院计算技术研究所 Three-dimensional mannequin joint center extraction method
CN101719284A (en) * 2009-12-25 2010-06-02 北京航空航天大学 Method for physically deforming skin of virtual human based on hierarchical model
KR20140052631A (en) * 2012-10-25 2014-05-07 주식회사 다림비젼 Real time 3d simulator using multi-sensor scan including x-ray for the modeling of skeleton, skin and muscle reference model
CN103955963A (en) * 2014-04-30 2014-07-30 崔岩 Digital human body three-dimensional reconstruction method and system based on Kinect device
CN105303602A (en) * 2015-10-09 2016-02-03 摩多文化(深圳)有限公司 Automatic 3D human body model bone binding process and method
CN105528808A (en) * 2016-02-29 2016-04-27 华中师范大学 Jingchu folktale clay sculpture figure digital 3D model synthesis method and system
CN105913486A (en) * 2016-04-08 2016-08-31 东华大学 Digital mixing human body rapid modeling method applied to apparel industries
CN106327589A (en) * 2016-08-17 2017-01-11 北京中达金桥技术股份有限公司 Kinect-based 3D virtual dressing mirror realization method and system
CN106934385A (en) * 2017-03-24 2017-07-07 深圳市唯特视科技有限公司 A kind of character physical's shape method of estimation based on 3D scannings
WO2018209570A1 (en) * 2017-05-16 2018-11-22 深圳市三维人工智能科技有限公司 Device and method for inheriting vertex weight of 3d scanning model
CN107491506A (en) * 2017-07-31 2017-12-19 西安蒜泥电子科技有限责任公司 Lot-size model posture transform method
KR20190023486A (en) * 2017-08-29 2019-03-08 주식회사 룩씨두 Method And Apparatus for Providing 3D Fitting
CN107705365A (en) * 2017-09-08 2018-02-16 郭睿 Editable three-dimensional (3 D) manikin creation method, device, electronic equipment and computer program product
WO2019095051A1 (en) * 2017-11-14 2019-05-23 Ziva Dynamics Inc. Method and system for generating an animation-ready anatomy
CN108597015A (en) * 2018-01-08 2018-09-28 江苏辰锐网络科技有限公司 The automatic binding system of three dimensional biological model bone, method, equipment and computer program product
CN108537888A (en) * 2018-04-09 2018-09-14 浙江大学 A kind of quick fitting method based on skeleton
CN110751733A (en) * 2018-07-04 2020-02-04 桂滨 Method and apparatus for converting 3D scanned object into avatar
CN109064550A (en) * 2018-07-27 2018-12-21 北京花开影视制作有限公司 A kind of body scans modeling method
CN109615683A (en) * 2018-08-30 2019-04-12 广州多维魔镜高新科技有限公司 A kind of 3D game animation model production method based on 3D dress form
CN109410298A (en) * 2018-11-02 2019-03-01 北京恒信彩虹科技有限公司 A kind of production method and expression shape change method of dummy model
CN109816756A (en) * 2019-01-08 2019-05-28 新疆农业大学 A kind of movements design method of virtual bionic model
CN110148209A (en) * 2019-04-30 2019-08-20 深圳市华讯方舟太赫兹科技有限公司 Manikin generation method, image processing equipment and the device with store function
CN112184921A (en) * 2020-10-30 2021-01-05 北京百度网讯科技有限公司 Avatar driving method, apparatus, device, and medium
CN115471632A (en) * 2022-10-19 2022-12-13 深圳仙库智能有限公司 Real human body model reconstruction method, device, equipment and medium based on 3D scanning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A method for human motion skeleton extraction and automatic animation generation; 吴伟和; 郝爱民; 赵永涛; 万巧慧; 李帅; Journal of Computer Research and Development (No. 07); 1408-1419 *
An editable 3D human skinned-mesh animation synthesis method; 李连东; 樊养余; 雷涛; 吕国云; Application Research of Computers (No. 03); 1176-1179 *
Research and implementation of dynamic simulation technology for 3D human body models; 陈小满; Journal of Chifeng University (Natural Science Edition); 2019-04-25 (No. 04); 55-58 *
Research on 3D character model production in game animation; 贾银莎 et al.; Management & Technology of SME; 2009-05-05 (No. 05); 274-275 *
Research status and prospects of human body modeling and skin deformation techniques in computer animation; 吴小𠇔; 马利庄; 顾宝军; Journal of Image and Graphics (No. 04); 565-573 *

Also Published As

Publication number Publication date
CN111445561A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN111445561B (en) Virtual object processing method, device, equipment and storage medium
CN111833418B (en) Animation interaction method, device, equipment and storage medium
CN112215927B (en) Face video synthesis method, device, equipment and medium
US11736756B2 (en) Producing realistic body movement using body images
CN110163054B (en) Method and device for generating human face three-dimensional image
CN110286756A (en) Method for processing video frequency, device, system, terminal device and storage medium
CN111368137A (en) Video generation method and device, electronic equipment and readable storage medium
US11918412B2 (en) Generating a simulated image of a baby
CN111294665A (en) Video generation method and device, electronic equipment and readable storage medium
US11514638B2 (en) 3D asset generation from 2D images
CN111538456A (en) Human-computer interaction method, device, terminal and storage medium based on virtual image
WO2018175869A1 (en) System and method for mass-animating characters in animated sequences
CN117036583A (en) Video generation method, device, storage medium and computer equipment
CN112330805A (en) Face 3D model generation method, device and equipment and readable storage medium
CN117274491A (en) Training method, device, equipment and medium for three-dimensional reconstruction model
CN115049016A (en) Model driving method and device based on emotion recognition
US20210279928A1 (en) Method and apparatus for image processing
CN113542758A (en) Generating antagonistic neural network assisted video compression and broadcast
US20230106330A1 (en) Method for creating a variable model of a face of a person
Lokesh et al. Computer Interaction to human through photorealistic facial model for inter-process communication
CN113542759B (en) Generating an antagonistic neural network assisted video reconstruction
CN113379879A (en) Interaction method, device, equipment, storage medium and computer program product
WO2022195818A1 (en) Image generation system and image generation method
US20240119690A1 (en) Stylizing representations in immersive reality applications
US20230085339A1 (en) Generating an avatar having expressions that mimics expressions of a person

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant