CN111445561A - Virtual object processing method, device, equipment and storage medium - Google Patents
- Publication number
- CN111445561A (application number CN202010220090.0A)
- Authority
- CN
- China
- Prior art keywords
- human body
- model
- skin
- human
- target user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G06Q50/26—Government or public services
- G06T15/50—Lighting effects
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The application discloses a virtual object processing method, apparatus, device, and storage medium, relating to the technical field of computer vision. The method comprises the following steps: establishing a three-dimensional first human body model based on a human body reference model; binding the first human body model with a human skeleton model to generate a second human body model; and reconstructing the skin of the second human body model based on an image of the skin of the human body reference model. The technical scheme of the embodiments of the application can faithfully reflect the detail information of the skin of the human body reference model and avoid the uncanny-valley effect.
Description
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing a virtual object.
Background
As computer vision technology develops and interacting with users through virtual objects, such as digital humans, becomes more common, how to generate vivid and natural virtual character objects has become a focus of attention.
In one technical scheme, scanning data is obtained by three-dimensionally scanning a human body reference model, and a virtual character model is established based on the scanning data. However, in this technical solution, on one hand, a virtual character model obtained from scanning data alone cannot support skeleton animation of the model; on the other hand, due to illumination or shadows, the three-dimensional scanning result can hardly reflect the detail information of the skin of the human body reference model, and the uncanny-valley effect is likely to occur.
Therefore, how to realize skeleton animation of the virtual character model while avoiding the uncanny-valley effect has become a technical problem to be solved urgently.
Disclosure of Invention
The embodiments of the application provide a virtual object processing method, apparatus, device, and storage medium, which are used to solve the problems of realizing skeleton animation of a virtual character model while avoiding the uncanny-valley effect.
In a first aspect, the present application provides a method for processing a virtual object, including:
establishing a three-dimensional first human body model based on the human body reference model;
binding the first human body model with a human skeleton model to generate a second human body model;
reconstructing the skin of the second human model based on the image of the skin of the human reference model.
In a second aspect, the present application provides an apparatus for processing a virtual object, including:
the first model generation module is used for establishing a three-dimensional first human body model based on the human body reference model;
the second model generation module is used for binding the first human body model with the human skeleton model to generate a second human body model;
a skin reconstruction module for reconstructing the skin of the second human body model based on the image of the skin of the human body reference model.
In a third aspect, the present application provides an electronic device, comprising: at least one processor, a memory, and an interface to communicate with other electronic devices; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of processing a virtual object according to any one of the first aspect.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of processing the virtual object of any one of the first aspect.
One embodiment in the above application has the following advantages or benefits: on one hand, a three-dimensional first human body model is established based on the human body reference model and bound with the human skeleton model, so skeleton animation of the human body model can be realized; on the other hand, the skin of the second human body model is reconstructed based on an image of the skin of the human body reference model, so the detail information of that skin can be faithfully reflected and the uncanny-valley effect is avoided.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic diagram of an application scenario of a processing method of a virtual object according to some embodiments of the present application.
Fig. 2 is a schematic flow chart of a processing method of a virtual object according to some embodiments of the present application.
Fig. 3 is a flowchart illustrating a processing method of a virtual object according to another embodiment of the present application.
FIG. 4 is a flow diagram of interaction with a second mannequin provided in accordance with some embodiments of the present application.
FIG. 5 is a schematic flow chart illustrating interaction with a second human body model according to further embodiments of the present application.
Fig. 6 is a schematic block diagram of a processing apparatus of a virtual object provided in accordance with some embodiments of the present application.
Fig. 7 is a schematic block diagram of a second model generation module provided in accordance with some embodiments of the present application.
FIG. 8 is a schematic block diagram of a processing device for virtual objects provided in accordance with further embodiments of the present application.
Fig. 9 is a schematic block diagram of a processing device of a virtual object provided in accordance with further embodiments of the present application.
Fig. 10 is a block diagram of an electronic device for implementing a processing method of a virtual object according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the current technical scheme for generating a virtual human, scanning data is obtained by three-dimensionally scanning a human body reference model, and a virtual character model is established based on the scanning data. However, in this technical solution, on one hand, a virtual character model obtained from scanning data alone cannot support skeleton animation of the model; on the other hand, due to illumination, shadows, and the like, the three-dimensional scanning result can hardly reflect the detailed information of the skin of the human body reference model, and the uncanny-valley effect is likely to occur.
Based on the above, the basic idea of the present application is: bind the human body model with the human skeleton model, and correct the bound human body model based on a high-definition image of the skin of the human body reference model. According to the technical scheme in the embodiments of the application, on one hand, a three-dimensional first human body model is established based on a human body reference model and bound with a human skeleton model, so skeleton animation of the human body model can be realized; on the other hand, the skin of the second human body model is reconstructed based on an image of the skin of the human body reference model, so the detail information of that skin can be faithfully reflected and the uncanny-valley effect is avoided.
The following explains terms and the like referred to in the present application:
The uncanny valley phenomenon: as a robot's appearance and movements become more similar to a human's, people respond to it with increasingly positive emotion; but once a certain point is reached, even small remaining differences from a real human become conspicuous and jarring, making the whole figure appear rigid and eerie.
Human body reference model: representing a real person or a model of a real person for generating a virtual manikin.
A face deformer: a deformer used to drive the facial muscles of a human body model and generate facial expressions.
Sub-surface scattering: a physically based rendering technique for simulating semi-translucent materials such as skin, jade, and milk.
A digital person: digital character technologies such as portrait modeling and motion capture give the digital human a vivid and natural appearance, while artificial-intelligence technologies such as speech recognition, natural language understanding, and dialogue management give the digital human the abilities of cognition, understanding, and expression.
The following describes a method for processing a virtual object according to the present application with reference to specific embodiments.
Fig. 1 is a schematic diagram of an application scenario of a processing method of a virtual object according to some embodiments of the present application. Referring to fig. 1, the application scenario includes an image acquisition device 110 and an image processing device 120. The image acquisition device 110 is configured to acquire scanning data of the human body reference model 130: for example, the human body reference model 130 may be scanned by the image acquisition device 110 to obtain its scanning data, which is then sent to the image processing device 120; the image acquisition device 110 may be an optical scanner or a camera. The image processing device 120 is configured to establish a three-dimensional first human body model according to the scanning data, bind the first human body model with the human skeleton model to generate a second human body model, and reconstruct the skin of the second human body model based on an image of the skin of the human body reference model. For example, the image processing device 120 may run image processing software such as Maya, which processes the scanning data to generate the first human body model.
It should be noted that the image processing apparatus 120 may be a desktop computer or a laptop computer, or may be other suitable general-purpose computing devices, such as a notebook computer or a cloud computing device, which is not limited in this respect.
A method for processing a virtual object according to an exemplary embodiment of the present application is described below with reference to fig. 2 in conjunction with the application scenario of fig. 1. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
Fig. 2 is a schematic flow chart of a processing method of a virtual object according to some embodiments of the present application. The processing method of the virtual object includes steps S210 to S230, and the processing method of the virtual object can be applied to a processing device of the virtual object, such as the image processing apparatus 120 of fig. 1, and the following describes the processing method of the virtual object in the exemplary embodiment in detail with reference to the drawings.
Referring to fig. 2, in step S210, a three-dimensional first human body model is created based on the human body reference model.
In an example embodiment, scanning data of the human body reference model is acquired, and a three-dimensional first human body model is built from that scanning data. The scanning data of the human body reference model may be acquired as follows: the human body reference model is scanned from all directions by a scanner or by a 360-degree photography rig built from single-lens-reflex cameras, yielding the scanning data of the human body reference model. Furthermore, the scanning data of the human body reference model includes human body depth image data, and the three-dimensional first human body model is reconstructed from the scanning data through a surface reconstruction operation.
In step S220, the first human body model is bound to the human skeleton model to generate a second human body model.
In an example embodiment, a human skeleton model is obtained, the bones of each part of the human skeleton model are matched with the corresponding parts of the first human body model, and an association relationship between the vertices of the first human body model and the corresponding bones in the human skeleton model is established. For example, each part of the first human body model is matched with a bone of the human skeleton model according to the name of the part and the name of the bone, and the association relationship between the vertices of the first human body model and the corresponding bones is established based on the matching result; this association relationship can be regarded as the skinning information of the first human body model.
Further, since the sizes of the first mannequin and the human bone model may not match, in some embodiments, the bones of the human bone model are scaled such that the bones of the human bone model match the sizes of the corresponding parts of the first mannequin.
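The bone-scaling step above can be sketched as follows. This is an illustrative sketch only, assuming each bone and each model part is summarized by a single length; the part names are hypothetical:

```python
# Illustrative sketch: scale each bone of the skeleton so its length matches
# the corresponding part of the first human body model. The dictionaries of
# named lengths are an assumption for demonstration, not the patent's API.

def scale_bones(bone_lengths, part_lengths):
    """Return a per-bone scale factor so bones match the model's part sizes."""
    scales = {}
    for name, bone_len in bone_lengths.items():
        part_len = part_lengths.get(name)
        if part_len is None or bone_len == 0:
            scales[name] = 1.0  # no matching part: leave the bone unchanged
        else:
            scales[name] = part_len / bone_len
    return scales

skeleton = {"upper_arm": 28.0, "forearm": 25.0}
model_parts = {"upper_arm": 30.8, "forearm": 25.0}
print(scale_bones(skeleton, model_parts))  # forearm keeps scale 1.0
```

In practice the matching would be done per bone transform rather than per scalar length, but the principle of deriving a scale factor from the size mismatch is the same.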
In step S230, the skin of the second human model is reconstructed based on the image of the skin of the human reference model.
In an example embodiment, acquiring a high-definition image of the skin of the human reference model, determining a map of the skin of the second human model according to the high-definition image of the human reference model, and reconstructing the skin of the second human model according to the determined map of the skin of the second human model, the map of the skin of the second human model may include: one or more of diffuse reflection mapping, highlight mapping, and normal mapping.
It should be noted that the skin map in the embodiment of the present application is not limited to the above maps, for example, the map of the second human body model may further include a roughness map or a mask map, and the like, which is also within the scope of the present application.
According to the technical solution in the example embodiment of fig. 2, on one hand, a three-dimensional first human body model is established based on a human body reference model, and the first human body model is bound with a human body skeleton model, so that skeleton animation of the human body model can be realized; on the other hand, the skin of the second human body model is reconstructed based on the bound image of the skin of the human body reference model, so that the detail information of the skin of the human body reference model can be truly reflected, and the terrorism phenomenon is avoided.
Furthermore, in some embodiments, the generated second human body model is bound with a face deformer, where the face deformer is used to drive the facial movements of the second human body model. By binding the second human body model with the face deformer, various facial expression movements of the second human body model can be realized through the deformer, making its facial expressions more vivid and natural.
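A face deformer of the kind described above can be approximated by blendshape interpolation: each expression stores per-vertex offsets that are blended over the neutral face. This is a minimal sketch; the expression names and offset values are invented for illustration:

```python
# Blendshape-style sketch of a face deformer: the deformed face is the neutral
# face plus a weighted sum of per-expression vertex offsets.

def apply_expressions(neutral, blendshapes, weights):
    """neutral: list of (x, y, z); blendshapes: name -> per-vertex offsets;
    weights: name -> blend weight in [0, 1]."""
    result = []
    for i, (x, y, z) in enumerate(neutral):
        for name, w in weights.items():
            dx, dy, dz = blendshapes[name][i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        result.append((x, y, z))
    return result

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shapes = {"smile": [(0.0, 0.5, 0.0), (0.0, 0.5, 0.0)]}
print(apply_expressions(neutral, shapes, {"smile": 0.5}))
# [(0.0, 0.25, 0.0), (1.0, 0.25, 0.0)]
```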
Further, in order to make the clothing of the second human body model more natural, in an example embodiment, the clothing map and the clothing material of the human body reference model are determined based on the image of the clothing of the human body reference model; and reconstructing the clothing of the second human body model based on the clothing chartlet and the clothing material of the human body reference model.
Further, in some embodiments, the first mannequin is bound to a skeletal model and a muscle model of the human body, generating a second mannequin. The muscle model is introduced into the second human body model, so that the action and the expression of the second human body model can be controlled more accurately, and the action and the expression of the second human body model are more natural and real.
Fig. 3 is a flowchart illustrating a processing method of a virtual object according to another embodiment of the present application.
Referring to fig. 3, in step S310, a three-dimensional first human body model is created based on the human body reference model.
Since the implementation process and effect of step S310 are similar to those of step S210, no further description is provided herein.
In step S320, an association relationship between vertices of the first human body model and corresponding bones in the human body bone model is established.
In an example embodiment, the parts of the first human body model are matched with the bones of the human skeleton model according to the names of the parts and the names of the bones, and the association relationship between the vertices of the first human body model and the corresponding bones in the human skeleton model is established based on the matching result; this association relationship can be regarded as the skinning information of the first human body model.
In step S330, the bone weight of the bone corresponding to the vertex of the first human model is adjusted based on the association relationship, and a second human model is generated.
In some example embodiments, a vertex in the first human body model may correspond to a plurality of bones, each with its own weight; when the model is posed, the final position of the vertex is obtained by weighted-averaging the positions computed from each influencing bone. Thus, to make the first human body model match the human skeleton model more closely, the bone weights at the vertices of the first human body model may also be adjusted, so that after the second human body model is generated its actions can be better controlled.
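The weighted averaging described above is the core of linear blend skinning. In the simplified sketch below, each bone's transform is reduced to a plain translation for clarity (real skinning uses full bone matrices); the names and values are illustrative:

```python
# Simplified linear-blend-skinning sketch: a vertex's final position is the
# weight-blended average of the positions each influencing bone would move it
# to. Bone transforms are reduced to translations here for clarity.

def skin_vertex(rest_pos, influences):
    """influences: list of (bone_offset, weight) pairs; weights should sum to 1."""
    x = sum(w * (rest_pos[0] + off[0]) for off, w in influences)
    y = sum(w * (rest_pos[1] + off[1]) for off, w in influences)
    z = sum(w * (rest_pos[2] + off[2]) for off, w in influences)
    return (x, y, z)

# A vertex influenced 75% by a bone that moves it +1 in x, 25% by a static bone:
pos = skin_vertex((0.0, 1.0, 0.0),
                  [((1.0, 0.0, 0.0), 0.75),
                   ((0.0, 0.0, 0.0), 0.25)])
print(pos)  # (0.75, 1.0, 0.0)
```

Adjusting the per-vertex weights shifts how strongly each bone pulls the surface, which is exactly why weight tuning improves control over the generated model's motion.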
In step S340, the skin of the second human model is reconstructed based on the image of the skin of the human reference model.
In an example embodiment, a skin map and skin material of the human body reference model are determined based on the image of the skin of the human body reference model, and the skin of the second human body model is reconstructed based on that skin map and skin material. The maps of the skin of the second human body model may include one or more of a diffuse-reflection map, a highlight (specular) map, and a normal map. Determining the skin map and material from an image of the real skin, and reconstructing the model's skin from them, reflects the details of the skin of the second human body model more accurately and makes the generated skin more vivid and natural.
It should be noted that the skin map in the embodiment of the present application is not limited to the above maps, for example, the map of the second human body model may further include a roughness map or a mask map, and the like, which is also within the scope of the present application.
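One way to organize the maps named above into a single skin material is sketched below, with a consistency check that all maps share one resolution so they can be sampled with the same UV coordinates. The map representation (a `(width, height, pixels)` tuple) is a stand-in assumption for illustration:

```python
# Hedged sketch: collect the per-channel skin maps (diffuse, specular, normal,
# plus optional extras such as roughness) into one material description.

def build_skin_material(maps):
    """maps: dict of map name -> (width, height, pixels)."""
    required = {"diffuse", "specular", "normal"}
    missing = required - maps.keys()
    if missing:
        raise ValueError(f"missing skin maps: {sorted(missing)}")
    sizes = {name: (m[0], m[1]) for name, m in maps.items()}
    if len(set(sizes.values())) != 1:  # all maps must share one resolution
        raise ValueError(f"map resolutions differ: {sizes}")
    return {"type": "skin", "maps": maps}
```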
According to the technical scheme in the example embodiment of fig. 3, the action of the generated human body model can be more accurately controlled by adjusting the skeleton weight of the model, so that the action of the human body model is more vivid and natural, and the terrorism phenomenon is further avoided.
Further, in an example embodiment, the skin of the second human body model may include three layers: a surface oil layer, an epidermis layer, and a dermis layer. The surface oil layer is used to simulate the specular highlight reflection of skin, while the epidermis layer and the dermis layer are used to simulate the contribution of subsurface scattering. Reconstructing the skin of the second human body model based on the skin map and skin material of the human body reference model comprises: determining the skin maps and materials corresponding to the surface oil layer, the epidermis layer, and the dermis layer, and reconstructing the skin of the second human body model layer by layer.
According to the technical scheme of this embodiment, the skin of the second human body model is divided into three layers and built layer by layer, so the details of the skin can be reflected more accurately, the generated skin is more vivid and natural, and the uncanny-valley effect is further avoided.
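The layer-by-layer construction could be sketched as a weighted composition of the three layers' contributions. The weights and per-layer values below are illustrative assumptions, not values from the patent; a real renderer would evaluate each layer with its own BRDF and diffusion profile:

```python
# Toy composition of the three skin layers: the oil layer contributes specular
# highlights, the epidermis and dermis contribute subsurface scattering.
# Layer weights are invented for illustration.

def compose_skin(oil_spec, epidermis_sss, dermis_sss, weights=(0.1, 0.5, 0.4)):
    """Blend per-layer RGB contributions into a final skin color."""
    w_oil, w_epi, w_derm = weights
    return tuple(
        w_oil * o + w_epi * e + w_derm * d
        for o, e, d in zip(oil_spec, epidermis_sss, dermis_sss)
    )
```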
Further, to reduce processing overhead without affecting the realism of the second human body model, in some embodiments, before generating the second human body model, the method further comprises: reducing one or more of the face count, the material complexity, and the number of projection calculations of the bound first human body model. For example, these may be reduced according to the computational capacity of the image processing device.
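The face-count reduction above can be driven by a budget derived from device capability, as in this sketch; the capability tiers and face budgets are invented for illustration:

```python
# Hedged sketch: derive how far to decimate the model from a per-device face
# budget. The tier names and budget numbers are assumptions, not patent values.

FACE_BUDGETS = {"low": 20_000, "medium": 60_000, "high": 150_000}

def reduction_ratio(face_count, tier):
    """Fraction of faces to keep for the given device capability tier."""
    budget = FACE_BUDGETS[tier]
    if face_count <= budget:
        return 1.0                # already within budget: keep all faces
    return budget / face_count    # decimate down to the budget

print(reduction_ratio(300_000, "low"))  # keep only a small fraction of faces
```

The resulting ratio would then feed a mesh-decimation pass; material complexity and projection counts can be capped with analogous per-tier budgets.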
Furthermore, in some embodiments, reconstructing the skin of the second human body model comprises: reconstructing the skin through three-dimensional structure reconstruction combined with a subsurface-scattering material computation. Applying the subsurface-scattering material computation during the three-dimensional reconstruction makes the generated skin more natural and realistic.
FIG. 4 is a schematic flow diagram of interaction with a second mannequin according to some embodiments of the present application.
Referring to fig. 4, in step S410, voice information uttered by a target user is acquired;
in an example embodiment, the surrounding voice information is monitored by a monitoring device such as a microphone, and when the voice information sent by the target user is monitored, the voice information sent by the target user is acquired. The target user may be a user who wakes up the second human body model by a wake-up word, or may be a user standing right opposite the second human body model.
In step S420, the corresponding keyword and/or emotion information is parsed from the voice information of the target user.
In an example embodiment, corresponding text information is recognized from the voice information of the target user, and corresponding keywords are extracted from the text. Furthermore, the emotion information of the target user is determined from the text together with cues contained in the voice, such as pitch and speech rate. For example, in an exhibition-hall scene, the voice information "Excuse me, where is exhibition hall No. 3?" uttered by a user is detected; keywords such as "No. 3" and "exhibition hall" are extracted from it, and the user's intention is determined to be asking for directions. Furthermore, from the text (such as the polite "excuse me"), the pitch, and the speech rate, the user's emotion is determined to be relaxed. The user's emotion may also be any other applicable emotion, such as urgency, happiness, or hesitation, which is also within the scope of the present application.
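A toy sketch of this parsing step is shown below: known keywords are matched against the recognized text, and a coarse emotion is guessed from pitch and speech rate. The vocabulary, thresholds, and emotion labels are assumptions for illustration only:

```python
# Toy keyword/emotion parser. Real systems would use speech recognition plus
# NLU models; the keyword list and prosody thresholds here are invented.

KNOWN_KEYWORDS = ("No. 3", "exhibition hall", "thank you")

def parse_utterance(text, pitch_hz, words_per_min):
    """Return (matched keywords, coarse emotion label)."""
    keywords = [k for k in KNOWN_KEYWORDS if k.lower() in text.lower()]
    if pitch_hz > 220 and words_per_min > 160:
        emotion = "urgent"       # high pitch + fast speech
    elif words_per_min < 110:
        emotion = "hesitant"     # slow, halting speech
    else:
        emotion = "relaxed"
    return keywords, emotion

print(parse_utterance("Excuse me, where is exhibition hall No. 3?", 180, 130))
# (['No. 3', 'exhibition hall'], 'relaxed')
```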
In step S430, based on the keywords and/or emotion information obtained by parsing, and in combination with the context information of the current interaction, the second human body model is driven to perform feedback operation on the target user, where the feedback operation includes one or more of voice feedback, motion feedback, and expression feedback.
In an example embodiment, based on the keywords and/or emotion information obtained through parsing, and in combination with the context information of the current interaction, the content of a feedback operation for the target user is determined, and the second human body model is driven to perform that feedback operation accordingly. The feedback operation includes one or more of voice feedback, action feedback, and expression feedback; the context information of the current interaction may be the content of the previous round of interaction or the topic of the current interaction. For example, if the parsed keyword is "thank you", whether the current round of interaction has ended is determined from the context information: if it has, the second human body model is driven to respond with a smiling nod; if it has not, a prompt such as "Is there anything else I can help you with?" is issued. While the user is speaking continuously, the second human body model is driven to nod every few seconds or to utter an acknowledgment such as "uh-huh" to indicate that it is listening.
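The feedback-selection logic of this step can be sketched as a small rule table. The rules mirror the "thank you" example in the text, while the response strings and context keys are hypothetical stand-ins for whatever dialogue policy an implementation would actually use:

```python
def choose_feedback(keyword, emotion, context):
    """Select a feedback operation (voice/action/expression) from a
    parsed keyword, an emotion label, and interaction context.
    Rules and strings are illustrative assumptions."""
    if keyword == "thank you":
        if context.get("round_finished"):
            # round over: a smiling nod, no speech
            return {"voice": None, "action": "nod", "expression": "smile"}
        return {"voice": "Is there anything else I can help you with?",
                "action": None, "expression": "smile"}
    if context.get("user_speaking"):
        # back-channel every few seconds while the user keeps talking
        return {"voice": "uh-huh", "action": "nod", "expression": "attentive"}
    return {"voice": "Could you repeat that, please?", "action": None,
            "expression": "concerned" if emotion == "urgent" else "neutral"}
```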
According to the technical solution in the example embodiment of fig. 4, on one hand, corresponding keywords and/or emotion information are obtained from the voice information uttered by the target user, so that the target user's intention and emotion can be accurately determined; on the other hand, based on the keywords and/or emotion information, combined with the context information of the current interaction, the target user is given feedback through the second human body model, so that service can be provided in a person-to-person interactive manner and the user's service experience is improved.
FIG. 5 is a flow diagram illustrating interaction with a second mannequin according to still further embodiments of the present application.
Referring to fig. 5, in step S510, image information of a target user is acquired.
In an example embodiment, an image of a target user is acquired by an image acquisition device, such as a camera.
In step S520, a current state of the target user is determined according to the image information of the target user, where the current state includes: one or more of the target user's actions, expressions, environment.
In an example embodiment, one or more of action information, expression information, and surrounding-environment information of the target user are extracted from the image of the target user, and the target user's current action, current expression, and current environment are determined accordingly. For example, if the current action information indicates that the target user is carrying luggage, the current action is determined to include a luggage-carrying action; if the current expression information contains an urgent expression, the current expression is determined to be urgent; and if the surrounding-environment information contains companion information, the current environment is determined to include companions.
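A minimal sketch of mapping detector outputs to the current state described above, assuming a hypothetical detection dictionary (object labels, an expression label, a nearby-people count) rather than any specific vision model:

```python
def infer_state(detections):
    """Map hypothetical vision-detector outputs to the user's current
    state (actions, expression, environment). The field names are
    illustrative assumptions."""
    state = {"actions": [], "expression": "neutral", "environment": []}
    if "suitcase" in detections.get("objects", []):
        state["actions"].append("carrying luggage")
    if detections.get("expression") == "urgent":
        state["expression"] = "urgent"
    if detections.get("nearby_people", 0) > 0:
        state["environment"].append("with companions")
    return state
```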
It should be noted that the current status of the target user may also include other suitable status information, for example, session information of the current or previous session or personal information of the target user, and the like, which is also within the protection scope of the present application.
In step S530, the second human body model is driven to perform feedback on the target user according to the current state and/or the voice information of the target user.
In an example embodiment, a response is given to the target user according to the target user's current state and/or voice information. Taking a railway-station scene as an example, if the current state of the target user is carrying luggage and the voice information contains the keywords "train number", "waiting room", and "which", the target user's intention is determined to be going to the waiting room, and the location of the waiting room corresponding to the train number is fed back to the target user. Taking a hotel-welcome scene as an example, if the target user has just arrived at the entrance of the lobby, the second human body model is driven to say "welcome" with a smile and a slight bow.
Further, in an example embodiment, a feedback operation including one or more of voice feedback, action feedback, and expression feedback is performed on the target user through the second human body model, such as a digital human, according to the current state of the target user. For example, when it is determined from the current state that the user is moving in front of the device, the eyes of the second human body model, e.g., a digital human, follow the user, but do not stare continuously: the model occasionally glances elsewhere at random. If less than a predetermined time, e.g., 3 seconds, has elapsed since the last glance, the probability of another random glance is 0; if between 3 and 10 seconds have elapsed, the probability increases linearly or non-linearly from 0 to 100%. By having the digital human feed back to the target user in this way, service can be provided in a person-to-person interactive manner and the user's service experience is improved.
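The glance timing described above (probability 0 within 3 seconds of the last glance, rising to 100% between 3 and 10 seconds) can be sketched as a simple ramp; the thresholds are the example values from the text, and the linear variant is shown:

```python
def glance_probability(seconds_since_last_glance, t_min=3.0, t_max=10.0):
    """Probability that the digital human glances away: 0 within t_min
    seconds of the last glance, ramping linearly to 1.0 at t_max.
    (The text also allows a non-linear ramp; linear is shown here.)"""
    if seconds_since_last_glance <= t_min:
        return 0.0
    if seconds_since_last_glance >= t_max:
        return 1.0
    return (seconds_since_last_glance - t_min) / (t_max - t_min)
```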
According to the technical solution in the example embodiment of fig. 5, feedback is given according to the current state and/or voice information of the target user, that is, according to a recognition and understanding of the target user's current situation, so that efficient and natural human-like interaction can be realized.
Fig. 6 is a schematic block diagram of a processing apparatus of a virtual object provided in accordance with some embodiments of the present application. Referring to fig. 6, the apparatus 600 for processing a virtual object includes:
a first model generation module 610 for building a three-dimensional first human body model based on the human body reference model;
a second model generation module 620, configured to bind the first human body model with a human skeleton model to generate a second human body model;
a skin reconstruction module 630 for reconstructing the skin of the second human model based on the image of the skin of the human reference model.
According to the technical solution in the example embodiment of fig. 6, on one hand, a three-dimensional first human body model is established based on a human body reference model, and the first human body model is bound with a human skeleton model, so that skeleton animation of the human body model can be realized; on the other hand, the skin of the second human body model is reconstructed based on the image of the skin of the human body reference model, so that the detail information of that skin is faithfully reproduced and the uncanny-valley effect is avoided.
FIG. 7 is a schematic block diagram of a second model generation module provided in accordance with some embodiments of the present application. Referring to fig. 7, in some embodiments of the present application, the second model generation module 620 includes:
an association establishing unit 710, configured to establish an association relationship between a vertex of the first human body model and a corresponding bone in the human body bone model;
a weight adjusting unit 720, configured to adjust the bone weights of the bones corresponding to the vertices of the first human body model based on the association relationship, and generate the second human body model.
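As a hedged sketch of what vertex-bone association and weight adjustment amount to, the classic technique is linear blend skinning: each vertex stores normalized per-bone weights, and its skinned position blends the bone motions by those weights. The toy version below uses 2D bone translations in place of full transform matrices, and is an illustration rather than the application's implementation:

```python
# Toy linear blend skinning: per-vertex bone weights plus per-bone
# motion produce a blended, skinned vertex position.

def normalize_weights(weights):
    """Normalize bone weights so they sum to 1 (the usual invariant
    maintained after weight adjustment)."""
    total = sum(weights.values())
    return {bone: w / total for bone, w in weights.items()}

def skin_vertex(position, bone_weights, bone_offsets):
    """Blend per-bone 2D offsets into a skinned vertex position."""
    x, y = position
    dx = sum(w * bone_offsets[b][0] for b, w in bone_weights.items())
    dy = sum(w * bone_offsets[b][1] for b, w in bone_weights.items())
    return (x + dx, y + dy)
```

A vertex weighted half to the upper arm and half to the forearm moves halfway with each, which is what lets the mesh bend smoothly at a joint instead of tearing.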
In some embodiments of the present application, the apparatus 600 further comprises:
and the face deformer binding module is used for binding the second human body model with the face deformer, wherein the face deformer is used for driving the face action of the second human body model.
In some embodiments of the present application, the skin reconstruction module 630 is specifically configured to:
determining a skin map and skin material of the human body reference model based on the image of the skin of the human body reference model;
and reconstructing the skin of the second human body model based on the skin map and the skin material of the human body reference model.
In some embodiments of the present application, the apparatus 600 further comprises:
the clothing determining module is used for determining a clothing map and clothing material of the human body reference model based on the image of the clothing of the human body reference model;
and the clothing rebuilding module is used for rebuilding clothing of the second human body model based on the clothing map and the clothing material of the human body reference model.
In some embodiments of the present application, the skin reconstruction module 630 is further specifically configured to:
and reconstructing the skin of the second human body model through three-dimensional structure reconstruction and subsurface-scattering material operations.
In some embodiments of the present application, the apparatus 600 further comprises:
a reduction module, configured to reduce one or more of the face count, the material complexity, and the number of projection calculations of the bound first human body model before the second human body model is generated.
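A minimal sketch of the face-count reduction, using vertex-clustering decimation as one stand-in technique (the application does not name a specific simplification algorithm): vertices are snapped to a coarse grid, vertices sharing a cell are merged, and faces that collapse are dropped:

```python
def decimate_by_clustering(vertices, faces, cell=0.1):
    """Crude face-count reduction by vertex clustering. A stand-in for
    a real mesh-simplification algorithm; `cell` is the grid size."""
    def key(v):
        return tuple(round(c / cell) for c in v)
    clusters, remap = {}, []
    for v in vertices:
        k = key(v)
        if k not in clusters:
            clusters[k] = len(clusters)
        remap.append(clusters[k])
    new_vertices = [None] * len(clusters)
    for v, r in zip(vertices, remap):
        new_vertices[r] = v  # keep one representative vertex per cluster
    new_faces = []
    for a, b, c in faces:
        fa, fb, fc = remap[a], remap[b], remap[c]
        if len({fa, fb, fc}) == 3:  # skip collapsed (degenerate) faces
            new_faces.append((fa, fb, fc))
    return new_vertices, new_faces
```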
FIG. 8 is a schematic block diagram of a processing device for virtual objects provided in accordance with further embodiments of the present application. Referring to fig. 8, in some embodiments of the present application, the apparatus 600 further comprises:
a voice information obtaining module 810, configured to obtain voice information sent by a target user;
a voice parsing module 820, configured to parse the voice information to obtain corresponding keywords and/or emotion information;
and a first feedback module 830, configured to drive the second human body model to perform a feedback operation on the target user based on the keyword and/or the emotion information and in combination with context information of a current interaction, where the feedback operation includes one or more of voice feedback, motion feedback, and expression feedback.
Fig. 9 is a schematic block diagram of a processing device of a virtual object provided in accordance with further embodiments of the present application. Referring to fig. 9, the apparatus 600 further includes:
an image information obtaining module 910, configured to obtain image information of a target user;
a state determining module 920, configured to determine a current state of the target user according to the image information, where the current state includes: one or more of an action, an expression, and an environment of the target user;
a second feedback module 930, configured to drive the second human body model to feed back the target user according to the current state and/or the voice information of the target user.
The processing apparatus for virtual objects provided in the foregoing several embodiments is used to implement the technical solution of the processing method for virtual objects in any of the foregoing method embodiments, and the implementation principle and the technical effect are similar, and are not described herein again.
It should be noted that the division of the modules of the apparatus provided in the above embodiments is only a logical division; in actual implementation, the modules may be wholly or partially integrated into one physical entity, or may be physically separated. These modules may all be implemented in the form of software called by a processing element, or all in hardware, or some in software called by a processing element and others in hardware. For example, the first model generation module may be a separately established processing element, or may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code that a processing element of the apparatus calls to execute the module's functions. The other modules are implemented similarly. In addition, all or some of these modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
Fig. 10 is a block diagram of an electronic device for implementing a processing method of a virtual object according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 10, the electronic device includes: one or more processors 1010, a memory 1020, and interfaces for connecting the various components, including high-speed and low-speed interfaces, as well as interfaces for communicating with other electronic devices. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information for a Graphical User Interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, as desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 10, one processor 1010 is taken as an example.
The memory 1020, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as corresponding program instructions/modules in the processing method of the virtual object in the embodiments of the present application. The processor 1010 executes various functional applications of the server and data processing, namely, implements a processing method of a virtual object corresponding to any execution subject in the above method embodiments, by executing the non-transitory software program, instructions, and modules stored in the memory 1020.
The memory 1020 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the data storage area may store data, such as data provided by parties stored in the data processing platform, or tertiary data in a secure isolation area, etc. Further, the memory 1020 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 1020 may optionally include memory located remotely from processor 1010, which may be connected to data processing electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Further, the electronic device may further include: an input device 1030 and an output device 1040. The processor 1010, memory 1020, input device 1030, and output device 1040 may be connected by a bus 1050 or otherwise, as exemplified by the bus connections in fig. 10.
The input device 1030 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the data processing electronic device, and may be, for example, a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, or a joystick. The output device 1040 may include a display device, an auxiliary lighting device (e.g., an LED), a tactile feedback device (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, Application-Specific Integrated Circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal.
The systems and techniques described here can be implemented on a computer having a display device (e.g., a cathode ray tube or LCD monitor) for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer, for providing interaction with the user.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Further, the present application also provides a non-transitory computer readable storage medium storing computer instructions, which are executed by a processor to implement the technical solution provided by any of the foregoing method embodiments.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (20)
1. A method for processing a virtual object, comprising:
establishing a three-dimensional first human body model based on the human body reference model;
binding the first human body model with a human skeleton model to generate a second human body model;
reconstructing the skin of the second human model based on the image of the skin of the human reference model.
2. The method of claim 1, wherein the binding the first human body model with a human skeleton model to generate a second human body model comprises:
establishing an association relationship between a vertex of the first human body model and a corresponding bone in the human body skeleton model;
and adjusting the bone weights of the bones corresponding to the vertices of the first human body model based on the association relationship, and generating the second human body model.
3. The method of claim 2, further comprising:
and binding the second human body model with a face deformer, wherein the face deformer is used for driving the face action of the second human body model.
4. The method according to claim 1, wherein the reconstructing the skin of the second human model based on the image of the skin of the human reference model comprises:
determining a skin map and skin material of the human body reference model based on the image of the skin of the human body reference model;
and reconstructing the skin of the second human body model based on the skin map and the skin material of the human body reference model.
5. The method of claim 4, further comprising:
determining a clothing map and clothing material of the human body reference model based on the image of the clothing of the human body reference model;
and reconstructing the clothing of the second human body model based on the clothing map and the clothing material of the human body reference model.
6. The method of claim 4, wherein the reconstructing the skin of the second human model comprises:
and reconstructing the skin of the second human body model through three-dimensional structure reconstruction and subsurface-scattering material operations.
7. The method of claim 1, wherein prior to said generating said second human model, said method further comprises:
and reducing one or more of the face count, the material complexity, and the number of projection calculations of the bound first human body model.
8. The method of claim 1, further comprising:
acquiring voice information sent by a target user;
analyzing corresponding keywords and/or emotion information from the voice information;
and driving the second human body model to perform feedback operation on the target user based on the keywords and/or the emotion information and by combining context information of current interaction, wherein the feedback operation comprises one or more of voice feedback, action feedback and expression feedback.
9. The method of claim 1, further comprising:
acquiring image information of a target user;
determining a current state of the target user according to the image information, wherein the current state comprises: one or more of an action, an expression, and an environment of the target user;
and driving the second human body model to feed back the target user according to the current state and/or the voice information of the target user.
10. An apparatus for processing a virtual object, comprising:
the first model generation module is used for establishing a three-dimensional first human body model based on the human body reference model;
the second model generation module is used for binding the first human body model with the human skeleton model to generate a second human body model;
a skin reconstruction module for reconstructing the skin of the second human body model based on the image of the skin of the human body reference model.
11. The apparatus of claim 10, wherein the second model generation module comprises:
the association establishing unit is used for establishing an association relationship between a vertex of the first human body model and a corresponding bone in the human body skeleton model;
and the weight adjusting unit is used for adjusting the bone weights of the bones corresponding to the vertices of the first human body model based on the association relationship to generate the second human body model.
12. The apparatus of claim 11, further comprising:
and the face deformer binding module is used for binding the second human body model with the face deformer, wherein the face deformer is used for driving the face action of the second human body model.
13. The apparatus of claim 10, wherein the skin reconstruction module is specifically configured to:
determining a skin map and skin material of the human body reference model based on the image of the skin of the human body reference model;
and reconstructing the skin of the second human body model based on the skin map and the skin material of the human body reference model.
14. The apparatus of claim 13, further comprising:
the clothing determining module is used for determining a clothing map and clothing material of the human body reference model based on the image of the clothing of the human body reference model;
and the clothing rebuilding module is used for rebuilding clothing of the second human body model based on the clothing map and the clothing material of the human body reference model.
15. The apparatus of claim 13, wherein the skin reconstruction module is further configured to:
and reconstructing the skin of the second human body model through three-dimensional structure reconstruction and subsurface-scattering material operations.
16. The apparatus of claim 10, further comprising:
a reduction module, configured to reduce one or more of the face count, the material complexity, and the number of projection calculations of the bound first human body model before the second human body model is generated.
17. The apparatus of claim 10, further comprising:
the voice information acquisition module is used for acquiring voice information sent by a target user;
the voice analysis module is used for analyzing corresponding keywords and/or emotion information from the voice information;
and the first feedback module is used for driving the second human body model to perform feedback operation on the target user based on the keywords and/or the emotion information and by combining context information of current interaction, wherein the feedback operation comprises one or more of voice feedback, action feedback and expression feedback.
18. The apparatus of claim 10, further comprising:
the image information acquisition module is used for acquiring the image information of a target user;
a state determination module, configured to determine a current state of the target user according to the image information, where the current state includes: one or more of an action, an expression, and an environment of the target user;
and the second feedback module is used for driving the second human body model to feed back the target user according to the current state and/or the voice information of the target user.
19. An electronic device, comprising: at least one processor, a memory, and an interface to communicate with other electronic devices; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of processing a virtual object according to any one of claims 1 to 9.
20. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method of processing a virtual object according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010220090.0A CN111445561B (en) | 2020-03-25 | 2020-03-25 | Virtual object processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111445561A true CN111445561A (en) | 2020-07-24 |
CN111445561B CN111445561B (en) | 2023-11-17 |
Family
ID=71654743
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010220090.0A Active CN111445561B (en) | 2020-03-25 | 2020-03-25 | Virtual object processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111445561B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112509099A (en) * | 2020-11-30 | 2021-03-16 | 北京百度网讯科技有限公司 | Avatar driving method, apparatus, device and storage medium |
CN112652057A (en) * | 2020-12-30 | 2021-04-13 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for generating human body three-dimensional model |
WO2022252674A1 (en) * | 2021-06-01 | 2022-12-08 | 北京百度网讯科技有限公司 | Method and apparatus for generating drivable 3d character, electronic device and storage medium |
CN115529500A (en) * | 2022-09-20 | 2022-12-27 | 中国电信股份有限公司 | Method and device for generating dynamic image |
CN111951360B (en) * | 2020-08-14 | 2023-06-23 | 腾讯科技(深圳)有限公司 | Animation model processing method and device, electronic equipment and readable storage medium |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2028065C (en) * | 1989-11-15 | 1996-06-11 | George S. Allen | Method and apparatus for imaging the anatomy |
TW200818056A (en) * | 2006-10-12 | 2008-04-16 | Nat Univ Tsing Hua | Drivable simulation model combining curvature profile and skeleton and method of producing the same |
CN101719284B (en) * | 2009-12-25 | 2011-08-31 | 北京航空航天大学 | Method for physically deforming skin of virtual human based on hierarchical model |
KR101456162B1 (en) * | 2012-10-25 | 2014-11-03 | 주식회사 다림비젼 | Real time 3d simulator using multi-sensor scan including x-ray for the modeling of skeleton, skin and muscle reference model |
CN103955963B (en) * | 2014-04-30 | 2017-05-10 | 崔岩 | Digital human body three-dimensional reconstruction method and system based on Kinect device |
CN105303602A (en) * | 2015-10-09 | 2016-02-03 | 摩多文化(深圳)有限公司 | Automatic 3D human body model bone binding process and method |
CN106934385A (en) * | 2017-03-24 | 2017-07-07 | 深圳市唯特视科技有限公司 | Human body shape estimation method based on 3D scanning
CN110062935B (en) * | 2017-05-16 | 2022-12-27 | 深圳市三维人工智能科技有限公司 | Vertex weight inheritance device and method of 3D scanning model |
CN107491506B (en) * | 2017-07-31 | 2020-06-16 | 西安蒜泥电子科技有限责任公司 | Batch model posture transformation method |
KR20190023486A (en) * | 2017-08-29 | 2019-03-08 | 주식회사 룩씨두 | Method And Apparatus for Providing 3D Fitting |
WO2019095051A1 (en) * | 2017-11-14 | 2019-05-23 | Ziva Dynamics Inc. | Method and system for generating an animation-ready anatomy |
CN108537888B (en) * | 2018-04-09 | 2020-05-12 | 浙江大学 | Skeleton-based rapid fitting method
HK1253750A2 (en) * | 2018-07-04 | 2019-06-28 | Bun Kwai | Method and apparatus for converting 3D scanned objects to avatars
CN109064550A (en) * | 2018-07-27 | 2018-12-21 | 北京花开影视制作有限公司 | Body scanning and modeling method
CN109615683A (en) * | 2018-08-30 | 2019-04-12 | 广州多维魔镜高新科技有限公司 | 3D game animation model production method based on a 3D clothing model
CN110148209B (en) * | 2019-04-30 | 2023-09-15 | 深圳市重投华讯太赫兹科技有限公司 | Human body model generation method, image processing device and device with storage function |
CN112184921B (en) * | 2020-10-30 | 2024-02-06 | 北京百度网讯科技有限公司 | Avatar driving method, apparatus, device and medium |
CN115471632A (en) * | 2022-10-19 | 2022-12-13 | 深圳仙库智能有限公司 | Real human body model reconstruction method, device, equipment and medium based on 3D scanning |
2020
- 2020-03-25 CN application CN202010220090.0A filed; granted as patent CN111445561B, status Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101082901A (en) * | 2006-06-01 | 2007-12-05 | 上海戏剧学院 | Virtual rehearsing system |
CN101271589A (en) * | 2007-03-22 | 2008-09-24 | 中国科学院计算技术研究所 | Three-dimensional human model joint center extraction method |
CN105528808A (en) * | 2016-02-29 | 2016-04-27 | 华中师范大学 | Digital 3D model synthesis method and system for Jingchu folk clay sculpture figures |
CN105913486A (en) * | 2016-04-08 | 2016-08-31 | 东华大学 | Rapid digital hybrid human body modeling method for the apparel industry |
CN106327589A (en) * | 2016-08-17 | 2017-01-11 | 北京中达金桥技术股份有限公司 | Kinect-based 3D virtual dressing mirror realization method and system |
CN107705365A (en) * | 2017-09-08 | 2018-02-16 | 郭睿 | Editable three-dimensional (3D) human body model creation method, device, electronic equipment and computer program product |
CN108597015A (en) * | 2018-01-08 | 2018-09-28 | 江苏辰锐网络科技有限公司 | Automatic bone binding system, method, equipment and computer program product for three-dimensional biological models |
CN109410298A (en) * | 2018-11-02 | 2019-03-01 | 北京恒信彩虹科技有限公司 | Production method and expression change method for a virtual model |
CN109816756A (en) * | 2019-01-08 | 2019-05-28 | 新疆农业大学 | Motion design method for a virtual bionic model |
Non-Patent Citations (2)
Title |
---|
Jia Yinsha et al.: "Research on the Production of 3D Character Models in Game Animation", Management & Technology of SME (first-half-month issue) * |
Chen Xiaoman: "Research and Implementation of Dynamic Simulation Technology for 3D Human Body Models", Journal of Chifeng University (Natural Science Edition) * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111951360B (en) * | 2020-08-14 | 2023-06-23 | 腾讯科技(深圳)有限公司 | Animation model processing method and device, electronic equipment and readable storage medium |
CN112509099A (en) * | 2020-11-30 | 2021-03-16 | 北京百度网讯科技有限公司 | Avatar driving method, apparatus, device and storage medium |
CN112509099B (en) * | 2020-11-30 | 2024-02-06 | 北京百度网讯科技有限公司 | Avatar driving method, apparatus, device and storage medium |
CN112652057A (en) * | 2020-12-30 | 2021-04-13 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for generating human body three-dimensional model |
WO2022252674A1 (en) * | 2021-06-01 | 2022-12-08 | 北京百度网讯科技有限公司 | Method and apparatus for generating drivable 3d character, electronic device and storage medium |
JP7376006B2 (en) | 2021-06-01 | 2023-11-08 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド | Drivable 3D character generation method, device, electronic device, and storage medium |
CN115529500A (en) * | 2022-09-20 | 2022-12-27 | 中国电信股份有限公司 | Method and device for generating dynamic image |
Also Published As
Publication number | Publication date |
---|---|
CN111445561B (en) | 2023-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111445561B (en) | Virtual object processing method, device, equipment and storage medium | |
US10521946B1 (en) | Processing speech to drive animations on avatars | |
US10732708B1 (en) | Disambiguation of virtual reality information using multi-modal data including speech | |
CN111833418B (en) | Animation interaction method, device, equipment and storage medium | |
US11741668B2 (en) | Template based generation of 3D object meshes from 2D images | |
US20210358188A1 (en) | Conversational ai platform with rendered graphical output | |
US11514638B2 (en) | 3D asset generation from 2D images | |
US11232645B1 (en) | Virtual spaces as a platform | |
Ali et al. | Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment | |
US11918412B2 (en) | Generating a simulated image of a baby | |
CN111538456A (en) | Human-computer interaction method, device, terminal and storage medium based on virtual image | |
Hartholt et al. | Ubiquitous virtual humans: A multi-platform framework for embodied ai agents in xr | |
EP4168997A1 (en) | 3d object model reconstruction from 2d images | |
CN114241558A (en) | Model training method, video generation method, device, equipment and medium | |
CN114187394B (en) | Avatar generation method, apparatus, electronic device, and storage medium | |
Stefanov et al. | Opensense: A platform for multimodal data acquisition and behavior perception | |
Čereković et al. | Multimodal behavior realization for embodied conversational agents | |
DE102023102142A1 | CONVERSATIONAL AI PLATFORM WITH EXTRACTIVE QUESTION ANSWERING
CN115170703A (en) | Virtual image driving method, device, electronic equipment and storage medium | |
CN113379879A (en) | Interaction method, device, equipment, storage medium and computer program product | |
CN115761855B (en) | Face key point information generation, neural network training and three-dimensional face reconstruction method | |
US20240119690A1 (en) | Stylizing representations in immersive reality applications | |
WO2024066549A1 (en) | Data processing method and related device | |
Mendi | A 3D face animation system for mobile devices | |
AlTarawneh | A cloud-based extensible avatar for human robot interaction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||