CN116630495A - Virtual digital human model planning system based on AIGC algorithm - Google Patents

Virtual digital human model planning system based on AIGC algorithm Download PDF

Info

Publication number
CN116630495A
Authority
CN
China
Prior art keywords
virtual digital
module
image
digital person
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310807099.5A
Other languages
Chinese (zh)
Other versions
CN116630495B (en)
Inventor
周洪峰
陈前进
程亮
庄嘉城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vphonor Information Technology Co ltd
Original Assignee
Shenzhen Vphonor Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vphonor Information Technology Co ltd filed Critical Shenzhen Vphonor Information Technology Co ltd
Priority to CN202310807099.5A priority Critical patent/CN116630495B/en
Publication of CN116630495A publication Critical patent/CN116630495A/en
Application granted granted Critical
Publication of CN116630495B publication Critical patent/CN116630495B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The invention relates to the technical field of virtual digital humans, and in particular to a virtual digital human model planning system based on an AIGC algorithm. The system comprises a data acquisition unit, a virtual digital person generation unit, an editing storage unit and a data optimization unit. The data acquisition unit is used for acquiring human body image data; the virtual digital person generation unit is used for collecting the images of the data acquisition unit and analyzing the image data through the AIGC algorithm to generate virtual digital persons; the editing storage unit is used for editing the appearance and clothing of the virtual digital persons generated by the virtual digital person generation unit and storing them in a classified manner; and the data optimization unit is used for running the virtual digital persons edited by the editing storage unit.

Description

Virtual digital human model planning system based on AIGC algorithm
Technical Field
The invention relates to the technical field of virtual digital people, in particular to a virtual digital human model planning system based on an AIGC algorithm.
Background
A virtual digital human model is a computer-generated virtual entity used to simulate and represent human figures and actions. In fields such as virtual reality, game development and film and television special effects, virtual digital human models are widely applied to human-computer interaction, role playing, scene reconstruction and the like. Traditional virtual digital human model generation methods generally rely on manual design or physics-based simulation, but these methods struggle to account for the details of human morphology and motion performance, so the generated models lack realism and expressiveness.
In order to address the above problems, a need exists for a virtual digital human model planning system based on the AIGC algorithm.
Disclosure of Invention
The invention aims to provide a virtual digital human model planning system based on an AIGC algorithm so as to solve the problems in the background technology.
In order to achieve the above purpose, a virtual digital human model planning system based on AIGC algorithm is provided, which comprises a data acquisition unit, a virtual digital human generation unit, an editing storage unit, a data optimization unit and an application interface unit;
the data acquisition unit is used for acquiring human body image data, preprocessing human body images and transmitting the human body images;
the virtual digital person generating unit is used for collecting images of the data acquisition unit and analyzing the image data through an AIGC algorithm to generate virtual digital persons;
the editing storage unit is used for editing the appearance, clothing and hairstyle of the virtual digital person generated by the virtual digital person generating unit and storing the edited virtual digital persons in a classified manner;
the data optimizing unit is used for running the virtual digital person edited by the editing storage unit, feeding back the virtual digital person in real time according to the limb actions of the virtual digital person and performing self-adaptive adjustment.
The application interface unit is used for calling the virtual digital person generating unit to generate the virtual digital human model; by providing an application programming interface, other software can conveniently call the virtual digital person generating unit to generate virtual digital human models, enabling wider application.
As a further improvement of the technical scheme, the data acquisition unit comprises an image acquisition module, an image preprocessing module and an image data transmission module;
the image acquisition module is used for acquiring a human body data image;
the image preprocessing module is used for preprocessing the image acquired by the image acquisition module, so that the quality and the definition of the image are improved;
the image data transmission module is used for transmitting the image preprocessed by the image preprocessing module.
As a further improvement of the technical scheme, the image preprocessing steps performed by the image preprocessing module comprise image smoothing, image denoising and image enhancement, so as to improve the quality of the image.
As a further improvement of the technical scheme, the virtual digital person generating unit comprises an image analyzing module and a virtual person generating module;
the image analysis module is used for analyzing the human body morphology and expression actions in the image and transmitting the analysis results;
the virtual person generation module is used for receiving the analysis results of the human body morphology and the expression actions of the image analysis module and generating virtual digital persons through an AIGC algorithm.
As a further improvement of the technical scheme, the virtual digital person generating unit further comprises an action editing module, wherein the action editing module is used for editing the actions of the virtual digital person and improving the sense of realism of the virtual digital person.
As a further improvement of the technical scheme, the editing storage unit comprises a personalized editing module and a classification storage module;
the personalized editing module is used for editing and modifying the appearance, clothing and hairstyle of the virtual digital person, so that the expressive force of the virtual digital person is improved;
the classification storage module is used for classifying and storing the virtual digital person edited by the personalized editing module, so that the later application and the retrieval are facilitated.
As a further improvement of the technical scheme, the data optimization unit comprises an operation module and a data feedback module;
the operation module is used for running the virtual digital person edited and modified by the personalized editing module so that the virtual digital person can move normally;
the data feedback module is used for feeding back in real time the expressive performance of the virtual digital person running in the operation module.
As a further improvement of the technical scheme, the data optimization unit further comprises an adaptive adjustment module, wherein the adaptive adjustment module is used for receiving the data fed back by the data feedback module, optimizing and improving the model of the normal activities of the virtual digital person, and improving the expressive ability of the virtual digital person.
As a further improvement of the technical scheme, the self-adaptive adjustment module automatically analyzes and optimizes actions in the virtual digital human model through a convolutional neural network algorithm, wherein the convolutional neural network comprises a convolutional layer, a pooling layer and a full-connection layer.
Compared with the prior art, the invention has the beneficial effects that:
1. Compared with the traditional virtual digital human model generation method, the virtual digital human model planning system based on the AIGC algorithm introduces the AIGC algorithm, which integrates artificial intelligence and graphic computing technology and can comprehensively consider multiple factors such as human body morphology and motion performance. Through the virtual digital person generation unit, intelligent decision-making and graphic computation, a virtual digital human model with stronger realism and expressiveness is generated, so that the resulting model is more vivid and natural.
2. The optimizing module in the patent utilizes machine learning and data analysis technology, the virtual digital person is operated through the operating module, the effect of actions, expressions and appearances of the virtual digital person is observed, meanwhile, the actions, expressions and the like of the virtual digital person are fed back through the data feedback module, so that more vivid and real virtual digital person is obtained, the data feedback module feeds back the actions of the virtual digital person through the sensor equipment, and meanwhile, the data acquired by the sensor are processed and calculated, so that the actions, expressions and appearances of the virtual digital person are fed back in real time; the virtual digital human model is continuously optimized and improved according to the feedback data of the user and the requirements of the application scene, and the model parameters can be automatically adjusted by the system through the analysis and the study of the self-adaptive adjusting module on a large amount of data, so that the fidelity degree and the action expression capability of the virtual digital human model are improved, and the generated virtual digital human model is more accurate and self-adaptive.
Drawings
FIG. 1 is a schematic diagram of the overall structure of the present invention;
FIG. 2 is a schematic diagram showing the overall structure of the present invention;
FIG. 3 is a schematic diagram of a data optimization unit according to the present invention.
The meaning of each reference sign in the figure is:
100. a data acquisition unit; 110. an image acquisition module; 120. an image preprocessing module; 130. an image data transmission module;
200. a virtual digital person generating unit; 210. an image analysis module; 220. a virtual person generation module; 230. an action editing module;
300. editing the storage unit; 310. a personalized editing module; 320. a classification storage module;
400. a data optimizing unit; 410. an operation module; 420. a data feedback module; 430. an adaptive adjustment module;
500. an application interface unit.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 to 3, there is provided an AIGC algorithm-based virtual digital human model planning system including a data acquisition unit 100, a virtual digital human generation unit 200, an edit storage unit 300, a data optimization unit 400, and an application interface unit 500;
the data acquisition unit 100 is used for acquiring human body image data, preprocessing human body images and transmitting the human body images;
The data acquisition unit 100 comprises an image acquisition module 110, an image preprocessing module 120 and an image data transmission module 130. The image acquisition module 110 is used for acquiring a human body data image; the image preprocessing module 120 is used for preprocessing the image acquired by the image acquisition module 110 to improve the quality and definition of the image; the image data transmission module 130 is used for transmitting the image preprocessed by the image preprocessing module 120. The preprocessing steps performed by the image preprocessing module 120 include image smoothing, image denoising and image enhancement, so as to improve the quality of the image.
When the device is specifically used, the image acquisition module 110 shoots a human body at multiple angles by means of the camera to obtain a human body data image, the image acquisition module 110 transmits the data image to the image preprocessing module 120, the image preprocessing module 120 preprocesses the image, and the image preprocessing mode comprises the following steps:
image smoothing: smoothing the image by adopting a filter and other technologies, deleting high-frequency noise in the image, and improving the processing effect of a subsequent algorithm;
denoising an image: noise in the image is removed through a filter and other technologies, and the accuracy of a subsequent algorithm is improved;
image enhancement: the image is enhanced by adopting sharpening technology and the like to highlight the target object in the image and improve the resolution of the image, so that the human body data image with high definition and good quality is obtained, and the image is convenient to analyze in the later period.
The virtual digital person generating unit 200 is used for collecting the image of the data acquisition unit 100, analyzing the image data through an AIGC algorithm to generate a virtual digital person;
The virtual digital person generating unit 200 comprises an image analyzing module 210 and a virtual person generating module 220. The image analyzing module 210 is used for analyzing the human body morphology and expression actions in the image and transmitting the analysis results; the virtual person generating module 220 is used for receiving the human body morphology and expression action analysis results from the image analyzing module 210 and generating a virtual digital person through an AIGC algorithm. The virtual digital person generating unit 200 further includes an action editing module 230, which is configured to edit the actions of the virtual digital person and improve the sense of realism of the virtual digital person.
When the system is specifically used, the image analysis module 210 receives the human body data image transmitted by the image data transmission module 130 and analyzes it with the scale-invariant feature transform (SIFT), a feature-point description algorithm that is invariant to scale and rotation: local extremum points are detected by computing difference-of-Gaussian images at different scales, and stable, distinctive feature vectors are computed along the gradient direction, so that the features in the human body data image are extracted. Based on the human body image features extracted by the image analysis module 210, the AIGC algorithm takes these image features as input and, through intelligent decision-making and graphic computation, generates a highly realistic virtual digital human model. The AIGC algorithm combines artificial intelligence technology with graphic computing technology and takes human body morphology, motion performance and other aspects into account, so the generated virtual digital human model is more realistic. Meanwhile, the action editing module 230 edits the actions of the virtual digital person by means of motion clipping and synthesis, a method that combines and recombines existing motion segments to generate new motion segments according to the requirements of the application scene, so that the actions of the virtual digital person run more smoothly and naturally.
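As a minimal sketch of the SIFT feature-extraction step described above, the image analysis could be realized with OpenCV roughly as follows; this is an illustrative assumption about one possible implementation of the image analysis module 210, not the patent's own code:

```python
# Illustrative SIFT keypoint extraction for the image analysis step;
# OpenCV's SIFT_create is assumed as the concrete implementation.
import cv2

def extract_sift_features(image_path: str):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    # Detect local extrema across the difference-of-Gaussian scale space and
    # compute 128-dimensional descriptors oriented along the local gradient.
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors  # descriptors: (num_keypoints, 128) array
```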
The editing storage unit 300 is used for editing the appearance, clothing and hairstyle of the virtual digital person generated by the virtual digital person generating unit 200 and storing the same in a classified manner;
the editing storage unit 300 comprises a personalized editing module 310 and a classification storage module 320, wherein the personalized editing module 310 is used for editing and modifying the appearance, clothing and hairstyle of the virtual digital person, improving the expressive force of the virtual digital person, and the classification storage module 320 is used for classifying and storing the virtual digital person edited by the personalized editing module 310, so that later application and retrieval are facilitated.
When the system is specifically used, the personalized editing module 310 is used for modifying and editing the appearance, the action and the expression of the virtual digital person so as to meet the requirements of different application scenes, and the system is specifically as follows:
Animation editing;
The action editing of the virtual digital person is typically implemented using keyframe editing: by adding, deleting, modifying or automatically inserting intermediate frames, edits such as changing, adjusting or repositioning the limbs of the human body can be made (a minimal interpolation sketch appears after this list).
Expression editing;
The facial expressions of the virtual digital person convey the character's personality and emotion; expression editing generally covers the movements of the character's face, mouth shapes and the like, and can be realized through manual editing, motion capture and other means.
Light and shadow, material and texture editing;
By editing elements such as light sources, materials and textures, the virtual digital person can blend better into the real world, improving realism and the sense of immersion; for example, shadow rendering, texture mapping and similar techniques can be used to strengthen ambient occlusion, projection and reflection effects around the virtual digital person.
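The keyframe editing mentioned in the animation item above rests on generating in-between frames between key poses. The following minimal sketch shows the general idea with simple linear interpolation of joint rotations; the joint representation and function names are assumptions for illustration only, and a production system would more typically use quaternion slerp:

```python
# Minimal keyframe in-betweening sketch: linearly interpolate joint rotations
# (Euler angles, degrees) between two key poses. Simplified for illustration.
from typing import Dict, List

Pose = Dict[str, List[float]]  # joint name -> [rx, ry, rz]

def interpolate_pose(key_a: Pose, key_b: Pose, t: float) -> Pose:
    """Return the in-between pose at parameter t in [0, 1]."""
    return {
        joint: [a + (b - a) * t for a, b in zip(key_a[joint], key_b[joint])]
        for joint in key_a
    }

# Usage: generate 10 frames (including both keyframes) between two key poses
key_a = {"elbow_r": [0.0, 0.0, 10.0], "knee_l": [5.0, 0.0, 0.0]}
key_b = {"elbow_r": [0.0, 0.0, 90.0], "knee_l": [45.0, 0.0, 0.0]}
frames = [interpolate_pose(key_a, key_b, i / 9) for i in range(10)]
```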
The data optimizing unit 400 is used for running the virtual digital person edited by the editing storage unit 300, feeding back in real time according to the limb actions of the virtual digital person and performing self-adaptive adjustment;
The data optimizing unit 400 includes an operation module 410 and a data feedback module 420. The operation module 410 is used for running the virtual digital person edited and modified by the personalized editing module 310 so that the virtual digital person can move normally, and the data feedback module 420 is used for feeding back in real time the performance of the virtual digital person running in the operation module 410.
When the system is specifically used, after every setting of the virtual digital person has been completed, the operation module 410 runs the virtual digital person so that the effects of its actions, expressions and appearance can be observed. Meanwhile, the data feedback module 420 feeds back the actions, expressions and the like of the virtual digital person, so that a more vivid and realistic virtual digital person is obtained; the data feedback module 420 captures the actions of the virtual digital person through sensor devices and processes and computes the data acquired by the sensors, so that the actions, expressions and appearance of the virtual digital person are fed back in real time.
The self-adaptive adjustment module 430 automatically analyzes and optimizes actions in the virtual digital human model through a convolutional neural network algorithm, wherein the convolutional neural network comprises a convolutional layer, a pooling layer and a full-connection layer;
convolutional layer
Assume that the input data of the $l$-th layer of the convolutional neural network is $x^{(l-1)}$ and the output data of the $l$-th layer is $x^{(l)}$. The convolution kernels of this layer are of size $k \times k$ ($k$ is normally a positive odd number), and there are $n_l$ convolution kernels in total, each corresponding to one output feature map; the $j$-th convolution kernel is denoted $W_j^{(l)}$ and its corresponding bias term is $b_j^{(l)}$. The formula of the convolution operation is as follows:

$$x_j^{(l)}(p,q) = f\left(\sum_{c=1}^{C_{l-1}} \sum_{u=1}^{k} \sum_{v=1}^{k} W_j^{(l)}(c,u,v)\, x_c^{(l-1)}(p+u-1,\, q+v-1) + b_j^{(l)}\right)$$

wherein $x_j^{(l)}(p,q)$ denotes the output value of the $j$-th convolution kernel of the $l$-th layer at position $(p,q)$, $f$ is the activation function (such as ReLU), $C_{l-1}$ is the number of neurons (channels) of the upper layer, and $(u,v)$ are the row and column indices of the convolution kernel.
Pooling layer
Pooling operations are generally used to reduce the correlation between adjacent pixels while reducing the data dimension, so as to reduce the number of model parameters and the computational cost. Common pooling approaches in convolutional neural networks include max pooling and average pooling. Assume that the input of the neurons of the $l$-th layer is $x^{(l-1)}$ and the output is $x^{(l)}$; the equation of the pooling operation is as follows:

$$x^{(l)}(i,j) = \operatorname{pool}\bigl(x^{(l-1)}(p,q) : (p,q) \in \mathcal{R}(i,j)\bigr)$$

wherein $x^{(l)}(i,j)$ denotes the pooling result of the $l$-th layer at position $(i,j)$, $\operatorname{pool}(\cdot)$ is the specified pooling operator (for example, max or average), and $(i,j)$ are the centre position coordinates of the pooling window $\mathcal{R}(i,j)$.
Full connection layer
The full-connection layer is a multi-layer perceptron used to realize nonlinear classification. Assume that the last layer of the convolutional neural network is a full-connection layer whose input vector is $x$ and whose output vector is $y$, where each element of $y$ corresponds to the score of one category; let $W$ and $b$ be the parameter matrix and bias vector of this layer, and let the activation function be $f$. The formula of the full-connection layer is as follows:

$$y = f(Wx + b)$$

wherein $W$ is the connection weight matrix between the input $x$ and the output $y$, and $b$ is the bias vector.
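To make the three layer types above concrete, the following is a minimal sketch of a network with one convolutional layer, one pooling layer and one full-connection layer; PyTorch and all layer sizes are assumptions chosen for illustration and are not specified by the patent:

```python
# Minimal CNN sketch mirroring the formulas above: convolution + ReLU,
# max pooling, then a fully connected classification layer.
# PyTorch and all layer dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class TinyMotionCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layer: k x k kernel (k = 3, a positive odd number),
        # 16 kernels -> 16 output feature maps, one bias term per kernel.
        self.conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
        self.act = nn.ReLU()                     # activation f, e.g. ReLU
        self.pool = nn.MaxPool2d(kernel_size=2)  # pooling operator (max)
        # Full-connection layer: y = f(Wx + b), one score per category.
        self.fc = nn.Linear(16 * 32 * 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.act(self.conv(x)))    # conv -> ReLU -> pool
        x = torch.flatten(x, start_dim=1)        # vectorise feature maps
        return self.fc(x)                        # class scores y

# Usage: scores = TinyMotionCNN()(torch.randn(1, 3, 64, 64))
```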
In specific use, the actions in the virtual digital person model are automatically analyzed and optimized through the self-adaptive adjusting module 430, so that the fidelity and the action expression capability of the virtual digital person are improved.
The application interface unit 500 is used for calling the virtual digital person generating unit 200 to generate a virtual digital human model; by providing an application programming interface, other software can conveniently call the virtual digital person generating unit 200 to generate virtual digital human models, enabling wider application.
The application interface unit 500 provides a set of open application programming interfaces (APIs) so that other software or systems can conveniently call the virtual digital human model, realizing wider application; for example, a virtual reality device can apply the virtual digital human model to an immersive experience through the interface, and a game developer can introduce the virtual digital human model into a game through the interface, improving the expressiveness and application value of the virtual digital human model in various application fields.
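As an illustrative sketch only (the patent does not define a concrete API), an open application programming interface of the kind described could be exposed roughly as follows; the framework (FastAPI), endpoint path and request fields are assumptions, not part of the disclosure:

```python
# Hypothetical REST endpoint through which external software (games, VR
# clients) could request virtual digital human generation. FastAPI, the
# route path and the field names are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerationRequest(BaseModel):
    image_url: str              # source human body image
    hairstyle: str = "default"  # optional personalised edits
    clothing: str = "default"

@app.post("/v1/digital-human")
def generate_digital_human(req: GenerationRequest) -> dict:
    # In a real system this would call the virtual digital person
    # generation unit (200) and return a handle to the produced model.
    model_id = f"vdh-{abs(hash((req.image_url, req.hairstyle, req.clothing)))}"
    return {"model_id": model_id, "status": "generated"}
```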
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (9)

1. The virtual digital mannequin planning system based on AIGC algorithm is characterized in that: the system comprises a data acquisition unit (100), a virtual digital person generation unit (200), an editing storage unit (300), a data optimization unit (400) and an application interface unit (500);
the data acquisition unit (100) is used for acquiring human body image data, preprocessing human body images and transmitting the human body images;
the virtual digital person generating unit (200) is used for collecting images of the data acquisition unit (100), analyzing the image data through an AIGC algorithm to generate virtual digital persons;
the editing and storing unit (300) is used for editing the appearance, clothing and hairstyle of the virtual digital person generated by the virtual digital person generating unit (200) and storing the edited virtual digital persons in a classified manner;
the data optimization unit (400) is used for running the virtual digital person edited by the editing storage unit (300), feeding back in real time according to the limb actions of the virtual digital person and performing self-adaptive adjustment;
the application interface unit (500) is used for calling the virtual digital person generating unit (200) to generate a virtual digital human model; by providing an application programming interface, other software can conveniently call the virtual digital person generating unit (200) to generate virtual digital human models, enabling wider application.
2. The AIGC algorithm-based virtual digital mannequin planning system of claim 1, wherein: the data acquisition unit (100) comprises an image acquisition module (110), an image preprocessing module (120) and an image data transmission module (130);
the image acquisition module (110) is used for acquiring a human body data image;
the image preprocessing module (120) is used for preprocessing the image acquired by the image acquisition module (110) to improve the quality and definition of the image;
the image data transmission module (130) is used for transmitting the image preprocessed by the image preprocessing module (120).
3. The AIGC algorithm-based virtual digital mannequin planning system of claim 2, wherein: the image preprocessing module (120) processes the image steps including image smoothing, image denoising, image enhancement to improve the quality of the image.
4. The AIGC algorithm-based virtual digital mannequin planning system of claim 1, wherein: the virtual digital person generating unit (200) comprises an image analyzing module (210) and a virtual person generating module (220);
the image analysis module (210) is used for analyzing the human body morphology and expression actions in the image and transmitting the analysis results;
the virtual person generating module (220) is used for receiving the analysis result of the human body morphology and the expression action of the image analyzing module (210) and generating a virtual digital person through an AIGC algorithm.
5. The AIGC algorithm-based virtual digital mannequin planning system of claim 4, wherein: the virtual digital person generating unit (200) further comprises a motion editing module (230), and the motion editing module (230) is used for editing the motion of the virtual digital person and improving the sense of reality of the virtual digital person.
6. The AIGC algorithm-based virtual digital mannequin planning system of claim 1, wherein: the editing storage unit (300) comprises a personalized editing module (310) and a classification storage module (320);
the personalized editing module (310) is used for editing and modifying the appearance, clothing and hairstyle of the virtual digital person, so that the expressive force of the virtual digital person is improved;
the classification storage module (320) is used for classifying and storing the virtual digital person edited by the personalized editing module (310) so as to facilitate later application and retrieval.
7. The AIGC algorithm-based virtual digital mannequin planning system of claim 1, wherein: the data optimization unit (400) comprises an operation module (410) and a data feedback module (420);
the operation module (410) is used for operating the personalized editing module (310) to edit the modified virtual digital person so that the virtual digital person can normally move;
the data feedback module (420) is used for feeding back in real time the expressive performance of the virtual digital person running in the operation module (410).
8. The AIGC algorithm-based virtual digital mannequin planning system of claim 7, wherein: the data optimization unit (400) further comprises an adaptive adjustment module (430), wherein the adaptive adjustment module (430) is used for receiving feedback data of the data feedback module (420), optimizing and improving a model of normal activities of the virtual digital person, and improving the expressive power of the virtual digital person.
9. The AIGC algorithm-based virtual digital mannequin planning system of claim 8, wherein: the self-adaptive adjusting module (430) automatically analyzes and optimizes actions in the virtual digital human model through a convolutional neural network algorithm, and the convolutional neural network comprises a convolutional layer, a pooling layer and a full-connection layer.
CN202310807099.5A 2023-07-04 2023-07-04 Virtual digital human model planning system based on AIGC algorithm Active CN116630495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310807099.5A CN116630495B (en) 2023-07-04 2023-07-04 Virtual digital human model planning system based on AIGC algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310807099.5A CN116630495B (en) 2023-07-04 2023-07-04 Virtual digital human model planning system based on AIGC algorithm

Publications (2)

Publication Number Publication Date
CN116630495A true CN116630495A (en) 2023-08-22
CN116630495B CN116630495B (en) 2024-04-12

Family

ID=87638362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310807099.5A Active CN116630495B (en) 2023-07-04 2023-07-04 Virtual digital human model planning system based on AIGC algorithm

Country Status (1)

Country Link
CN (1) CN116630495B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117333592A (en) * 2023-12-01 2024-01-02 北京妙音数科股份有限公司 AI digital population type animation drawing system based on big data fusion training model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150094302A (en) * 2014-02-11 2015-08-19 동서대학교산학협력단 System and method for implementing and editing 3-Dimensional character acting motion
US10204525B1 (en) * 2007-12-14 2019-02-12 JeffRoy H. Tillis Suggestion-based virtual sessions engaging the mirror neuron system
US20200306640A1 (en) * 2019-03-27 2020-10-01 Electronic Arts Inc. Virtual character generation from image or video data
KR20210108044A (en) * 2020-02-25 2021-09-02 제주한라대학교산학협력단 Video analysis system for digital twin technology
CN114035678A (en) * 2021-10-26 2022-02-11 山东浪潮科学研究院有限公司 Auxiliary judgment method based on deep learning and virtual reality
US20230186583A1 (en) * 2022-05-19 2023-06-15 Beijing Baidu Netcom Science Technology Co., Ltd. Method and device for processing virtual digital human, and model training method and device
CN116311456A (en) * 2023-03-23 2023-06-23 应急管理部大数据中心 Personalized virtual human expression generating method based on multi-mode interaction information

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10204525B1 (en) * 2007-12-14 2019-02-12 JeffRoy H. Tillis Suggestion-based virtual sessions engaging the mirror neuron system
KR20150094302A (en) * 2014-02-11 2015-08-19 동서대학교산학협력단 System and method for implementing and editing 3-Dimensional character acting motion
US20200306640A1 (en) * 2019-03-27 2020-10-01 Electronic Arts Inc. Virtual character generation from image or video data
KR20210108044A (en) * 2020-02-25 2021-09-02 제주한라대학교산학협력단 Video analysis system for digital twin technology
CN114035678A (en) * 2021-10-26 2022-02-11 山东浪潮科学研究院有限公司 Auxiliary judgment method based on deep learning and virtual reality
US20230186583A1 (en) * 2022-05-19 2023-06-15 Beijing Baidu Netcom Science Technology Co., Ltd. Method and device for processing virtual digital human, and model training method and device
CN116311456A (en) * 2023-03-23 2023-06-23 应急管理部大数据中心 Personalized virtual human expression generating method based on multi-mode interaction information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Zhaohui et al., "An Autonomous Optimization Model of Virtual Human Task Behaviors" (一种虚拟人作业行为的自主优化模型), Journal of Software (软件学报), no. 09, 15 September 2012 (2012-09-15), pages 2358-2373 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117333592A (en) * 2023-12-01 2024-01-02 北京妙音数科股份有限公司 AI digital population type animation drawing system based on big data fusion training model
CN117333592B (en) * 2023-12-01 2024-03-08 北京妙音数科股份有限公司 AI digital population type animation drawing system based on big data fusion training model

Also Published As

Publication number Publication date
CN116630495B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
Liu et al. Generative adversarial networks for image and video synthesis: Algorithms and applications
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
CN110163054B (en) Method and device for generating human face three-dimensional image
US20230128505A1 (en) Avatar generation method, apparatus and device, and medium
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN108961369A (en) The method and apparatus for generating 3D animation
CN113496507A (en) Human body three-dimensional model reconstruction method
CN109886216B (en) Expression recognition method, device and medium based on VR scene face image restoration
CN108012091A (en) Image processing method, device, equipment and its storage medium
CN116630495B (en) Virtual digital human model planning system based on AIGC algorithm
Chu et al. Expressive telepresence via modular codec avatars
CN113808277B (en) Image processing method and related device
Marques et al. Deep spherical harmonics light probe estimator for mixed reality games
CN114120389A (en) Network training and video frame processing method, device, equipment and storage medium
CN115482062A (en) Virtual fitting method and device based on image generation
Wang et al. Digital twin: Acquiring high-fidelity 3D avatar from a single image
Tu (Retracted) Computer hand-painting of intelligent multimedia images in interior design major
Weng et al. Data augmentation computing model based on generative adversarial network
Wang et al. Expression dynamic capture and 3D animation generation method based on deep learning
CN109285208A (en) Virtual role expression cartooning algorithm based on expression dynamic template library
CN117132711A (en) Digital portrait customizing method, device, equipment and storage medium
CN116342782A (en) Method and apparatus for generating avatar rendering model
Fang et al. Facial makeup transfer with GAN for different aging faces
CN110826510A (en) Three-dimensional teaching classroom implementation method based on expression emotion calculation
Beacco et al. Automatic 3D avatar generation from a single RBG frontal image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant