CN116685028A - Intelligent control system for digital human scene lighting in a virtual environment


Info

Publication number
CN116685028A
Authority
CN
China
Prior art keywords
scene
picture
virtual
light
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202310966851.0A
Other languages
Chinese (zh)
Inventor
江婷
肖筱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Yiqi Technology Co ltd
Original Assignee
Wuhan Yiqi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Yiqi Technology Co ltd filed Critical Wuhan Yiqi Technology Co ltd
Priority to CN202310966851.0A priority Critical patent/CN116685028A/en
Publication of CN116685028A publication Critical patent/CN116685028A/en
Legal status: Withdrawn (current)


Classifications

    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

The invention discloses an intelligent control system for digital human scene lighting in a virtual environment, relating to the technical field of virtual digital humans. The system comprises a real scene data collection module, a scene recognition model training module, a digital human model collection module, a lighting control model training module and a lighting control module, wherein the real scene data collection module is used for collecting a plurality of scene data and scene lighting effect data in advance. Through intelligent lighting control, the digital human in the virtual environment is presented with light and shadow closer to human aesthetic standards.

Description

Intelligent control system for digital human scene lighting in a virtual environment
Technical Field
The invention relates to the technical field of virtual digital humans, and in particular to an intelligent control system for digital human scene lighting in a virtual environment.
Background
With the continuous development of computer graphics and virtual reality technology, digital human scenes are used increasingly widely. Their representation has evolved from early 2D images and 3D models to more realistic virtual scenes, which require more realistic lighting effects to create a convincing atmosphere. Conventional lighting control methods cannot meet these requirements: they are usually based on fixed illumination schemes preset in the virtual engine, cannot adjust the lighting intelligently according to the position, actions and viewing angle of the virtual digital human, and in most scenes fail to reach the level of human aesthetic judgment.
A prior LED virtual scene lighting control method (application publication number CN 114913310A) changes and adjusts the lighting to match emotion by analyzing the indoor music, the script and the emotions of actors and other personnel; however, that method does not consider the visual effect and does not solve the lighting control problem in virtual digital human scenes.
Therefore, the present invention provides an intelligent control system for digital human scene lighting in a virtual environment.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. It therefore provides an intelligent control system for digital human scene lighting in a virtual environment, so that, through intelligent lighting control, the digital human in the virtual environment is presented with light and shadow closer to human aesthetic standards.
To achieve the above purpose, an intelligent control system for digital human scene lighting in a virtual environment is provided, comprising a real scene data collection module, a scene recognition model training module, a digital human model collection module, a lighting control model training module and a lighting control module; the modules are connected through a wireless network or a wired connection.
The real scene data collection module is mainly used for collecting a plurality of scene data and scene lighting effect data in advance.
The scene data comprise scene pictures and picture labels.
The scene pictures are pictures taken with an image capturing device in various scenes in the real world; the picture labels are scene numbers assigned in advance by manually labeling the scene to which each scene picture belongs.
Let the scene number be n, and denote the set of scene pictures corresponding to the nth scene as Pn.
The scene lighting effect data comprise lighting strategies, light-and-shadow effect pictures and contrast pictures.
A lighting strategy specifies, for all lights deployed in a scene in the real world, their positions, light types, illumination directions, illumination intensities and light colors.
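Purely as an illustration, a lighting strategy of this kind could be represented as data in the following minimal sketch; the field names, types and units are assumptions and are not prescribed by the invention.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LightSource:
    """One deployed light; field names are illustrative, not taken from the patent."""
    position: Tuple[float, float, float]   # x, y, z in scene coordinates
    light_type: str                        # e.g. "spot", "point", "area"
    direction: Tuple[float, float, float]  # unit vector of the illumination direction
    intensity: float                       # illumination intensity, engine-specific units
    color: Tuple[int, int, int]            # RGB light color

@dataclass
class LightingStrategy:
    """All lights deployed in one scene (one strategy Sgn)."""
    scene_id: int
    lights: List[LightSource]
```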
The light-and-shadow effect pictures are pictures, captured by the image capturing device in each scene, of the light-and-shadow effects shown on the persons in the scene under each lighting strategy.
The contrast pictures are the pictures, corresponding to each light-and-shadow effect picture, in which the persons in the scene are shown without any lighting strategy applied and therefore with no light-and-shadow effect.
Denote the set of light-and-shadow effect pictures corresponding to the nth scene as Gn, each light-and-shadow effect picture in the set Gn as gn, the lighting strategy corresponding to the light-and-shadow effect picture gn as Sgn, and the contrast picture corresponding to the light-and-shadow effect picture gn as Dgn.
The real scene data collection module sends the scene data to the scene recognition model training module and the scene lighting effect data to the digital human model collection module.
The scene recognition model training module is mainly used for training, from the scene data, a first neural network model that recognizes the scene from a scene picture.
The scene recognition model training module trains the first neural network model as follows:
all scene pictures are used as the input of the first neural network model, the predicted picture labels as its output, the actual picture label of each scene picture as the prediction target, and the prediction accuracy of the predicted labels relative to the actual labels as the training objective; the first neural network model is trained until the prediction accuracy reaches a preset accuracy.
The scene recognition model training module sends the first neural network model to the lighting control module.
The digital human model collection module is mainly used for generating virtual digital human scene training data based on the scene lighting effect data.
The digital human model collection module generates the virtual digital human scene training data as follows:
for each contrast picture Dgn, three-dimensional modeling software is used to build, in equal proportion, a model of the sizes and distances of the buildings and lighting equipment in the scene shown in the contrast picture, and the built model is converted into a virtual digital scene; the character features of the person in the contrast picture are then modeled in equal proportion, at the same scale as the scene model, and the modeled person is converted into a virtual digital human model; the character features include actions, expressions, apparel and skin texture; the virtual digital human model is placed in the virtual digital scene at a position corresponding to the position of the person in the scene in the contrast picture Dgn; after the virtual digital human model has been placed, a virtual picture of the virtual scene is taken as the model input picture; the model input picture corresponding to the contrast picture Dgn is denoted IDgn.
For the virtual digital scene corresponding to each model input picture IDgn, an optical physics engine is used to simulate the lighting strategy Sgn illuminating the virtual digital human model so as to obtain a virtual light-and-shadow effect, and the picture of this virtual light-and-shadow effect is taken as the training target picture.
The virtual digital human scene training data comprise all model input pictures and the corresponding training target pictures.
The digital human model collection module sends the virtual digital human scene training data to the lighting control model training module.
The lighting control model training module is mainly used for training, for each scene and based on the virtual digital human scene training data, a second neural network model that generates virtual scene lighting effect pictures; the second neural network model is a generative adversarial network (GAN) model.
The lighting control model training module trains the second neural network model for each scene as follows:
each model input picture IDgn of the nth scene is used as the input of the generator of the second neural network model; the generator outputs a corresponding lighting strategy, and the optical physics engine generates, based on this lighting strategy, the virtual scene lighting effect picture corresponding to the model input picture; the virtual scene lighting effect picture is passed to the discriminator of the second neural network model.
The discriminator calculates the light-and-shadow effect value of the virtual scene lighting effect picture; if the light-and-shadow effect value is smaller than a preset light-and-shadow effect threshold, the generator outputs a new lighting strategy, and this is repeated until the light-and-shadow effect value of the generated virtual scene lighting effect picture is greater than the light-and-shadow effect threshold.
The light-and-shadow effect value is the structural similarity index between the virtual scene lighting effect picture and the training target picture corresponding to the model input picture IDgn.
The light-and-shadow effect value is calculated as follows:
denote the virtual scene lighting effect picture as JDgn and the light-and-shadow effect value as YDgn;
the calculation formula of the light-and-shadow effect value YDgn is

YDgn = ((2·μ_J·μ_I + a1) / (μ_J² + μ_I² + a1))^α · ((2·σ_J·σ_I + a2) / (σ_J² + σ_I² + a2))^β · ((σ_JI + a3) / (σ_J·σ_I + a3))^γ

where μ_J and μ_I are the mean luminance of the virtual scene lighting effect picture JDgn and of the model input picture IDgn, respectively; σ_J and σ_I are their luminance standard deviations; σ_JI is the luminance covariance of JDgn and IDgn; a1, a2 and a3 are preset adjustment parameters greater than 0 that prevent the denominators from becoming 0; and α, β and γ are preset proportional coefficients.
The lighting control model training module sends the trained second neural network models to the lighting control module.
The lighting control module is mainly used for generating a lighting strategy for the virtual environment to be controlled and for intelligently controlling the lighting based on that strategy.
The lighting control module controls the lighting as follows:
the lighting control module first obtains a picture of the virtual environment to be controlled, inputs it into the first neural network model, and obtains from the first neural network model the scene type of the virtual environment;
a target recognition method is then used to identify the virtual digital person in the virtual environment and to obtain a model input picture of the virtual digital person in the virtual scene; the model input picture is fed into the second neural network model of the corresponding scene type, which outputs a lighting strategy and a virtual scene lighting effect picture; the lighting control module displays the virtual scene lighting effect picture to the audience;
when the virtual digital person changes viewing angle or position, the lighting control module re-uses the second neural network model to output a corresponding lighting strategy.
Compared with the prior art, the invention has the beneficial effects that:
the method comprises the steps of collecting scene pictures in a plurality of different scenes in the real world in advance, training a first neural network model for identifying the scenes based on the scene pictures, generating a plurality of light strategies and corresponding light shadow effect pictures and contrast pictures for each scene, modeling the light shadow effect pictures and the contrast pictures respectively, converting the light shadow effect pictures and the contrast pictures into model input pictures and training target pictures in virtual scenes, taking the model input pictures as generator input for generating an countermeasure network, taking the training target pictures as contrast targets of a discriminator, training the generator, and obtaining a second neural network model for controlling the light strategies in the virtual scenes to generate the light effect pictures of the virtual scenes; in the virtual scene to be controlled, a scene picture is obtained in real time, and a light strategy for controlling light is generated by using a second neural network model, so that light and shadow display which is more close to human aesthetic on digital people in the virtual environment is realized by intelligently controlling light.
Drawings
Fig. 1 is a block diagram of an intelligent control system for light in a digital human scene in a virtual environment in embodiment 1 of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1, the intelligent control system for digital human scene lighting in a virtual environment comprises a real scene data collection module, a scene recognition model training module, a digital human model collection module, a lighting control model training module and a lighting control module; the modules are connected through a wireless network or a wired connection.
The real scene data collection module is mainly used for collecting a plurality of scene data and scene lighting effect data in advance.
The scene data comprise scene pictures and picture labels.
The scene pictures are pictures taken with an image capturing device in various scenes in the real world; the picture labels are scene numbers assigned in advance by manually labeling the scene to which each scene picture belongs. As an example of picture labels, scene pictures of a meeting place scene are labeled 0, and scene pictures of an office are labeled 1.
Let the scene number be n, and denote the set of scene pictures corresponding to the nth scene as Pn.
The scene lighting effect data comprise lighting strategies, light-and-shadow effect pictures and contrast pictures.
A lighting strategy specifies, for all lights deployed in a scene in the real world, their positions, light types, illumination directions, illumination intensities and light colors.
The light-and-shadow effect pictures are pictures, captured by the image capturing device in each scene, of the light-and-shadow effects shown on the persons in the scene under each lighting strategy.
It should be noted that, for a person at the same position in a scene, different lighting strategies produce different light-and-shadow effects; in the actual data collection process, for each person position, the light-and-shadow effects shown on the person in the scene are judged manually, the picture with the best light-and-shadow effect is kept, and the corresponding lighting strategy is taken as the lighting strategy of that light-and-shadow effect picture, as sketched below.
The contrast pictures are the pictures, corresponding to each light-and-shadow effect picture, in which the persons in the scene are shown without any lighting strategy applied and therefore with no light-and-shadow effect.
Denote the set of light-and-shadow effect pictures corresponding to the nth scene as Gn, each light-and-shadow effect picture in the set Gn as gn, the lighting strategy corresponding to the light-and-shadow effect picture gn as Sgn, and the contrast picture corresponding to the light-and-shadow effect picture gn as Dgn.
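As a small illustration of the manual selection step described above, the collected candidates for each person position could be organized and filtered as follows; the field names and the manual scoring scheme are assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class LightingCandidate:
    strategy_id: str       # identifier of the lighting strategy used for the capture
    effect_picture: str    # path to the captured light-and-shadow effect picture
    manual_score: float    # human rating of the light-and-shadow effect

def best_candidate_per_position(
        candidates_by_position: Dict[str, List[LightingCandidate]]
) -> Dict[str, LightingCandidate]:
    """Keep, for each person position, the candidate judged best by the human rater."""
    return {pos: max(cands, key=lambda c: c.manual_score)
            for pos, cands in candidates_by_position.items()}
```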
The real scene data collection module sends the scene data to the scene recognition model training module and the scene lighting effect data to the digital human model collection module.
The scene recognition model training module is mainly used for training, from the scene data, a first neural network model that recognizes the scene from a scene picture.
In a preferred embodiment, the scene recognition model training module trains the first neural network model as follows:
all scene pictures are used as the input of the first neural network model, the predicted picture labels as its output, the actual picture label of each scene picture as the prediction target, and the prediction accuracy of the predicted labels relative to the actual labels as the training objective; the first neural network model is trained until the prediction accuracy reaches a preset accuracy. Preferably, the first neural network model is a convolutional neural network model.
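For illustration only, the following is a minimal PyTorch sketch of such a convolutional scene classifier with an accuracy-based stopping rule; the architecture, hyperparameters, target accuracy and the helper function name are assumptions and are not specified by the invention.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class SceneClassifier(nn.Module):
    """Minimal CNN mapping a scene picture to a predicted scene number (picture label)."""
    def __init__(self, num_scenes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_scenes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def train_until_accuracy(model: nn.Module, loader: DataLoader, target_acc: float = 0.95):
    """Train until the prediction accuracy reaches the preset accuracy (assumed 0.95 here)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    accuracy = 0.0
    while accuracy < target_acc:
        correct, total = 0, 0
        for images, labels in loader:      # images: scene pictures, labels: scene numbers
            optimizer.zero_grad()
            logits = model(images)
            loss = loss_fn(logits, labels)
            loss.backward()
            optimizer.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.numel()
        accuracy = correct / total
    return model
```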
The scene recognition model training module sends the first neural network model to the lighting control module.
The digital human model collection module is mainly used for generating virtual digital human scene training data based on the scene lighting effect data.
In a preferred embodiment, the digital human model collection module generates the virtual digital human scene training data as follows:
for each contrast picture Dgn, three-dimensional modeling software is used to build, in equal proportion, a model of the sizes and distances of the buildings and lighting equipment in the scene shown in the contrast picture, and the built model is converted into a virtual digital scene; the actions, expressions, apparel and skin texture of the person in the contrast picture are then modeled in equal proportion, at the same scale as the scene model, and the modeled person is converted into a virtual digital human model; the virtual digital human model is placed in the virtual digital scene at a position corresponding to the position of the person in the scene in the contrast picture Dgn; after the virtual digital human model has been placed, a virtual picture of the virtual scene is taken as the model input picture; the model input picture corresponding to the contrast picture Dgn is denoted IDgn.
For the virtual digital scene corresponding to each model input picture IDgn, an optical physics engine is used to simulate the lighting strategy Sgn illuminating the virtual digital human model so as to obtain a virtual light-and-shadow effect, and the picture of this virtual light-and-shadow effect is taken as the training target picture. It can be understood that, by illuminating the virtual digital human model according to the lighting strategy Sgn, the virtual light-and-shadow effect can be compared with the real-world effect and the model continuously optimized; at the same time, when the second neural network model is trained, the light-and-shadow effect value reflects only the lighting difference between the virtual light-and-shadow effect picture and the model input picture, which reduces the noise of model training.
The virtual digital human scene training data comprise all model input pictures and the corresponding training target pictures.
The digital human model collection module sends the virtual digital human scene training data to the lighting control model training module.
The lighting control model training module is mainly used for training, for each scene and based on the virtual digital human scene training data, a second neural network model that controls the lighting strategy in the virtual scene and generates virtual scene lighting effect pictures; the second neural network model is a generative adversarial network (GAN) model.
In a preferred embodiment, the lighting control model training module trains the second neural network model for each scene as follows:
each model input picture IDgn of the nth scene is used as the input of the generator of the second neural network model; the generator outputs a corresponding lighting strategy, and the optical physics engine generates, based on this lighting strategy, the virtual scene lighting effect picture corresponding to the model input picture; the virtual scene lighting effect picture is passed to the discriminator of the second neural network model.
The discriminator calculates the light-and-shadow effect value of the virtual scene lighting effect picture; if the light-and-shadow effect value is smaller than a preset light-and-shadow effect threshold, the generator outputs a new lighting strategy, and this is repeated until the light-and-shadow effect value of the generated virtual scene lighting effect picture is greater than the light-and-shadow effect threshold. It can be understood that once the light-and-shadow effect value exceeds the preset threshold, the generator has produced a lighting strategy close to the manually selected optimal lighting strategy, thereby achieving intelligent control of the lighting.
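As a rough illustration of this generate-render-evaluate loop, one possible organization is sketched below; `render_with_engine` stands in for the optical physics engine, `shadow_effect_value` for the discriminator's similarity measure, and `generator.propose`/`generator.update` for whatever proposal and parameter-update steps the adversarial training uses. All of these names, and the threshold value, are assumptions rather than details given in the patent.

```python
def train_light_generator(generator, training_pairs, render_with_engine,
                          shadow_effect_value, effect_threshold=0.8,
                          max_attempts=100):
    """Sketch of the second-model training loop: for each (model input picture,
    training target picture) pair, keep proposing lighting strategies until the
    rendered effect picture scores above the preset threshold."""
    for model_input, target in training_pairs:
        for _ in range(max_attempts):
            strategy = generator.propose(model_input)              # generator outputs a lighting strategy
            rendered = render_with_engine(model_input, strategy)   # virtual scene lighting effect picture
            score = shadow_effect_value(rendered, target)          # discriminator's light-and-shadow effect value
            if score > effect_threshold:                           # accept once the threshold is exceeded
                break
            generator.update(model_input, rendered, target)        # otherwise adjust and try again
    return generator
```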
The light-and-shadow effect value is the structural similarity index between the virtual scene lighting effect picture and the training target picture corresponding to the model input picture IDgn.
Specifically, the light-and-shadow effect value is calculated as follows:
denote the virtual scene lighting effect picture as JDgn and the light-and-shadow effect value as YDgn;
the calculation formula of the light-and-shadow effect value YDgn is

YDgn = ((2·μ_J·μ_I + a1) / (μ_J² + μ_I² + a1))^α · ((2·σ_J·σ_I + a2) / (σ_J² + σ_I² + a2))^β · ((σ_JI + a3) / (σ_J·σ_I + a3))^γ

where μ_J and μ_I are the mean luminance of the virtual scene lighting effect picture JDgn and of the model input picture IDgn, respectively; σ_J and σ_I are their luminance standard deviations; σ_JI is the luminance covariance of JDgn and IDgn; a1, a2 and a3 are preset adjustment parameters greater than 0 that prevent the denominators from becoming 0; and α, β and γ are preset proportional coefficients.
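For illustration, a small NumPy sketch of this luminance-based structural similarity value might look as follows; the default values of a1, a2, a3 and of the exponents are placeholders, since the patent only states that they are preset parameters.

```python
import numpy as np

def shadow_effect_value(j: np.ndarray, i: np.ndarray,
                        a1: float = 1e-4, a2: float = 1e-4, a3: float = 1e-4,
                        alpha: float = 1.0, beta: float = 1.0, gamma: float = 1.0) -> float:
    """SSIM-style light-and-shadow effect value between the rendered picture j (JDgn)
    and the reference picture i, both given as grayscale (luminance) float arrays."""
    mu_j, mu_i = j.mean(), i.mean()                    # mean luminance of each picture
    sigma_j, sigma_i = j.std(), i.std()                # luminance standard deviations
    cov_ji = ((j - mu_j) * (i - mu_i)).mean()          # luminance covariance
    luminance = (2 * mu_j * mu_i + a1) / (mu_j**2 + mu_i**2 + a1)
    contrast  = (2 * sigma_j * sigma_i + a2) / (sigma_j**2 + sigma_i**2 + a2)
    structure = (cov_ji + a3) / (sigma_j * sigma_i + a3)
    return (luminance**alpha) * (contrast**beta) * (structure**gamma)
```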
The lighting control model training module sends the trained second neural network models to the lighting control module.
The lighting control module is mainly used for generating a lighting strategy for the virtual environment to be controlled and for intelligently controlling the lighting based on that strategy.
In a preferred embodiment, the lighting control module controls the lighting as follows:
the lighting control module first obtains a picture of the virtual environment to be controlled, inputs it into the first neural network model, and obtains from the first neural network model the scene type of the virtual environment;
a target recognition method is then used to identify the virtual digital person in the virtual environment and to obtain a model input picture of the virtual digital person in the virtual scene; the model input picture is fed into the second neural network model of the corresponding scene type, which outputs a lighting strategy and a virtual scene lighting effect picture;
the lighting control module displays the virtual scene lighting effect picture to the audience.
In a further embodiment of the invention, when the virtual digital person changes viewing angle or position, the lighting control module re-uses the second neural network model to output a corresponding lighting strategy. It can be understood that, because a virtual digital person is generally produced with software, its viewing angle and position parameters in the virtual scene can be defined in the software in advance; whenever these viewing angle or position parameters change, the virtual digital person can be regarded as having changed viewing angle or moved position.
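A compact sketch of this runtime flow is given below; every callable passed in (`capture_virtual_frame`, `detect_digital_person`, `render_with_engine`, `display_to_audience`) is a placeholder for an engine-specific call and is assumed rather than specified by the invention.

```python
def control_lighting(scene_classifier, light_generators, capture_virtual_frame,
                     detect_digital_person, render_with_engine, display_to_audience):
    """Sketch of the lighting control module's runtime flow; arguments are placeholder callables."""
    frame = capture_virtual_frame()               # picture of the virtual environment to be controlled
    scene_type = scene_classifier(frame)          # first neural network model: scene type
    model_input = detect_digital_person(frame)    # target recognition of the virtual digital person
    generator = light_generators[scene_type]      # second neural network model for this scene type
    strategy = generator(model_input)             # lighting strategy
    effect_picture = render_with_engine(model_input, strategy)  # virtual scene lighting effect picture
    display_to_audience(effect_picture)           # shown to the audience
    return strategy  # re-run whenever the view-angle or position parameters change
```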
The above embodiments are only intended to illustrate the technical method of the present invention and not to limit it; it should be understood by those skilled in the art that the technical method of the present invention may be modified or substituted by equivalents without departing from its spirit and scope.

Claims (8)

1. An intelligent control system for digital human scene lighting in a virtual environment, characterized by comprising a real scene data collection module, a scene recognition model training module, a digital human model collection module, a lighting control model training module and a lighting control module;
the real scene data collection module is used for collecting scene data and scene lighting effect data in advance, sending the scene data to the scene recognition model training module and sending the scene lighting effect data to the digital human model collection module, wherein the scene data comprise scene pictures;
the scene recognition model training module is used for training, from the scene data, a first neural network model that recognizes the scene from a scene picture, and for sending the first neural network model to the lighting control module;
the digital human model collection module is used for generating virtual digital human scene training data based on the scene lighting effect data and sending the virtual digital human scene training data to the lighting control model training module;
the lighting control model training module trains, for each scene and based on the virtual digital human scene training data, a second neural network model for generating virtual scene lighting effect pictures, and sends the trained second neural network model to the lighting control module; the second neural network model is a generative adversarial network (GAN) model;
the lighting control module is used for generating a lighting strategy for the virtual environment to be controlled and for intelligently controlling the lighting based on that strategy.
2. The intelligent control system for digital human scene lighting in a virtual environment according to claim 1, wherein the scene data comprise scene pictures and picture labels; the scene pictures are pictures taken with an image capturing device in various scenes in the real world; the picture labels are scene numbers assigned in advance by manually labeling the scene to which each scene picture belongs.
3. The intelligent control system for digital human scene lighting in a virtual environment according to claim 2, wherein the scene lighting effect data comprise lighting strategies, light-and-shadow effect pictures and contrast pictures;
a lighting strategy specifies, for all lights deployed in a scene in the real world, their positions, light types, illumination directions, illumination intensities and light colors;
the light-and-shadow effect pictures are pictures, captured by the image capturing device in each scene, of the light-and-shadow effects shown on the persons in the scene under each lighting strategy;
the contrast pictures are the pictures, corresponding to each light-and-shadow effect picture, in which the persons in the scene are shown without any lighting strategy applied and therefore with no light-and-shadow effect;
the scene number is denoted n, the set of light-and-shadow effect pictures corresponding to the nth scene is denoted Gn, each light-and-shadow effect picture in the set Gn is denoted gn, the lighting strategy corresponding to the light-and-shadow effect picture gn is denoted Sgn, and the contrast picture corresponding to the light-and-shadow effect picture gn is denoted Dgn.
4. The intelligent control system for digital human scene lighting in a virtual environment according to claim 3, wherein the first neural network model for recognizing the scene from a scene picture is trained from the scene data as follows:
all scene pictures are used as the input of the first neural network model, the predicted picture labels as its output, the actual picture label of each scene picture as the prediction target, and the prediction accuracy of the predicted labels relative to the actual labels as the training objective; the first neural network model is trained until the prediction accuracy reaches a preset accuracy.
5. The intelligent control system for digital human scene lighting in a virtual environment according to claim 4, wherein the digital human model collection module generates the virtual digital human scene training data as follows:
for each contrast picture Dgn, three-dimensional modeling software is used to build, in equal proportion, a model of the sizes and distances of the buildings and lighting equipment in the scene shown in the contrast picture, and the built model is converted into a virtual digital scene; the character features of the person in the contrast picture are modeled in equal proportion, at the same scale as the scene model, and the modeled person is converted into a virtual digital human model; the virtual digital human model is placed in the virtual digital scene at a position corresponding to the position of the person in the scene in the contrast picture Dgn; after the virtual digital human model has been placed, a virtual picture of the virtual scene is taken as the model input picture; the model input picture corresponding to the contrast picture Dgn is denoted IDgn;
for the virtual digital scene corresponding to each model input picture IDgn, an optical physics engine is used to simulate the lighting strategy Sgn illuminating the virtual digital human model so as to obtain a virtual light-and-shadow effect, and the picture of this virtual light-and-shadow effect is taken as the training target picture;
the virtual digital human scene training data comprise all model input pictures and the corresponding training target pictures.
6. The intelligent control system for digital human scene lighting in a virtual environment according to claim 5, wherein the training process of the second neural network model is:
each model input picture IDgn of the nth scene is used as the input of the generator of the second neural network model; the generator outputs a corresponding lighting strategy, and the optical physics engine generates, based on this lighting strategy, the virtual scene lighting effect picture corresponding to the model input picture; the virtual scene lighting effect picture is passed to the discriminator of the second neural network model;
the discriminator calculates the light-and-shadow effect value of the virtual scene lighting effect picture; if the light-and-shadow effect value is smaller than a preset light-and-shadow effect threshold, the generator outputs a new lighting strategy, and this is repeated until the light-and-shadow effect value of the generated virtual scene lighting effect picture is greater than the light-and-shadow effect threshold.
7. The intelligent control system for digital human scene lighting in a virtual environment according to claim 6, wherein the light-and-shadow effect value is calculated as follows:
the virtual scene lighting effect picture is denoted JDgn, and the light-and-shadow effect value is denoted YDgn;
the calculation formula of the light-and-shadow effect value YDgn is

YDgn = ((2·μ_J·μ_I + a1) / (μ_J² + μ_I² + a1))^α · ((2·σ_J·σ_I + a2) / (σ_J² + σ_I² + a2))^β · ((σ_JI + a3) / (σ_J·σ_I + a3))^γ

where μ_J and μ_I are the mean luminance of the virtual scene lighting effect picture JDgn and of the model input picture IDgn, respectively; σ_J and σ_I are their luminance standard deviations; σ_JI is the luminance covariance of JDgn and IDgn; a1, a2 and a3 are preset adjustment parameters greater than 0 that prevent the denominators from becoming 0; and α, β and γ are preset proportional coefficients.
8. The intelligent control system for digital human scene lighting in a virtual environment according to claim 7, wherein the lighting control module controls the lighting as follows:
the lighting control module first obtains a picture of the virtual environment to be controlled, inputs it into the first neural network model, and obtains from the first neural network model the scene type of the virtual environment;
a target recognition method is used to identify the virtual digital person in the virtual environment and to obtain a model input picture of the virtual digital person in the virtual scene; the model input picture is fed into the second neural network model of the corresponding scene type, which outputs a lighting strategy and a virtual scene lighting effect picture; the lighting control module displays the virtual scene lighting effect picture to the audience;
when the virtual digital person changes viewing angle or position, the lighting control module re-uses the second neural network model to output a corresponding lighting strategy.
CN202310966851.0A 2023-08-03 2023-08-03 Intelligent control system for digital human scene lamplight in virtual environment Withdrawn CN116685028A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310966851.0A CN116685028A (en) 2023-08-03 2023-08-03 Intelligent control system for digital human scene lamplight in virtual environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310966851.0A CN116685028A (en) 2023-08-03 2023-08-03 Intelligent control system for digital human scene lamplight in virtual environment

Publications (1)

Publication Number Publication Date
CN116685028A true CN116685028A (en) 2023-09-01

Family

ID=87779525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310966851.0A Withdrawn CN116685028A (en) 2023-08-03 2023-08-03 Intelligent control system for digital human scene lamplight in virtual environment

Country Status (1)

Country Link
CN (1) CN116685028A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117424969A (en) * 2023-10-23 2024-01-19 神力视界(深圳)文化科技有限公司 Light control method and device, mobile terminal and storage medium


Similar Documents

Publication Publication Date Title
CN111563446B (en) Human-machine interaction safety early warning and control method based on digital twin
US11736756B2 (en) Producing realistic body movement using body images
CN101452582B (en) Method and device for implementing three-dimensional video specific action
CN102033608B (en) Interactive video display system
CN110503703A (en) Method and apparatus for generating image
CN114972617B (en) Scene illumination and reflection modeling method based on conductive rendering
KR20170074413A (en) 2d image data generation system using of 3d model, and thereof method
CN110322544A (en) A kind of visualization of 3 d scanning modeling method, system, equipment and storage medium
CN109542233A (en) A kind of lamp control system based on dynamic gesture and recognition of face
Pramada et al. Intelligent sign language recognition using image processing
CN116685028A (en) Intelligent control system for digital human scene lamplight in virtual environment
WO2020211427A1 (en) Segmentation and recognition method, system, and storage medium based on scanning point cloud data
CN102298786A (en) Virtual drawing implementation device and method for the same
CN109886154A (en) Most pedestrian's appearance attribute recognition methods according to collection joint training based on Inception V3
Malleson et al. Rapid one-shot acquisition of dynamic VR avatars
CN112667346A (en) Weather data display method and device, electronic equipment and storage medium
CN114782901A (en) Sand table projection method, device, equipment and medium based on visual change analysis
CN112562056A (en) Control method, device, medium and equipment for virtual light in virtual studio
CN102043963A (en) Method for recognizing and counting number of people in image
Li Film and TV animation production based on artificial intelligence AlphaGd
CN112330753B (en) Target detection method of augmented reality system
KR102173608B1 (en) System and method for controlling gesture based light dimming effect using natural user interface
CN112218414A (en) Method and system for adjusting brightness of self-adaptive equipment
CN113065506A (en) Human body posture recognition method and system
CN112270211A (en) Stage lighting control method and system based on somatosensory interaction

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication
Application publication date: 20230901