CN112562056A - Control method, device, medium and equipment for virtual light in virtual studio - Google Patents

Control method, device, medium and equipment for virtual light in virtual studio

Info

Publication number
CN112562056A
Authority
CN
China
Prior art keywords
target object
picture
virtual
real scene
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011411360.2A
Other languages
Chinese (zh)
Inventor
王毅
赵冰
钱骏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd filed Critical Guangzhou Boguan Information Technology Co Ltd
Priority to CN202011411360.2A priority Critical patent/CN112562056A/en
Publication of CN112562056A publication Critical patent/CN112562056A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications

Abstract

The disclosure provides a method for controlling virtual light in a virtual studio, a device for controlling virtual light in a virtual studio, a computer-readable storage medium and an electronic device, and belongs to the field of computer technology. The method comprises the following steps: acquiring a real scene picture, shot by a camera, that contains a target object; extracting a matting picture of the target object from the real scene picture; determining the position of the target object in a virtual studio based on the matting picture; and determining the beam position of virtual light according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate that beam position. Virtual light in the virtual studio can thereby be controlled, enriching the program production effect.

Description

Control method, device, medium and equipment for virtual light in virtual studio
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method for controlling virtual lighting in a virtual studio, a device for controlling virtual lighting in a virtual studio, a computer-readable storage medium, and an electronic device.
Background
A virtual studio is a distinctive television program production technique. Its essence is to combine a computer-generated virtual three-dimensional scene with the live moving image of a person shot by a camera, in real time, so that the person and the virtual background change synchronously and the two are fused. In this way rich program effects can be achieved, stage construction costs are saved, and the virtual three-dimensional scene can be reused, which greatly reduces the workload of developers and facilitates the development of new scene pictures.
In a virtual studio, lighting is a very important component: it shapes the ambient atmosphere of the virtual three-dimensional scene, illuminates the characters, and brings out scene details. It is therefore desirable to provide a method for effectively controlling virtual lighting in a virtual studio.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a method for controlling virtual light in a virtual studio, a device for controlling virtual light in a virtual studio, a computer-readable storage medium, and an electronic device, thereby solving, at least to a certain extent, the problem of controlling virtual light in a virtual studio.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a method for controlling virtual lighting in a virtual studio, the method comprising: acquiring a real scene picture containing a target object and shot by a camera; extracting a keying picture of the target object from the real scene picture; determining the position of the target object in a virtual studio based on the image matting picture; and determining the beam position of virtual light according to the position of the target object in the virtual studio so as to control the virtual light to irradiate to the beam position.
In an exemplary embodiment of the present disclosure, the acquiring a real scene picture including a target object captured by a camera includes: receiving a real scene picture of the target object transmitted by at least one camera.
In an exemplary embodiment of the present disclosure, the extracting, in the real scene picture, a matting picture of the target object includes: preprocessing the real scene picture, and determining a pixel area of the target object; and extracting characteristic information of the target object based on the pixel region of the target object, and extracting a keying picture of the target object through the characteristic information.
In an exemplary embodiment of the present disclosure, the preprocessing the real scene picture and determining the pixel region of the target object includes: carrying out binarization processing on the real scene picture to obtain a binary picture of the real scene picture; in the binary picture, determining the pixel distribution of the target object, and removing interference points in the binary picture to obtain a pixel region of the target object.
In an exemplary embodiment of the present disclosure, the extracting feature information of the target object based on a pixel region of the target object and extracting a matting picture of the target object through the feature information includes: generating a pixel matrix of the pixel region; performing dimension reduction processing on the pixel matrix by adopting a convolutional neural network model to generate characteristic information of the target object; and extracting the keying picture of the target object through a pre-trained artificial neural network model and the characteristic information.
In an exemplary embodiment of the disclosure, when performing dimension reduction processing on the pixel matrix by using a convolutional neural network model, the method further includes: and training the convolutional neural network model through a back propagation algorithm.
In an exemplary embodiment of the present disclosure, the determining a position of the target object in a virtual studio based on the matting picture includes: taking the gray value of each pixel in the image matting picture as the pixel weight of each pixel; calculating coordinates of the gravity center of the target object in the real scene picture according to the pixel weight; and determining the position of the gravity center of the target object in the virtual studio according to the difference value of the coordinates of the gravity center of the target object in the real scene picture and the coordinates of the center of the real scene picture.
In an exemplary embodiment of the present disclosure, the determining the beam position of the virtual light according to the position of the target object in the virtual studio includes: adding a preset deviation value to the position of the gravity center of the target object in the virtual studio, and determining the position of the light chasing point of the virtual light; and determining the light source position of the virtual light, and determining the beam direction of the virtual light according to the light source position and the light following point position of the virtual light.
In an exemplary embodiment of the present disclosure, after determining the position of the target object in the virtual studio, the method further comprises: acquiring a virtual scene picture of the virtual studio; and synthesizing the image matting picture of the target object to the virtual scene picture based on the position of the target object in the virtual studio to generate a target picture of the virtual studio.
According to a second aspect of the present disclosure, there is provided an apparatus for controlling virtual lighting in a virtual studio, the apparatus comprising: the acquisition module is used for acquiring a real scene picture which is shot by the camera and contains a target object; the extraction module is used for extracting a keying picture of the target object from the real scene picture; a determining module, configured to determine, based on the matting picture, a position of the target object in a virtual studio; and the control module is used for determining the beam position of the virtual light according to the position of the target object in the virtual studio so as to control the virtual light to irradiate to the beam position.
In an exemplary embodiment of the disclosure, the acquisition module is configured to receive a real scene picture of the target object transmitted by at least one camera.
In an exemplary embodiment of the disclosure, the extraction module is configured to pre-process the real scene picture, determine a pixel region of the target object, extract feature information of the target object based on the pixel region of the target object, and extract a matting picture of the target object through the feature information.
In an exemplary embodiment of the disclosure, the extraction module is further configured to perform binarization processing on the real scene picture to obtain a binary picture of the real scene picture, determine, in the binary picture, a pixel distribution of the target object, and remove an interference point in the binary picture to obtain a pixel region of the target object.
In an exemplary embodiment of the disclosure, the extraction module is further configured to generate a pixel matrix of the pixel region, perform dimension reduction processing on the pixel matrix by using a convolutional neural network model, generate feature information of the target object, and extract a matting picture of the target object through a pre-trained artificial neural network model and the feature information.
In an exemplary embodiment of the disclosure, when the convolutional neural network model is used for performing the dimensionality reduction on the pixel matrix, the extraction module is further configured to train the convolutional neural network model through a back propagation algorithm.
In an exemplary embodiment of the disclosure, the determining module is configured to use a gray value of each pixel in the matting picture as a pixel weight of each pixel, calculate coordinates of a center of gravity of the target object in the real scene picture according to the pixel weight, and determine a position of the center of gravity of the target object in the virtual studio according to a difference between the coordinates of the center of gravity of the target object in the real scene picture and the coordinates of the center of the real scene picture.
In an exemplary embodiment of the present disclosure, the control module is configured to add a preset offset value to a position of a center of gravity of the target object in the virtual studio, determine a light-chasing point position of the virtual light, determine a light source position of the virtual light, and determine a beam direction of the virtual light according to the light source position and the light-chasing point position of the virtual light.
In an exemplary embodiment of the present disclosure, after determining the position of the target object in the virtual studio, the determining module is further configured to obtain a virtual scene picture of the virtual studio, and synthesize the matting picture of the target object into the virtual scene picture based on the position of the target object in the virtual studio, so as to generate a target picture of the virtual studio.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the above-mentioned methods for controlling virtual lighting in a virtual studio.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute any one of the above methods for controlling virtual lighting in a virtual studio by executing the executable instructions.
The present disclosure has the following beneficial effects:
According to the method for controlling virtual lighting in a virtual studio, the apparatus for controlling virtual lighting in a virtual studio, the computer-readable storage medium, and the electronic device in the present exemplary embodiment, a matting picture of a target object is extracted from a real scene picture of the target object shot by a camera, the position of the target object in the virtual studio is determined based on the matting picture, and the beam position of the virtual light is determined according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate that beam position. Because the beam position of the virtual light is determined from the position of the target object in the virtual studio and the virtual light is controlled to illuminate that position, a follow-spot lighting effect can be achieved in the virtual studio, the richness of lighting in programs is improved, and various program effect requirements are met; an operator does not need to set the lighting manually according to program production requirements, which greatly reduces the operator's workload.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a flowchart of a control method of virtual lighting in the present exemplary embodiment;
FIG. 2 shows a schematic view of a camera position distribution of a camera in the exemplary embodiment;
FIG. 3 illustrates a schematic view of a matte in this exemplary embodiment;
FIG. 4 illustrates a sub-flow diagram of the control of virtual lighting in this exemplary embodiment;
FIG. 5 is a schematic diagram of a target object in the present exemplary embodiment;
FIG. 6 illustrates a flow chart for extracting a matte in this exemplary embodiment;
fig. 7 is a schematic diagram showing a position of the center of gravity of a target object in the present exemplary embodiment;
FIG. 8 is a schematic illustration of a display of virtual lighting in the exemplary embodiment;
fig. 9 is a block diagram showing a configuration of a control apparatus of virtual lighting in the present exemplary embodiment;
fig. 10 shows an electronic device for implementing the above method in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
An exemplary embodiment of the present disclosure first provides a method for controlling virtual light in a virtual studio. The method can be applied to an electronic device, so that the electronic device can control the virtual light to illuminate in a corresponding manner in the virtual studio. A virtual studio is a special television program production method which, on the basis of traditional chroma-key matting, uses computer three-dimensional graphics and video compositing technology to keep the perspective of the three-dimensional virtual scene consistent with the foreground according to the position and parameters of the camera; after chroma-key compositing, the subject shot in the foreground appears visually immersed in the computer-generated three-dimensional virtual scene, giving the virtual studio a vivid and strongly three-dimensional look.
Fig. 1 shows a flow of the present exemplary embodiment, which may include the following steps S110 to S140:
and S110, acquiring a real scene picture which is shot by the camera and contains the target object.
The target object refers to a subject in the real scene picture, and may be a person, an animal, or the like; the real scene picture refers to the picture of the target object in the real scene. Typically, the real scene picture is a picture, shot by a camera, of a subject in the real scene, such as an actor or a host.
In the present exemplary embodiment, the real scene picture of the target object captured by the camera may be received through a network or a specific data interface, such as a USB (Universal Serial Bus) interface, for example, the real scene picture sent by the camera may be received in real time through the network during the capturing process. Meanwhile, in a real environment, the camera may be located in a separate shooting studio or some outdoor shooting place, and a machine processing a real scene picture may be located in another place, and thus, in some cases, an intermediate transmission computer device is required to integrate and transmit a video signal of the camera.
In order to determine the position and posture of the target object in the real environment, and make the target object appear stereoscopic in the virtual studio, in an alternative embodiment, step S110 may be implemented by receiving a real scene picture of the target object transmitted by at least one camera. Specifically, when a plurality of cameras are used for shooting, each camera may be located at a different observation angle of the target object, for example, three cameras, each camera may be a common image capturing device, and as shown in fig. 2, each camera may be located at a position right above, right in front of, and left of the target object, respectively, to capture a real scene picture of the target object at each observation angle; when one camera or two cameras are used for shooting, in order to facilitate the determination of the position and the posture of the target object in the real environment, the cameras can adopt an image pickup device with a depth-of-field function, so that the camera can obtain the real scene picture of the target object by calculating the position relation of the target object in the space.
Step S120, extracting a matting picture of the target object from the real scene picture.
The matting picture refers to a picture containing only the target object; for example, it may be the picture of a person or object located at or near the foreground of the real scene picture.
By identifying the target object in the real scene picture, pictures except the target object can be removed from the real scene picture, and the picture where the target object is located is extracted.
In practical applications, to make extraction of the target object's picture easier, the target object, such as an actor or host, can be shot in a green-screen environment; when extracting the matting picture of the target object, the background can then be removed by identifying the color distribution of the green-screen environment. For example, referring to fig. 3, when an actor or presenter performs in a green-screen environment, a matting picture of the target object under each camera view can be obtained directly by removing the green background.
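As a rough illustration of this kind of chroma-key background removal, the following Python sketch (an illustration only, not the disclosure's exact procedure; the threshold values g_min and dominance are assumptions) zeroes out pixels whose green channel clearly dominates the other channels:

```python
import numpy as np

def green_screen_matte(frame_rgb, g_min=100, dominance=40):
    """Rough chroma-key: treat pixels whose green channel clearly dominates the
    red and blue channels as background, and keep only the target object.
    frame_rgb is an H x W x 3 uint8 array of the real scene picture."""
    r = frame_rgb[..., 0].astype(np.int16)
    g = frame_rgb[..., 1].astype(np.int16)
    b = frame_rgb[..., 2].astype(np.int16)
    background = (g > g_min) & (g - r > dominance) & (g - b > dominance)
    matte = frame_rgb.copy()
    matte[background] = 0              # remove the green background
    return matte, ~background          # matting picture and foreground mask
```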
However, in a real environment the green-screen background may contain interference points or an uneven color distribution, and in that case it is difficult to extract a clean matting picture of the target object by color alone. Therefore, in an alternative embodiment, referring to fig. 4, step S120 can be implemented by the following steps S410 to S420:
and S410, preprocessing the real scene picture to determine the pixel area of the target object.
Step S420, extracting characteristic information of the target object based on the pixel area of the target object, and extracting a matting picture of the target object through the characteristic information.
The pixel area of the target object refers to the pixel distribution position of the target object in the real scene picture; the feature information of the target object may include color features, texture features, contour features, and the like in the screen region in which the target object is located.
When the target object is not easily distinguished in the real scene picture, the real scene picture can first be preprocessed, for example by deleting a certain range of the background picture, to determine the approximate range of the target object and obtain the pixel region of the target object; the matting picture of the target object is then extracted by extracting feature information of the target object within that pixel region.
In general, the number of pixels in a real scene picture is very large, and extracting the feature information of the target object would consume a large amount of computing resources if all of the target object's pixels were processed. Therefore, in order to increase the speed of extracting the matting picture and reduce the waste of computing resources, in an alternative embodiment, step S420 may be implemented by:
generating a pixel matrix of the pixel area;
performing dimension reduction processing on the pixel matrix by adopting a convolutional neural network model to generate characteristic information of a target object;
and extracting a keying picture of the target object through a pre-trained artificial neural network model and the characteristic information.
Specifically, a pixel matrix of the pixel region is generated from the pixels of the picture region where the target object is located; for example, after preprocessing the real scene picture, a pixel matrix composed of 0s and 1s can be generated from those pixels. A convolutional neural network model is then used to reduce the dimensionality of the pixel matrix: the convolutional layers process the pixel matrix and extract local features of the picture region where the target object is located, the pooling layers reduce the number of parameters by orders of magnitude to complete the dimension reduction, and a fully connected layer outputs the result to obtain the feature information of the target object. Finally, the matting picture of the target object is extracted through a pre-trained artificial neural network model and the feature information.
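A minimal sketch of such a convolution / pooling / fully connected pipeline is shown below (written in PyTorch as an assumption; the layer sizes and the 128 x 128 input resolution are illustrative and not specified by the disclosure):

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Convolutional layers extract local features of the region containing the
    target object, pooling layers reduce the parameter scale (dimension reduction),
    and a fully connected layer outputs the feature information."""
    def __init__(self, feature_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
        )
        self.fc = nn.Linear(32 * 32 * 32, feature_dim)

    def forward(self, pixel_matrix):              # (N, 1, 128, 128) 0/1 input
        x = self.features(pixel_matrix)
        return self.fc(torch.flatten(x, 1))       # compact feature vector
```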
By using a convolutional neural network model to reduce the dimensionality of the pixel matrix, an image with a large data volume can be reduced to data with a much smaller volume while the features of the picture region where the target object is located are preserved, so that the feature information of the target object is extracted in a manner similar to human vision.
Further, when the convolutional neural network model is used to perform the dimension reduction processing on the pixel matrix to generate the feature information of the target object, in order to improve the accuracy of generating the feature information, in an optional implementation manner, the convolutional neural network model may be trained through a back propagation algorithm. For example, after each training, the parameters of the convolutional neural network model may be updated using a gradient descent algorithm or the like, so that the error of the feature information obtained by the convolutional neural network model is minimized.
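For example, one training iteration of the feature-extractor sketch above could look as follows (the mean-squared-error loss and the supervision target are placeholders, chosen only to show the back-propagation and gradient-descent update):

```python
import torch

model = FeatureExtractor()                                 # sketch from above
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # gradient descent
criterion = torch.nn.MSELoss()                             # placeholder loss

def training_step(pixel_matrix, target_features):
    optimizer.zero_grad()
    predicted = model(pixel_matrix)
    loss = criterion(predicted, target_features)
    loss.backward()                    # back propagation of the error
    optimizer.step()                   # update parameters to reduce the error
    return loss.item()
```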
In addition, when the real scene picture is preprocessed in step S410, in order to determine the pixel area of the target object, in an alternative embodiment, the real scene picture may be preprocessed in the following manner:
carrying out binarization processing on the real scene picture to obtain a binary picture of the real scene picture;
in the binary picture, the pixel distribution of the target object is determined, and the interference points in the binary picture are removed to obtain the pixel area of the target object.
Binarizing the real scene picture essentially makes the picture show a clear black-and-white effect; compared with the real scene picture, the binary picture highlights the key outline of the target object and has a smaller data volume, making computation more convenient. Specifically, during binarization the real scene picture may first be converted into a gray-scale picture, for example by calculating the gray value of each pixel with the following formula (1):
Gray=0.299R+0.587G+0.114B (1)
wherein R, G and B represent the pixel values of the corresponding pixel on the red, green and blue color channels, respectively.
After the gray scale picture of the real scene picture is obtained, all the pixel values except 0 can be set to be 255 according to the gray scale value of the gray scale picture, so that a binary picture of the real scene picture is obtained, the pixel distribution of the target object is further determined according to the binary picture, the interference points in the pixel distribution of the target object are removed, and the pixel area of the target object is obtained.
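A minimal NumPy sketch of this step, assuming the green background has already been suppressed to zero (otherwise almost every pixel would be non-zero), might look like this:

```python
import numpy as np

def binarize(frame_rgb):
    """Convert the picture to gray scale using formula (1), then set every
    non-zero gray value to 255 to obtain the binary picture."""
    gray = (0.299 * frame_rgb[..., 0]
            + 0.587 * frame_rgb[..., 1]
            + 0.114 * frame_rgb[..., 2]).astype(np.uint8)
    binary = np.where(gray > 0, 255, 0).astype(np.uint8)
    return gray, binary
```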
In addition, to reduce the amount of computation, after the binary picture of the real scene picture is obtained, the picture outside the region where the target object is located can be cropped away. For example, after obtaining a gray-scale picture as shown in fig. 5, the parts of the picture outside the region where the target object 510 is located can be deleted, leaving a gray-scale picture containing only the target object 510.
Fig. 6 shows a flowchart of extracting the matting picture in the present exemplary embodiment. As shown in the figure, the real scene picture is converted into a gray-scale picture, and the gray-scale picture is converted into a binary picture; the boundary of the picture far from the region where the target object is located is cropped away to determine the picture region of the target object; the pixel distribution of the target object is then determined, interference points in that distribution are removed, and the pixel region of the target object is obtained. Feature information of the target object is extracted from its pixel region, feature matching is performed through a pre-trained artificial neural network model to determine the extent of the target object in the picture, the background environment and the like are removed, and the matting picture of the target object is extracted.
In the process of feature matching through the artificial neural network model, the model needs to be pre-trained through a pre-established object feature database, so that the model can accurately distinguish whether a target object exists in a picture, a picture area where the target object is located and the like.
Step S130, determining the position of the target object in the virtual studio based on the matting picture.
To make the picture show rich special effects, such as adding a monster effect in a movie or showing a storm effect in a weather forecast program, the target object needs to be fused with the picture of the virtual studio. In the fusion process, the position of the target object in the virtual studio needs to be determined, so that the target object and the virtual studio scene present a highly realistic visual effect.
To facilitate determining the position of the target object in the virtual studio, in an alternative embodiment, step S130 may be implemented by:
taking the gray value of each pixel in the image matting picture as the pixel weight of each pixel;
calculating the coordinates of the gravity center of the target object in the real scene picture according to the pixel weight;
and determining the position of the gravity center of the target object in the virtual studio according to the difference value of the coordinates of the gravity center of the target object in the real scene picture and the coordinates of the center of the real scene picture.
Specifically, after the matting picture of the target object is extracted, the gray value of each pixel in the matting picture can be used as the pixel weight of each pixel, and the barycentric coordinate of the target object in the real scene picture is calculated according to the formulas (2) and (3):
X = Σn(wn · xn) / W (2)
Y = Σn(w′n · yn) / W (3)
wherein X and Y represent the pixel coordinates of the center of gravity of the target object in the x direction and the y direction of the real scene picture, respectively; wn represents the weight of the pixels in the n-th column in the x direction and w′n represents the weight of the pixels in the n-th row in the y direction (w1, for example, represents the weight of the pixels in column 1 in the x direction); xn represents the pixel coordinate value of the n-th column in the x direction and yn represents the pixel coordinate value of the n-th row in the y direction (x1, for example, represents the coordinate value of column 1 in the x direction); W represents the sum of the weights of all pixels in the real scene picture. In the calculation, wn, w′n and xn, yn may each be expressed as vectors, and wn · xn and w′n · yn may be computed as dot products of those vectors, from which the values of X and Y are obtained.
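Under this column/row-weight reading of formulas (2) and (3), a NumPy sketch of the center-of-gravity computation could look as follows (an illustration; the array layout is an assumption):

```python
import numpy as np

def center_of_gravity(matte_gray):
    """Use each pixel's gray value as its weight and compute the weighted
    center of gravity via the column/row dot products of formulas (2) and (3)."""
    weights = matte_gray.astype(np.float64)
    h, w = weights.shape
    W = weights.sum()                        # total weight of all pixels
    col_w = weights.sum(axis=0)              # w_n: weight of the n-th column
    row_w = weights.sum(axis=1)              # w'_n: weight of the n-th row
    X = np.dot(col_w, np.arange(w)) / W      # x coordinate of the center of gravity
    Y = np.dot(row_w, np.arange(h)) / W      # y coordinate of the center of gravity
    return X, Y
```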
In addition, when determining the gray values of the matting picture, the matting picture may first be converted into a gray-scale picture and the gray value of each pixel determined from it; alternatively, the pixel values of the matting picture on each color channel, i.e. the red, green and blue channels, may be converted into gray values, for example by calculating the gray value of each pixel according to formula (1).
As shown in fig. 7, the center coordinate of the real scene picture is Center(x, y), and the center-of-gravity coordinate of the target object 510 in the real scene picture is G(x, y), located at the waist of the target object. In the exemplary embodiment, when multiple cameras are used for shooting, the position of the center of gravity of the target object can be determined in the real scene picture of each shooting angle, and these positions are input into the virtual studio, so that the center of gravity of the target object in the virtual studio is obtained synchronously.
Step S140, determining the beam position of the virtual light according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate that beam position.
The light beam position of the virtual light refers to the light irradiation position of the virtual light in the virtual studio.
In order to enable the virtual light to present the follow spot effect in the virtual studio, after the position of the target object in the virtual studio is determined, the beam position of the virtual light can be further determined, so that the virtual light can irradiate to the corresponding beam position.
According to the type and the position of the target object in the virtual studio, the light following point of the target object in the virtual studio can be determined to highlight the target object in the virtual studio, and specifically, in an alternative embodiment, the step S140 can be implemented by:
adding a preset offset value to the position of the gravity center of the target object in the virtual studio, and determining the position of the light following point of the virtual light;
and determining the light source position of the virtual light, and determining the light beam direction of the virtual light according to the light source position and the light tracing point position of the virtual light.
The light following point position is the position within the range of the target object that the virtual light illuminates; the preset offset value is the distance between the light following point of the virtual light and the center of gravity of the target object, and can generally be set according to the type of the target object.
By adding a certain preset offset value to the coordinates of the center of gravity of the target object in the virtual studio, the light following point position of the virtual light can be obtained; the direction, distance and the like from the light source position to the light following point position are then determined according to the light source position of the virtual light in the virtual studio, giving the beam direction of the virtual light. Specifically, a vector representing the beam direction of the virtual light can be calculated from the coordinates of the light source position and the light following point position, and the virtual light is controlled to illuminate along the corresponding direction according to that vector, so that, as shown in fig. 8, the light following point of the virtual light stays locked within the range of the target object.
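A minimal sketch of this computation (the coordinate conventions and the unit-length normalization are assumptions for illustration):

```python
import numpy as np

def beam_direction(gravity_center, preset_offset, light_source):
    """Light following point = center of gravity in the virtual studio plus a
    preset offset; the beam direction is the vector from the light source to
    that point, normalized to unit length."""
    follow_point = np.asarray(gravity_center, float) + np.asarray(preset_offset, float)
    direction = follow_point - np.asarray(light_source, float)
    return follow_point, direction / np.linalg.norm(direction)
```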
In some cases, when the area occupied by the target object in the virtual studio is small, the position of the target object in the virtual studio may be directly taken as the beam position of the virtual light, so that the virtual light illuminates that beam position.
In addition, in order to realize the fusion of the target object and the virtual studio, in an alternative embodiment, the matting picture of the target object can be fused to the virtual scene picture by:
acquiring a virtual scene picture of a virtual studio;
and fusing the keying picture of the target object to the virtual scene picture based on the position of the target object in the virtual studio to generate the target picture of the virtual studio.
The virtual scene picture can be generated in advance through three-dimensional modeling software, for example, in a live broadcast application, the virtual scene picture can be built in a live broadcast engine.
The matting picture of the target object is fused into the virtual scene picture according to the position of the target object in the virtual studio, so that the target object presents an immersive visual effect in the virtual studio.
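A minimal sketch of such a composition step, assuming a hard foreground mask (such as the one returned by the chroma-key sketch earlier) and a top-left paste position, is shown below:

```python
import numpy as np

def composite(virtual_scene, matte, mask, position):
    """Paste the matting picture of the target object into the virtual scene
    picture at the given (x, y) position; mask marks the foreground pixels."""
    x, y = position
    h, w = mask.shape
    region = virtual_scene[y:y + h, x:x + w]   # view into the virtual scene picture
    region[mask] = matte[mask]                 # foreground pixels replace the scene
    return virtual_scene
```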
In summary, according to the method for controlling virtual light in a virtual studio in the present exemplary embodiment, a real scene picture of a target object shot by a camera is acquired, a matting picture of the target object is extracted from the real scene picture, the position of the target object in the virtual studio is determined based on the matting picture, and the beam position of the virtual light is determined according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate that beam position. Because the beam position of the virtual light is determined from the position of the target object in the virtual studio and the virtual light is controlled to illuminate that position, a follow-spot lighting effect can be achieved in the virtual studio, the richness of lighting in programs is improved, and various program effect requirements are met; an operator does not need to set the lighting manually according to program production requirements, which greatly reduces the operator's workload.
The present exemplary embodiment also provides a control apparatus for virtual lighting in a virtual studio. As shown in fig. 9, the virtual lighting control apparatus 900 includes: an obtaining module 910, configured to obtain a real scene picture, captured by a camera, that contains a target object; an extracting module 920, configured to extract a matting picture of the target object from the real scene picture; a determining module 930, configured to determine the position of the target object in the virtual studio based on the matting picture; and a control module 940, configured to determine the beam position of the virtual light according to the position of the target object in the virtual studio, so as to control the virtual light to illuminate that beam position.
In an exemplary embodiment of the present disclosure, the acquiring module 910 may be configured to receive a real scene picture of a target object transmitted by at least one camera.
In an exemplary embodiment of the present disclosure, the extraction module 920 may be configured to pre-process a real scene picture, determine a pixel region of a target object, extract feature information of the target object based on the pixel region of the target object, and extract a matte picture of the target object through the feature information.
In an exemplary embodiment of the disclosure, the extraction module 920 may be further configured to perform binarization processing on the real scene picture to obtain a binary picture of the real scene picture, determine pixel distribution of the target object in the binary picture, and remove an interference point in the binary picture to obtain a pixel region of the target object.
In an exemplary embodiment of the disclosure, the extracting module 920 may further be configured to generate a pixel matrix of a pixel region, perform dimension reduction processing on the pixel matrix by using a convolutional neural network model, generate feature information of a target object, and extract a matting picture of the target object through a pre-trained artificial neural network model and the feature information.
In an exemplary embodiment of the disclosure, when the convolutional neural network model is used to perform the dimension reduction processing on the pixel matrix, the extraction module 920 may also be used to train the convolutional neural network model through a back propagation algorithm.
In an exemplary embodiment of the disclosure, the determining module 930 may be configured to use the gray value of each pixel in the matting picture as the pixel weight of each pixel, calculate coordinates of the center of gravity of the target object in the real scene picture according to the pixel weights, and determine the position of the center of gravity of the target object in the virtual studio according to the difference between the coordinates of the center of gravity of the target object in the real scene picture and the coordinates of the center of the real scene picture.
In an exemplary embodiment of the disclosure, the control module 940 may be configured to add a preset offset value to a position of the center of gravity of the target object in the virtual studio, determine a light-following point position of the virtual light, determine a light source position of the virtual light, and determine a beam direction of the virtual light according to the light source position and the light-following point position of the virtual light.
In an exemplary embodiment of the present disclosure, after determining the position of the target object in the virtual studio, the determining module 930 may be further configured to acquire a virtual scene picture of the virtual studio, and synthesize the matting picture of the target object to the virtual scene picture based on the position of the target object in the virtual studio to generate the target picture of the virtual studio.
The specific details of each module in the above apparatus have been described in detail in the method section, and details of an undisclosed scheme may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device.
The program product may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The exemplary embodiment of the present disclosure also provides an electronic device capable of implementing the above method. An electronic device 1000 according to such an exemplary embodiment of the present disclosure is described below with reference to fig. 10. The electronic device 1000 shown in fig. 10 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 may be embodied in the form of a general purpose computing device. The components of the electronic device 1000 may include, but are not limited to: the at least one processing unit 1010, the at least one memory unit 1020, a bus 1030 connecting different system components (including the memory unit 1020 and the processing unit 1010), and a display unit 1040.
Wherein the storage unit 1020 stores program code that may be executed by the processing unit 1010 such that the processing unit 1010 performs the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary methods" section of this specification. For example, the processing unit 1010 may perform the method steps shown in fig. 1, 4, and 6, and so on.
The memory unit 1020 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)1021 and/or a cache memory unit 1022, and may further include a read-only memory unit (ROM) 1023.
Storage unit 1020 may also include a program/utility 1024 having a set (at least one) of program modules 1025, such program modules 1025 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1030 may be any one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, and a local bus using any of a variety of bus architectures.
The electronic device 1000 may also communicate with one or more external devices 1100 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 1050. Also, the electronic device 1000 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 1060. As shown, the network adapter 1060 communicates with the other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1000, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the exemplary embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the exemplary embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (12)

1. A method for controlling virtual lighting in a virtual studio, the method comprising:
acquiring a real scene picture containing a target object and shot by a camera;
extracting a keying picture of the target object from the real scene picture;
determining the position of the target object in a virtual studio based on the image matting picture;
and determining the beam position of virtual light according to the position of the target object in the virtual studio so as to control the virtual light to irradiate to the beam position.
2. The control method according to claim 1, wherein the acquiring of the real scene picture containing the target object captured by the camera includes:
receiving a real scene picture of the target object transmitted by at least one camera.
3. The control method according to claim 1, wherein the extracting a matting picture of the target object in the real scene picture comprises:
preprocessing the real scene picture, and determining a pixel area of the target object;
and extracting characteristic information of the target object based on the pixel region of the target object, and extracting a keying picture of the target object through the characteristic information.
4. The control method according to claim 3, wherein the preprocessing the real scene picture to determine the pixel area of the target object comprises:
carrying out binarization processing on the real scene picture to obtain a binary picture of the real scene picture;
in the binary picture, determining the pixel distribution of the target object, and removing interference points in the binary picture to obtain a pixel region of the target object.
5. The control method according to claim 3, wherein the extracting feature information of the target object based on the pixel region of the target object and extracting the matte of the target object by the feature information comprises:
generating a pixel matrix of the pixel region;
performing dimension reduction processing on the pixel matrix by adopting a convolutional neural network model to generate characteristic information of the target object;
and extracting the keying picture of the target object through a pre-trained artificial neural network model and the characteristic information.
6. The control method according to claim 5, wherein when the pixel matrix is subjected to dimension reduction processing using a convolutional neural network model, the method further comprises:
and training the convolutional neural network model through a back propagation algorithm.
7. The control method according to claim 1, wherein the determining the position of the target object in the virtual studio based on the matting picture comprises:
taking the gray value of each pixel in the image matting picture as the pixel weight of each pixel;
calculating coordinates of the gravity center of the target object in the real scene picture according to the pixel weight;
and determining the position of the gravity center of the target object in the virtual studio according to the difference value of the coordinates of the gravity center of the target object in the real scene picture and the coordinates of the center of the real scene picture.
8. The control method of claim 7, wherein said determining the beam position of the virtual light according to the position of the target object in the virtual studio comprises:
adding a preset deviation value to the position of the gravity center of the target object in the virtual studio, and determining the position of the light chasing point of the virtual light;
and determining the light source position of the virtual light, and determining the beam direction of the virtual light according to the light source position and the light following point position of the virtual light.
9. The control method of claim 1, wherein after determining the location of the target object in the virtual studio, the method further comprises:
acquiring a virtual scene picture of the virtual studio;
and synthesizing the image matting picture of the target object to the virtual scene picture based on the position of the target object in the virtual studio to generate a target picture of the virtual studio.
10. An apparatus for controlling virtual lighting in a virtual studio, the apparatus comprising:
the acquisition module is used for acquiring a real scene picture which is shot by the camera and contains a target object;
the extraction module is used for extracting a keying picture of the target object from the real scene picture;
a determining module, configured to determine, based on the matting picture, a position of the target object in a virtual studio;
and the control module is used for determining the beam position of the virtual light according to the position of the target object in the virtual studio so as to control the virtual light to irradiate to the beam position.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1-9.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-9 via execution of the executable instructions.
CN202011411360.2A 2020-12-03 2020-12-03 Control method, device, medium and equipment for virtual light in virtual studio Pending CN112562056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011411360.2A CN112562056A (en) 2020-12-03 2020-12-03 Control method, device, medium and equipment for virtual light in virtual studio

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011411360.2A CN112562056A (en) 2020-12-03 2020-12-03 Control method, device, medium and equipment for virtual light in virtual studio

Publications (1)

Publication Number Publication Date
CN112562056A true CN112562056A (en) 2021-03-26

Family

ID=75048725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011411360.2A Pending CN112562056A (en) 2020-12-03 2020-12-03 Control method, device, medium and equipment for virtual light in virtual studio

Country Status (1)

Country Link
CN (1) CN112562056A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115110A (en) * 2021-05-20 2021-07-13 广州博冠信息科技有限公司 Video synthesis method and device, storage medium and electronic equipment
CN113436343A (en) * 2021-06-21 2021-09-24 广州博冠信息科技有限公司 Picture generation method and device for virtual studio, medium and electronic equipment
CN113706719A (en) * 2021-08-31 2021-11-26 广州博冠信息科技有限公司 Virtual scene generation method and device, storage medium and electronic equipment
CN114399425A (en) * 2021-12-23 2022-04-26 北京字跳网络技术有限公司 Image processing method, video processing method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination