CN112488972A - Method and device for synthesizing green screen image and virtual image in real time


Info

Publication number
CN112488972A
Authority
CN
China
Prior art keywords: image, opacity, green screen, screen image, pixel point
Prior art date: 2020-11-27
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011359211.6A
Other languages
Chinese (zh)
Inventor
朱克锋 (Zhu Kefeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kadoxi Technology Co ltd
Original Assignee
Shenzhen Kadoxi Technology Co ltd
Priority date: 2020-11-27 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2020-11-27
Publication date: 2021-03-12
Application filed by Shenzhen Kadoxi Technology Co ltd filed Critical Shenzhen Kadoxi Technology Co ltd
Priority to CN202011359211.6A
Publication of CN112488972A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a method and a device for synthesizing a green screen image and a virtual image in real time. The method comprises: acquiring green screen image data of the current frame; extracting the luminance component Y and the chrominance components UV of each pixel point in the current frame green screen image data, and determining the corresponding opacity ᾱ through a preset conversion algorithm, wherein the opacity of each corresponding pixel point of the virtual image is 1 - ᾱ, and ᾱ ranges from 0 to 1; synthesizing the green screen image and the virtual image according to the ratio of the opacity ᾱ to the opacity 1 - ᾱ; and sending the image data obtained by synthesizing the green screen image and the virtual image. The synthesis time is thereby greatly shortened, and synthesis precision is ensured within the limited time available.

Description

Method and device for synthesizing green screen image and virtual image in real time
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for synthesizing a green screen image and a virtual image in real time.
Background
With the development of video technology towards digitalization and multimedia, people's demands on the visual experience keep rising, and video object extraction technology, typified by virtual video, is showing increasingly broad application prospects.
The most widely applied technique is image matting, which can be understood in plain terms as separating the color of a certain part of a picture from the background to create a foreground mask, and combining the foreground mask with a virtual image to form an image in which the matted target moves within a virtual environment.
In traditional image matting, each frame of a video stream is usually in YUV format, which must first be converted to RGB before matting can be performed. Converting a YUV data stream to RGB involves a large amount of data and is slow to process, making it difficult to adapt to green screen compositing in real-time video.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a method and apparatus for real-time synthesis of a green screen image and a virtual image, which overcome or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present invention discloses a method for synthesizing a green screen image and a virtual image in real time, comprising:
acquiring current frame green screen image data;
extracting a luminance component Y and chrominance components UV of each pixel point in the current frame green screen image data, and determining the corresponding opacity ᾱ through a preset conversion algorithm, wherein the opacity of each corresponding pixel point of the virtual image is 1 - ᾱ, and ᾱ ranges from 0 to 1;
synthesizing the green screen image and the virtual image according to the ratio of the opacity ᾱ to the opacity 1 - ᾱ;
and sending the image data obtained by synthesizing the green screen image and the virtual image.
Further, before the extracting of the luminance component Y and the chrominance components UV of each pixel point in the current frame green screen image data, the method includes:
constructing an image training data set, performing image processing on all images in the image training data set to obtain the pixel value and the corresponding opacity α of each pixel point in all the images, and taking the pixel value and the corresponding opacity α of each pixel point as training data of the image training data set;
training on the pixel value and the corresponding opacity α of each pixel point through a preset correlation function, determining the correlation parameters λ₁~λ₉ after weighted averaging, and obtaining the correlation function between the pixel values and the corresponding opacity α.
Further, the conversion algorithm is:
[Equation image in the original: the conversion algorithm giving the opacity ᾱ in terms of the components Y, U and V.]
Further, the synthesizing of the green screen image and the virtual image according to the ratio of the opacity ᾱ to the opacity 1 - ᾱ comprises:
matching each pixel point in the green screen image one-to-one with each pixel point in the virtual image, and synthesizing the corresponding pixel points of the green screen image and the virtual image according to the ratio of their opacities by the following formulas:
Y_D = ᾱ·Y_F + (1 - ᾱ)·Y_B
U_D = ᾱ·U_F + (1 - ᾱ)·U_B
V_D = ᾱ·V_F + (1 - ᾱ)·V_B
In the above formulas, Y_D, U_D and V_D are the YUV components of the synthesized target, Y_F, U_F and V_F are the YUV components of the green screen image, and Y_B, U_B and V_B are the YUV components of the virtual image.
Further, the method also includes:
selecting, as spill pixel points, those pixel points of the green screen image whose green value G is greater than the average of the red value R and the blue value B;
taking the average of the red value R and the blue value B of a spill pixel point as its corrected green value G_D, wherein
G_D = (R + B) / 2
there is also provided an apparatus for synthesizing a green screen image and a virtual image in real time, comprising:
the acquisition module is used for acquiring the green screen image data of the current frame;
an extraction module, configured to extract a luminance component Y and a chrominance component UV of each pixel point in the current frame green screen image data, and determine, through a preset conversion algorithm, an opacity corresponding to the luminance component Y and the chrominance component UV
Figure BDA0002803553870000032
Wherein, the opacity of each pixel point of the virtual image corresponding to the green screen image is
Figure BDA0002803553870000033
Figure BDA0002803553870000034
The range of (1) is 0 to 1;
a synthesis module for synthesizing the opacity
Figure BDA0002803553870000035
And the opacity
Figure BDA0002803553870000036
Synthesizing the green screen image and the virtual image according to the ratio of (1);
and the sending module is used for sending the image data obtained by synthesizing the green screen image and the virtual image.
Further, the extraction module further includes:
a construction module, configured to construct an image training data set, perform image processing on all images in the image training data set to obtain the pixel value and the corresponding opacity α of each pixel point in all the images, and take the pixel value and the corresponding opacity α of each pixel point as training data of the image training data set;
a calculation module, configured to train on the pixel value and the corresponding opacity α of each pixel point through a preset correlation function, determine the correlation parameters λ₁~λ₉ after weighted averaging, and obtain the correlation function between the pixel values and the corresponding opacity α.
Further, the synthesis module is further configured to:
match each pixel point in the green screen image one-to-one with each pixel point in the virtual image, and synthesize the corresponding pixel points of the green screen image and the virtual image according to the ratio of their opacities by the following formulas:
Y_D = ᾱ·Y_F + (1 - ᾱ)·Y_B
U_D = ᾱ·U_F + (1 - ᾱ)·U_B
V_D = ᾱ·V_F + (1 - ᾱ)·V_B
In the above formulas, Y_D, U_D and V_D are the YUV components of the synthesized target, Y_F, U_F and V_F are the YUV components of the green screen image, and Y_B, U_B and V_B are the YUV components of the virtual image.
The embodiment of the invention has the following advantages: the calculation steps are preprocessed through neural network training, so that during real-time synthesis the opacity ᾱ of a pixel point can be obtained with only one relatively simple conversion step, greatly shortening the synthesis time and ensuring synthesis precision within the limited time available.
Drawings
FIG. 1 is a flowchart illustrating steps of an embodiment of a method for real-time synthesis of a green screen image and a virtual image according to the present invention;
fig. 2 is a block diagram of an embodiment of an apparatus for real-time synthesis of a green screen image and a virtual image according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The embodiment of the application provides a method for synthesizing a green screen image and a virtual image in real time, which synthesizes video stream images shot in a green screen environment with a virtual image in real time; this can be understood as replacing the green screen background in real-time video stream data with the virtual image.
As is known, in a smooth video stream each frame of image must be read within about 33 ms (at 30 frames per second, 1000 ms / 30 ≈ 33 ms), which means the real-time synthesis of the green screen image and the virtual image must be completed within 33 ms. The synthesis must be both fast and accurate within this limited time, which requires a fast and precise calculation method; otherwise the synthesis quality suffers, or the frame rate must be lowered to gain synthesis time, causing playback stutter.
By adopting the method provided by the embodiment of the application, the synthesis time can be greatly shortened: the time from the capture of a real-time image to its synthesis with the virtual image is kept within 10 ms while precision is preserved, so the synthesized image can be read and displayed within the effective frame rate.
Hereinafter, the above synthesis method is explained in detail.
As shown in fig. 1, the method for synthesizing a green screen image and a virtual image in real time provided by the embodiment of the present application comprises the following steps (a minimal frame-loop sketch tying the four steps together is given after this list):
S100, acquiring current frame green screen image data;
S200, extracting a luminance component Y and chrominance components UV of each pixel point in the current frame green screen image data, and determining the corresponding opacity ᾱ through a preset conversion algorithm, wherein the opacity of each corresponding pixel point of the virtual image is 1 - ᾱ, and ᾱ ranges from 0 to 1;
S300, synthesizing the green screen image and the virtual image according to the ratio of the opacity ᾱ to the opacity 1 - ᾱ;
and S400, sending the image data obtained by synthesizing the green screen image and the virtual image.
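As an illustration only, the following Python sketch shows one way steps S100 to S400 could be arranged in a per-frame loop. The helper names capture_green_screen_frame, load_virtual_background, opacity_from_yuv, composite_yuv and send_frame are hypothetical, introduced here for illustration; opacity_from_yuv and composite_yuv are sketched later in this description.

```python
import time

def run_realtime_composite(capture_green_screen_frame, load_virtual_background,
                           opacity_from_yuv, composite_yuv, send_frame, lam):
    """Hypothetical S100-S400 loop; each iteration must stay well under the
    33 ms per-frame budget (the patent targets under 10 ms for the synthesis)."""
    bg_yuv = load_virtual_background()               # virtual image, (H, W, 3) YUV
    while True:
        start = time.perf_counter()
        fg_yuv = capture_green_screen_frame()        # S100: current green screen frame
        y, u, v = (fg_yuv[..., i] for i in range(3))
        alpha_bar = opacity_from_yuv(y, u, v, lam)   # S200: one conversion step
        out = composite_yuv(fg_yuv, bg_yuv, alpha_bar)  # S300: ratio-based blend
        send_frame(out)                              # S400: ship the synthesized frame
        elapsed_ms = (time.perf_counter() - start) * 1000.0  # compare against budget
```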
Synthesizing the two images can in fact be understood as synthesizing each pair of corresponding pixel points, where the more opaque pixel point contributes the larger share of chromaticity after synthesis. In the above technical scheme, when the opacity ᾱ of a pixel point of the green screen image is greater than the opacity 1 - ᾱ of the corresponding pixel point of the virtual image, the chroma share of the green screen pixel point after synthesis is larger than that of the virtual image pixel point, that is, the result is displayed mainly in the color of the green screen image; when ᾱ approaches 1, the color of the corresponding pixel point of the virtual image can hardly be seen at all.
In the embodiment of the application, the pixel format of each green screen frame in the video stream is YUV, and during real-time synthesis the opacity ᾱ of each pixel point of the current green screen frame must be determined. However, ᾱ is difficult to obtain directly from the luminance component Y and the chrominance components UV. Therefore, a neural network is iteratively trained by a gradient algorithm on an image data set storing a large amount of image data, which yields the opacity α corresponding to each pixel point for different R (red), G (green) and B (blue) values; the opacity ᾱ corresponding to the luminance component Y and the chrominance components UV is then obtained through a conversion algorithm.
During neural network training, the opacity α of pixels with the largest G (green) value defaults to the minimum, i.e. α = 0; accordingly, the value ᾱ obtained by converting the RGB-domain opacity α into the YUV-domain opacity is also 0. In other words, pixel points displayed as green in the green screen image are treated as transparent, and after synthesis the colors of the virtual image's pixel points are displayed in the regions corresponding to the green pixel points.
In order to achieve real-time synthesis and prevent a large amount of computation from affecting the frame rate of the real-time video stream, and because the number of possible color combinations is finite, the embodiment of the application preprocesses the calculation steps through neural network training, so that during real-time synthesis only one relatively simple conversion step is needed to obtain the opacity ᾱ of a pixel point, greatly shortening the synthesis time and ensuring synthesis precision within the limited time available.
The preprocessing step is explained in detail below.
Before the extracting of the luminance component Y and the chrominance components UV of each pixel point in the current frame green screen image data, the method includes:
constructing an image training data set, performing image processing on all images in the image training data set to obtain the pixel value and the corresponding opacity α of each pixel point in all the images, and taking the pixel value and the corresponding opacity α of each pixel point as training data of the image training data set;
training on the pixel value and the corresponding opacity α of each pixel point through a preset correlation function, determining the correlation parameters λ₁~λ₉ after weighted averaging, and obtaining the correlation function between the pixel values and the corresponding opacity α.
The correlation function is as follows:
α = λ₁R + λ₂G + λ₃B + (λ₄R + λ₅G + λ₆B)(λ₇R + λ₈G + λ₉B)
In the above formula, R, G, B and α are all known quantities in the image training data set; the parameters λ₁~λ₉ that minimize the error between R, G, B and α are found by a continuous gradient descent algorithm, thereby obtaining the correlation function between R, G, B and the opacity α.
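As an illustration of this preprocessing step, here is a minimal sketch of fitting λ₁~λ₉ by plain gradient descent on squared error; the use of NumPy, the learning rate, and the iteration count are assumptions introduced here, not taken from the patent.

```python
import numpy as np

def fit_lambdas(rgb, alpha, lr=1e-3, iters=5000):
    """Fit lambda_1..lambda_9 of
        alpha = l1*R + l2*G + l3*B + (l4*R + l5*G + l6*B)(l7*R + l8*G + l9*B)
    to training pairs (rgb, alpha) by gradient descent on mean squared error."""
    x = rgb.astype(np.float64) / 255.0        # normalized (N, 3) pixel values
    lam = np.zeros(9)
    for _ in range(iters):
        a = x @ lam[3:6]                      # first bilinear factor
        b = x @ lam[6:9]                      # second bilinear factor
        err = (x @ lam[0:3] + a * b) - alpha  # per-pixel residual
        grad = np.concatenate([
            x.T @ err,                        # gradient w.r.t. l1..l3
            x.T @ (err * b),                  # gradient w.r.t. l4..l6
            x.T @ (err * a),                  # gradient w.r.t. l7..l9
        ]) * (2.0 / len(alpha))
        lam -= lr * grad
    return lam

# usage: rgb is an (N, 3) array of R, G, B values and alpha an (N,) array in [0, 1]
```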
the conversion algorithm is as follows:
Figure BDA0002803553870000064
by the above conversion matrix, the opacity of each pixel point in the green screen image can be obtained
Figure BDA0002803553870000065
And the formula of the UV correlation function of the luminance component Y and the chrominance component is as follows:
Figure BDA0002803553870000066
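Since both equations above are rendered only as images in the original, the following sketch substitutes a standard full-range BT.601 YUV-to-RGB conversion and composes it with the correlation function fitted earlier; the matrix coefficients are an assumption and may differ from the patent's actual conversion algorithm.

```python
import numpy as np

# Assumed full-range BT.601 YUV -> RGB matrix (the patent's own conversion
# matrix is shown only as an image and may use different coefficients).
YUV2RGB = np.array([
    [1.0,  0.0,     1.4075],
    [1.0, -0.3455, -0.7169],
    [1.0,  1.779,   0.0],
])

def opacity_from_yuv(y, u, v, lam):
    """One conversion step per pixel: Y, U, V -> R, G, B -> opacity alpha-bar."""
    yuv = np.stack([y, u - 128.0, v - 128.0], axis=-1).astype(np.float64)
    rgb = (yuv @ YUV2RGB.T) / 255.0           # normalized R, G, B per pixel
    lin = rgb @ lam[0:3]                      # linear term of the fitted function
    bilinear = (rgb @ lam[3:6]) * (rgb @ lam[6:9])
    return np.clip(lin + bilinear, 0.0, 1.0)  # alpha-bar constrained to [0, 1]
```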
thus, the opacity of each pixel point in the green screen image is obtained
Figure BDA0002803553870000067
Then, the opacity
Figure BDA0002803553870000068
And the opacity
Figure BDA0002803553870000069
The synthesizing of the green screen image and the virtual image according to the ratio comprises:
corresponding each pixel point in the green screen image to each pixel point in the virtual image one by one, and calculating the opacity of the green screen image and the pixel points corresponding to the virtual image by the following formula
Figure BDA0002803553870000071
The ratio of (A) to (B);
Figure BDA0002803553870000072
Figure BDA0002803553870000073
Figure BDA0002803553870000074
in the above formula, YD、UD、VDYUV component, Y, of the synthesized targetF、UF、VFIs YUV component in green screen image, YB、UB、VBAre YUV components in the virtual image.
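A minimal per-frame compositing sketch under the formulas above; the (H, W, 3) NumPy layout of the YUV components is an assumption.

```python
import numpy as np

def composite_yuv(fg_yuv, bg_yuv, alpha_bar):
    """Blend the green screen frame F over the virtual image B per pixel:
    D = alpha_bar * F + (1 - alpha_bar) * B, applied to Y, U and V alike."""
    a = alpha_bar[..., None]                  # broadcast over the 3 channels
    return a * fg_yuv + (1.0 - a) * bg_yuv

# usage: fg_yuv and bg_yuv are (H, W, 3) float arrays of Y, U, V components;
# alpha_bar is the (H, W) array returned by opacity_from_yuv().
```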
In another embodiment, it should be noted that the foreground object in the image to be matted may refract or reflect the color of the background, i.e. color spill, so the spill pixel points need to be corrected in order to improve the accuracy of the synthesis.
Optionally:
selecting, as spill pixel points, those pixel points of the green screen image whose green value G is greater than the average of the red value R and the blue value B;
taking the average of the red value R and the blue value B of a spill pixel point as its corrected green value G_D, wherein
G_D = (R + B) / 2
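A sketch of this spill-suppression rule on an RGB array; the NumPy formulation is an assumption.

```python
import numpy as np

def suppress_green_spill(rgb):
    """Where G exceeds the average of R and B, clamp G down to that average."""
    out = rgb.astype(np.float64)              # work in float to avoid truncation
    r, g, b = out[..., 0], out[..., 1], out[..., 2]
    avg_rb = (r + b) / 2.0                    # average of red and blue values
    out[..., 1] = np.where(g > avg_rb, avg_rb, g)  # corrected green value G_D
    return out
```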
as shown in fig. 2, an embodiment of the present application further provides an apparatus for synthesizing a green curtain image and a virtual image in real time, including:
an obtaining module 100, configured to obtain current frame green screen image data;
an extracting module 200, configured to extract a luminance component Y and a chrominance component UV of each pixel point in the current frame green screen image data, and determine, through a preset conversion algorithm, an opacity corresponding to the luminance component Y and the chrominance component UV
Figure BDA0002803553870000076
Wherein, the opacity of each pixel point of the virtual image corresponding to the green screen image is
Figure BDA0002803553870000077
The range of (1) is 0 to 1;
a synthesis module 300 for synthesizing the opacity
Figure BDA0002803553870000078
And the opacity
Figure BDA0002803553870000079
Synthesizing the green screen image and the virtual image according to the ratio of (1);
and a sending module 400 for sending the image data obtained by synthesizing the green screen image and the virtual image.
Further, the extraction module 200 further includes:
a construction module, configured to construct an image training data set, perform image processing on all images in the image training data set to obtain the pixel value and the corresponding opacity α of each pixel point in all the images, and take the pixel value and the corresponding opacity α of each pixel point as training data of the image training data set;
a calculation module, configured to train on the pixel value and the corresponding opacity α of each pixel point through a preset correlation function, determine the correlation parameters λ₁~λ₉ after weighted averaging, and obtain the correlation function between the pixel values and the corresponding opacity α.
Further, the synthesis module 300 is further configured to:
match each pixel point in the green screen image one-to-one with each pixel point in the virtual image, and synthesize the corresponding pixel points of the green screen image and the virtual image according to the ratio of their opacities by the following formulas:
Y_D = ᾱ·Y_F + (1 - ᾱ)·Y_B
U_D = ᾱ·U_F + (1 - ᾱ)·U_B
V_D = ᾱ·V_F + (1 - ᾱ)·V_B
In the above formulas, Y_D, U_D and V_D are the YUV components of the synthesized target, Y_F, U_F and V_F are the YUV components of the green screen image, and Y_B, U_B and V_B are the YUV components of the virtual image.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and the device for synthesizing a green screen image and a virtual image in real time provided by the invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method for synthesizing a green screen image and a virtual image in real time, characterized by comprising the following steps:
acquiring current frame green screen image data;
extracting a luminance component Y and chrominance components UV of each pixel point in the current frame green screen image data, and determining the corresponding opacity ᾱ through a preset conversion algorithm, wherein the opacity of each corresponding pixel point of the virtual image is 1 - ᾱ, and ᾱ ranges from 0 to 1;
synthesizing the green screen image and the virtual image according to the ratio of the opacity ᾱ to the opacity 1 - ᾱ;
and sending the image data obtained by synthesizing the green screen image and the virtual image.
2. The method of claim 1, wherein before the extracting of the luminance component Y and the chrominance components UV of each pixel point in the current frame green screen image data, the method comprises:
constructing an image training data set, performing image processing on all images in the image training data set to obtain the pixel value and the corresponding opacity α of each pixel point in all the images, and taking the pixel value and the corresponding opacity α of each pixel point as training data of the image training data set;
training on the pixel value and the corresponding opacity α of each pixel point through a preset correlation function, determining the correlation parameters λ₁~λ₉ after weighted averaging, and obtaining the correlation function between the pixel values and the corresponding opacity α.
3. The method of claim 1, wherein the conversion algorithm is:
[Equation image in the original: the conversion algorithm giving the opacity ᾱ in terms of the components Y, U and V.]
4. The method of claim 1, wherein the synthesizing of the green screen image and the virtual image according to the ratio of the opacity ᾱ to the opacity 1 - ᾱ comprises:
matching each pixel point in the green screen image one-to-one with each pixel point in the virtual image, and synthesizing the corresponding pixel points of the green screen image and the virtual image according to the ratio of their opacities by the following formulas:
Y_D = ᾱ·Y_F + (1 - ᾱ)·Y_B
U_D = ᾱ·U_F + (1 - ᾱ)·U_B
V_D = ᾱ·V_F + (1 - ᾱ)·V_B
in the above formulas, Y_D, U_D and V_D are the YUV components of the synthesized target, Y_F, U_F and V_F are the YUV components of the green screen image, and Y_B, U_B and V_B are the YUV components of the virtual image.
5. The method of claim 1, further comprising:
selecting, as spill pixel points, those pixel points of the green screen image whose green value G is greater than the average of the red value R and the blue value B;
taking the average of the red value R and the blue value B of a spill pixel point as its corrected green value G_D, wherein
G_D = (R + B) / 2
6. An apparatus for synthesizing a green screen image and a virtual image in real time, comprising:
an acquisition module, configured to acquire current frame green screen image data;
an extraction module, configured to extract a luminance component Y and chrominance components UV of each pixel point in the current frame green screen image data, and to determine the corresponding opacity ᾱ through a preset conversion algorithm, wherein the opacity of each corresponding pixel point of the virtual image is 1 - ᾱ, and ᾱ ranges from 0 to 1;
a synthesis module, configured to synthesize the green screen image and the virtual image according to the ratio of the opacity ᾱ to the opacity 1 - ᾱ;
and a sending module, configured to send the image data obtained by synthesizing the green screen image and the virtual image.
7. The apparatus of claim 6, wherein the extraction module further comprises:
a construction module, configured to construct an image training data set, perform image processing on all images in the image training data set to obtain the pixel value and the corresponding opacity α of each pixel point in all the images, and take the pixel value and the corresponding opacity α of each pixel point as training data of the image training data set;
a calculation module, configured to train on the pixel value and the corresponding opacity α of each pixel point through a preset correlation function, determine the correlation parameters λ₁~λ₉ after weighted averaging, and obtain the correlation function between the pixel values and the corresponding opacity α.
8. The apparatus of claim 6, wherein the synthesis module is further configured to:
match each pixel point in the green screen image one-to-one with each pixel point in the virtual image, and synthesize the corresponding pixel points of the green screen image and the virtual image according to the ratio of their opacities by the following formulas:
Y_D = ᾱ·Y_F + (1 - ᾱ)·Y_B
U_D = ᾱ·U_F + (1 - ᾱ)·U_B
V_D = ᾱ·V_F + (1 - ᾱ)·V_B
in the above formulas, Y_D, U_D and V_D are the YUV components of the synthesized target, Y_F, U_F and V_F are the YUV components of the green screen image, and Y_B, U_B and V_B are the YUV components of the virtual image.
9. An electronic device, characterized in that it comprises a processor, a memory, and a computer program stored in the memory and executable on the processor, the computer program, when executed by the processor, implementing the method for synthesizing a green screen image and a virtual image in real time according to any one of claims 1 to 5.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method for synthesizing a green screen image and a virtual image in real time according to any one of claims 1 to 5.
CN202011359211.6A (priority and filing date 2020-11-27): Method and device for synthesizing green screen image and virtual image in real time. Status: Pending. Publication: CN112488972A.

Priority Applications (1)

Application Number: CN202011359211.6A (priority and filing date 2020-11-27)
Title: Method and device for synthesizing green screen image and virtual image in real time


Publications (1)

CN112488972A, published 2021-03-12

Family

ID: 74936474

Family Applications (1)

CN202011359211.6A (filed 2020-11-27, pending): Method and device for synthesizing green screen image and virtual image in real time

Country Status (1)

CN: CN112488972A


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106341613A (en) * 2015-07-06 2017-01-18 瑞昱半导体股份有限公司 Wide dynamic range imaging method
CN105678724A (en) * 2015-12-29 2016-06-15 北京奇艺世纪科技有限公司 Background replacing method and apparatus for images
CN107808373A (en) * 2017-11-15 2018-03-16 北京奇虎科技有限公司 Sample image synthetic method, device and computing device based on posture
US20200364839A1 (en) * 2019-05-17 2020-11-19 Beijing Dajia Internet Information Technology Co., Ltd. Image processing method and apparatus, electronic device and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023216526A1 (en) * 2022-05-10 2023-11-16 北京字跳网络技术有限公司 Calibration information determination method and apparatus, and electronic device

Similar Documents

Publication Publication Date Title
CN109862389B (en) Video processing method, device, server and storage medium
TWI470581B (en) Method for tone repropduction for displays
US8421819B2 (en) Pillarboxing correction
US10026160B2 (en) Systems and techniques for automatic image haze removal across multiple video frames
Phillips et al. Camera image quality benchmarking
KR20180132946A (en) Multi-view scene segmentation and propagation
CN103440674B (en) A kind of rapid generation of digital picture wax crayon specially good effect
CN113518185B (en) Video conversion processing method and device, computer readable medium and electronic equipment
Kuo et al. Content-adaptive inverse tone mapping
CN101986689A (en) Method and apparatus for image processing
CN116012232A (en) Image processing method and device, storage medium and electronic equipment
Liba et al. Sky optimization: Semantically aware image processing of skies in low-light photography
CN108564057A (en) Method for establishing human similarity system based on opencv
CN112488972A (en) Method and device for synthesizing green screen image and virtual image in real time
CN110580696A (en) Multi-exposure image fast fusion method for detail preservation
Rizzi et al. Perceptual color film restoration
CN109859303B (en) Image rendering method and device, terminal equipment and readable storage medium
CN114140348A (en) Contrast enhancement method, device and equipment
CN112884659A (en) Image contrast enhancement method and device and display equipment
JP5050141B2 (en) Color image exposure evaluation method
Lakshmi et al. Analysis of tone mapping operators on high dynamic range images
CN113706665B (en) Image processing method and device
CN109214311B (en) Detection method and device
CN113542843B (en) Target bullet screen display method and device, electronic equipment and storage medium
Zabaleta et al. Photorealistic style transfer for cinema shoots

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination