CN115063325A - Video signal processing method and device, computer equipment and storage medium - Google Patents

Video signal processing method and device, computer equipment and storage medium

Info

Publication number
CN115063325A
CN115063325A (application CN202210988303.3A)
Authority
CN
China
Prior art keywords
yuv
conversion
signal
rgb
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210988303.3A
Other languages
Chinese (zh)
Inventor
姜文波
徐进
葛涛
薛知行
王东
侯山峰
郭晓霞
宋翠翠
李江
裴亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING GEFEI TECHNOLOGY CO LTD
China Media Group
Original Assignee
BEIJING GEFEI TECHNOLOGY CO LTD
China Media Group
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING GEFEI TECHNOLOGY CO LTD, China Media Group filed Critical BEIJING GEFEI TECHNOLOGY CO LTD
Priority to CN202210988303.3A priority Critical patent/CN115063325A/en
Publication of CN115063325A publication Critical patent/CN115063325A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20208 High dynamic range [HDR] image processing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Color Image Communication Systems (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The embodiments of the present application provide a video signal processing method and apparatus, a computer device and a storage medium, relating to the field of computers. The method preprocesses a user-supplied second conversion display lookup table in RGB mode into a first conversion display lookup table in YUV mode. When the device performs mutual conversion of ultra-high-definition and high-definition video signals transmitted in real time, it extracts the initial YUV signal from the video signal, performs chrominance space conversion on the initial YUV signal based on the preprocessed first conversion display lookup table of the YUV mode to obtain a converted target YUV signal, and finally performs video signal conversion based on the target YUV signal. This solves the problem of resource waste in the video signal conversion process and achieves the technical effect of improving video signal conversion efficiency.

Description

Video signal processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video signal processing method and apparatus, a computer device, and a storage medium.
Background
With the rapid development of digitization, program production is in a stage where Standard Dynamic Range (SDR) and High Dynamic Range (HDR) run in parallel: while producing 4K ultra-high-definition content with high dynamic range and wide color gamut, high-definition content with standard dynamic range and standard color gamut must still be supported. A current method for real-time video conversion extracts the image signal and applies the target conversion to it, namely: the YUV (a color coding method) mode is converted into the RGB (red, green and blue) mode by extracting the YUV signal in the video and converting the input YUV image into a target RGB-mode image; after processing, the RGB mode is converted back into the YUV mode so that the target image can be transmitted in the target output interface format.
However, in the process of converting ultra-high-definition/high-definition real-time images, the set of sampling points is extremely large, and each sampling point must be converted into corresponding RGB values through floating-point operations according to the YUV-RGB conversion formula; after the up- or down-conversion is finished, the RGB values are converted back into corresponding YUV values point by point. The large number of operations generated in these two conversion processes occupies processor and memory resources.
Disclosure of Invention
The embodiment of the application provides a video signal processing method, a video signal processing device, computer equipment and a storage medium.
In a first aspect of embodiments of the present application, there is provided a video signal processing method, including:
acquiring a video signal to be processed, and extracting an initial YUV signal in the video signal to be processed;
performing chrominance space conversion on the initial YUV signal based on a first conversion display lookup table of a pre-configured YUV mode to obtain a converted target YUV signal;
and processing according to an output interface format based on the target YUV signal.
In an optional embodiment of the present application, before performing chrominance space conversion on the initial YUV signal based on the first conversion display lookup table of the pre-configured YUV mode to obtain the converted target YUV signal, the method further includes:
acquiring a second conversion display lookup table of the RGB mode sent by the user terminal;
converting the collected YUV signals into initial RGB signals;
carrying out mode conversion on the initial RGB signals to obtain YUV values of the initial RGB signals in a YUV mode;
and constructing the first conversion display lookup table of the YUV mode based on the corresponding relation between each YUV value and each YUV signal.
In an optional embodiment of the present application, in the method, converting each acquired YUV signal into an initial RGB signal includes:
determining the brightness component and the chrominance component of each YUV signal;
and for each YUV signal, determining each initial RGB value of the initial RGB signal corresponding to the YUV signal in an RGB mode according to the brightness component, the hue component, the saturation component and each RGB conversion coefficient.
In an optional embodiment of the present application, in the method, performing mode conversion on the initial RGB signals to obtain YUV values of the initial RGB signals in a YUV mode includes:
performing mode conversion on each initial RGB value in the initial RGB signals by adopting a tetrahedral interpolation method to obtain a target RGB value of each initial RGB signal in a YUV mode;
and converting each target RGB value into a YUV value in a YUV mode.
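The tetrahedral interpolation mentioned above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it splits each LUT cell into six tetrahedra according to the ordering of the fractional offsets and blends the four enclosing nodes. The identity table used for the demonstration is purely illustrative.

```python
def tetra_interp(lut, r, g, b):
    """Tetrahedral interpolation in an N x N x N grid of 3-tuples.
    r, g, b are normalised inputs in [0.0, 1.0]."""
    n = len(lut) - 1
    x, y, z = r * n, g * n, b * n
    i = min(int(x), n - 1); j = min(int(y), n - 1); k = min(int(z), n - 1)
    dx, dy, dz = x - i, y - j, z - k

    def c(a, b2, c2):
        # Corner of the enclosing cell at offset (a, b2, c2).
        return lut[i + a][j + b2][k + c2]

    def blend(base, *steps):
        # base corner plus weighted edge differences: each step is (w, p, q)
        # contributing w * (p - q) per component.
        out = list(base)
        for w, p, q in steps:
            for t in range(3):
                out[t] += w * (p[t] - q[t])
        return out

    c000, c111 = c(0, 0, 0), c(1, 1, 1)
    # Pick one of six tetrahedra according to the ordering of dx, dy, dz.
    if dx > dy:
        if dy > dz:
            return blend(c000, (dx, c(1,0,0), c000), (dy, c(1,1,0), c(1,0,0)), (dz, c111, c(1,1,0)))
        if dx > dz:
            return blend(c000, (dx, c(1,0,0), c000), (dz, c(1,0,1), c(1,0,0)), (dy, c111, c(1,0,1)))
        return blend(c000, (dz, c(0,0,1), c000), (dx, c(1,0,1), c(0,0,1)), (dy, c111, c(1,0,1)))
    if dz > dy:
        return blend(c000, (dz, c(0,0,1), c000), (dy, c(0,1,1), c(0,0,1)), (dx, c111, c(0,1,1)))
    if dz > dx:
        return blend(c000, (dy, c(0,1,0), c000), (dz, c(0,1,1), c(0,1,0)), (dx, c111, c(0,1,1)))
    return blend(c000, (dy, c(0,1,0), c000), (dx, c(1,1,0), c(0,1,0)), (dz, c111, c(1,1,0)))

# Identity LUT: each grid node maps to its own coordinates, so interpolating
# any point must reproduce the point unchanged.
N = 33
identity = [[[(i/(N-1), j/(N-1), k/(N-1)) for k in range(N)]
             for j in range(N)] for i in range(N)]
print(tetra_interp(identity, 0.3, 0.7, 0.5))
```

With the identity table the interpolated value equals the input, which is a quick sanity check that the six-tetrahedron decomposition is consistent.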
In an optional embodiment of the present application, the method further comprises:
acquiring a new second conversion display lookup table of the RGB mode sent by the user terminal;
and updating the first conversion display lookup table based on the new second conversion display lookup table and each initial RGB signal to obtain a new first conversion display lookup table.
In an optional embodiment of the present application, the updating the first transformed display lookup table based on the new second transformed display lookup table and each of the initial RGB signals in the method to obtain a new first transformed display lookup table includes:
looking up a new RGB value corresponding to each initial RGB signal in the new second conversion display lookup table;
carrying out mode conversion on the new RGB values to obtain new YUV values of the new RGB values in a YUV mode;
and constructing the new first conversion display lookup table of the YUV mode based on the corresponding relation between each new YUV value and each YUV signal.
In an optional embodiment of the present application, in the method, video signal conversion is performed based on the target YUV signal, and includes:
and performing at least one of resolution up-down conversion, brightness up-down conversion and color gamut up-down conversion on the target YUV signal.
In a second aspect of the embodiments of the present application, there is provided a video processing apparatus, including: the device comprises a first acquisition module, a first conversion module and a second conversion module;
the first acquisition module is used for acquiring a video signal to be processed and extracting an initial YUV signal in the video signal to be processed;
the first conversion module is used for carrying out chrominance space conversion on the initial YUV signal based on a first conversion display lookup table of a pre-configured YUV mode to obtain a converted target YUV signal;
and the second conversion module is used for performing video signal conversion based on the target YUV signal.
In a third aspect of embodiments of the present application, there is provided a computer device, including a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the method according to any one of the above.
In a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of the above.
The embodiments of the present application provide a video signal processing method: the initial YUV signal is extracted from the video signal, chrominance space conversion is performed on it based on a pre-configured first conversion display lookup table of the YUV mode to obtain a converted target YUV signal, and video signal conversion is finally performed based on the target YUV signal. The video is thus converted directly in the YUV mode, saving two YUV/RGB mutual conversion operations for every image sampling point, solving the problem of resource waste in the video signal conversion process and achieving the technical effect of improving video signal conversion efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic view of an application scenario of a video signal processing method according to an embodiment of the present application;
fig. 2 is a schematic view of an application scenario of a video signal processing method according to an embodiment of the present application;
fig. 3 is a first flowchart of a video signal processing method according to an embodiment of the present application;
fig. 4 is a second flowchart of a video signal processing method according to an embodiment of the present application;
fig. 5 is a third flowchart of a video signal processing method according to an embodiment of the present application;
fig. 6 is a fourth flowchart of a video signal processing method according to an embodiment of the present application;
fig. 7 is a fifth flowchart of a video signal processing method according to an embodiment of the present application;
fig. 8 is a flowchart of a video signal processing method according to an embodiment of the present application;
fig. 9 is a first schematic structural diagram of a video signal processing apparatus according to an embodiment of the present application;
fig. 10 is a second schematic structural diagram of a video signal processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In the process of implementing the present application, the inventors found that, in the current process of performing image conversion in a video, resource utilization is poor.
In view of the foregoing problems, an embodiment of the present application provides a video signal processing method.
In the current technical scheme, a video image is first converted from the YUV mode into the RGB mode during processing, and after processing is finished it is converted from the RGB mode back into the YUV mode; there are thus two reciprocal conversion processes.
For a high-definition signal, each frame of image contains 1920 x 1080 sampling points and a video carries 25 frames per second (taking the television PAL system as an example), i.e., 1920 x 1080 x 25 sampling points per second; a UHD signal carries 3840 x 2160 x 50 sampling points per second. The set of sampling points in the up-down conversion of a real-time image is therefore extremely large, and each sampling point must be converted into corresponding RGB values through floating-point operations according to the YUV-RGB conversion formula; after the up- or down-conversion is finished, the RGB values are converted back into corresponding YUV values point by point. The large number of operations generated in these two conversion processes occupies processor and memory resources.
As shown in fig. 1, in the prior-art image conversion process, the user's 3D LUT (Look-Up Table) is obtained; after the YUV signals in the video are extracted, the input YUV image is converted into an RGB image according to the images in the video; finally, up-down conversion processing is performed on the converted input RGB image based on the user's 3D LUT.
Here, image up-down conversion processing means: based on the principle that human eyes are more sensitive to the luminance signal of an image, when the image is transmitted in a YUV format each chrominance component needs only half the bandwidth of the luminance component, which saves data transmission bandwidth. For example, a video signal is extracted and converted into a parallel signal after serial-to-parallel conversion, and the chrominance data is then resampled from YUV422 back to YUV444 so that every luminance sample has corresponding chrominance data.
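The YUV422-to-YUV444 resampling described above can be sketched for a single scan line. The sample values and the simplified co-siting below are assumptions for illustration only:

```python
def upsample_422_to_444(luma, cb, cr):
    """Expand 4:2:2 chroma (one Cb/Cr pair per two luma samples) to 4:4:4
    by linear interpolation, so every luma sample gets its own chroma.
    A single-scan-line sketch with simplified sample-site alignment."""
    assert len(cb) == len(cr) == (len(luma) + 1) // 2

    def interp(c):
        out = []
        for i in range(len(luma)):
            pos = i / 2.0                 # position on the chroma grid
            lo = int(pos)
            hi = min(lo + 1, len(c) - 1)
            frac = pos - lo
            out.append((1 - frac) * c[lo] + frac * c[hi])
        return out

    return luma, interp(cb), interp(cr)

y = [16, 32, 48, 64]          # hypothetical luma samples
cb = [100, 140]               # hypothetical half-rate chroma samples
cr = [200, 120]
_, cb444, cr444 = upsample_422_to_444(y, cb, cr)
print(cb444)                  # one chroma value per luma sample
```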
Here, the LUT is used to realize color space conversion: the original color space is simulated and reduced to a gray-level color space in LOG mode (Logarithmic, a video recording form that applies a logarithmic function to the exposure curve), and the color is then replaced by color lookup and mapping. When a signal is input, its value is used once as an address for table lookup, the content stored at that address is found and output, and color space conversion is thus realized. A 3D LUT in RGB format is composed of three 1D LUTs for R, G and B, and the input three-channel RGB color values are mapped according to the three lookup tables of the 3D LUT to obtain the converted color. The LOG mode nonlinearly adjusts the tone of the original data so that more light and shade information is kept within limited data, achieving a larger dynamic range.
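The LOG-mode tone adjustment can be illustrated with a small 1D lookup table. The curve shape and the parameter `a` are hypothetical, chosen only to show how a logarithmic mapping keeps more code levels in the shadows:

```python
import math

def build_log_lut(size=1024, a=10.0):
    """Hypothetical logarithmic shaping curve: maps linear code values in
    [0, 1] to log-compressed values. 'a' controls how strongly the
    shadows are lifted; endpoints stay pinned at 0 and 1."""
    return [math.log1p(a * i / (size - 1)) / math.log1p(a) for i in range(size)]

lut = build_log_lut()
print(lut[0], lut[-1])   # endpoints: 0 maps to 0, 1 maps to 1
```

Because the curve lies above the identity for mid and low inputs, more of the limited output range is spent on dark tonal values, which is the effect the text describes.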
To address the excessive resource consumption that this image processing pipeline causes for ultra-high-definition/high-definition video, the embodiments of the present application provide a video signal processing method: the initial YUV signal in the video signal is extracted; chrominance space conversion is performed on the initial YUV signal based on a pre-configured first conversion display lookup table of the YUV mode to obtain a converted target YUV signal; and the YUV signal is finally processed according to the target output interface format. The video is thus converted directly in the YUV mode, saving two YUV/RGB mutual conversion operations per image sampling point, solving the waste of system resources in the video signal conversion process and achieving the technical effect of improving video signal conversion efficiency.
The solution in the embodiment of the present application may be implemented by using various computer languages, for example, object-oriented programming language Java and transliteration scripting language JavaScript, etc.
In order to make the technical solutions and advantages in the embodiments of the present application more clearly understood, the following description of the exemplary embodiments of the present application with reference to the accompanying drawings is made in further detail, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and are not exhaustive of all the embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The following briefly describes an application environment of the video signal method provided by the embodiment of the present application:
referring to fig. 2, a video signal processing method provided in the embodiment of the present application is applied to a terminal device 20, where the terminal device 20 at least includes an image processor 201 for performing conversion processing on a signal in a high definition video. It should be explained that the image processor 201 includes a network control module and a signal processing module: the network control module is used for acquiring YUV signals of video data and a second conversion display lookup table (3D LUT in RGB mode), converting the YUV signals into initial RGB signals according to the acquired YUV signals, performing mode conversion on the initial RGB signals, and converting the second conversion display lookup table to generate a first conversion display lookup table (3D LUT in YUV mode); and the signal processing module is used for acquiring the YUV signals extracted from the first conversion display lookup table and the video and finishing the up-down conversion processing of the image.
Referring to fig. 3, the following embodiment specifically describes an example of applying the method provided by the embodiment of the present application to a conversion process of a signal in a video by using the terminal device as an execution subject. The video signal processing method provided by the embodiment of the application comprises the following steps 301 to 303:
step 301, obtaining a video signal to be processed, and extracting an initial YUV signal in the video signal to be processed.
The video signal to be processed may be a signal in an ultra-high-definition or high-definition video.
YUV denotes a color encoding (color space) with three components: "Y" represents luminance (Luminance), that is, the gray value; "U" and "V" represent chrominance (Chroma), which describe the color and saturation of the image and specify the color of a pixel.
The video to be processed may comprise a plurality of frames of images, and the images may be images in YUV mode, or images in other modes, such as images in RGB format. If the image in the video to be processed is in an RGB mode, values of R, G, B components in the RGB mode image need to be extracted, then RGB is converted into YUV, and then an initial YUV signal in the video signal to be processed is extracted; if the image in the video to be processed is in a YUV mode, extracting data in each channel of the initial YUV video sequence, and acquiring a brightness component and a chrominance component.
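The RGB-to-YUV extraction step for RGB-mode frames can be sketched as below. The BT.709 luma weights are an assumed colourimetry, since the text does not fix one:

```python
KR, KB = 0.2126, 0.0722   # BT.709 luma weights (an assumption; the text
                          # does not specify a particular colourimetry)

def rgb_to_yuv(r, g, b):
    """Derive luminance and centred chrominance from normalised R, G, B
    in [0, 1]: Y in [0, 1], U and V in [-0.5, 0.5]."""
    y = KR * r + (1.0 - KR - KB) * g + KB * b
    u = (b - y) / (2.0 * (1.0 - KB))   # blue-difference chroma
    v = (r - y) / (2.0 * (1.0 - KR))   # red-difference chroma
    return y, u, v

print(rgb_to_yuv(1.0, 1.0, 1.0))   # white: full luminance, zero chroma
```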
Step 302, performing chrominance space conversion on the initial YUV signal based on a pre-configured first conversion display lookup table of the YUV mode to obtain a converted target YUV signal.
The first conversion display lookup table may indicate the correspondence between YUV values and YUV signals through a 3D LUT in YUV mode; for example, in a three-dimensional coordinate system, the length, width and height express the values of the three YUV components, thereby mapping one set of three-dimensional YUV values to another set of YUV signals.
The chrominance space conversion means the conversion of the original chrominance signal space to the target chrominance signal space. The chrominance space conversion is used to adjust the luminance and chrominance of the image, for example, according to the initial value of Y, U, V components in the YUV signal extracted in step 301, a coordinate point corresponding to the initial value is determined in the first conversion display lookup table, unique address decoding is performed, and a target value corresponding to the initial value is found, i.e., a target YUV signal is obtained.
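The address-decoding lookup described above can be sketched as a nearest-node read of the first conversion display lookup table. This is a simplification (a real converter would interpolate between nodes), and the identity table is only for demonstration:

```python
def lut_lookup_nearest(lut, y, u, v, n=33):
    """Address the YUV-mode 3D LUT directly: quantise each normalised
    component to the nearest grid node and read the stored target triple."""
    def idx(x):
        return min(n - 1, max(0, round(x * (n - 1))))
    return lut[idx(y)][idx(u)][idx(v)]

# Identity table for demonstration: every node stores its own coordinates.
n = 33
table = [[[(i/(n-1), j/(n-1), k/(n-1)) for k in range(n)]
          for j in range(n)] for i in range(n)]
print(lut_lookup_nearest(table, 0.5, 0.25, 0.75))
```

The three quantised indices play the role of the unique address; the stored triple at that address is the target YUV value.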
And step 303, converting the video signal based on the target YUV signal.
The high definition video signal is determined according to a video resolution, such as a video resolution of 720P (1280 × 720) and above.
By performing conversion processing on the resolution, brightness or chrominance of the target YUV signal, a high-definition signal of the image can be converted into an ultra-high-definition signal, or an ultra-high-definition signal into a high-definition signal.
In the embodiment of the present application, the initial YUV signal in the video signal is extracted, chrominance space conversion is performed on it based on the pre-configured first conversion display lookup table of the YUV mode to obtain the converted target YUV signal, and video signal conversion is finally performed based on the target YUV signal. The video is thus converted directly in the YUV mode, saving two YUV/RGB mutual conversion operations for each image sampling point, solving the problem of resource waste in the video signal conversion process and finally achieving the technical effect of improving video signal conversion efficiency.
In an optional embodiment of the present application, the video signal conversion performed on the target YUV signal in step 303 includes: and performing at least one of resolution up-down conversion, brightness up-down conversion and color gamut up-down conversion on the target YUV signals.
In video signal conversion, images may be processed by algorithms such as an interlacing technique, a motion compensation technique, an aspect ratio conversion technique, and the like.
Converting the target YUV signal from high definition to ultra-high definition is called up-conversion processing; conversely, converting it from ultra-high definition to high definition is called down-conversion processing. Up-conversion of the target YUV signal includes the following steps: the high-definition 1080i/50 interlaced scanning signal is converted into a progressive scanning signal, and the pixel conversion is then completed by interpolation to produce a 2160p/50 format signal. Down-conversion is basically the same as up-conversion; since the field frequency is not converted, only the line frequency and the aspect ratio of the signal need to be converted.
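The pixel-count part of the up-conversion step can be caricatured with plain line replication on a toy frame; real converters use interpolation filters, as the text notes:

```python
def upconvert_line_count(frame, dst_lines):
    """Line-replication sketch of the pixel-count step when changing the
    line count, e.g. 1080 active lines to 2160. Real converters apply
    interpolation filters rather than plain replication."""
    src_lines = len(frame)
    return [frame[i * src_lines // dst_lines] for i in range(dst_lines)]

# Toy 4-line "frame" doubled to 8 lines: each source line is repeated.
print(upconvert_line_count([0, 1, 2, 3], 8))   # [0, 0, 1, 1, 2, 2, 3, 3]
```

The same index arithmetic also handles down-conversion (a smaller `dst_lines` drops lines), which mirrors the text's observation that up- and down-conversion share the same structure.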
The resolution up-down conversion of the target YUV signal refers to resolution increasing or decreasing processing of the YUV signal, the brightness up-down conversion refers to brightness increasing or decreasing processing of the YUV signal, and the color gamut up-down conversion refers to color gamut increasing or decreasing processing of the YUV signal.
In the embodiment of the application, the ultra/high definition video signal conversion is directly performed on the target YUV signal, so that the signal after the high definition/ultra high definition signal conversion is still the YUV signal, the up-down conversion processing after the YUV signal is converted into the RGB signal is avoided, and the technical effect of improving the video signal conversion efficiency is realized.
In an optional embodiment of the present application, before the video signal conversion of step 303 is performed on the target YUV signal, the converted target YUV signal may be processed for output based on a hardware interface format, after which the video signal conversion is performed.
The hardware interface format refers to the output hardware interface format of the video signal, and may include: an HDMI (High-Definition Multimedia Interface) format, an SDI (Serial Digital Interface) format, or an optical fiber network interface that transmits signals using an uncompressed media stream protocol. Processing the converted target YUV signal according to the hardware interface means processing it according to the video resolution and frame rate supported by that interface.
According to the embodiment of the application, the converted target YUV signal is processed and output based on the hardware interface format, incompatibility of a display end in video signal display can be avoided, and the improvement of the adaptability of video output is realized.
Referring to fig. 4, in an alternative embodiment of the present application, before the step 302 performs the chroma-space conversion on the initial YUV signal based on the first conversion display lookup table of the pre-configured YUV mode to obtain the converted target YUV signal, the method further includes steps 401 to 404:
step 401, obtaining a second conversion display lookup table of the RGB mode sent by the user terminal.
The second transform display lookup table may indicate a correspondence between the RGB signal and the YUV signal through a 3D LUT in the RGB mode, that is, each pixel point of the RGB image is converted into a correspondence of a Y value, a U value, and a V value in the YUV mode, for example, in a three-dimensional coordinate system, three sets of coordinates of length, width, and height are used for values expressed as three components of RGB, thereby mapping three-dimensional data of one set of RGB signals to a field of another set of YUV signals.
Step 402, converting each acquired YUV signal into an initial RGB signal.
Based on the initial YUV signal in the video signal to be processed, Y, U, V component signals in YUV are extracted, and the signals of each component in YUV are sequentially converted into corresponding signals in each component of initial RGB according to the conversion matrix.
The values of the parameters in the conversion matrix may be set according to the threshold range of each component in the RGB signal and the threshold range of each component in the YUV signal. The following takes a 33 x 33 x 33 second conversion display lookup table as an example; that is, each coordinate axis of the target 3D LUT has 33 nodes, and the table entries are: Y in [0,32], U in [0,32], V in [0,32], with the input YUV indices likewise in [0,32].
Sampling point extraction:
(The sampling-point extraction formula appears only as an image in the original and is not reproduced here; it enumerates the sampling points over the 33 x 33 x 33 grid.)
Here Y[0,32] represents the value range of the Y component in YUV, U[0,32] the value range of the U component, and V[0,32] the value range of the V component.
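One plausible reading of the sampling-point extraction, assuming 10-bit video, maps each of the 33 grid indices to an evenly spaced code value. The bit depth and the even spacing are assumptions, since the original formula is not reproduced:

```python
NODES = 33   # one node per index in [0, 32], as stated in the text

def node_to_code(i, bits=10):
    """Map grid index i in [0, NODES-1] to an evenly spaced sample value
    on the full code range of the given bit depth (assumed 10-bit)."""
    return round(i * ((1 << bits) - 1) / (NODES - 1))

codes = [node_to_code(i) for i in range(NODES)]
print(codes[0], codes[-1])   # grid spans the full 10-bit code range
```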
Step 403, performing mode conversion on the initial RGB signals based on the second conversion display lookup table of the RGB mode, to obtain the YUV value of each initial RGB signal in the YUV mode.
The mode conversion of the second conversion display lookup table of the RGB mode serves to convert it into the correspondence between YUV values and YUV signals.
The mode conversion of the initial RGB signals refers to converting the RGB-mode signals into YUV values in the YUV mode.
Specifically, according to the initial values of the R, G and B components extracted in step 402, the coordinate point corresponding to those initial values is determined in the second conversion display lookup table, unique address decoding is performed, and the target values of the Y, U and V components corresponding to those initial values are found; that is, the YUV value of the initial RGB signal in the YUV mode is obtained.
Step 404, constructing a first conversion display lookup table of the YUV mode based on the corresponding relation between each YUV value and each YUV signal.
The first conversion display lookup table is used for indicating the corresponding relation between each YUV value and each YUV signal.
When establishing the correspondence between each YUV value and each YUV signal, the first conversion display lookup table may use three independent per-component mappings, i.e., Y = f1(y), U = f2(u), V = f3(v), in which the Y, U and V components do not affect each other; or it may use a joint mapping (Y, U, V) = f(y, u, v), for example a 3D LUT that maps one set of three-dimensional YUV values to another set of three-dimensional YUV signals, with the length, width and height of the three-dimensional coordinate space expressing the values of the three YUV components.
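The difference between the three independent mappings and the joint mapping can be illustrated with hypothetical curves; `f1`, `f2`, `f3` and the luma-dependent gain below are invented for the example:

```python
# Hypothetical per-channel curves for the independent form
# Y = f1(y), U = f2(u), V = f3(v):
f1 = lambda y: min(1.0, 1.1 * y)   # mild luma boost, clipped at 1.0
f2 = lambda u: u                   # chroma channels unchanged here
f3 = lambda v: 0.9 * v

def independent_map(y, u, v):
    """Three 1D mappings: each output component depends only on its
    own input component."""
    return f1(y), f2(u), f3(v)

def joint_map(y, u, v):
    """One joint 3D mapping: components may interact. The luma-dependent
    chroma gain here is something three 1D maps cannot express."""
    gain = 0.8 + 0.4 * y
    return f1(y), u * gain, v * gain

print(independent_map(0.5, 0.2, 0.2))
print(joint_map(0.5, 0.2, 0.2))
```

This is why the joint (3D LUT) form is the more general of the two options the text describes.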
In the present application, a second conversion display lookup table in the RGB mode is obtained, the acquired YUV signals are converted into initial RGB signals, mode conversion is performed on the initial RGB signals to obtain the YUV value of each initial RGB signal in the YUV mode, and finally a first conversion display lookup table in the YUV mode is generated.
Referring to fig. 5, in an alternative embodiment of the present application, the converting step 402 of the acquired YUV signals into the initial RGB signals further includes steps 501 to 503:
step 501, determining the brightness component and the chrominance component of each YUV signal.
The luminance component and the chrominance component of the YUV signal refer to a signal of a Y component, a signal of a U component and a signal of a V component in an initial YUV signal in the video to be processed.
When determining the brightness component and the chrominance component of each YUV signal, component digitization may be adopted: the luminance and chrominance in the YUV signal of the video to be processed are separated to obtain the Y, U and V components, and the three components are then digitized respectively by three analog-to-digital converters to determine each component of the YUV signal.
Step 502, for each YUV signal, determining each initial RGB value of the initial RGB signal corresponding to the YUV signal in the RGB mode according to the luminance component, the chrominance components, and the RGB conversion coefficients.
The conversion coefficients indicate the relationship between each component signal in YUV and each component signal in RGB in the video to be processed. The conversion coefficients may include a color space conversion matrix relating the RGB data of an image pixel to its luminance-chrominance separated representation, for example:

$$\begin{bmatrix} R_Y & R_U & R_V \\ G_Y & G_U & G_V \\ B_Y & B_U & B_V \end{bmatrix}$$

wherein $R_Y$, $R_U$, $R_V$ represent the conversion coefficients for converting the signal of each component in YUV into the R component of the original RGB; $G_Y$, $G_U$, $G_V$ represent the conversion coefficients for converting the signal of each component in YUV into the G component of the original RGB; and $B_Y$, $B_U$, $B_V$ represent the conversion coefficients for converting the signal of each component in YUV into the B component of the original RGB.
According to the above conversion matrix, the conversion of a YUV signal into an initial RGB signal can be achieved by the following formula (1):

$$\begin{bmatrix} f_R \\ f_G \\ f_B \end{bmatrix} = \begin{bmatrix} R_Y & R_U & R_V \\ G_Y & G_U & G_V \\ B_Y & B_U & B_V \end{bmatrix} \begin{bmatrix} f_Y \\ f_U \\ f_V \end{bmatrix} \qquad (1)$$

wherein $f_R$ represents the corresponding signal of the R component of the initial RGB signal, $f_G$ the corresponding signal of the G component, and $f_B$ the corresponding signal of the B component, with $f_R, f_G, f_B \in [0.0, 1.0]$; $f_Y$ represents the corresponding signal of the Y component of the YUV signal, $f_U$ the corresponding signal of the U component, and $f_V$ the corresponding signal of the V component.
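A minimal sketch of formula (1) is shown below. The patent does not fix particular coefficient values, so the BT.709 matrix used here is an illustrative assumption, with $f_Y$ in [0, 1] and $f_U$, $f_V$ centered in [-0.5, 0.5].

```python
import numpy as np

# Formula (1): [fR, fG, fB]^T = M @ [fY, fU, fV]^T.
# BT.709 coefficients are assumed for illustration only.
M_YUV2RGB = np.array([
    [1.0,  0.0,     1.5748],   # R_Y, R_U, R_V
    [1.0, -0.1873, -0.4681],   # G_Y, G_U, G_V
    [1.0,  1.8556,  0.0   ],   # B_Y, B_U, B_V
])

def yuv_to_rgb(fy, fu, fv):
    """Convert one normalized YUV sample to an initial RGB value."""
    rgb = M_YUV2RGB @ np.array([fy, fu, fv])
    return np.clip(rgb, 0.0, 1.0)   # keep fR, fG, fB in [0.0, 1.0]
```

For a neutral gray, `yuv_to_rgb(0.5, 0.0, 0.0)` gives equal R, G and B of 0.5, since all three luma coefficients in the first column are 1.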
After each component of the YUV signal is extracted, each collected YUV signal is converted into an initial RGB signal, so that subsequent mode conversion can be performed on the initial RGB signals to obtain the YUV value of each initial RGB signal in the YUV mode. The video is thus processed and converted directly from the YUV mode, which avoids the resource waste caused by two conversions and finally achieves the technical effect of improving data processing efficiency.
Referring to fig. 6, in an optional embodiment of the present application, performing mode conversion on the initial RGB signals to obtain YUV values of each initial RGB signal in the YUV mode includes steps 601 to 602:
step 601, performing mode conversion on each initial RGB value in the initial RGB signals by using a tetrahedral interpolation method to obtain a target RGB value of each initial RGB signal in a YUV mode.
The initial RGB signal is the initial RGB signal converted according to the collected YUV signals in step 402.
When mode conversion is performed on each initial RGB value in the initial RGB signal, for the original coordinate data (the initial RGB value) of any point p in the RGB mode space, the original coordinate data may be substituted into a preset mesh-intersection lookup table, a preset tetrahedron-type lookup table, and a preset tetrahedron-vertex lookup table to obtain the four sets of coordinates, in the target color space, of the four vertices of the tetrahedron in which the original coordinate data is located; tetrahedral interpolation is then performed on the four sets of coordinates with their weight coefficients to obtain the target coordinate data (the target RGB value) of the point p in the YUV mode space.
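The tetrahedral interpolation above can be sketched as follows. This is a simplified assumption-laden sketch, not the patented implementation: the preset mesh-intersection, tetrahedron-type and tetrahedron-vertex lookup tables are replaced by direct index arithmetic, and sorting the fractional coordinates selects one of the six tetrahedra into which each grid cube splits.

```python
import numpy as np

def tetra_interp(lut, p):
    """Tetrahedral interpolation of point p (3 floats in [0, 1]) in an
    (N, N, N, 3) lookup table."""
    n = lut.shape[0] - 1
    q = np.clip(np.asarray(p, dtype=float), 0.0, 1.0) * n
    i = np.minimum(q.astype(int), n - 1)          # lower grid corner of the cell
    f = q - i                                     # fractional position inside the cell
    # Sorting the fractions picks one of the 6 tetrahedra in the cube.
    order = np.argsort(-f)                        # axes from largest to smallest fraction
    verts = [i.copy()]
    v = i.copy()
    for ax in order:                              # walk one axis at a time to the far corner
        v = v.copy(); v[ax] += 1
        verts.append(v)
    fs = f[order]
    # Barycentric weights of the 4 tetrahedron vertices.
    w = [1.0 - fs[0], fs[0] - fs[1], fs[1] - fs[2], fs[2]]
    return sum(wk * lut[tuple(vk)] for wk, vk in zip(w, verts))
```

On an identity LUT (each node stores its own coordinates) the interpolation reproduces the input exactly, which is a convenient sanity check for any LUT resolution.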
Step 602, convert each target RGB value to a YUV value in YUV mode.
Converting each target RGB value to a YUV value in YUV mode means converting the value of R, G, B components in each target RGB value to the value of Y, U, V components in YUV mode in turn.
When converting the target RGB values, each target RGB value can be converted into a YUV value in the YUV mode according to a conversion matrix, where the conversion matrix converts the RGB data into the luminance-chrominance separated YUV signal space. For example, the conversion matrix can be expressed as:

$$\begin{bmatrix} Y_R & Y_G & Y_B \\ U_R & U_G & U_B \\ V_R & V_G & V_B \end{bmatrix}$$

wherein $Y_R$, $Y_G$, $Y_B$ represent the conversion coefficients for converting the signal of each component in RGB into the Y component in the YUV mode; $U_R$, $U_G$, $U_B$ represent the conversion coefficients for converting the signal of each component in RGB into the U component in the YUV mode; and $V_R$, $V_G$, $V_B$ represent the conversion coefficients for converting the signal of each component in RGB into the V component in the YUV mode.
According to the above conversion matrix, the values of the R, G and B components of each output target RGB value can be converted into the output YUV value by formula (2):

$$\begin{bmatrix} f_Y' \\ f_U' \\ f_V' \end{bmatrix} = \begin{bmatrix} Y_R & Y_G & Y_B \\ U_R & U_G & U_B \\ V_R & V_G & V_B \end{bmatrix} \begin{bmatrix} f_R' \\ f_G' \\ f_B' \end{bmatrix} \qquad (2)$$

wherein $f_R'$, $f_G'$, $f_B'$ in formula (2) represent the corresponding signals of the components of the target RGB value, and $f_Y'$, $f_U'$, $f_V'$ represent the YUV value in the YUV mode.
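Formula (2) can be sketched in the same way as formula (1); the RGB-to-YUV matrix is the inverse direction of the earlier conversion. The BT.709 luma weights below are again an illustrative assumption, not coefficients fixed by the patent.

```python
import numpy as np

# Formula (2): [fY', fU', fV']^T = M @ [fR', fG', fB']^T.
# BT.709-style coefficients assumed for illustration; each chroma row sums to 0
# so that neutral gray maps to zero chroma.
M_RGB2YUV = np.array([
    [ 0.2126,   0.7152,   0.0722 ],   # Y_R, Y_G, Y_B
    [-0.11457, -0.38543,  0.5    ],   # U_R, U_G, U_B
    [ 0.5,     -0.45415, -0.04585],   # V_R, V_G, V_B
])

def rgb_to_yuv(fr, fg, fb):
    """Convert one target RGB value into a YUV value in the YUV mode."""
    return M_RGB2YUV @ np.array([fr, fg, fb])
```

White input, `rgb_to_yuv(1.0, 1.0, 1.0)`, yields full luma and zero chroma, because the Y row sums to 1 and each chroma row sums to 0.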
In this implementation, each component of the YUV signal is obtained and the collected YUV signals are converted into initial RGB signals, so that the video is processed and converted directly from the YUV mode: image processing conversion can be carried out directly after the YUV component signals are extracted from the video signal. This avoids the resource waste caused by two conversions and finally achieves the technical effect of improving data processing efficiency.
Referring to fig. 7, in an alternative embodiment of the present application, the video signal processing method further includes steps 701 to 703:
step 701, obtaining a new second conversion display lookup table of the RGB mode sent by the user terminal.
The new second transform display look-up table is used to indicate the correspondence between the new RGB signal and the YUV signal.
Step 702, obtaining the target RGB value of each initial RGB signal in the YUV mode based on the new second conversion display lookup table;
and 703, updating the first conversion display lookup table based on the new second conversion display lookup table of the RGB mode and each initial RGB signal to obtain a new first conversion display lookup table.
The new first conversion display lookup table is used to indicate the correspondence between the new YUV values of the components and the YUV signals.
Converting the YUV signals in the extracted video into initial RGB signals, determining YUV values of the initial RGB signals in a YUV mode according to the initial RGB signals and a new second conversion display lookup table, and finally constructing a new first conversion display lookup table in the YUV mode based on the corresponding relation between the YUV values and the YUV signals.
In this implementation, after a new second conversion display lookup table based on the RGB mode is obtained, the first conversion display lookup table is updated based on the new second conversion display lookup table and each initial RGB signal to obtain a new first conversion display lookup table. Given the new first conversion display lookup table, the YUV signals in the video signal can be processed and a new conversion processing relation for the video signals can be established, achieving the technical effect of improving the flexibility of video signal processing.
Referring to fig. 8, in an alternative embodiment of the present application, the step 703 of updating the first conversion display lookup table based on the new second conversion display lookup table and each of the initial RGB signals to obtain a new first conversion display lookup table includes steps 801 to 803:
step 801, look up new RGB values corresponding to each initial RGB signal in a new second transform display look-up table.
The initial RGB signals are generated by converting the collected YUV signals.
Step 802, performing mode conversion on the new RGB values to obtain new YUV values of the new RGB values in the YUV mode.
When the mode conversion is performed, according to the initial values of the R, G and B components of the RGB signal extracted in step 801, the coordinate point corresponding to the initial values is determined in the new second conversion display lookup table, unique address decoding is performed, and the target values of the Y, U and V components corresponding to the initial values are found, thereby obtaining the YUV value of the initial RGB signal in the YUV mode.
Step 803, a new first conversion display lookup table of the YUV mode is constructed based on the corresponding relationship between each new YUV value and each YUV signal.
The new first transformation display lookup table is used for indicating the corresponding relation between the YUV values and the YUV signals, including the corresponding relation between the YUV component values and the YUV component signals.
After the initial YUV signals in the video signals are extracted, the target YUV signals corresponding to the initial YUV signals can be found according to the new first conversion display lookup table, and the conversion of the image is completed.
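Steps 801 to 803 amount to composing the fixed YUV/RGB conversions with a user-supplied RGB-domain table. The sketch below is an assumption about how that composition could be organized: the second lookup table and the two conversions are passed in as plain functions (`second_lut`, `yuv_to_rgb`, `rgb_to_yuv` are placeholder names, not names from the patent), and the result is the new YUV-domain first table.

```python
def build_first_lut(yuv_samples, yuv_to_rgb, second_lut, rgb_to_yuv):
    """Rebuild the first (YUV-domain) conversion display lookup table from a
    new second (RGB-domain) lookup table, following steps 801-803."""
    first_lut = {}
    for yuv in yuv_samples:
        rgb = yuv_to_rgb(yuv)            # collected YUV -> initial RGB
        new_rgb = second_lut(rgb)        # step 801: look up in the new second LUT
        new_yuv = rgb_to_yuv(new_rgb)    # step 802: mode conversion back to YUV
        first_lut[yuv] = new_yuv         # step 803: correspondence YUV -> new YUV
    return first_lut
```

With identity functions for all three stages, the rebuilt table maps every sample to itself, which makes the composition easy to verify before plugging in real conversions.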
In this implementation, after a new second conversion display lookup table based on the RGB mode is obtained, the first conversion display lookup table is updated based on the new second conversion display lookup table and each initial RGB signal to obtain a new first conversion display lookup table. Image conversion can then be carried out flexibly based on the new conversion display lookup table, which avoids the resource waste of performing two further conversions between the YUV signal and the RGB signal, and finally achieves the technical effect of improving signal conversion processing efficiency.
It should be understood that, although the steps in the flowchart are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict restriction on the order in which these steps are performed, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times; their order of performance is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Referring to fig. 9, an embodiment of the present application provides a video signal processing apparatus 90, including: a first obtaining module 901, a first converting module 902 and a second converting module 903;
a first obtaining module 901, configured to obtain a video signal to be processed, and extract an initial YUV signal in the video signal to be processed;
a first conversion module 902, configured to perform chrominance space conversion on the initial YUV signal based on a first conversion display lookup table of a pre-configured YUV mode to obtain a converted target YUV signal;
and a second conversion module 903, configured to perform video signal conversion based on the target YUV signal.
Referring to fig. 10, in an alternative embodiment, the apparatus 90 further includes a second obtaining module 904, where the second obtaining module 904 is configured to: acquiring a second conversion display lookup table of the RGB mode sent by the user terminal; converting the collected YUV signals into initial RGB signals; performing mode conversion on the RGB values in the second conversion display lookup table of the RGB mode to obtain the YUV value, in the YUV mode, of each RGB value in the second conversion display lookup table; and constructing the first conversion display lookup table of the YUV mode based on the correspondence between the YUV values and the YUV signals.
In an alternative embodiment, the first conversion module 902 of the apparatus 90 is further configured to: determining the luminance component and the chrominance components of each YUV signal; and, for each YUV signal, determining each initial RGB value of the initial RGB signal corresponding to the YUV signal in the RGB mode according to the luminance component, the chrominance components, and the RGB conversion coefficients.
In an alternative embodiment, the first conversion module 902 of the apparatus 90 is further configured to: performing mode conversion on each initial RGB value in the initial RGB signals by adopting a tetrahedral interpolation method to obtain a target RGB value of each initial RGB signal in a YUV mode; and converting each target RGB value into a YUV value in a YUV mode.
In an alternative embodiment, the first conversion module 902 of the apparatus 90 is further configured to: acquiring a new second conversion display lookup table of the RGB mode sent by the user terminal; obtaining a new second conversion display lookup table based on the target RGB value of each initial RGB signal in the YUV mode; and updating the first conversion display lookup table based on the new second conversion display lookup table of the RGB mode and each initial RGB signal to obtain a new first conversion display lookup table.
In an alternative embodiment, the first conversion module 902 of the apparatus 90 is further configured to: looking up a new RGB value corresponding to each initial RGB signal in the new second conversion display lookup table; performing mode conversion on the new RGB values to obtain new YUV values of the new RGB values in a YUV mode; constructing the new first transform display lookup table of the YUV mode based on a correspondence between each of the new YUV values and each of the YUV signals.
In an alternative embodiment, the second conversion module 903 in the apparatus 90 is further configured to:
and performing at least one of resolution up-down conversion, brightness up-down conversion and color gamut up-down conversion on the target YUV signal.
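A minimal dispatch over the three conversion kinds named above might look as follows. This is an assumption-level sketch: only the resolution path does real work (2x nearest-neighbour down-sampling), while the brightness and gamut paths are stand-ins for a proper tone-mapping curve and gamut-mapping step, which the patent does not specify.

```python
import numpy as np

def convert(yuv_frame, kind):
    """Apply one of the up/down conversions to a normalized (H, W, 3) frame."""
    if kind == "resolution_down":
        return yuv_frame[::2, ::2]             # drop every other row and column
    if kind == "brightness_down":
        return np.clip(yuv_frame * 0.8, 0, 1)  # stand-in for an HDR->SDR tone map
    if kind == "gamut_down":
        return yuv_frame                       # stand-in for a wide->narrow gamut map
    raise ValueError(kind)
```

In practice the brightness and gamut paths would themselves be driven by the conversion display lookup tables described earlier, rather than by fixed arithmetic.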
The apparatus provided by the embodiment of the present application extracts the initial YUV signal from the video signal, performs chrominance space conversion on the initial YUV signal based on the first conversion display lookup table of the pre-configured YUV mode to obtain the converted target YUV signal, and finally performs video signal conversion based on the target YUV signal. The video is thus processed and converted directly from the YUV mode, two YUV/RGB inter-conversion operations are saved for each image sampling point, the resource waste in the video signal conversion process is avoided, and the technical effect of improving video signal conversion efficiency is finally achieved.
For the specific limitations of the video signal processing apparatus, reference may be made to the limitations of the video signal processing method above, and details are not repeated here. The respective modules in the video signal processing apparatus described above may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. That is, the computer device comprises a memory storing a computer program and a processor that implements any of the steps of the video signal processing method described above when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out any of the steps of the above video signal processing method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A video signal processing method, comprising:
acquiring a video signal to be processed, and extracting an initial YUV signal in the video signal to be processed;
performing chrominance space conversion on the initial YUV signal based on a first conversion display lookup table of a pre-configured YUV mode to obtain a converted target YUV signal;
and converting the video signal based on the target YUV signal.
2. The method of claim 1, wherein before performing the chrominance space conversion on the initial YUV signal based on the first conversion display lookup table of the pre-configured YUV mode to obtain the converted target YUV signal, the method further comprises:
acquiring a second conversion display lookup table of the RGB mode sent by the user terminal;
converting the collected YUV signals into initial RGB signals;
performing mode conversion on the RGB values in the second conversion display lookup table of the RGB mode to obtain the YUV value, in the YUV mode, of each RGB value in the second conversion display lookup table;
and constructing the first conversion display lookup table of the YUV mode based on the corresponding relation between the YUV values and the YUV signals.
3. The method according to claim 2, wherein said converting the collected YUV signals into initial RGB signals comprises:
determining a brightness component and a chrominance component of each YUV signal;
and for each YUV signal, determining each initial RGB value of the initial RGB signal corresponding to the YUV signal in an RGB mode according to the brightness component, the chrominance component and each RGB conversion coefficient.
4. The method of claim 3, wherein performing the mode conversion on the initial RGB signals to obtain the YUV values of each of the initial RGB signals in the YUV mode comprises:
performing mode conversion on each initial RGB value in the initial RGB signals by adopting a tetrahedral interpolation method to obtain a target RGB value of each initial RGB signal in a YUV mode;
and converting each target RGB value into a YUV value in a YUV mode.
5. The video signal processing method of claim 2, further comprising:
acquiring a new second conversion display lookup table of the RGB mode sent by the user terminal;
obtaining a new second conversion display lookup table based on the target RGB value of each initial RGB signal in the YUV mode;
and updating the first conversion display lookup table based on the new second conversion display lookup table of the RGB mode and each initial RGB signal to obtain a new first conversion display lookup table.
6. The method of claim 5, wherein said updating the first transformed display lookup table based on the new second transformed display lookup table and each of the initial RGB signals to obtain a new first transformed display lookup table comprises:
looking up a new RGB value corresponding to each initial RGB signal in the new second conversion display lookup table;
performing mode conversion on the new RGB values to obtain new YUV values of the new RGB values in a YUV mode;
constructing the new first transform display lookup table of the YUV mode based on a correspondence between each of the new YUV values and each of the YUV signals.
7. The video signal processing method according to claim 1, wherein said performing video signal conversion based on the target YUV signals comprises:
and performing at least one of resolution up-down conversion, brightness up-down conversion and color gamut up-down conversion on the target YUV signals.
8. A video processing apparatus, comprising: the device comprises a first acquisition module, a first conversion module and a second conversion module;
the device comprises a first acquisition module, a second acquisition module and a processing module, wherein the first acquisition module is used for acquiring a video signal to be processed and extracting an initial YUV signal in the video signal to be processed;
the first conversion module is used for carrying out chrominance space conversion on the initial YUV signal based on a first conversion display lookup table of a pre-configured YUV mode to obtain a converted target YUV signal;
and the second conversion module is used for performing video signal conversion based on the target YUV signal.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202210988303.3A 2022-08-17 2022-08-17 Video signal processing method and device, computer equipment and storage medium Pending CN115063325A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210988303.3A CN115063325A (en) 2022-08-17 2022-08-17 Video signal processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115063325A true CN115063325A (en) 2022-09-16


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110730339A (en) * 2019-11-05 2020-01-24 上海网仕科技有限公司 SDR video signal processing method and device and video coding equipment
CN111107336A (en) * 2019-12-06 2020-05-05 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112272947A (en) * 2018-07-05 2021-01-26 华为技术有限公司 Video signal processing method and device
WO2021104079A1 (en) * 2019-11-27 2021-06-03 深圳市万普拉斯科技有限公司 Video processing method and apparatus, storage medium, and computer device
CN113489930A (en) * 2021-06-10 2021-10-08 中央广播电视总台 Video signal processing method, device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220916