CN114125414A - Image saturation enhancement method and coding and decoding processing method, device and system

Info

Publication number: CN114125414A
Authority: CN (China)
Legal status: Pending
Application number: CN202111394213.3A
Other languages: Chinese (zh)
Inventors: 杨智尧, 杨炳旺
Original and current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority: CN202111394213.3A
Publication: CN114125414A
Prior art keywords: scene, enhancement, color, image, saturation

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/64: Circuits for processing colour signals
    • H04N 9/646: Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Color Image Communication Systems (AREA)
  • Image Processing (AREA)

Abstract

An image saturation enhancement method, an encoding and decoding processing method, a device, and a system are provided. The image saturation enhancement method includes: acquiring the scene category of an image; determining, according to the scene category of the image, an enhancement factor used when performing saturation enhancement on the image; and performing saturation enhancement on the image according to the enhancement factor. Embodiments of the disclosure also provide a device, a system, and a terminal device applying the method. The embodiments of the disclosure can realize scene-based adaptive saturation enhancement and improve the saturation enhancement effect.

Description

Image saturation enhancement method and coding and decoding processing method, device and system
Technical Field
The present disclosure relates to, but not limited to, image processing technologies, and in particular, to an image saturation enhancement method, and a coding and decoding processing method, device, and system.
Background
Color is one of the important elements of video expressiveness and has a significant influence on the subjective effect of a video. However, because of the limited capability of video acquisition equipment and constraints such as ambient light and weather conditions, the rich colors found in nature cannot be captured accurately. For mobile phone products, the user's photographic skill also limits the color expressiveness of video. The result is typically dim, unsaturated video color and a poor sensory effect. One reason is that the color saturation is low, so the original hue of the content is lost; the video sensory effect can therefore be improved through saturation enhancement.
A number of saturation enhancement schemes already exist; they differ mainly in the color space used. The common color spaces at present include RGB, YUV, and HSV, and each type of color space has a corresponding saturation enhancement method, but the effect still leaves room for improvement.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the disclosure provides an image saturation enhancement method, which includes:
acquiring scene types of images;
determining an enhancement factor used when the image is subjected to saturation enhancement according to the scene category of the image;
and performing saturation enhancement on the image according to the enhancement factor.
The embodiment of the present disclosure further provides a video saturation enhancement device, which includes a memory and a processor, where the memory stores a computer program, and the processor can implement the image saturation enhancement method according to any embodiment of the present disclosure when executing the computer program.
The embodiment of the present disclosure further provides a video encoding processing method, including:
the image saturation enhancement method according to any embodiment of the present disclosure performs saturation enhancement on a video frame from a data source, including: acquiring the scene type of the video frame;
encoding the video frame after saturation enhancement to generate a video code stream, wherein the encoding comprises: and writing the scene type of the video frame into a video code stream.
An embodiment of the present disclosure further provides a video encoding processing apparatus, including:
a first saturation enhancement apparatus configured to perform saturation enhancement on a video frame from a data source according to an image saturation enhancement method according to any embodiment of the present disclosure, including: acquiring the scene type of the video frame;
and a video encoder configured to encode the saturation-enhanced video frame to generate a video code stream, into which the scene category of the video frame is written.
The embodiment of the present disclosure further provides a video decoding processing method, including:
decoding a video code stream to obtain a decoded video frame and a scene type of the video frame;
the image saturation enhancement method according to any embodiment of the present disclosure performs saturation enhancement on the decoded video frame, wherein the scene type of the video frame is obtained by decoding.
An embodiment of the present disclosure further provides a video decoding processing apparatus, including:
the video decoder is arranged for decoding the video code stream to obtain a decoded video frame and the scene type of the video frame;
a second saturation enhancement device configured to perform saturation enhancement on the decoded video frame according to the image saturation enhancement method according to any embodiment of the present disclosure, wherein the scene type of the video frame is obtained by decoding.
The embodiment of the present disclosure further provides a video encoding and decoding system, which includes the video encoding processing apparatus according to any embodiment of the present disclosure and the video decoding processing apparatus according to any embodiment of the present disclosure.
The embodiment of the disclosure further provides a code stream, where the code stream is generated according to the video coding processing method described in any embodiment of the disclosure, and the code stream includes information of scene types of video frames.
The embodiment of the present disclosure further provides a non-transitory computer-readable storage medium, which stores a computer program, where the computer program, when executed by a processor, implements an image saturation enhancement method according to any embodiment of the present disclosure, or a video encoding processing method according to any embodiment of the present disclosure, or a video decoding processing method according to any embodiment of the present disclosure.
The embodiment of the disclosure introduces a scene classification function, adjusts the saturation parameter based on the scene category of the image, realizes the self-adaptive saturation enhancement based on the scene, and can improve the subjective quality of the image.
Other aspects will be apparent upon reading and understanding the attached drawings and detailed description.
Drawings
The accompanying drawings are included to provide an understanding of the embodiments of the disclosure, and are incorporated in and constitute a part of this specification; they illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure without limiting it.
FIG. 1 is a flow chart of an embodiment of an image saturation enhancement method according to the present disclosure;
FIG. 2 is a flow chart of a video saturation enhancement method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an area where a human body is located obtained by expanding the area where the human face is located according to an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating a division of a YUV color space into multiple color regions according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an image saturation enhancement system according to one embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an embodiment of an image saturation enhancement apparatus according to the present disclosure;
FIG. 7 is a schematic diagram of a video compression process;
FIG. 8 is a flowchart of a video encoding processing method according to an embodiment of the disclosure;
FIG. 9 is a flowchart of a video decoding processing method according to an embodiment of the disclosure;
fig. 10 is a schematic diagram of a video codec system according to an embodiment of the disclosure.
Detailed Description
The present disclosure describes embodiments, but the description is illustrative rather than limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the embodiments described in the present disclosure.
Throughout the description of the present disclosure, words such as "exemplary" or "for example" are used to indicate an example or illustration. Any embodiment described in this disclosure as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments. "And/or" herein describes an association relationship between associated objects and covers three possible relationships; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. "Plurality" means two or more. In addition, for the convenience of clearly describing the technical solutions of the embodiments of the present disclosure, the words "first", "second", and the like are used to distinguish identical or similar items having substantially the same functions and actions. Those skilled in the art will appreciate that the words "first", "second", etc. do not limit quantity or order of execution, nor do they denote importance.
In describing representative exemplary embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. Other orders of steps are possible as will be understood by those of ordinary skill in the art. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Further, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present disclosure.
Some saturation enhancement schemes have problems. For example, the saturation enhancement strategy cannot adapt to the content: for dark colors, saturation enhancement brings an obvious improvement, but for content whose original saturation is already high, oversaturation easily occurs. In addition, obvious distortion can appear in human skin color; the human face in particular is a key area of visual attention, so the influence of such distortion is especially severe.
With the popularization of High Dynamic Range (HDR) video, interest in wide color gamuts has grown and better color effects are expected from video, so a video saturation enhancement scheme with adaptive capability is urgently needed.
To this end, an embodiment of the present disclosure provides an image saturation enhancement method, as shown in fig. 1, including:
step 110, acquiring scene types of images;
the scene class of the image is determined by scene classifying the image. In this step, the scene classification of the image may be determined by locally executing a scene classification algorithm, but is not limited thereto. For example, the scene type information of the image may be analyzed from the data stream or may be received from an external input.
Step 120, determining an enhancement factor used when performing saturation enhancement on the image according to the scene type of the image;
and step 130, performing saturation enhancement on the image according to the enhancement factor.
Human vision perceives the color saturation (referred to simply as saturation) of images photographed in different scenes differently. If the same saturation enhancement strategy is used both for images taken in an indoor home area and for images taken outdoors with the sky as the main subject, a satisfactory enhancement effect cannot be achieved for both. The embodiments of the disclosure introduce a scene classification function and adjust the saturation parameter based on the scene category of the image, realizing scene-based adaptive saturation enhancement and effectively improving the subjective quality of the image.
The image of the embodiments of the present disclosure is not limited to a specific format. For example, the image may be a video frame in a video (giving a video saturation enhancement method), or a single picture, such as a picture file in JPG, JPEG, or BMP format; in either case, the color effect can be enhanced by the image saturation enhancement method of the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, the acquiring the scene category of the image includes:
acquiring a scene type of an externally input video frame; or
Analyzing a video frame code stream to obtain a scene type of the video frame; or
determining the scene category of the first video frame (namely the head frame) in a video frame sequence according to a set scene classification algorithm, and setting the scene categories of the other video frames in the sequence to the scene category of the first video frame, wherein the video frame sequence includes one or more Groups of Pictures (GOPs). The short videos people frequently shoot in daily life are usually filmed in a single scene; classifying the first frame of a video frame sequence by algorithm and letting all other frames adopt the first frame's scene category therefore reduces the computational load of the saturation enhancement algorithm. The length of the video frame sequence can be set for different types of videos, and with a reasonable length setting, a good balance can be obtained between the accuracy and the complexity of scene classification.
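As a loose illustration of this first-frame strategy, the following Python sketch classifies only the head frame of a sequence and lets the remaining frames inherit its category; `classify_scene` is a hypothetical stand-in for whatever scene classification model the system uses:

```python
def scene_categories(frames, classify_scene):
    """Yield (frame, scene_category) pairs for one video frame sequence.

    Only the first frame is classified; all later frames in the sequence
    reuse its scene category, saving repeated classifier invocations.
    """
    scene = None
    for index, frame in enumerate(frames):
        if index == 0:            # head frame: run the classifier once
            scene = classify_scene(frame)
        yield frame, scene        # later frames inherit the category
```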
In an exemplary embodiment of the present disclosure, the enhancement factor is an enhancement proportion, and performing saturation enhancement on the image according to the enhancement factor includes performing saturation enhancement on the pixel points in the image according to the following two formulas:

U_g = U + αU
V_g = V + αV

where U represents the original U component value of the pixel point, U_g represents the U component value of the enhanced pixel point, V represents the original V component value of the pixel point, V_g represents the V component value of the enhanced pixel point, and α represents the enhancement factor, whose value range is [1, 1.5].
For YUV data, whose chrominance components are the U and V components, the U and V components may be enhanced in equal proportion when saturation is enhanced. Image data in other formats can be converted into YUV data for saturation enhancement and then converted back to the original format. Alternatively, each color data format has its own saturation enhancement algorithm, and saturation enhancement can be performed directly in the original data format with a suitable algorithm; in these algorithms, the enhancement factor may take forms other than an enhancement proportion, such as one or more values or a matrix of coefficients. The present disclosure does not limit the specific form of the enhancement factor or the enhancement algorithm.
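A minimal numpy sketch of this chroma scaling, assuming 8-bit YUV with the U and V components centered on 128 and treating the enhancement factor as the overall multiplier applied to the centered chroma (an interpretation consistent with the basic enhancement factors listed in Table 1 later in this document, not a literal transcription of the formulas above):

```python
import numpy as np

def enhance_saturation_yuv(y, u, v, alpha):
    """Scale the chroma planes of one frame by enhancement factor alpha."""
    u_c = u.astype(np.float32) - 128.0              # center chroma on zero
    v_c = v.astype(np.float32) - 128.0
    u_g = np.clip(alpha * u_c + 128.0, 0.0, 255.0)  # U_g, clipped to 8 bits
    v_g = np.clip(alpha * v_c + 128.0, 0.0, 255.0)  # V_g, clipped to 8 bits
    return y, u_g.astype(np.uint8), v_g.astype(np.uint8)  # luma unchanged
```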
In an exemplary embodiment of the present disclosure, the determining, according to the scene category of the image, an enhancement factor used when performing saturation enhancement on the image includes:
searching the correspondence between a plurality of color regions and enhancement factors that is set based on the scene category of the image, wherein the color regions are obtained by dividing a color space, and correspondences between the color regions and enhancement factors are set respectively based on the plurality of scene categories set by the system;
and determining enhancement factors corresponding to the multiple color regions when the image is subjected to saturation enhancement according to the search result.
The correspondence between the color regions and the enhancement factors may be recorded in one or more correspondence tables, and the correspondence is not limited to the correspondence between values, and may be represented as one or more functions, or any expression manner that can represent the correspondence.
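One possible in-memory layout for these correspondences is sketched below; the scene names follow the categories listed later in this document, and every numeric value and name here is a placeholder rather than a value taken from the patent:

```python
# Hypothetical scene -> (color region -> enhancement factor) tables.
FACTORS_BY_SCENE = {
    "sky":    {"blue": 1.375, "green-blue": 1.375, "red": 1.125},
    "forest": {"green": 1.375, "red": 1.125},
    # ... one entry per scene category set by the system
}

def enhancement_factor(scene, color_region, default=1.25):
    """Look up the factor for one color region under one scene category."""
    return FACTORS_BY_SCENE.get(scene, {}).get(color_region, default)
```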
In an exemplary embodiment of the present disclosure, the scene categories are divided into at least three classes, namely "outdoor scene", "indoor scene", and "character scene", where "outdoor scene" includes any one or more of the following scene categories: "sky", "grassland", "forest", "water surface", and "building"; "indoor scene" includes any one or more of the following scene categories: "home zone", "office zone", and "living zone"; and "character scene" includes the scene category "person".
Taking these exemplary scene categories as an example, when the correspondences between the color regions and the enhancement factors are set based on the scene categories set by the system, they can be set separately for the 9 scene categories "sky", "grassland", "forest", "water surface", "building", "home zone", "office zone", "living zone", and "person". The 9 correspondences may all differ from one another, but the correspondences set for some scene categories may also be identical. For example, "grassland" and "forest" are similar in color, and the correspondences between the color regions and the enhancement factors set based on "grassland" and "forest" may be the same; by contrast, for scenes as different as "sky" and "home zone", the correspondences set are different.
In another exemplary embodiment of the present disclosure, the scene categories set by the system do not include a person-oriented category and are divided into two classes, namely "outdoor scene" and "indoor scene", where "outdoor scene" includes any one or more of the following scene categories: "sky", "grassland", "forest", "water surface", and "building"; and "indoor scene" includes any one or more of the following scene categories: "home zone", "office zone", and "living zone". The "home zone" in the above classification refers to a place where a family lives, such as a bedroom or living room; the "office zone" refers to various working places; and functional places other than the "home zone" and "office zone" can be classified as "living zones".
The visual perception of saturation for the same color also differs across scene categories. For example, for "blue", the saturation perceived in a scene with the sky as the main subject is often relatively high; that is, vision is accustomed to "blue" having higher saturation in sky scenes, and in such scenes enhancing the saturation of "blue" also improves the color effect of the image more significantly. In contrast, for an indoor scene such as a "home zone", high saturation of "blue" is less common, and if the saturation of "blue" in an image captured in a "home zone" scene is increased too much, a sense of distortion is relatively likely. Therefore, in another exemplary embodiment of the present disclosure, based on the scene categories of the above embodiments, the correspondences between the color regions and the enhancement factors are set for each scene category using any one or more of the following 8 settings:
the multiple color regions comprise 'green-blue', and the enhancement factors corresponding to the color regions 'green-blue' set based on the scene category 'sky' are larger than the enhancement factors corresponding to the color regions 'green-blue' set based on other scenes;
the plurality of color regions-regions include "blue", and an enhancement factor corresponding to a color region "blue" set based on a scene category "sky" is greater than an enhancement factor corresponding to a color region "blue" set based on other scenes;
the plurality of color regions comprise 'blue', and the enhancement factor corresponding to the color region 'blue' set on the basis of the scene category 'water surface' and/or 'building' is smaller than the enhancement factor corresponding to the color region 'blue' set on the basis of the scene category 'sky' and larger than the enhancement factor corresponding to the color region 'blue' set on the basis of other scene categories except 'sky';
the plurality of color regions comprise 'green', and the enhancement factor corresponding to the color region 'green' set based on the scene category 'grassland' and/or 'forest' is larger than the enhancement factor corresponding to the color region 'green' set based on other scenes;
the plurality of color areas comprise 'blue-violet', and the enhancement factors corresponding to the color areas 'blue-violet' set on the basis of the scene category 'water surface' and/or 'building' are larger than the enhancement factors corresponding to the color areas 'blue-violet' set on the basis of other scenes;
the plurality of color regions include "red", and an enhancement factor corresponding to a color region "red" set based on one or more of scene categories "home region", "office region", and "active region" is larger than an enhancement factor corresponding to a color region "red" set based on other scenes;
the multiple color regions comprise red and green, and the enhancement factors corresponding to the red and green color regions set based on the office region of the scene category are larger than the enhancement factors corresponding to the red and green color regions set based on other scenes;
the plurality of color regions include "purple", and the enhancement factor corresponding to the color region "purple" set based on the scene category "active region" is larger than the enhancement factor corresponding to the color region "purple" set based on other scenes,
in the embodiment, the enhancement factors corresponding to the color regions are adaptively adjusted based on the object points of the scene, so that the saturation enhancement operation is refined, and a better enhancement effect can be achieved.
In an exemplary embodiment of the present disclosure, dividing the color space includes dividing the YUV color space in the following way:
in a rectangular plane coordinate system composed of a U axis and a V axis, the U-V color plane is divided into 7 color regions by the U axis, a ray passing through the origin, and two straight lines; counted counterclockwise from the positive half of the U axis, the color regions passed through in sequence are: "blue-violet", "purple", "red", "red-green", "green", "green-blue", and "blue". The ray is defined by the function g(u) = -2u, u ≥ 0, and the two straight lines are defined by the functions h(u) = u and f(u), the definition of f(u) being given in the original document as a formula image that is not reproduced here.
In this example, the division of color regions is done on the basis of the YUV color space; YUV (also known as YCbCr) is a commonly used type of image data, where "Y" represents luminance and "U" and "V" represent chrominance, so the saturation of a color can be represented by the "U" and "V" components. If the data type of the image is not YUV, the image can be converted into YUV data before saturation enhancement. It is easy to understand that, since different image data types correspond to different color spaces and different types of image data can be converted into each other, the image saturation enhancement method of the embodiments of the present disclosure can also be implemented based on the color spaces of other image data types, for example, the RGB color space (under the RGB color system), the HSV color space (under the HSV color system), the XYZ color space (under the XYZ color system), and so on.
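The region lookup itself can be reduced to an angle test in the U-V plane. The sketch below follows the counterclockwise numbering described above; the 45 and 225 degree boundaries come from h(u) = u and the roughly 296.6 degree boundary from the ray g(u) = -2u (u ≥ 0), while the 135 and 315 degree boundaries are invented placeholders standing in for the line f(u), whose definition is not reproduced in this text:

```python
import math

REGION_NAMES = ["blue-violet", "purple", "red", "red-green",
                "green", "green-blue", "blue"]           # regions 1..7

# Boundary angles in degrees, counterclockwise from the positive U axis.
# 135.0 and 315.0 are illustrative placeholders for the line f(u).
BOUNDARIES = [0.0, 45.0, 135.0, 180.0, 225.0, 296.6, 315.0]

def color_region(u, v):
    """Map centered chroma (u, v) to one of the 7 region names."""
    angle = math.degrees(math.atan2(v, u)) % 360.0
    index = 0
    for i, boundary in enumerate(BOUNDARIES[1:], start=1):
        if angle >= boundary:
            index = i
    return REGION_NAMES[index]
```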
In an exemplary embodiment of the present disclosure, performing saturation enhancement on the image according to the enhancement factor includes: for each pixel point in the image, performing saturation enhancement on its color according to the enhancement factor corresponding to the color region to which its color belongs. In this embodiment, the enhancement factor for a pixel point is looked up by the color region its color falls in, and the enhancement factors corresponding to the color regions are adjusted per scene category, realizing adaptive saturation enhancement based on both scene and color region; this allows accurate saturation enhancement and improves the saturation enhancement effect. Some classification algorithms output several scene categories matching the image, sorted from high to low matching degree; the scene category of the image in this embodiment refers to the scene category with the highest matching degree. For scene classification algorithms that output only one matched scene category, that output is the scene category with the highest matching degree.
Saturation enhancement can improve the color effect of an image, but obvious distortion of human skin color easily occurs; the human face in particular is a key area of visual attention, so the influence of such distortion is especially severe and can markedly reduce the subjective quality of the image. To avoid this problem, skin color protection is required: for example, pixel points whose colors belong to a skin color region may receive weaker saturation enhancement or none at all. However, the skin color region, as a certain region in the color space, is not unique to the human body; objects with colors similar to skin, such as food and clothes, are often present in indoor scenes such as homes and offices. If no pixel point whose color belongs to the skin color region were enhanced, a satisfactory color enhancement effect could not be achieved.
In an exemplary embodiment of the present disclosure, the image saturation enhancement method further includes the following skin color protection processing:
when the scene type of the image belongs to a set scene type which needs skin color protection and a face is determined to exist in the image through face detection, an enhancement factor used when the saturation enhancement is carried out on a pixel point representing real skin color in the image is set as a skin color factor;
the pixel points representing the real skin color comprise pixel points which are positioned in the area where the human face is positioned and the color of which belongs to a skin color area, the skin color factor is an enhancement factor which enables the saturation degree to be unchanged or the minimum enhancement factor in the enhancement factors set by all systems, and the set scene types needing skin color protection are part or all of a plurality of scene types set by the systems.
Wherein the skin color region is set as a red region in a color space or a partial region in the red region.
In an example of this embodiment, the above skin color protection processing may be performed before "determining an enhancement factor used for performing saturation enhancement on the image according to the scene category of the image", but may also be performed later, or in parallel, which is not limited by this disclosure.
In one example of the embodiment, when the scene categories set by the system include a person-oriented scene category (such as "person"), the set scene category requiring skin color protection may be that person-oriented category. In another example, even if the scene categories set by the system include a person-oriented category, the scene categories requiring skin color protection may be set to all of the scene categories, or to the person-oriented category plus some other categories. This depends on the scene classification algorithm used and how it was trained. If images needing skin color protection (for example, images containing people whose face regions reach the set size requiring protection) are basically identified as the person-oriented scene category during classification, only that category needs to be set as requiring skin color protection; since face detection then need not be performed for other scene categories, the implementation is relatively simple. Conversely, other scene categories can also be set as requiring skin color protection, with the real skin color protected after the face region is determined by face detection. When all scene categories set by the system require skin color protection, the skin color protection processing is effectively executed independently of the scene category of the image.
In one example of this embodiment, considering that parts of the human body other than the face may also be exposed, the pixel points representing real skin color are extended to the region where the human body is located. That is, in this example, the pixel points representing real skin color further include pixel points located in other regions of the human body whose colors belong to the skin color region. The other regions of the human body are the parts of the human body region outside the face region, and the human body region can be calculated from the position and size of the face region and the size of the image. "Real skin color" here refers to the color of actual human skin, not the same or similar colors on other objects.
In an example of this embodiment, the number of the skin color factors may be one or more, for example, the skin color area may be divided into a peripheral area and a central area, and different skin color factors are assigned to the peripheral area and the central area, so that the saturation enhancement is not performed on the pixel points whose colors belong to the central area, and the minimum saturation enhancement is performed on the pixel points whose colors belong to the peripheral area. In this example, the skin tone factors may include the smallest of the enhancement factors set by the system.
In an example of this embodiment, the performing saturation enhancement on the image according to the enhancement factor includes:
carrying out saturation enhancement on pixel points representing real skin colors in the image according to the skin color factors;
and performing saturation enhancement on other pixel points except the pixel point representing the real skin color in the image according to enhancement factors corresponding to color regions to which the colors of the other pixel points belong.
That is, the skin tone protection of the present embodiment is enhanced in preference to adaptive saturation based on scene and color region. During specific processing, for each pixel point in the image, if the pixel point represents a real skin color, saturation enhancement is performed according to the skin color factor, and the enhancement factor corresponding to the color region to which the color belongs is not considered any more. Herein, the skin color region is a special region in the color space, and is overlapped with one or more color regions obtained by dividing the color space, and cannot be used alone to determine which pixel points need skin color protection.
In summary, in this embodiment the face or human body region is determined through face detection, and skin color protection is applied only to pixel points within that region whose colors belong to the skin color region. The area that truly needs skin color protection is thus captured more accurately: human skin color is protected, while objects of similar color can still receive saturation enhancement to improve the color effect, effectively improving the subjective quality of the image and user satisfaction.
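Putting the scene lookup and the skin color protection together, a per-pixel factor choice could look like the following sketch, where `scene_factors` maps color-region names to factors for the current scene category and `body_box` is the (left, top, right, bottom) region expanded from the detected face (None when no face was found); all names here are illustrative:

```python
def pixel_factor(region_name, x, y, scene_factors, body_box,
                 skin_factor=1.0):
    """Pick the enhancement factor for one pixel at (x, y)."""
    in_body = (body_box is not None
               and body_box[0] <= x <= body_box[2]
               and body_box[1] <= y <= body_box[3])
    if in_body and region_name == "red":    # red region doubles as skin color
        return skin_factor                  # protect real skin tones
    return scene_factors.get(region_name, 1.0)
```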
In the case where the scene categories set by the system include a person-oriented scene category, if an image is recognized as that category, the saturation enhancement of the background in the image could follow a uniform rule, such as one uniformly set correspondence between color regions and enhancement factors. However, the backgrounds behind people can differ greatly, for example sky, forest, or living room. In an exemplary embodiment of the present disclosure, pursuing refined saturation enhancement, the skin color protection processing of the above embodiment is performed, and in addition the following differentiated processing is performed when determining the enhancement factor used for saturation enhancement of the image:
when determining the scene category of the image, determining the scene categories with the highest and the second-highest matching degrees with the image among the plurality of scene categories set by the system;
when the scene category with the highest matching degree is a person-oriented scene category, the enhancement factor used for saturation enhancement of the image is determined according to the scene category with the second-highest matching degree with the image;
and when the scene category with the highest matching degree is not a person-oriented scene category, the enhancement factor used for saturation enhancement of the image is determined according to the scene category with the highest matching degree with the image.
A person-oriented scene category refers to a scene in which a person occupies a prominent position in the image; it can be identified from factors such as the size, depth, and posture of the person. Alternatively, all images in which a face can be detected may be identified as the person-oriented scene category, or a classification model can be trained with a deep learning algorithm to separate person-oriented scenes from other scenes.
That is to say, in this embodiment, when an image is identified as a person-oriented scene category, besides protecting the pixel points representing real skin color in the face or human body region, the enhancement factors of the other pixel points are not determined from one fixed correspondence between color regions and enhancement factors. Instead, the scene category with the second-highest matching degree is recorded during classification, and the correspondence between color regions and enhancement factors configured for that category is the one searched when determining the enhancement factors of the other pixel points. For example, when the background of the person is dominated by sky, the enhancement factors of background pixel points are determined from the correspondence set for the scene category "sky"; when the background is dominated by forest, they are determined from the correspondence set for "forest". The background differences of person images are thus fully considered, adaptive saturation enhancement of the background in person scenes is realized, and the saturation enhancement effect is further improved.
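A small sketch of this choice, assuming the classifier returns scene categories ranked by matching degree (best first); the "person" label and the helper name are illustrative:

```python
def scene_for_lookup(ranked_scenes):
    """Choose whose color-region table to search for non-skin pixels."""
    best = ranked_scenes[0]
    if best == "person" and len(ranked_scenes) > 1:
        return ranked_scenes[1]   # enhance the background by its own scene
    return best
```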
The scene categories set by the system may vary, and some configurations may not include a person-oriented category. With such a setting, the skin color protection processing can still be performed, and an effect similar to the adaptive background enhancement of this embodiment can also be obtained. For example, in an exemplary embodiment of the present disclosure the scene categories set by the system do not include a person-oriented category; for an image containing a person, the background then strongly influences the classification result, for example an image of a person against a sky-dominated background will very probably be identified as the scene category "sky". When enhancing the saturation of such images, after the skin color protection processing based on face detection, the enhancement factor is determined based on the scene category with the highest matching degree with the image; since that category is identified mainly from the dominant background, the effect of adaptively enhancing the saturation of the person's background is achieved to a certain extent.
The above embodiment searches the correspondence between color regions and enhancement factors set based on the scene category with the second-highest matching degree when the scene category with the highest matching degree is person-oriented, but this is not essential. In an exemplary embodiment of the present disclosure, when the scene category with the highest matching degree is person-oriented, no differentiated processing of the background is performed: the enhancement factor used for saturation enhancement of the image is determined by searching the correspondence between color regions and enhancement factors set based on the person-oriented scene category itself. This simplifies the algorithm and still achieves a saturation enhancement effect.
The embodiments of the present disclosure realize a video color saturation enhancement function and improve adaptability to the color characteristics of various scene categories. By introducing a scene classification function, establishing correspondences between scene categories, color regions, and enhancement factors, and adaptively adjusting the saturation enhancement factor, adaptive saturation enhancement is realized. To avoid the human skin color distortion that saturation enhancement can cause, a face detection technique is introduced to determine the region where the human body is located, and saturation enhancement of pixel points representing real skin color inside that region is masked. This realizes the skin color protection function without affecting the saturation enhancement of other skin-colored objects, so a better saturation enhancement effect can be achieved.
To address problems such as dull video color and poor subjective perception, and to improve adaptability to the color characteristics of various scene categories, an embodiment of the present disclosure provides a video saturation enhancement method for optimizing video color saturation. The embodiment is based on the YUV color space; video data in other formats can be converted into YUV format. The embodiment introduces scene classification to provide scene information and face detection to determine the region where the human body is located, thereby realizing scene-based adaptive saturation enhancement while protecting the real skin color in the human body region. That is, this embodiment is a scene-based adaptive saturation enhancement method with a skin color protection function.
As shown in fig. 2, the method for enhancing video saturation in this embodiment includes:
step 210, judging whether the format of the input video frame is YUV; if not, executing step 220; if it is in YUV format, executing step 230;
Unless otherwise specified, "the video frame" in the flow of this embodiment refers to the video frame currently being processed.
Step 220, converting the format of the video frame into a YUV format;
the main format (or data type) of video data is YUV, and the saturation enhancement of the present embodiment is based on YUV color space. In order to ensure the compatibility with the video data format, this embodiment performs an RGB to YUV conversion operation on a video frame in an RGB format. Video data in other formats are also uniformly converted into YUV format to increase compatibility of various color spaces. In this embodiment, saturation enhancement is performed based on a YUV color space, but the present disclosure is not limited to this, and saturation enhancement may also be performed based on other color spaces such as HSV, XYZ, and for example, when an HSV color space is used as a basis, the HSV color space may be divided to obtain a plurality of color regions, corresponding relationships between the plurality of color regions and enhancement factors are respectively set based on scene categories, and then the determined enhancement factors are substituted into a formula to increase a value of an s (saturation) component representing saturation to implement saturation enhancement.
In step 230, determine if the video frame is the first frame of the video frame sequence? If the frame is the first frame, executing step 240, and if the frame is not the first frame, executing step 250;
the sequence of video frames here may include one or more GOPs, the first frame of which is the first I-frame.
Step 240, determining and storing the scene type of the video frame through the scene classification, and turning to step 260;
This embodiment adopts a machine learning algorithm from Artificial Intelligence (AI) technology and performs scene classification according to the distribution of color information in the content of the video frame, taking the main part of the picture as the reference (the main part can be judged from the proportion of the picture its content occupies). A data set for training the scene classification model is collected and organized on the basis of the Places365 data set.
In one example of the present embodiment, the scene categories set by the system are divided into three classes, namely "outdoor scene", "indoor scene", and "character scene", where "outdoor scene" includes any one or more of the scene categories "sky", "grassland", "forest", "water surface", and "building"; "indoor scene" includes any one or more of the scene categories "home zone", "office zone", and "living zone"; and "character scene" includes the scene category "person". In this example, the scene classification model is constructed using a Faster Region-based Convolutional Neural Network (Faster R-CNN) with VGG16 as the backbone convolutional network. With this algorithm, a ranked score over scene categories is produced after classification. This embodiment records the scene category ranked first, i.e., the scene category with the highest matching degree with the video frame, for looking up enhancement factors.
In other embodiments, the background in a character scene may also receive refined enhancement; in that case the two scene categories with the top scores are recorded, for example "person" as the first-choice scene and "sky" as the second-choice scene (i.e., the scene category with the second-highest matching degree with the video frame), and the two are combined to determine the saturation enhancement parameters for the subsequent color enhancement operation.
Step 250, taking the scene type of the first frame of the video frame sequence as the scene type of the video frame;
in order to avoid calculation redundancy caused by frequent scene classification, the present embodiment performs scene detection only on the first frame of the video frame sequence, records scene information, and subsequently generates corresponding saturation adjustment parameters, such as enhancement factors, according to the acquired scene information.
Step 260, judging whether the scene type of the video frame is 'person'; if the scene type is "person", go to step 270, if the scene type is not "person", go to step 300;
step 270, finding out enhancement factors corresponding to a plurality of color areas configured based on the character scene;
step 280, determining the area where the human body is located through human face detection;
step 290, determining the pixel points in the region where the human body is located whose colors belong to the skin color region (i.e., the skin color sub-region); proceeding to step 310;
step 300, finding out enhancement factors corresponding to a plurality of color areas configured based on scene types (non-characters) of video frames;
step 310, determining an enhancement factor used when the saturation enhancement is performed on the pixel points in the video frame;
and step 320, executing saturation enhancement on the video frame, outputting the video frame with the enhanced saturation, and ending.
The processing of steps 260 to 320 will be described in detail below.
In this embodiment, for the scene category "person", face detection is performed, and in the region where the face is located, enhancement of skin color is not performed or is weakened, realizing protection of real skin color. Such an operation is not performed for other scene categories; therefore step 260 judges whether the scene category of the video frame is "person", and the "person" and non-"person" cases are processed in two separate branches according to the result.
In step 280, a face detection operation is performed and the detected face region is extended to the region where the human body is located. In an example of this embodiment, face detection uses the PyramidBox-Lite network, a lightweight model built on the SSD (Single Shot MultiBox Detector) target detection network; in other examples, other face detection algorithms may be employed. After face detection is completed, the region where the face is located in the video frame is determined; to also avoid skin color distortion on other parts of the human body, such as the limbs and trunk, the face region is expanded to determine the region where the human body is located in the video frame.
In one example, region expansion is performed with reference to the face-to-body proportion, the size and location of the face region, and the size of the video frame, as shown in fig. 3. The coordinates of the region where the face is located are obtained through face detection and expressed by two opposite vertices of a rectangle, (l_f, t_f) and (r_f, b_f); the coordinates of the expanded region where the human body is located are expressed by the other two rectangle vertices, (l_b, t_b) and (r_b, b_b). With w = r_f - l_f and h = b_f - t_f denoting the width and height of the face region, and E_l, E_t, E_r, E_b denoting the left, top, right, and bottom boundaries of the video frame (corresponding to E_l, E_t, E_r, E_b in the figure), the coordinates of the region where the human body is located are calculated as:

l_b = max(l_f - 1.5w, E_l)
t_b = t_f
r_b = min(r_f + 1.5w, E_r)
b_b = min(t_f + 6.5h, E_b)

where l_f, t_f, r_f, b_f, l_b, t_b, r_b, b_b correspond to the same symbols in the figure. Under this expansion, in the width direction the face region is taken as the center and extended by 1.5 times its width to each side, and in the height direction the face region is taken as the top and extended downward by 6.5 times its height, without exceeding the range of the video frame. This is merely exemplary; the expansion may also use different parameters, such as other multiples, or, combined with the head pose, determine the region where the human body is located more accurately.
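A direct transcription of this expansion into Python, assuming pixel-coordinate boxes of the form (left, top, right, bottom) and a frame spanning [0, frame_w] x [0, frame_h]; this is a sketch of the formula, which appears as an image in the original document:

```python
def expand_face_to_body(face_box, frame_w, frame_h):
    """Expand a detected face box to an approximate human-body box."""
    lf, tf, rf, bf = face_box
    w, h = rf - lf, bf - tf                 # face width and height
    lb = max(lf - 1.5 * w, 0.0)             # 1.5x width to the left
    rb = min(rf + 1.5 * w, float(frame_w))  # 1.5x width to the right
    tb = tf                                 # face top is the body top
    bb = min(tf + 6.5 * h, float(frame_h))  # 6.5x height downward
    return lb, tb, rb, bb
```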
After the region where the human body is located is obtained by expansion, step 290 of this embodiment determines the pixel points in that region whose colors belong to the skin color region; these are the pixel points representing real skin color described above, and their set forms the skin color sub-region within the region where the human body is located. In this embodiment, the skin color region is defined as the red region obtained by dividing the color space. Pixel points whose colors belong to the red region are thus treated in two different ways: pixel points inside the region where the human body is located whose colors belong to the red region (i.e., the skin color region) receive skin color protection, with no saturation enhancement or only minimum enhancement; pixel points outside the region where the human body is located whose colors belong to the red region receive no skin color protection, and the enhancement factor used for their saturation enhancement is determined from the correspondence between the red region and the enhancement factor set based on the scene category of the video frame.
In other embodiments of the present disclosure, the skin color region may be defined differently; for example, it may be defined as an elliptical region centered on the coordinate point (106, 154) in the YUV color space, or as the region where the H (Hue) component in the HSV color space lies within 5-40. The present disclosure does not limit the specific definition.
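Both alternative definitions can be written as simple membership tests; in the sketch below the hue range follows the text, while the semi-axis lengths of the ellipse are placeholders, since the text gives only the center point (106, 154):

```python
def is_skin_hsv(hue):
    """Skin test in HSV: H (Hue) component within 5-40."""
    return 5.0 <= hue <= 40.0

def is_skin_yuv(u, v, semi_u=20.0, semi_v=15.0):
    """Skin test in YUV: inside an ellipse centered at (U, V) = (106, 154).

    semi_u and semi_v are illustrative semi-axis lengths, not patent values.
    """
    return ((u - 106.0) / semi_u) ** 2 + ((v - 154.0) / semi_v) ** 2 <= 1.0
```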
In step 270 of this embodiment, the enhancement factors corresponding to the color regions configured based on the character scene must be looked up; in step 300, the enhancement factors corresponding to the color regions configured based on the (non-"person") scene category of the video frame must be looked up. The following describes in detail the division of the color space, the setting of the correspondences between color regions and enhancement factors based on scene categories, and the determination in step 310 of the enhancement factors used when performing saturation enhancement on the pixel points of the video frame.
The distribution of color regions for adaptive saturation enhancement in this embodiment is shown in fig. 4, and this embodiment obtains 7 color regions by using YUV color space division. The YUV color space is represented in the figure as a rectangular plane coordinate system with U and V as two coordinate axes, and the color distribution in the coordinate system is controlled by two components of U and V. In the coordinate system, the YUV color space is divided by a U-axis, and a ray and two straight lines passing through an origin, wherein the ray is represented by a function g (U), the two straight lines are represented by a function h (U) and a function f (U), respectively, and h (U), f (U), and g (U) are respectively represented as follows:
[Formula reproduced only as an image in the source. From claim 6 below, h(u) = -u and g(u) = -2u for u ≥ 0; the definition of f(u) survives only as an image.]
According to the main colors of the divided color regions, this embodiment names the color regions passed through in sequence in the counterclockwise direction from the positive half axis of the U axis: "blue-violet", "purple", "red", "red-green", "green", "green-blue" and "blue", numbered 1 to 7, respectively. This division can be derived from the color distribution and from empirical data accumulated in color matching.
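To make the geometry concrete, the division can be phrased as a small classifier over the (u, v) plane, with u and v centered at zero. This sketch is ours: only h(u) = -u and g(u) = -2u (u ≥ 0) are known from claim 6, so the slope used for f(u) below is a purely hypothetical stand-in.

```python
import math

def classify_uv(u, v, f_slope=0.5):
    """Map a centered (u, v) chroma pair to a color region 1-7.

    Boundaries: the U axis (0 and 180 degrees), the line h(u) = -u
    (135 and 315 degrees), the line f(u) = f_slope * u (hypothetical
    slope), and the ray g(u) = -2u for u >= 0 (about 296.6 degrees).
    Regions are numbered counterclockwise from the positive U half-axis.
    """
    f_angle = math.degrees(math.atan(f_slope))
    g_angle = math.degrees(math.atan2(-2.0, 1.0)) % 360.0
    bounds = sorted([0.0, f_angle, 135.0, 180.0, 180.0 + f_angle,
                     g_angle, 315.0])
    angle = math.degrees(math.atan2(v, u)) % 360.0
    # The region number is the count of boundaries at or below the angle.
    return sum(1 for b in bounds if angle >= b)
```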
In this embodiment, the correspondence between the 7 color regions ("blue-violet", "purple", "red", "red-green", "green", "green-blue" and "blue") and the enhancement factors is set based on the scene categories. Because the enhancement factors corresponding to the 7 color regions are in many cases the same across different scenes, two templates (also referred to as tables) for saturation adjustment are established to save storage resources: one stores the enhancement factors (also referred to as basic enhancement factors) corresponding to the 7 color regions, and the other records, for each scene category, which color regions have their basic enhancement factors adjusted to new enhancement factors. An exemplary first table is shown in table 1 below:
TABLE 1
Region No. | Main color | Enhancement attribute | Basic enhancement factor
1 | Blue → purple | Color transition, medium-intensity enhancement buffer | 1.25
2 | Purple | Medium-intensity enhancement zone | 1.25
3 | Red | Skin tone area, low-intensity enhancement zone | 1.125
4 | Red → green | Color transition, high-intensity enhancement buffer | 1.375
5 | Green | High-intensity enhancement zone | 1.375
6 | Green → blue | Color transition, high-intensity enhancement buffer | 1.375
7 | Blue | Medium-intensity enhancement zone | 1.25
In this embodiment, the different color regions are distinguished by the main color (expressed as a hue) of each divided region. In the table above, region 1, whose main color transitions from blue to purple, is the color region "blue-violet"; region 2, whose main color is purple, is the color region "purple"; region 3, whose main color is red, is the color region "red" (also referred to as the red region); region 4, whose main color transitions from red to green, is the color region "red-green"; region 5, whose main color is green, is the color region "green" (also referred to as the green region); region 6, whose main color transitions from green to blue, is the color region "green-blue"; and region 7, whose main color is blue, is the color region "blue" (also referred to as the blue region).
As the enhancement attributes and basic enhancement factors in table 1 show, different color regions have different basic enhancement factors. This embodiment divides them into three levels (high-intensity, medium-intensity and low-intensity enhancement), with enhancement factors of 1.375, 1.25 and 1.125, respectively. The color regions "blue-violet", "purple" and "blue" belong to the medium-intensity enhancement zone, the color regions "red-green", "green" and "green-blue" belong to the high-intensity enhancement zone, and the color region "red" belongs to the low-intensity enhancement zone, reflecting this embodiment's adaptive saturation enhancement based on color regions.
To save storage space, the enhancement factors corresponding to the three enhancement levels in table 1 need not be recorded in the table directly; they can instead be represented by a 2-bit code, for example 00 for an enhancement factor of 1.375, 01 for 1.25, and 10 for 1.125. The correspondence between the codes and the enhancement factor values can be kept separately.
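As a sketch of that separately kept code-to-value mapping (codes and values as given above; the container names are ours):

```python
# 2-bit codes for the three enhancement levels; the code-to-value
# mapping is stored apart from the table itself.
CODE_TO_FACTOR = {0b00: 1.375, 0b01: 1.25, 0b10: 1.125}

def factor_from_code(code):
    return CODE_TO_FACTOR[code]
```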
An exemplary second table is shown in table 2 below:
TABLE 2
[Table 2 is reproduced only as images in the source. The entries recoverable from the surrounding text are: for "sky", regions 6 and 7 are adjusted to 1.38 and 1.375; for "grass" and "forest", region 5 is adjusted to 1.39; for "person", region 3 is adjusted to the skin color factor 1, effective only inside the region where the human body is located; for "home zone", region 3 is adjusted.]
From tables 1 and 2, the enhancement factor corresponding to each color region can be determined for every scene category. Taking the scene category "sky" as an example, table 2 directly shows that the enhancement factors of region No. 6 and region No. 7 set based on "sky" need to be adjusted, and gives the adjusted values: the enhancement factor corresponding to region No. 6 is 1.38, used when performing saturation enhancement on pixel points of the video frame whose colors belong to region No. 6; the enhancement factor corresponding to region No. 7 is 1.375, used when performing saturation enhancement on pixel points whose colors belong to region No. 7. Comparing the basic enhancement factors of region No. 6 and region No. 7 (the color regions "green-blue" and "blue") in table 1 with the adjusted factors in table 2 shows that the enhancement factors for "green-blue" and "blue" set based on the scene category "sky" are both greater than those set based on other scenes. In a scene with the sky as the main subject, the saturation of green-blue and blue is boosted more strongly, and the improvement in image color is remarkable.
Table 2 also shows that for the scene category "sky" the enhancement factors corresponding to regions No. 1 to No. 5 are not adjusted and must be looked up from table 1; the basic enhancement factors for regions No. 1 to No. 5 under "sky" are therefore 1.25, 1.25, 1.125, 1.375 and 1.375. For pixel points whose colors belong to regions No. 1 to No. 5, the corresponding basic enhancement factor is looked up from table 1 when performing saturation enhancement.
The enhancement factors corresponding to the color regions set based on the other scene categories are obtained by the same table lookups. For example, looking up tables 2 and 1 for the scene category "home zone": the enhancement factor corresponding to the color region "red" is the adjusted value found in table 2, while the enhancement factors corresponding to the other color regions are the basic enhancement factors found in table 1 (1.25, 1.25, 1.375, 1.375, 1.375 and 1.25 for regions 1, 2 and 4 to 7). As another example, the scene categories "grass" and "forest" share one row of table 2, i.e. they adjust the same color region to the same value: for both categories the enhancement factor corresponding to the color region "green" is the adjusted value 1.39 in table 2, and the enhancement factors corresponding to the other color regions are the respective basic enhancement factors in table 1.
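The two-table lookup itself is small. The structure below is our own sketch: the region numbering follows table 1, and only the override entries recoverable from the surrounding text are filled in, since table 2 survives only as images.

```python
# Table 1: basic enhancement factors per color region (regions 1-7).
BASE_FACTORS = {1: 1.25, 2: 1.25, 3: 1.125, 4: 1.375,
                5: 1.375, 6: 1.375, 7: 1.25}

# Table 2 (partial): per-scene adjusted factors recoverable from the text.
SCENE_OVERRIDES = {
    "sky":    {6: 1.38, 7: 1.375},
    "grass":  {5: 1.39},
    "forest": {5: 1.39},
    # "home zone" adjusts region 3 (red); its value is not given here.
}

def enhancement_factor(scene, region):
    """Check the override table first, then fall back to table 1."""
    return SCENE_OVERRIDES.get(scene, {}).get(region, BASE_FACTORS[region])
```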
It is easy to understand that "basic enhancement factor" in table 1 and "adjusted enhancement factor" in table 2 are merely convenient terms for the two-table lookup of this embodiment; both are simply enhancement factors, and the naming does not affect the lookup. The enhancement factors corresponding to the color regions set based on the scene categories can equally well be stored differently, for example in a single table directly listing the enhancement factors for all scene categories, or in N tables listing the enhancement factors for N scene categories respectively; in such arrangements no adjustment step is involved.
table 1 and table 2 of this embodiment also show 8 setting manners for setting the corresponding relationship between the color regions and the enhancement factors based on the scene types in the foregoing embodiment, for example: the plurality of color regions comprise 'green-blue', the enhancement factor corresponding to the color region 'green-blue' set based on the scene category 'sky' is larger than the enhancement factor corresponding to the color region 'green-blue' set based on other scenes, and the like; the 8 arrangements listed in the foregoing embodiments are all embodied in the arrangement of the present embodiment. However, it is also possible for the present disclosure to use only some of the 8 arrangements described above.
In particular, as table 2 shows, for the scene category "person" the adjustment of region No. 3, i.e. the red region (the skin color region in this embodiment), is conditional, unlike the adjustments of the color regions under the other scene categories. In this embodiment, the enhancement factor adjustment is not applied to all pixel points whose colors belong to the red region; it takes effect only for pixel points inside the region where the human body is located. In other words, only pixel points inside that region whose colors belong to the red region use the adjusted enhancement factor, namely the skin color factor 1, thereby implementing skin color protection. Pixel points outside the region where the human body is located whose colors belong to the red region are still enhanced with the basic enhancement factor 1.125 corresponding to the red region. In other embodiments, when the configured skin color region does not coincide with the red region, the "color region to be adjusted" and "adjusted enhancement factor" entries for the scene category "person" in table 2 may simply be left blank, and the value of the skin color factor recorded separately.
In this embodiment, the color region to be adjusted under each scene category in table 2 is the visually salient color region of that category. For example, under the scene category "sky" the color regions "green-blue" and "blue" are the most visually sensitive, so the enhancement factors corresponding to these color regions are adjusted upward (the skin color factor being the exception), while the other color regions keep their basic enhancement factors; this makes the saturation enhancement effect more pronounced.
Step 310 determines the enhancement factor used when performing saturation enhancement on each pixel point of the video frame. When the scene category is "person", pixel points inside the region where the human body is located whose colors belong to the skin color region (the red region in this embodiment) use an enhancement factor of 1. For the remaining pixel points, i.e. pixel points inside the region where the human body is located whose colors do not belong to the skin color region, and all pixel points outside that region, the color region to which each pixel's color belongs is determined first, and the enhancement factor corresponding to that color region, as determined in the previous step, is used for its saturation enhancement.
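Putting the skin color protection rule together with the table lookup, the per-pixel selection might read as follows; this reuses the hypothetical enhancement_factor helper sketched earlier.

```python
def pixel_factor(scene, region, inside_body, skin_factor=1.0):
    """Enhancement factor for one pixel when the scene category may
    be "person": skin-colored pixels (region 3 here) inside the
    human-body region get the skin color factor; all other pixels
    use the scene/region table lookup."""
    if scene == "person" and inside_body and region == 3:
        return skin_factor
    return enhancement_factor(scene, region)
```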
The enhancement factors set in tables 1 and 2 of this embodiment (basic or adjusted) indicate the proportion of saturation enhancement. When performing saturation enhancement on the video frame in step 320, the enhanced U component value and V component value are obtained by the following formula:
(ue, ve) = (u + α×u, v + α×v)
where ue and ve on the left side of the equation are the enhanced U component value and V component value, u and v on the right side are the original U component value and V component value, and α is the enhancement factor. This embodiment thus boosts the U component and the V component in equal proportion.
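A direct transcription of the formula, with one caveat that is ours rather than the patent's: the chroma values are treated as signed (centered) quantities, so unsigned 8-bit YUV data would need the usual 128 offset removed first and restored (with clipping) afterwards.

```python
def enhance_uv(u, v, alpha):
    """Equal-proportion chroma boost: (u_e, v_e) = (u + a*u, v + a*v).
    u and v are assumed to be signed (centered) chroma values."""
    return u + alpha * u, v + alpha * v
```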
In summary, the video saturation enhancement method provided by this embodiment takes the YUV color space as its basis, divides it into 7 color regions according to color characteristics, and sets a saturation enhancement proportion for each color region. It further introduces an AI-based scene classification function and AI face detection, used respectively to classify 9 scene categories and to locate faces, establishes the correspondence among the 7 color regions, the scene categories and the enhancement factors, and thereby realizes adaptive saturation adjustment based on scene category and color region. By masking the saturation enhancement of pixel points representing real skin color inside the region where the human body is located, it avoids the skin color distortion that saturation enhancement would otherwise cause, realizing a skin color protection function.
To verify the effectiveness of this embodiment, a standard data set and a mobile phone real-shot data set were selected; a clear saturation improvement could be observed. Person-scene videos were additionally tested, and the results show that human skin color is well protected, with no obvious distortion. In the subjective evaluation, 94% of the testers perceived that saturation was clearly improved and colors were more vivid, while skin color showed no obvious distortion.
The video saturation enhancement method can be widely applied to video calls, video conferences, and the many scenarios of shooting and watching video on mobile phones, improving video quality.
In the embodiment shown in fig. 2, for a video frame of the "person" scene category, apart from skin color protection the enhancement factors corresponding to the plurality of color regions are all taken from table 1; they are not adjusted according to the person's background. In another embodiment of the present disclosure, for a video frame of the "person" scene category, the enhancement factors corresponding to the plurality of color regions are additionally adjusted according to the person's background. Compared with the embodiment of fig. 2, the difference is as follows. When the scene category of the video frame is determined by scene classification, the top two ranked scene categories, i.e. the two categories with the highest and second highest matching degree with the video frame, are both recorded; here the category with the highest matching degree is "person", and the category with the second highest matching degree is assumed to be "sky". When determining the enhancement factors for saturation enhancement of the video frame, besides determining the pixel points representing real skin color and their enhancement factor via skin color protection, the enhancement factors for the remaining pixel points are looked up not from the color regions set based on the scene category "person" but from those set based on "sky". In this way the enhancement factors corresponding to the plurality of color regions are adjusted according to the person's background, achieving a finer adjustment and improving the saturation enhancement effect.
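The selection rule of this variant is compact enough to state directly; the sketch below is our paraphrase of it.

```python
def scene_for_lookup(top1, top2):
    """When the best-matching category is "person", the second-best
    category (the person's background) drives the color region lookup
    for non-skin pixels; otherwise the best match is used."""
    return top2 if top1 == "person" else top1
```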
An embodiment of the present disclosure further provides an image saturation enhancement system, as shown in fig. 5, including:
a scene type obtaining module 10 configured to obtain a scene type of the image;
an enhancement factor determining module 20, configured to determine an enhancement factor used when performing saturation enhancement on the image according to the scene type of the image;
a saturation enhancement module 30 configured to perform saturation enhancement on the image according to the enhancement factor.
In an exemplary embodiment of the present disclosure,
the enhancement factor determining module 20 determines, according to the scene category of the image, an enhancement factor used when performing saturation enhancement on the image, including:
searching the corresponding relation between a plurality of color areas and enhancement factors set based on the scene categories of the image, wherein the color areas are obtained by dividing a color space, and the corresponding relation between the color areas and the enhancement factors is respectively set based on a plurality of scene categories set by a system;
and determining enhancement factors corresponding to the multiple color regions when the image is subjected to saturation enhancement according to the search result.
In an exemplary embodiment of the present disclosure,
the saturation enhancement module 30 performs saturation enhancement on the image according to the enhancement factor, including: and for the pixel points in the image, according to the enhancement factors corresponding to the color regions to which the colors of the pixel points belong, performing saturation enhancement on the colors of the pixel points.
In an exemplary embodiment of the present disclosure, the image saturation enhancement system further includes:
the skin color protection module is set to set an enhancement factor used when the saturation enhancement is performed on a pixel point representing the real skin color in the image as a skin color factor when the scene type of the image belongs to the set scene type needing skin color protection and the face detection determines that the face exists in the image; the pixel points representing the real skin color comprise pixel points which are positioned in the area where the human face is positioned and the color of which belongs to a skin color area, the skin color factor is an enhancement factor which enables the saturation degree to be unchanged or the minimum enhancement factor in the enhancement factors set by all systems, and the set scene types needing skin color protection are part or all of a plurality of scene types set by the systems.
The scene category obtaining module, the enhancement factor determining module, the saturation enhancement module and the skin color protection module can implement the corresponding processing of the saturation enhancement method according to any embodiment of the present disclosure, which is not repeated here.
The present disclosure further provides an image saturation enhancement apparatus, as shown in fig. 6, including a memory 50 and a processor 60, where the memory 50 stores a computer program, and the processor 60, when executing the computer program, can implement the image saturation enhancement method according to any embodiment of the present disclosure.
According to the image saturation enhancement system and device disclosed by the embodiment of the disclosure, by introducing the scene classification function and adjusting the saturation parameter based on the scene category of the image, the adaptive saturation enhancement based on the scene is realized, and the subjective quality of the image can be improved. In addition, the region where the face is located can be determined through face detection, and the protection of real skin color is achieved.
A general video compression process is shown in fig. 7. The encoding end includes video acquisition, video preprocessing, video encoding and similar processes; the decoding end includes video decoding, video post-processing, and display and playback. Data is transmitted between the encoding end and the decoding end through code streams.
When applied to this video compression process, the video saturation enhancement method provided by the embodiments of the present disclosure can be used on its own in the video preprocessing stage of the encoding end, to improve the color effect of the captured video frames, or on its own in the video post-processing stage of the decoding end, to improve the color effect of the decoded video frames. Considering the losses of the code stream transmission process and the differing saturation requirements of the encoding end and the decoding end, video saturation enhancement can also be performed at both stages.
When scene-based adaptive saturation enhancement is performed at the encoding end, the scene category of a video frame can be passed to the video encoder as attribute information of the frame. The video encoder can use the scene category for adaptive encoding, and can also encode it into the code stream sent to the decoding end. The video decoder can then perform adaptive decoding after parsing the scene category from the code stream, and when the decoding end performs the scene-based adaptive saturation enhancement of the embodiments of the present disclosure, it can use the decoded scene category directly, without performing scene classification on the video frame itself, which simplifies the processing at the decoding end.
An embodiment of the present disclosure further provides a video encoding processing method, as shown in fig. 8, including:
step 510, performing saturation enhancement on a video frame from a data source according to an image saturation enhancement method according to any embodiment of the present disclosure, where the method includes: acquiring the scene type of the video frame;
the image saturation enhancement method according to any embodiment of the present disclosure is a scene-based self-image saturation enhancement method, and is used for processing a video frame, where a scene type of the video frame may be obtained by performing scene classification on the video frame. The method for performing scene classification in this step may adopt any one of the scene classification methods in the foregoing embodiments, but is not limited thereto.
And 520, encoding the video frame with the enhanced saturation to generate a video code stream.
In an exemplary embodiment of the present disclosure, the encoding the video frame after the saturation enhancement to generate a video bitstream includes: and writing the scene type of the video frame into a video code stream. For example, the scene category may be encoded into the codestream as attribute information of the video frame.
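As a toy illustration of carrying the scene category alongside each encoded frame: the framing below is entirely ours (a real encoder would carry the category as codec-level metadata rather than an ad hoc one-byte header), and the list simply names the 9 scene categories of this embodiment.

```python
import struct

SCENES = ["sky", "grassland", "forest", "water surface", "building",
          "home zone", "office zone", "activity zone", "person"]

def pack_frame(scene, encoded_frame):
    """Prepend a 1-byte scene index to an encoded frame payload."""
    return struct.pack("B", SCENES.index(scene)) + encoded_frame

def unpack_frame(blob):
    """Recover (scene, payload) at the decoding end."""
    (idx,) = struct.unpack_from("B", blob)
    return SCENES[idx], blob[1:]
```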
An embodiment of the present disclosure further provides a video decoding processing method, as shown in fig. 9, including:
step 610, decoding the video code stream to obtain a decoded video frame;
step 620, performing saturation enhancement on the decoded video frame according to the image saturation enhancement method according to any embodiment of the present disclosure.
In an exemplary embodiment of the present disclosure, the video code stream carries information of a scene type of a video frame, and the decoding of the video code stream can also obtain the information of the scene type of the video frame; accordingly, when the saturation degree of the decoded video frame is enhanced, the scene category of the video frame is obtained from the decoded information. In another embodiment, the video code stream does not carry information of the scene category of the video frame, and the scene category of the video frame is obtained by performing scene classification on the video frame.
An embodiment of the present disclosure further provides a video encoding and decoding system, as shown in fig. 10, including an encoding end device and a decoding end device.
The encoding end device comprises a data source 201 and a video encoding processing device 200, and the decoding end device comprises a video decoding processing device 400 and a display 405. The data source 201 may be, among other things, a video capture device (e.g., a video camera), an archive containing previously captured data, a feed interface to receive data from a content provider, a computer graphics system for generating data, or a combination of these sources. The display 405 may be a liquid crystal display, a plasma display, an organic light emitting diode display, or other type of display device. The video encoding processing device 200 and the video decoding processing device 400 can be implemented using any one of the following circuits or any combination of the following circuits: one or more microprocessors, digital signal processors, application specific integrated circuits, field programmable gate arrays, discrete logic, hardware. If implemented in part in software, the instructions for the software may be stored in a suitable non-volatile computer-readable storage medium and executed in hardware using one or more processors to implement the respective processes.
The video encoding processing device 200 includes:
a first saturation enhancement device 203 configured to perform saturation enhancement on a video frame from a data source according to the image saturation enhancement method according to any embodiment of the present disclosure, wherein a scene type of the video frame is obtained by performing scene classification on the video frame;
a video encoder 205 configured to encode the video frame after saturation enhancement to generate a video code stream;
in an exemplary embodiment of the present disclosure, the first saturation enhancement means 203 is further arranged to output the scene category of the video frame to a video encoder 205; the video encoder 205 encodes the video frame with enhanced saturation to generate a video code stream, including: and writing the scene type of the video frame into a video code stream.
The video decoding processing apparatus 400 includes:
a video decoder 401 configured to decode the video code stream to obtain a decoded video frame;
a second saturation enhancement device 403, configured to perform saturation enhancement on the decoded video frame according to the image saturation enhancement method according to any embodiment of the present disclosure.
In an exemplary embodiment of the present disclosure, the video code stream carries information of a scene type of a video frame, and the video decoder 401 is further configured to output the scene type of the video frame obtained by decoding the video code stream to the second saturation enhancement device 403; when the second saturation enhancement device 403 performs saturation enhancement on the decoded video frame, the scene type of the video frame is obtained by decoding. In another embodiment, the video code stream does not carry information of the scene category, and the second saturation enhancement apparatus 403 obtains the scene category of the video frame by performing scene classification on the video frame.
An embodiment of the present disclosure further provides a video encoding and decoding system, which includes the video encoding and processing apparatus according to any embodiment of the present disclosure and the video decoding and processing apparatus according to any embodiment of the present disclosure.
An embodiment of the present disclosure further provides a code stream, where the code stream is generated according to a video coding processing method for writing scene category information into a video code stream according to the embodiment of the present disclosure, and the code stream includes scene category information of a video frame.
The video encoding processing method and apparatus, the video decoding processing method and apparatus, and the video encoding and decoding system of the embodiments of the present disclosure adopt the scene-based adaptive image saturation enhancement method of the embodiments of the present disclosure, and can therefore improve the color effect of video frames. Moreover, the scene category obtained by classifying the video frame at the encoding end is written into the code stream and then used for video saturation enhancement at the decoding end, so that the decoding end does not need to perform scene classification itself, simplifying the operation of the decoding-end device.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program is configured to implement the image saturation enhancement method according to any embodiment of the present disclosure when executed by a processor.
In any one or more of the exemplary embodiments described above, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may comprise computer-readable storage media corresponding to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, such as according to a communication protocol. In this manner, the computer-readable medium may generally correspond to a non-transitory tangible computer-readable storage medium or a communication medium such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this disclosure. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection may be termed a computer-readable medium, and if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, for example, the coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory (transitory) media, but are instead directed to non-transitory tangible storage media. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk or blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of the embodiments of the present disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). Various components, modules, or units are described in embodiments of the disclosure to emphasize functional aspects of devices configured to perform the described techniques, but do not necessarily require realization by different hardware units. Rather, as noted above, the various units may be combined in a codec hardware unit or provided by a collection of interoperating hardware units (including one or more processors as noted above) in conjunction with suitable software and/or firmware.

Claims (23)

1. An image saturation enhancement method, comprising:
acquiring scene types of images;
determining an enhancement factor used when the image is subjected to saturation enhancement according to the scene category of the image;
and performing saturation enhancement on the image according to the enhancement factor.
2. The method of claim 1, wherein:
the determining, according to the scene category of the image, an enhancement factor used when performing saturation enhancement on the image includes:
searching the corresponding relation between a plurality of color areas and enhancement factors set based on the scene categories of the image, wherein the color areas are obtained by dividing a color space, and the corresponding relation between the color areas and the enhancement factors is respectively set based on a plurality of scene categories set by a system;
and determining enhancement factors corresponding to the multiple color regions when the image is subjected to saturation enhancement according to the search result.
3. The method of claim 2, wherein:
the performing saturation enhancement on the image according to the enhancement factor includes: and for the pixel points in the image, according to the enhancement factors corresponding to the color regions to which the colors of the pixel points belong, performing saturation enhancement on the colors of the pixel points.
4. The method of claim 2, wherein:
the scene categories are classified into at least two categories, namely "outdoor scene" and "indoor scene", wherein the "outdoor scene" includes any one or more of the following scene categories: "sky", "grassland", "forest", "water" and "building"; "indoor scene" includes any one or more of the following scene categories: "family zone", "office zone", and "living zone"; or
The scene categories are classified into at least three categories, namely "outdoor scene", "indoor scene" and "person scene", wherein "outdoor scene" includes any one or more of the following scene categories: "sky", "grassland", "forest", "water surface" and "building"; "indoor scene" includes any one or more of the following scene categories: "home zone", "office zone" and "activity zone"; "person scene" includes the scene category: "person".
5. The method of claim 4, wherein:
the corresponding relations between the color regions and the enhancement factors are respectively set based on a plurality of set scene types, and the setting modes comprise any one or more of the following setting modes:
the multiple color regions comprise 'green-blue', and the enhancement factors corresponding to the color regions 'green-blue' set based on the scene category 'sky' are larger than the enhancement factors corresponding to the color regions 'green-blue' set based on other scenes;
the plurality of color regions comprise 'blue', and the enhancement factor corresponding to the color region 'blue' set based on the scene category 'sky' is larger than the enhancement factor corresponding to the color region 'blue' set based on other scenes;
the plurality of color regions comprise 'blue', and the enhancement factor corresponding to the color region 'blue' set on the basis of the scene category 'water surface' and/or 'building' is smaller than the enhancement factor corresponding to the color region 'blue' set on the basis of the scene category 'sky' and larger than the enhancement factor corresponding to the color region 'blue' set on the basis of other scene categories except 'sky';
the plurality of color regions comprise 'green', and the enhancement factor corresponding to the color region 'green' set based on the scene category 'grassland' and/or 'forest' is larger than the enhancement factor corresponding to the color region 'green' set based on other scenes;
the plurality of color areas comprise 'blue-violet', and the enhancement factors corresponding to the color areas 'blue-violet' set on the basis of the scene category 'water surface' and/or 'building' are larger than the enhancement factors corresponding to the color areas 'blue-violet' set on the basis of other scenes;
the plurality of color regions include "red", and an enhancement factor corresponding to a color region "red" set based on one or more of scene categories "home region", "office region", and "active region" is larger than an enhancement factor corresponding to a color region "red" set based on other scenes;
the multiple color regions comprise red and green, and the enhancement factors corresponding to the red and green color regions set based on the office region of the scene category are larger than the enhancement factors corresponding to the red and green color regions set based on other scenes;
the plurality of color regions include "purple", and the enhancement factor corresponding to the color region "purple" set based on the scene category "active region" is larger than the enhancement factor corresponding to the color region "purple" set based on other scenes.
6. The method of claim 2, wherein:
the plurality of color regions are obtained by dividing a color space, comprising: dividing the YUV color space in the following manner to obtain the plurality of color regions:
in a rectangular plane coordinate system composed of the U axis and the V axis, the YUV color space is divided into 7 color regions by the U axis, a ray passing through the origin, and two straight lines passing through the origin; the color regions passed through in sequence in the counterclockwise direction from the positive half axis of the U axis are: "blue-violet", "purple", "red", "red-green", "green", "green-blue" and "blue"; wherein the ray is defined by the function g(u) = -2u, u ≥ 0, and the two straight lines are defined by the functions h(u) = -u and f(u) = [formula reproduced only as an image in the source].
7. The method of claim 2, 4 or 5, wherein:
the method further comprises the following skin tone protection process:
when the scene type of the image belongs to a set scene type which needs skin color protection and a face is determined to exist in the image through face detection, an enhancement factor used when the saturation enhancement is carried out on a pixel point representing real skin color in the image is set as a skin color factor;
the pixel points representing the real skin color comprise pixel points which are positioned in the area where the human face is positioned and the color of which belongs to a skin color area, the skin color factor is an enhancement factor which enables the saturation degree to be unchanged or the minimum enhancement factor in the enhancement factors set by all systems, and the set scene types needing skin color protection are part or all of a plurality of scene types set by the systems.
8. The method of claim 7, wherein:
the pixel points representing the real skin color also comprise pixel points which are positioned in other areas of the human body and the colors of which belong to skin color areas; the other regions of the human body refer to other regions in the region of the human body except the region of the human face, and the region of the human body is obtained by calculation according to the position and the size of the region of the human face and the size of the image.
9. The method of claim 7, wherein:
the skin color region is set as a red region in a color space or a partial region in the red region.
10. The method of claim 7, wherein:
the performing saturation enhancement on the image according to the enhancement factor includes:
carrying out saturation enhancement on pixel points representing real skin colors in the image according to the skin color factors;
and performing saturation enhancement on other pixel points except the pixel point representing the real skin color in the image according to enhancement factors corresponding to color regions to which the colors of the other pixel points belong.
11. The method of claim 10, wherein:
the plurality of scene categories set by the system include a scene category with a person as the main subject;
the determining the scene category of the image comprises: determining, among the plurality of scene categories set by the system, the scene categories with the highest and the second highest matching degree with the image;
when the scene category with the highest matching degree is the scene category with a person as the main subject, the determining, according to the scene category of the image, an enhancement factor used when performing saturation enhancement on the image comprises: determining the enhancement factor used when performing saturation enhancement on the image according to the scene category with the second highest matching degree with the image;
when the scene category with the highest matching degree is not the scene category with a person as the main subject, the determining, according to the scene category of the image, an enhancement factor used when performing saturation enhancement on the image comprises: determining the enhancement factor used when performing saturation enhancement on the image according to the scene category with the highest matching degree with the image.
12. The method of claim 2, 3 or 10, wherein:
the determining, according to the scene category of the image, an enhancement factor used when performing saturation enhancement on the image includes: and determining an enhancement factor used when the image is subjected to saturation enhancement according to the scene category with the highest matching degree with the image.
13. The method of claim 1, wherein:
the image is a video frame.
14. The method of claim 13, wherein:
the acquiring the scene category of the image comprises:
acquiring a scene type of an externally input video frame; or
Analyzing a video frame code stream to obtain a scene type of the video frame; or
The method comprises the steps of determining a scene category of a first video frame in a video frame sequence according to a set scene classification algorithm, and determining scene categories of other video frames except the first video frame in the video frame sequence as the scene category of the first video frame, wherein the video frame sequence comprises one or more picture groups.
15. The method of claim 1, 3 or 10, wherein:
the enhancement factor is an enhancement proportion, and the saturation enhancement of the image according to the enhancement factor comprises the following steps: and carrying out saturation enhancement on pixel points in the image according to the following two formulas:
Ug = U + αU;
Vg = V + αV;
wherein U represents the original U component value of the pixel point, Ug represents the U component value of the enhanced pixel point, V represents the original V component value of the pixel point, Vg represents the V component value of the enhanced pixel point, α is the enhancement factor, and the value range of α is [1, 1.5].
16. An image saturation enhancement apparatus comprising a memory and a processor, wherein the memory holds a computer program, and the processor is capable of implementing the image saturation enhancement method according to any one of claims 1 to 15 when executing the computer program.
17. A video encoding processing method, comprising:
performing saturation enhancement on a video frame from a data source according to the image saturation enhancement method of any one of claims 1 to 15, wherein the acquiring the scene category of the image comprises: acquiring the scene category of the video frame;
encoding the video frame after saturation enhancement to generate a video code stream, wherein the encoding comprises: and writing the scene type of the video frame into a video code stream.
18. A video encoding processing apparatus, comprising:
first saturation enhancement means arranged to perform saturation enhancement on a video frame from a data source according to the image saturation enhancement method of any one of claims 1 to 15, wherein the scene category of the video frame is obtained by performing scene classification on the video frame;
and the video encoder is used for encoding the video frame with the enhanced saturation degree to generate a video code stream, wherein the video code stream is written with the scene type of the video frame.
19. A video decoding processing method, comprising:
decoding a video code stream to obtain a decoded video frame and a scene type of the video frame;
the image saturation enhancement method according to any one of claims 1 to 15, wherein the scene type of the video frame is obtained by decoding.
20. A video decoding processing apparatus, comprising:
the video decoder is arranged for decoding the video code stream to obtain a decoded video frame and the scene type of the video frame;
second saturation enhancement means arranged to perform saturation enhancement on said decoded video frame according to the image saturation enhancement method as claimed in any one of claims 1 to 15, wherein a scene type of said video frame is obtained by decoding.
21. A video coding/decoding system comprising the video coding processing apparatus according to claim 18 and the video decoding processing apparatus according to claim 20.
22. A codestream, characterized in that it is generated according to the video coding processing method of claim 17, said codestream containing information of the scene type of the video frames.
23. A non-transitory computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the image saturation enhancement method of any one of claims 1 to 15, or the video encoding processing method of claim 17, or the video decoding processing method of claim 19.
CN202111394213.3A 2021-11-23 2021-11-23 Image saturation enhancement method and coding and decoding processing method, device and system Pending CN114125414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111394213.3A CN114125414A (en) 2021-11-23 2021-11-23 Image saturation enhancement method and coding and decoding processing method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111394213.3A CN114125414A (en) 2021-11-23 2021-11-23 Image saturation enhancement method and coding and decoding processing method, device and system

Publications (1)

Publication Number Publication Date
CN114125414A true CN114125414A (en) 2022-03-01

Family

ID=80440121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111394213.3A Pending CN114125414A (en) 2021-11-23 2021-11-23 Image saturation enhancement method and coding and decoding processing method, device and system

Country Status (1)

Country Link
CN (1) CN114125414A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100073362A1 (en) * 2008-09-23 2010-03-25 Ike Ikizyan Method And System For Scene Adaptive Dynamic 3-D Color Management
CN103369228A (en) * 2012-03-26 2013-10-23 百度在线网络技术(北京)有限公司 Camera setting method and device, and camera
CN106101547A (en) * 2016-07-06 2016-11-09 北京奇虎科技有限公司 The processing method of a kind of view data, device and mobile terminal
CN110447051A (en) * 2017-03-20 2019-11-12 杜比实验室特许公司 The contrast and coloration of reference scene are kept perceptually
CN111127476A (en) * 2019-12-06 2020-05-08 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium
CN113610720A (en) * 2021-07-23 2021-11-05 Oppo广东移动通信有限公司 Video denoising method and device, computer readable medium and electronic device


Similar Documents

Publication Publication Date Title
Fang et al. A video saliency detection model in compressed domain
US20190156487A1 (en) Automated generation of pre-labeled training data
CN109934776B (en) Model generation method, video enhancement method, device and computer-readable storage medium
US8199165B2 (en) Methods and systems for object segmentation in digital images
US10607324B2 (en) Image highlight detection and rendering
Reinhard et al. Colour spaces for colour transfer
CN108229234B (en) Scannable image generation method fusing digital coding
CN108830208A (en) Method for processing video frequency and device, electronic equipment, computer readable storage medium
Peng et al. Image haze removal using airlight white correction, local light filter, and aerial perspective prior
KR20180002880A (en) Electronic device performing image conversion and its method
JP2009512283A (en) Automatic region of interest detection based on video frame motion
WO2022227308A1 (en) Image processing method and apparatus, device, and medium
CN1450796A (en) Method and apparatus for detecting and/or tracking image or color area of image sequence
KR20120107429A (en) Zone-based tone mapping
CN108875619A (en) Method for processing video frequency and device, electronic equipment, computer readable storage medium
CN103440674B (en) A kind of rapid generation of digital picture wax crayon specially good effect
EP3051488A1 (en) A method and apparatus for inverse-tone mapping a picture
CN115619683B (en) Image processing method, apparatus, device, storage medium, and computer program product
Huang et al. Low light image enhancement network with attention mechanism and retinex model
US20230127009A1 (en) Joint objects image signal processing in temporal domain
US7885458B1 (en) Illuminant estimation using gamut mapping and scene classification
CN110310231B (en) Device and method for converting first dynamic range video into second dynamic range video
CN111079864A (en) Short video classification method and system based on optimized video key frame extraction
US8498496B2 (en) Method and apparatus for filtering red and/or golden eye artifacts
CN116188296A (en) Image optimization method and device, equipment, medium and product thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination