CN113225620A - Video processing method and video processing device - Google Patents

Video processing method and video processing device

Info

Publication number
CN113225620A
Authority
CN
China
Prior art keywords
scaling
pixel value
value
current frame
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110475702.5A
Other languages
Chinese (zh)
Other versions
CN113225620B (en)
Inventor
李福林
戴宇荣
于冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110475702.5A
Publication of CN113225620A
Application granted
Publication of CN113225620B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed

Abstract

The present disclosure provides a video processing method and a video processing apparatus. The video processing method may include: acquiring a video to be processed; determining a scaling for each pixel value of a current frame of the video to be processed; correcting the scaling of each pixel value of the current frame based on scaling information of the previous frame; and obtaining a dimmed video frame by applying the corrected scalings to the pixel values of the current frame. The method and apparatus can adjust the brightness distribution of a video picture while the video is being recorded and keep the original picture colors undistorted.

Description

Video processing method and video processing device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a video processing method and a video processing apparatus for dimming a video frame.
Background
Recently, multimedia technology, and video technology in particular, has greatly enriched the lives of people in modern society. It meets a wide range of needs, is an essential tool in social development, production, and daily life, and is a very common means of recording and sharing life. Terminal devices such as digital cameras and smartphones are important tools for video recording, and the development of social and entertainment software has made smartphones even more flexible. In particular, the growth of the live broadcast industry has created a huge demand for live video, and live scenarios place extremely high requirements on real-time performance and low power consumption. When such terminal devices are used for real-time video recording, the influence of ambient light on video quality cannot be avoided. When the recording or live broadcast environment is poor, for example dim and unclear, the video recorded in real time suffers from quality problems such as under-exposed, unclear pictures, so that the result cannot meet the requirements of the recorder or the viewer.
Real-time video recording requires extremely low delay and extremely low power consumption; otherwise the picture stutters with large delay, or device heating affects the recording. Solving the problems caused by the recording environment therefore places higher requirements on real-time processing methods.
Disclosure of Invention
The present disclosure provides a video processing method and a video processing apparatus to solve at least the above-mentioned problems.
According to a first aspect of the embodiments of the present disclosure, there is provided a video processing method, which may include the steps of: acquiring a video to be processed; determining the scaling of each pixel value of the current frame of the video to be processed; correcting a scaling of each pixel value of the current frame based on scaling information of a previous frame of the current frame; obtaining a dimmed video frame by applying a corrected scaling to pixel values of the current frame.
Alternatively, the step of correcting the scaling of each pixel value of the current frame based on the scaling information of the previous frame of the current frame may include: determining a maximum scaling and a first scaling average of the current frame from the scaling of each pixel value of the current frame, and calculating a first correction parameter for the scaling of each pixel value of the current frame according to the maximum scaling and the first scaling average; adjusting the first correction parameter based on a second correction parameter included in the scaling information of the previous frame; and correcting the scaling of each pixel value of the current frame by using the adjusted first correction parameter.
Alternatively, the step of adjusting the first correction parameter based on the second correction parameter included in the scaling information of the previous frame may include: calculating a first relative difference between the first correction parameter and the second correction parameter; in a case where the second correction parameter is greater than or equal to the first correction parameter, obtaining the first correction parameter by reducing the second correction parameter by a value corresponding to the first relative difference; and in a case where the second correction parameter is smaller than the first correction parameter, obtaining the first correction parameter by increasing the second correction parameter by a value corresponding to the first relative difference.
Optionally, the step of determining the maximum scaling and the first scaling average of the current frame from the scaling of each pixel value of the current frame may include: determining a maximum pixel value of the current frame; determining a maximum pixel ratio of the current frame based on the maximum pixel value; and determining the maximum pixel ratio as the first scaling average in a case where the first scaling average is less than or equal to the maximum pixel ratio.
Optionally, the step of calculating the first correction parameter for the scaling of each pixel value of the current frame from the maximum scaling and the first scaling average may include: adjusting the first scaling average based on a second scaling average included in the scaling information of the previous frame; and calculating the first correction parameter according to the maximum scaling and the adjusted first scaling average.
Alternatively, the step of adjusting the first scaling average based on the second scaling average included in the scaling information of the previous frame may include: calculating a second relative difference between the first scaling average and the second scaling average; in a case where the second scaling average is greater than or equal to the first scaling average, obtaining the first scaling average by reducing the second scaling average by a value corresponding to the second relative difference; and in a case where the second scaling average is smaller than the first scaling average, obtaining the first scaling average by increasing the second scaling average by a value corresponding to the second relative difference.
Optionally, the step of determining the scaling of each pixel value of the current frame of the video to be processed may include: determining a pixel value probability distribution of the current frame; determining a pixel value stretching parameter of the current frame according to the pixel value probability distribution; and determining a scaling for each pixel value of the current frame based on the pixel value stretching parameter.
Optionally, the step of determining the pixel value stretching parameter of the current frame according to the pixel value probability distribution may include: determining a first pixel value group for performing light supplement processing on the current frame according to the pixel value probability distribution; and determining the pixel value stretching parameter for the current frame based on the size of the first pixel value group.
Optionally, the step of determining, according to the pixel value probability distribution, the first pixel value group for performing light supplement processing on the current frame may include: for each pixel value of the current frame, if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold, selecting the pixel value as an element of the first pixel value group; and if the sum of the probability residual for the current pixel value and the current pixel value probability is less than or equal to the threshold, not selecting the pixel value as an element of the first pixel value group.
Optionally, the step of determining, according to the pixel value probability distribution, the first pixel value group for performing light supplement processing on the current frame may further include: if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than the threshold, determining the probability residual for the next pixel value based on the probability residual, the current pixel value probability, and the threshold; and if the sum of the probability residual for the current pixel value and the current pixel value probability is less than or equal to the threshold, determining that sum as the probability residual for the next pixel value.
Optionally, the step of determining the scaling for each pixel value of the current frame based on the pixel value stretching parameter may include: sorting the pixel values in the first pixel value group in ascending order; mapping the pixel value stretching parameter with the sequence number of each sorted pixel value in the first pixel value group to obtain a second pixel value group; and obtaining the scaling for each pixel value based on the ratio of each element value in the second pixel value group to the corresponding pixel value in the first pixel value group.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing apparatus, which may include: the acquisition module is configured to acquire a video to be processed; a determining module configured to determine a scaling of each pixel value of a current frame of the video to be processed; a correction module configured to correct a scaling of each pixel value of the current frame based on scaling information of a previous frame of the current frame; a processing module configured to obtain a dimmed video frame by applying the corrected scaling to pixel values of the current frame.
Optionally, the correction module may be configured to: determine a maximum scaling and a first scaling average of the current frame from the scaling of each pixel value of the current frame; calculate a first correction parameter for the scaling of each pixel value of the current frame according to the maximum scaling and the first scaling average; adjust the first correction parameter based on a second correction parameter included in the scaling information of the previous frame; and correct the scaling of each pixel value of the current frame by using the adjusted first correction parameter.
Optionally, the correction module may be configured to: calculate a first relative difference between the first correction parameter and the second correction parameter; in a case where the second correction parameter is greater than or equal to the first correction parameter, obtain the first correction parameter by reducing the second correction parameter by a value corresponding to the first relative difference; and in a case where the second correction parameter is smaller than the first correction parameter, obtain the first correction parameter by increasing the second correction parameter by a value corresponding to the first relative difference.
Optionally, the correction module may be configured to: determine a maximum pixel value of the current frame; determine a maximum pixel ratio of the current frame based on the maximum pixel value; and determine the maximum pixel ratio as the first scaling average in a case where the first scaling average is less than or equal to the maximum pixel ratio.
Optionally, the correction module may be configured to: adjust the first scaling average based on a second scaling average included in the scaling information of the previous frame; and calculate the first correction parameter according to the maximum scaling and the adjusted first scaling average.
Optionally, the correction module may be configured to: calculate a second relative difference between the first scaling average and the second scaling average; in a case where the second scaling average is greater than or equal to the first scaling average, obtain the first scaling average by reducing the second scaling average by a value corresponding to the second relative difference; and in a case where the second scaling average is smaller than the first scaling average, obtain the first scaling average by increasing the second scaling average by a value corresponding to the second relative difference.
Optionally, the determining module may be configured to: determine a pixel value probability distribution of the current frame; determine a pixel value stretching parameter of the current frame according to the pixel value probability distribution; and determine a scaling for each pixel value of the current frame based on the pixel value stretching parameter.
Optionally, the determining module may be configured to: determine a first pixel value group for performing light supplement processing on the current frame according to the pixel value probability distribution; and determine the pixel value stretching parameter for the current frame based on the size of the first pixel value group.
Optionally, the determining module may be configured to: for each pixel value of the current frame, select the pixel value as an element of the first pixel value group if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold, and not select the pixel value as an element of the first pixel value group if that sum is less than or equal to the threshold.
Optionally, the determining module may be configured to: determine the probability residual for the next pixel value based on the probability residual, the current pixel value probability, and the threshold if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than the threshold, and determine that sum as the probability residual for the next pixel value if it is less than or equal to the threshold.
Optionally, the determining module may be configured to: sort the pixel values in the first pixel value group in ascending order; map the pixel value stretching parameter with the sequence number of each sorted pixel value in the first pixel value group to obtain a second pixel value group; and obtain the scaling for each pixel value based on the ratio of each element value in the second pixel value group to the corresponding pixel value in the first pixel value group.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus, which may include: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video processing method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the video processing method as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, instructions of which are executed by at least one processor in an electronic device to perform the video processing method as described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method can automatically adjust the lighting of a video picture according to its content, adjusting the picture brightness while keeping the original picture colors undistorted. In addition, video pictures can be processed in real time, and the correlation between successive pictures is taken into account, which avoids flickering and jumping between dimmed video pictures and ensures the smoothness of the video. Furthermore, the method and apparatus provided by the disclosure are easy to implement, low in complexity, highly real-time, and low in delay, and have good practicability both for ordinary video recording and for real-time video recording.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow diagram of a video processing method according to an embodiment of the present disclosure;
fig. 2 is a flow diagram of a video processing method according to another embodiment of the present disclosure;
fig. 3 is a block diagram of a video processing device according to an embodiment of the present disclosure;
fig. 4 is a schematic configuration diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device according to an embodiment of the disclosure.
Throughout the drawings, it should be noted that the same reference numerals are used to designate the same or similar elements, features and structures.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of the embodiments of the disclosure as defined by the claims and their equivalents. Various specific details are included to aid understanding, but these are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the written meaning, but are used only by the inventors to achieve a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following descriptions of the various embodiments of the present disclosure are provided for illustration only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the related art, a picture can be processed in real time by applying simple histogram equalization or stretching, gamma mapping, and the like during shooting. For example, adjustment parameters are calculated from collected environmental and auxiliary parameters, gamma parameters are calculated from the adjustment parameters, and gamma correction is applied to the picture content with those gamma parameters in order to improve picture brightness. However, simply mapping the whole image with gamma correction often cannot achieve a good effect and may cause problems such as color distortion and picture maladjustment, so that the results neither meet people's requirements for high-quality video pictures nor remain faithful to the original recording scene.
Real-time video recording is easily affected by the recording environment, which degrades the quality of real-time video pictures; therefore a low-delay, low-power-consumption method is needed to process pictures recorded in real time.
The method can process the recorded pictures in real time, thereby obtaining high-quality video pictures without depending on a good recording scene. In addition, because the adjustment of each picture in the video is closely related to the adjustment of the previous picture, taking the correlation between pictures into account avoids jumps between adjacent pictures and ensures the smoothness of the video. The picture brightness distribution can thus be adjusted in a color space such as RGB while keeping the picture colors undistorted by maintaining the relative distribution of the colors.
Hereinafter, according to various embodiments of the present disclosure, a method, an apparatus, and a system of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flow diagram of a video processing method according to an embodiment of the present disclosure. The video processing method is applicable to ordinary video recording and to live broadcast (i.e., real-time video recording) scenarios, and addresses the problem of shooting images or recording video in poor recording environments (e.g., insufficient light or low environmental contrast) through real-time intelligent dimming.
The video processing method according to the present disclosure may be executed by any electronic device having an image processing function. The electronic device may be a terminal where the user is located, e.g. a terminal used when the anchor is live. The electronic device may be at least one of a smartphone, a tablet, a laptop computer, a desktop computer, and the like. The electronic device may be installed with a target application for dimming an image or a real-time recorded video.
Referring to fig. 1, in step S101, a video to be processed may be acquired. Here, the video to be processed may be a previously recorded video segment or a video recorded in real time, i.e., live broadcast data.
In step S102, a scaling of each pixel value of a current frame of the video to be processed may be determined. For each frame in the video to be processed, the following operations may be performed: determining the pixel value probability distribution of the current frame, determining the pixel value stretching parameter of the current frame according to the pixel value probability distribution, and determining the scaling of each pixel value of the current frame based on the pixel value stretching parameter. How the scaling of the pixel values of a frame is determined will be described in detail below for that frame in the video.
A pixel value probability distribution for the current frame may be determined. Here, the pixel value probability distribution may refer to the overall probability distribution of the image. Taking an RGB three-channel 24-bit image as an example (where each channel is 8 bits), the value range of the pixel values is [0, 255]. The probability of pixel value i is p_i, where 0 ≤ i ≤ 255 and

p_i = n_i / (3 · H · W)

where n_i is the number of samples equal to i counted over all three channels of an H × W image, so that the p_i sum to 1.
It should be noted that the probability distribution here is a three-channel overall probability distribution, which preserves the relative distribution of the RGB colors, so that the problem of color distortion can be avoided. However, the above example is merely exemplary, and the present disclosure is not limited to this type of image.
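As a minimal sketch, the three-channel overall probability distribution described above can be computed as follows. This is an illustration only, not the patent's implementation; the function name is hypothetical:

```python
import numpy as np

def pixel_value_probability(frame):
    """Three-channel overall pixel value probability distribution.

    frame: H x W x 3 uint8 RGB image. A single histogram is taken over
    all three channels together, which keeps the relative distribution
    of the RGB colors, as noted in the text.
    """
    counts = np.bincount(frame.reshape(-1), minlength=256).astype(np.float64)
    return counts / counts.sum()  # p_i for 0 <= i <= 255, summing to 1
```

Because the histogram pools all three channels, no channel is stretched independently of the others, which is what preserves the relative RGB distribution.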
A pixel value stretching parameter for the current frame may then be determined from the pixel value probability distribution. Here, the pixel value stretching parameter may also be referred to as the pixel value stretching pitch. The purpose of pixel value stretching is to distribute the image pixel values more uniformly over the whole pixel value range: for example, before stretching, the original pixel values may be concentrated in a narrow range, and stretching enlarges that narrow range to the full range.
According to an embodiment of the present disclosure, one or more pixel values for performing a light filling process on a current frame may be first selected from pixel values of the current frame according to a pixel value probability distribution to constitute a first pixel value group.
As an example, for all pixel values in the current frame, if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold, then the pixel value is selected as an element in the first set of pixel values, and a probability residual for the next pixel value is determined based on the probability residual, the current pixel value probability, and the threshold. If the sum of the probability residual for the current pixel value and the current pixel value probability is less than or equal to the threshold, then the pixel value is not selected as an element in the first set of pixel values and the sum of the probability residual and the current pixel value probability is determined as the probability residual for the next pixel value.
In determining the first set of pixel values, the above process may be performed starting with the smallest pixel value in the current frame to determine whether the pixel value satisfies the above condition, i.e., the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold. The probability residual may be set to 0 in advance for the first pixel value to be processed, and then in the subsequent processing, the probability residual is changed according to the above-described processing.
Taking an RGB image as an example, initialize s = [ ], r = 0, and t = 1/255, where s represents the first pixel value group, r represents the initial probability residual, and t represents the preset threshold. For 1 ≤ i ≤ 255: if r + p_i > t, then s = [s, i] and r = mod(r + p_i, t); otherwise the pixel value i is not selected into the first pixel value group and r = r + p_i. Here mod(a, b) denotes the a modulo b operation. All pixel values in the current frame can be traversed in this manner to obtain the first pixel value group s. By generating the first pixel value group s, pixel values with small probability can be excluded and pixel values with statistical significance can be selected, which makes it possible to determine suitable stretching parameters that meet the requirements for stretching the pixel value distribution.
Pixel values that do not belong to the first pixel value group are merged into neighboring pixel values.
After obtaining the first pixel value group, the corresponding pixel value stretching parameter can be calculated based on the size of the first pixel value group. Taking an RGB image as an example, if the size of the first pixel value group of the image is w, the pixel value stretching parameter of the image is g = 255/w. However, the above example is merely exemplary, and the present disclosure is not limited thereto.
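The traversal just described can be sketched in Python. This is a minimal illustration under the stated rules for r, s, and t, not the patent's implementation; the function name `select_first_group` is hypothetical:

```python
def select_first_group(p, t=1.0 / 255):
    """Select the first pixel value group s using the probability residual r.

    p: sequence of 256 probabilities, p[i] for pixel value i.
    t: preset threshold (1/255 in the text's RGB example).
    Values are visited from smallest to largest (1 <= i <= 255); a value
    is kept when r + p[i] exceeds t, after which r is reduced modulo t,
    otherwise p[i] is accumulated into the residual.
    """
    s, r = [], 0.0
    for i in range(1, 256):
        if r + p[i] > t:
            s.append(i)          # pixel value i joins the first group
            r = (r + p[i]) % t   # mod(r + p_i, t)
        else:
            r = r + p[i]         # carry the probability forward
    return s

# The stretching pitch then follows from the group size: g = 255 / len(s).
```

With a near-uniform distribution almost every value is selected and g stays close to 1, while a distribution concentrated on a few values yields a small group and a large stretching pitch.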
The scaling for each pixel value of the current frame may be determined based on the pixel value stretching parameter. As an example, each pixel value in the first pixel value group may be sorted in order from small to large, the pixel value stretching parameter may be mapped with the sequence number of each sorted pixel value in the first pixel value group to obtain a second pixel value group, and the scaling ratio for each pixel value of the current frame may be obtained based on the ratio of each element value in the second pixel value group to the corresponding pixel value in the first pixel value group.
As an example, the pixel values in the first set s of pixel values may be mapped according to equation (1) below:
s_j = j × g, where 1 ≤ j ≤ w (1)
where j is the sorting sequence number in the first pixel value group s, and s_j is the jth element value in the mapped second pixel value group. The pixel values in the first pixel value group s are mapped one by one according to equation (1) to obtain the second pixel value group.
The scaling for each pixel value of the current frame is obtained based on the ratio of each element value in the second pixel value group to the corresponding pixel value in the first pixel value group, and may be denoted as the scaling group all_ratio. In the case where the first pixel value group does not include all pixel values of the image, for a pixel value not belonging to the first pixel value group, the scaling of a neighboring pixel value that belongs to the first pixel value group may be taken as the scaling of that pixel value.
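The mapping to the second pixel value group and the resulting scaling group all_ratio may be sketched as follows (a hypothetical Python sketch; the handling of pixel value 0, which would otherwise divide by zero, is an assumption not specified in the text, and the function name is illustrative):

```python
import numpy as np

def build_scaling(s):
    """Map the first group s to the second group and derive per-value scalings."""
    s = sorted(s)                               # first pixel value group, ascending
    w = len(s)
    g = 255.0 / w                               # pixel value stretching parameter
    second = [(j + 1) * g for j in range(w)]    # s_j = j * g, 1 <= j <= w
    # Ratio of each element in the second group to the corresponding value in
    # the first group; value 0 is assigned ratio 1.0 here as an assumption.
    ratio = {v: (second[j] / v if v != 0 else 1.0) for j, v in enumerate(s)}
    # Values not in the first group take the scaling of the nearest group member.
    s_arr = np.array(s)
    all_ratio = np.empty(256)
    for i in range(256):
        nearest = int(s_arr[np.abs(s_arr - i).argmin()])
        all_ratio[i] = ratio[nearest]
    return all_ratio
```

For a three-element group such as [0, 127, 255], the second group is [85, 170, 255], so pixel value 127 receives scaling 170/127 while 255 keeps scaling 1.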
In step S103, the scaling of each pixel value of the current frame is corrected based on the scaling information of the previous frame of the current frame. Specifically, a maximum scaling and a first scaling average value of a current frame are determined from the scaling of each pixel value of the current frame, a first correction parameter for the scaling is calculated from the maximum scaling and the first scaling average value, the first correction parameter is adjusted based on a second correction parameter included in scaling information of a previous frame, and the scaling of each pixel value of the current frame is corrected using the adjusted first correction parameter.
As another example, a maximum pixel value of the current frame may be first determined, a maximum pixel ratio of the current frame may be determined based on the maximum pixel value, and the maximum pixel ratio may be determined as the first scaling average in a case where the first scaling average is less than or equal to the maximum pixel ratio.
Then, the first scaling average may be adjusted based on a second scaling average included in the scaling information of the previous frame, and the first correction parameter may be calculated according to the maximum scaling of the current frame and the adjusted first scaling average.
Alternatively, in adjusting the first scaling average, a second relative difference between the first scaling average and the second scaling average may be calculated, and in the case where the second scaling average is greater than or equal to the first scaling average, the first scaling average may be obtained by reducing the second scaling average by a value corresponding to the second relative difference; in the case where the second scaled average value is smaller than the first scaled average value, the first scaled average value may be obtained by increasing the second scaled average value by a value corresponding to the second relative difference value.
Next, a first relative difference between the first correction parameter of the current frame and the second correction parameter of the previous frame may be calculated, and in the case where the second correction parameter is greater than or equal to the first correction parameter, the adjusted first correction parameter is obtained by reducing the second correction parameter by a value corresponding to the first relative difference. In the case where the second correction parameter is smaller than the first correction parameter, the adjusted first correction parameter is obtained by increasing the second correction parameter by a value corresponding to the first relative difference.
In step S104, a dimmed video frame may be obtained by applying the corrected scaling to each pixel value of the current frame. By performing the above processing on each video frame in the video to be processed, the video after dimming can be obtained.
In the present disclosure, for the first frame in a video to be processed (i.e., the frame processed first), since the first frame has no previous frame to reference, the scaling of each pixel value of the first frame may be left uncorrected, and every frame following the first frame may be corrected by referring to the scaling information of its previous frame (i.e., the information used and produced when processing each frame, as described above). How the correction information of the current frame is determined with reference to the previous frame is described in detail below.
According to the embodiments of the present disclosure, the lighting of a picture can be adjusted according to the recorded picture itself, and the original picture color is preserved while the picture brightness is adjusted, which solves the problems of picture distortion, poor real-time performance, and delay when recording video in a poorly lit environment. In addition, the method is simple to implement, low in complexity, and practical in both ordinary video recording and real-time live streaming scenarios.
Fig. 2 is a flow diagram of a video processing method according to another embodiment of the present disclosure. For an ordinary recorded video or a live broadcast (i.e., real-time video recording), embodiments of the present disclosure take the correlation between video frames into account to avoid flickering and jumping between video frames after dimming processing.
Referring to fig. 2, in step S201, video data may be acquired. Here, the video data may refer to a recorded piece of video, or may refer to a real-time recorded video, i.e., live data.
In step S202, for each frame image in the video data, a pixel value probability distribution of the frame image may be determined. Here, the pixel value probability distribution may refer to the overall probability distribution of an image. Taking an RGB three-channel 24-bit image (where each channel is 8 bits) as an example, the value range of the pixel values of the image is [0, 255], and the probability of pixel value i is p_i, where 0 ≤ i ≤ 255 and p_i is the number of occurrences of pixel value i across the three channels divided by the total number of pixel samples 3 × W × H (with W and H being the image width and height).
it should be noted that the probability distribution here is a three-channel overall probability distribution, which can guarantee the relative distribution of RGB colors, so that the problem of color distortion can be avoided. However, the above examples are merely exemplary, and the present disclosure is not limited to the above type of image.
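As an illustration, the three-channel overall probability distribution may be computed as in the following sketch (hypothetical; it assumes an H × W × 3 uint8 frame and normalizes a single 256-bin histogram over all three channels by 3 × H × W, so the relative distribution of the RGB colors is preserved):

```python
import numpy as np

def pixel_value_probability(frame):
    """frame: H x W x 3 uint8 array; returns the 256-entry distribution p."""
    # One histogram over all channels jointly, not one per channel.
    counts = np.bincount(frame.reshape(-1), minlength=256)
    return counts / frame.size          # frame.size == 3 * H * W
```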
In step S203, a pixel value stretching parameter for each image may be determined from the pixel value probability distribution. For example, for each frame of image, one or more pixel values for performing fill-in processing on the image may first be selected from the pixel values of the image according to a pixel value probability distribution to form a first pixel value group.
In determining the first set of pixel values, processing may be performed starting with the smallest pixel value in the frame image to determine whether the pixel value satisfies a condition that the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold. The probability residual may be set to 0 in advance for the first pixel value to be processed, and then in the subsequent processing, the probability residual is changed according to the above-described processing.
Taking a frame of RGB image as an example, let r = 0, s = [0], and t = 1/255, where s represents the first pixel value group, r represents the initial probability residual, and t represents the preset threshold. For 1 ≤ i ≤ 255, if r + p_i > t, then s = [s, i] and r = mod(r + p_i, t); otherwise, the pixel value i is not selected into the first pixel value group and r = r + p_i, where mod(a, b) denotes the operation of taking a modulo b. All pixel values in each frame image of the video may be traversed in the manner described above to obtain the first pixel value group s. By generating the first pixel value group s, pixel values with small probability can be excluded and pixel values with statistical significance can be selected, thus enabling the determination of a suitable stretching parameter that meets the requirements for stretching the pixel value distribution. Pixel values that do not belong to the first pixel value group are merged into neighboring pixel values.
After obtaining the first pixel value group, a corresponding pixel value stretching parameter can be calculated based on the size of the first pixel value group. Taking a frame of RGB image as an example, assuming that the size of the first pixel value group of the image is w, the pixel value stretching parameter of the image is g = 255/w. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
In step S204, a scaling for each pixel value of the frame image may be determined based on the pixel value stretching parameter. As an example, each pixel value in the first pixel value group may be sorted in order from small to large, the pixel value stretching parameter is mapped with the order number of each sorted pixel value in the first pixel value group to obtain a second pixel value group, and the scaling for each pixel value of the frame image is obtained based on the ratio of each element value in the second pixel value group to the corresponding pixel value in the first pixel value group. For example, each pixel value in the first set of pixel values can be mapped according to equation (1) above to obtain the second set of pixel values.
Next, the scaling for each pixel value of the frame image is obtained based on the ratio of each element value in the second pixel value group to the corresponding pixel value in the first pixel value group, and may be denoted as the scaling group all_ratio. In the case where the first pixel value group does not include all pixel values of the image, for a pixel value not belonging to the first pixel value group, the scaling of a neighboring pixel value that belongs to the first pixel value group may be taken as the scaling of that pixel value.
In step S205, the scaling of each frame image may be corrected.
As an example, the maximum scaling max_ratio and the first scaling average mean_ratio may be determined from the scalings of the frame image, a first correction parameter for the scalings of the frame image may be calculated from the maximum scaling and the first scaling average, and the scalings of the frame may be corrected based on the first correction parameter.
For example, a maximum pixel value v of the frame image may be determined, and a maximum pixel ratio q_1 and a minimum pixel ratio q_2 may be determined based on the maximum pixel value v. Here, taking a frame of RGB image as an example, q_1 = 255/v and q_2 = q_1/1.5. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
In the case where the first scaling average mean_ratio is less than or equal to the maximum pixel ratio q_1, the maximum pixel ratio q_1 may be determined as the first scaling average mean_ratio; that is, if q_1 ≥ mean_ratio, then mean_ratio = q_1.
In the case where the first scaling average mean_ratio is greater than the minimum pixel ratio q_2, a relatively large value may be selected from the first scaling average mean_ratio and the minimum pixel ratio q_2 as the first scaling average; that is, if q_2 < mean_ratio, then mean_ratio = max(mean_ratio, q_2), where the max operator denotes taking the maximum value. That is, in either case, the relatively larger value will be selected as the first scaling average.
Next, a first correction parameter is calculated as scale = mean_ratio/max_ratio, and the scalings of the pixel values of the image are corrected using the first correction parameter.
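The clamping of the first scaling average against q_1 and q_2 and the computation of the first correction parameter may be sketched as follows (a hypothetical sketch of the example values above; the function name and return convention are illustrative, and the constant 1.5 is the one given in the text):

```python
def correction_parameter(all_ratio, v):
    """all_ratio: per-pixel-value scalings; v: maximum pixel value of the frame."""
    max_ratio = max(all_ratio)
    mean_ratio = sum(all_ratio) / len(all_ratio)
    q1 = 255.0 / v                      # maximum pixel ratio
    q2 = q1 / 1.5                       # minimum pixel ratio
    if mean_ratio <= q1:
        mean_ratio = q1                 # if mean_ratio <= q1, use q1
    mean_ratio = max(mean_ratio, q2)    # never fall below q2
    scale = mean_ratio / max_ratio      # first correction parameter
    return scale, mean_ratio
```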
As another example, the scaling of the current image may be adjusted in consideration of an average of the scaling of the previous image of the current image.
For example, a second relative difference between the first scale average of the current image and the second scale average of the previous image may be calculated based on the second scale average of the previous frame image of the current image, and the first scale average of the current image may be adjusted according to the second relative difference.
For example, let the second scaling average of the previous picture of the current picture be mean_ratio_prev. A relative difference is calculated as d = |mean_ratio_prev − mean_ratio| / mean_ratio_prev, and a new scaling average is calculated from the relative difference as the first scaling average: if mean_ratio_prev − mean_ratio ≥ 0, then mean_ratio = mean_ratio_prev − mean_ratio_prev × d^a; if mean_ratio_prev − mean_ratio < 0, then mean_ratio = mean_ratio_prev + mean_ratio_prev × d^a, where a is a parameter greater than 1. In this way, the new scaling depends on the scaling of the previous frame, thereby avoiding scaling jumps. However, the above example of adjusting the first scaling average is merely exemplary, and the present disclosure is not limited thereto.
As another example, a first relative difference between a first correction parameter of the current image and a second correction parameter of a previous image may be calculated based on a second correction parameter of an image of a frame previous to the current image, and the first correction parameter of the current image may be adjusted according to the first relative difference.
For example, the first correction parameter of the current image is scale = mean_ratio/max_ratio, where mean_ratio may be the first scaling average calculated from the original scaling group or the corrected first scaling average. Let the second correction parameter of the previous picture of the current picture be scale_prev. The relative difference between the first correction parameter and the second correction parameter is calculated as e = |scale_prev − scale| / scale_prev, and a new correction parameter is calculated from the relative difference as the first correction parameter: if scale_prev − scale ≥ 0, then scale = scale_prev − scale_prev × e^b; if scale_prev − scale < 0, then scale = scale_prev + scale_prev × e^b, where b is a parameter greater than 1. In this way, the new correction parameter depends on the correction parameter of the previous frame, so that jumps between frames can be avoided. However, the above example of adjusting the correction parameter is merely exemplary, and the present disclosure is not limited thereto.
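Both relative-difference adjustments share the same form and may be sketched as follows (hypothetical; the exponent stands for the parameter a or b, which the text only constrains to be greater than 1, and the default value here is an arbitrary choice):

```python
def smooth(prev, current, exponent=2.0):
    """Pull `current` toward `prev` by the damped relative difference."""
    d = abs(prev - current) / prev        # relative difference
    if prev - current >= 0:
        return prev - prev * d ** exponent
    return prev + prev * d ** exponent

# mean_ratio = smooth(mean_ratio_prev, mean_ratio, a)
# scale      = smooth(scale_prev, scale, b)
```

Because the relative difference d is usually below 1 and the exponent exceeds 1, the adjusted value stays close to the previous frame's value, which is what suppresses frame-to-frame jumps.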
Embodiments of the present disclosure may correct the first scaling average and the first correction parameter either individually or in combination.
After the final correction parameter is obtained, it may be applied to the scalings of the frame image (the scaling group all_ratio) to obtain a new scaling group new_ratio for the frame image. In this way, the relative magnitudes of the original scaling group are preserved, which preserves picture content details.
In step S206, dimmed video data is obtained by applying the corrected scalings to the pixel values in the video data. For example, the new scaling group new_ratio may be applied to the corresponding pixels of the original frame (e.g., by multiplying each original pixel value by its scaling), thereby obtaining a dimmed frame image. The dimmed video is obtained by performing the above processing on each frame image in the video data.
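Applying the corrected scaling group to a frame may be sketched as follows (hypothetical; clipping to [0, 255] is an assumption added here to keep the result a valid 8-bit image):

```python
import numpy as np

def apply_dimming(frame, all_ratio, scale):
    """frame: H x W x 3 uint8; all_ratio: 256 scalings; scale: correction."""
    new_ratio = np.asarray(all_ratio) * scale          # corrected scaling group
    out = frame.astype(np.float32) * new_ratio[frame]  # per-value lookup, multiply
    return np.clip(out, 0, 255).astype(np.uint8)
```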
The method can automatically adjust the lighting of an image or video picture according to the picture itself, keeping the original picture color undistorted while adjusting the picture brightness. In addition, video pictures can be processed in real time, and the correlation between video pictures is taken into account, so that flickering and jumping between video pictures after dimming are avoided. Moreover, the method and device provided by the present disclosure are easy to implement, low in complexity, high in real-time performance, and low in delay, and have good practicability for ordinary video recording or real-time video recording.
Fig. 3 is a block diagram of a video processing device according to an embodiment of the present disclosure.
Referring to fig. 3, the video processing apparatus 300 may include an acquisition module 301, a determination module 302, a processing module 303, and a correction module 304. Each module in the video processing apparatus 300 may be implemented by one or more modules, and the name of the corresponding module may vary according to the type of the module. In various embodiments, some modules in the video processing device 300 may be omitted, or additional modules may also be included. Furthermore, modules/elements according to various embodiments of the present disclosure may be combined to form a single entity, and thus may equivalently perform the functions of the respective modules/elements prior to combination.
The acquisition module 301 may acquire image or video data to be processed. When the obtaining module 301 obtains an image, the image may be individually subjected to dimming processing, for example, the image may be subjected to scaling processing by using the scaling obtained in step S102 shown in fig. 1. When the acquisition module 301 acquires video data, dimming processing may be performed for each frame image in the video. The following explains processing one frame in a video as an example.
The determining module 302 may determine a pixel value probability distribution for the current frame, determine a pixel value stretching parameter for the current frame from the pixel value probability distribution, and determine a scaling for each pixel value of the current frame based on the pixel value stretching parameter.
Alternatively, the determining module 302 may determine a first pixel value group for performing the light filling process on the current frame according to the pixel value probability distribution, and determine the pixel value stretching parameter based on the size of the first pixel value group.
Optionally, for each of the pixel values of the current frame, if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold, the determining module 302 may select the pixel value as an element in the first pixel value group; if the sum of the probability residual for the current pixel value and the current pixel value probability is less than or equal to the threshold, then determination module 302 may not select the pixel value as an element in the first set of pixel values.
Alternatively, if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold, the determination module 302 may determine a probability residual for the next pixel value based on the probability residual, the current pixel value probability, and the threshold. If the sum of the probability residual for the current pixel value and the current pixel value probability is less than or equal to the threshold, the determination module 302 may determine the sum of the probability residual and the current pixel value probability as the probability residual for the next pixel value.
Alternatively, the determining module 302 may sort the pixel values in the first pixel value group in ascending order, map the pixel value stretching parameter with the sequence number of each sorted pixel value in the first pixel value group to obtain a second pixel value group, and obtain the scaling for each pixel value based on the ratio of each element value in the second pixel value group to the corresponding pixel value in the first pixel value group.
The correction module 304 may correct the scaling of the current frame based on the scaling information of the previous frame of the current frame.
Alternatively, the correction module 304 may determine a maximum scaling and a first scaling average of the current frame from the scaling of the current frame, calculate a first correction parameter for the scaling according to the maximum scaling and the first scaling average, and correct the scaling based on the first correction parameter.
Alternatively, the correction module 304 may determine a maximum pixel value of the current frame, determine a maximum pixel ratio of the current frame based on the maximum pixel value, and determine the maximum pixel ratio as the first scaled average in case the first scaled average is less than or equal to the maximum pixel ratio.
Alternatively, the correction module 304 may calculate a second relative difference between the first scale average and the second scale average based on a second scale average of a previous frame image of a current frame in the video, the first scale average being adjusted according to the second relative difference.
Alternatively, the correction module 304 may calculate a first relative difference between the first correction parameter and the second correction parameter based on a second correction parameter of a previous frame image of a current frame in the video, the first correction parameter being adjusted according to the first relative difference.
Processing module 303 may then apply the corrected scaling to the pixel values of the current frame.
Fig. 4 is a schematic structural diagram of an image processing apparatus of a hardware operating environment according to an embodiment of the present disclosure.
As shown in fig. 4, the image processing apparatus 400 may include: a processing component 401, a communication bus 402, a network interface 403, an input output interface 404, a memory 405, and a power component 404. Wherein a communication bus 402 is used to enable connective communication between these components. The input-output interface 404 may include a video display (such as a liquid crystal display), a microphone and speakers, and a user-interaction interface (such as a keyboard, mouse, touch-input device, etc.), and optionally, the input-output interface 404 may also include a standard wired interface, a wireless interface. The network interface 403 may optionally include a standard wired interface, a wireless interface (e.g., a wireless fidelity interface). Memory 405 may be a high speed random access memory or may be a stable non-volatile memory. The memory 405 may alternatively be a storage device separate from the aforementioned processing component 401.
Those skilled in the art will appreciate that the configuration shown in fig. 4 does not constitute a limitation of the image processing apparatus 400, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 4, the memory 405, which is a storage medium, may include therein an operating system (such as a MAC operating system), a data storage module, a network communication module, a user interface module, an image processing program, and a database.
In the image processing apparatus 400 shown in fig. 4, the network interface 403 is mainly used for data communication with an external electronic apparatus/terminal; the input/output interface 404 is mainly used for data interaction with a user; the processing component 401 and the memory 405 in the image processing apparatus 400 may be provided in the image processing apparatus 400, and the image processing apparatus 400 executes the video processing method provided by the embodiment of the present disclosure by the processing component 401 calling the image processing program stored in the memory 405 and various APIs provided by the operating system.
The processing component 401 may include at least one processor, and the memory 405 has stored therein a set of computer-executable instructions that, when executed by the at least one processor, perform a video processing method according to an embodiment of the present disclosure. Further, the processing component 401 may perform encoding operations and decoding operations, and the like. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
The processing component 401 may acquire an image to be processed. Here, the image to be processed may be a single image or a frame image in a recorded video. The processing component 401 may determine a pixel value probability distribution of the image to be processed, determine a pixel value stretching parameter of the image to be processed according to the pixel value probability distribution, determine a scaling for each pixel value of the image to be processed based on the pixel value stretching parameter, and obtain the dimmed image by applying a corresponding scaling to each pixel value.
As an alternative embodiment, the processing component 401 may determine a first pixel value group for performing fill-in processing on the image to be processed according to the pixel value probability distribution, and determine the pixel value stretching parameter based on the size of the first pixel value group.
As an alternative embodiment, for each of the pixel values of a frame of image, if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold, processing component 401 may select the pixel value as an element in the first set of pixel values; if the sum of the probability residual for the current pixel value and the current pixel value probability is less than or equal to the threshold, processing component 401 may not select the pixel value as an element in the first set of pixel values.
As an alternative embodiment, if the sum of the probability residual for the current pixel value and the current pixel value probability is greater than a threshold, processing component 401 may determine a probability residual for the next pixel value based on the probability residual, the current pixel value probability, and the threshold; if the sum of the probability residual for the current pixel value and the current pixel value probability is less than or equal to the threshold, processing component 401 may determine the sum of the probability residual and the current pixel value probability as the probability residual for the next pixel value.
As an alternative embodiment, processing component 401 may sort each pixel value in the first set of pixel values in order from smaller to larger, map a pixel value stretching parameter with the order of each sorted pixel value in the first set of pixel values to obtain a second set of pixel values, and obtain a scaling for each pixel value of the frame image based on the ratio of each element value in the second set of pixel values to the corresponding pixel value in the first set of pixel values.
As an alternative embodiment, the processing component 401 may determine a maximum scaling and a first scaling average value from the scaling of the frame image, calculate a first correction parameter for the scaling according to the maximum scaling and the first scaling average value, correct the scaling based on the first correction parameter, and perform scaling processing on the pixel values of the frame image using the corrected scaling.
As an alternative embodiment, the processing component 401 may determine a maximum pixel value of the image to be processed, and determine a maximum pixel ratio value of the current frame based on the maximum pixel value. In the event that the first scaled average is less than or equal to the maximum pixel ratio value, the processing component 401 may determine the maximum pixel ratio value as the first scaled average.
In the case that the image to be processed is one frame image in the video, the processing component 401 may calculate a second relative difference value between the first scaling average value and the second scaling average value based on a second scaling average value of a previous frame image of the image to be processed in the video, and adjust the first scaling average value according to the second relative difference value.
In the case where the image to be processed is one frame image in the video, the processing component 401 may calculate a first relative difference value between the first correction parameter and the second correction parameter based on the second correction parameter of the previous frame image of the image to be processed in the video; and adjusting the first correction parameter according to the first relative difference value, and correcting the scaling of the frame image by using the adjusted first correction parameter.
The processing component 401 may implement control of components included in the image processing apparatus 400 by executing a program.
The image processing apparatus 400 may receive or output video and/or audio via the input-output interface 404. For example, a user may output a dimmed image or video or live content via the input-output interface 404.
By way of example, the image processing apparatus 400 may be a PC computer, a tablet device, a personal digital assistant, a smartphone, or another device capable of executing the above set of instructions. Here, the image processing apparatus 400 need not be a single electronic device but can be any collection of devices or circuits that can individually or jointly execute the above instructions (or instruction sets). The image processing apparatus 400 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces with a local or remote system (e.g., via wireless transmission).
In image processing apparatus 400, processing component 401 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special-purpose processor system, a microcontroller, or a microprocessor. By way of example and not limitation, processing component 401 may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, or the like.
The processing component 401 may execute instructions or code stored in a memory, wherein the memory 405 may also store data. Instructions and data may also be sent and received over a network via network interface 403, where network interface 403 may employ any known transmission protocol.
The memory 405 may be integrated with the processing component 401, for example, by having RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 405 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device that may be used by a database system. The memory and processing component 401 may be operatively coupled or may communicate with each other, such as through I/O ports, network connections, etc., such that the processing component 401 can read data stored in the memory 405.
According to an embodiment of the present disclosure, an electronic device may be provided. Fig. 5 is a block diagram of an electronic device 500 according to an embodiment of the disclosure, which may include at least one memory 502 and at least one processor 501, the at least one memory 502 storing a set of computer-executable instructions that, when executed by the at least one processor 501, perform a video processing method according to an embodiment of the disclosure.
Processor 501 may include a Central Processing Unit (CPU), Graphics Processing Unit (GPU), programmable logic device, dedicated processor system, microcontroller, or microprocessor. By way of example, and not limitation, processor 501 may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The memory 502, which is a kind of storage medium, may include an operating system (e.g., a MAC operating system), a data storage module, a network communication module, a user interface module, an image processing program, and a database.
The memory 502 may be integrated with the processor 501, for example, a RAM or flash memory may be disposed within an integrated circuit microprocessor or the like. Further, memory 502 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 502 and the processor 501 may be operatively coupled or may communicate with each other, such as through I/O ports, network connections, etc., so that the processor 501 can read files stored in the memory 502.
In addition, the electronic device 500 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 500 may be connected to each other via a bus and/or a network.
Those skilled in the art will appreciate that the configuration shown in Fig. 5 is not limiting: the electronic device may include more or fewer components than shown, some components may be combined, or the components may be arranged differently.
According to an embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform a video processing method according to the present disclosure. Examples of the computer-readable storage medium herein include: read-only memory (ROM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or compact disc memory, a hard disk drive (HDD), a solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer such that the processor or computer can execute the computer program.
The computer program in the computer-readable storage medium described above can run in an environment deployed in a computer apparatus, such as a client, a host, a proxy device, or a server. Further, in one example, the computer program and any associated data, data files, and data structures are distributed across networked computer systems, such that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an embodiment of the present disclosure, there may also be provided a computer program product comprising instructions that are executable by a processor of a computer apparatus to perform the video processing method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video processing method, comprising:
acquiring a video to be processed;
determining the scaling of each pixel value of the current frame of the video to be processed;
correcting a scaling of each pixel value of the current frame based on scaling information of a previous frame of the current frame;
obtaining a dimmed video frame by applying the corrected scaling to the pixel values of the current frame.
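The four steps of claim 1 can be sketched as follows. The claim does not fix a concrete per-pixel scaling rule or correction rule, so the uniform gain of 0.8 and the smoothing factor `alpha` below are illustrative assumptions, not the claimed computation.

```python
import numpy as np

def dim_frame(frame, prev_scaling, alpha=0.9):
    """Sketch of the claimed pipeline: determine a scaling per pixel,
    correct it using the previous frame's scaling information, apply it."""
    # Step 1: determine a scaling for each pixel (placeholder: uniform gain).
    scaling = np.full(frame.shape, 0.8, dtype=np.float64)
    # Step 2: correct the scaling based on the previous frame's scaling
    # information (here: exponential smoothing, an assumed rule).
    if prev_scaling is not None:
        scaling = alpha * prev_scaling + (1.0 - alpha) * scaling
    # Step 3: apply the corrected scaling to obtain the dimmed frame.
    dimmed = np.clip(frame.astype(np.float64) * scaling, 0, 255).astype(np.uint8)
    return dimmed, scaling
```

Carrying the returned `scaling` forward to the next call is what ties consecutive frames together and suppresses brightness flicker between dimmed frames.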
2. The video processing method of claim 1, wherein the step of correcting the scaling of each pixel value of the current frame based on scaling information of a previous frame of the current frame comprises:
determining a maximum scaling and a first scaling average of the current frame from the scaling of each pixel value of the current frame, and calculating a first correction parameter for the scaling of each pixel value of the current frame according to the maximum scaling and the first scaling average;
adjusting the first correction parameter based on a second correction parameter included in the scaling information of the previous frame;
correcting the scaling of each pixel value of the current frame using the adjusted first correction parameter.
3. The video processing method according to claim 2, wherein the step of adjusting the first correction parameter based on the second correction parameter included in the scaling information of the previous frame comprises:
calculating a first relative difference between the first correction parameter and the second correction parameter;
in a case where the second correction parameter is greater than or equal to the first correction parameter, obtaining the adjusted first correction parameter by reducing the second correction parameter by a value corresponding to the first relative difference;
in a case where the second correction parameter is less than the first correction parameter, obtaining the adjusted first correction parameter by increasing the second correction parameter by a value corresponding to the first relative difference.
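A minimal sketch of the adjustment in claim 3 (the same pattern reappears for the scaling average in claim 6). The claim does not specify how the "value corresponding to the relative difference" is derived, so the step factor of 0.5 is an assumption.

```python
def adjust_parameter(first, second, step=0.5):
    """Move the previous frame's parameter (second) toward the current
    frame's parameter (first) by a fraction of their relative difference."""
    # Relative difference between the two correction parameters.
    rel_diff = abs(first - second) / max(abs(second), 1e-6)
    # Assumed "value corresponding to the relative difference".
    delta = step * rel_diff * abs(second)
    if second >= first:
        return second - delta  # reduce the previous parameter
    return second + delta      # increase the previous parameter
```

Because the result moves only part of the way from `second` to `first`, the correction parameter changes gradually across frames rather than jumping, which is the temporal-smoothing effect the claim describes.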
4. The video processing method according to claim 2, wherein the step of determining the maximum scaling and the first scaling average of the current frame from the scaling of each pixel value of the current frame comprises:
determining a maximum pixel value of the current frame;
determining a maximum pixel ratio value of the current frame based on the maximum pixel value;
determining the maximum pixel ratio value as the first scaling average in a case where the first scaling average is less than or equal to the maximum pixel ratio value.
5. The video processing method according to claim 2, wherein the step of calculating a first correction parameter for the scaling of each pixel value of the current frame based on the maximum scaling and the first scaling average comprises:
adjusting the first scaling average based on a second scaling average included in the scaling information of the previous frame;
calculating the first correction parameter according to the maximum scaling and the adjusted first scaling average.
6. The video processing method according to claim 5, wherein the step of adjusting the first scaling average based on the second scaling average included in the scaling information of the previous frame comprises:
calculating a second relative difference between the first scaling average and the second scaling average;
in a case where the second scaling average is greater than or equal to the first scaling average, obtaining the adjusted first scaling average by reducing the second scaling average by a value corresponding to the second relative difference;
in a case where the second scaling average is less than the first scaling average, obtaining the adjusted first scaling average by increasing the second scaling average by a value corresponding to the second relative difference.
7. The video processing method according to claim 1, wherein the step of determining the scaling of each pixel value of the current frame of the video to be processed comprises:
determining a pixel value probability distribution of the current frame;
determining a pixel value stretching parameter of the current frame according to the pixel value probability distribution;
determining a scaling for each pixel value of the current frame based on the pixel value stretching parameter.
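The steps of claim 7 can be sketched as follows. The claim does not define the stretching parameter concretely, so reading the 99th-percentile pixel value off the cumulative probability distribution, and the capped-stretch scaling rule, are illustrative assumptions.

```python
import numpy as np

def scaling_per_pixel_value(frame, target_max=255.0):
    """Sketch: pixel-value probability distribution -> stretching parameter
    -> a scaling for each of the 256 possible pixel values."""
    # Pixel value probability distribution of the current frame.
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    cdf = np.cumsum(prob)
    # Stretching parameter: assumed here to be the 99th-percentile value.
    stretch = max(int(np.searchsorted(cdf, 0.99)), 1)
    # Scaling per pixel value: stretch toward target_max, capped so that
    # no pixel value is scaled beyond target_max.
    levels = np.arange(256, dtype=np.float64)
    return np.minimum(target_max / stretch, target_max / np.maximum(levels, 1.0))
```

The returned array acts as a lookup table: pixel value `v` in the frame is multiplied by `scaling[v]`, so dark frames (low stretching parameter) receive a large uniform boost while already-bright values are capped at `target_max`.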
8. A video processing apparatus comprising:
the acquisition module is configured to acquire a video to be processed;
a determining module configured to determine a scaling of each pixel value of a current frame of the video to be processed;
a correction module configured to correct a scaling of each pixel value of the current frame based on scaling information of a previous frame of the current frame;
a processing module configured to obtain a dimmed video frame by applying the corrected scaling to pixel values of the current frame.
9. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the video processing method of any of claims 1 to 7.
10. A computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the video processing method of any one of claims 1 to 7.
CN202110475702.5A 2021-04-29 2021-04-29 Video processing method and video processing device Active CN113225620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110475702.5A CN113225620B (en) 2021-04-29 2021-04-29 Video processing method and video processing device


Publications (2)

Publication Number Publication Date
CN113225620A true CN113225620A (en) 2021-08-06
CN113225620B CN113225620B (en) 2022-09-30

Family ID: 77090333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110475702.5A Active CN113225620B (en) 2021-04-29 2021-04-29 Video processing method and video processing device

Country Status (1)

Country Link
CN (1) CN113225620B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040113906A1 (en) * 2002-12-11 2004-06-17 Nvidia Corporation Backlight dimming and LCD amplitude boost
US20040139462A1 (en) * 2002-07-15 2004-07-15 Nokia Corporation Method for error concealment in video sequences
CN102376082A (en) * 2010-08-06 2012-03-14 株式会社理光 Image processing method and device based on gamma correction
CN110047052A (en) * 2019-04-25 2019-07-23 哈尔滨工业大学 A kind of strong Xanthophyll cycle night vision Enhancement Method based on FPGA



Similar Documents

Publication Publication Date Title
EP3566203B1 (en) Perceptually preserving scene-referred contrasts and chromaticities
US10165198B2 (en) Dual band adaptive tone mapping
CN109076231B (en) Method and device for encoding HDR pictures, corresponding decoding method and decoding device
JP2021521517A (en) HDR image representation using neural network mapping
JP2017168101A (en) Methods, apparatus and systems for extended high dynamic range (&#34;hdr&#34;)-to-hdr tone mapping
CN113518185B (en) Video conversion processing method and device, computer readable medium and electronic equipment
JP2018506916A (en) Method and device for mapping HDR picture to SDR picture and corresponding SDR to HDR mapping method and device
CN112449169B (en) Method and apparatus for tone mapping
JP6948309B2 (en) How and devices to tone map a picture using the parametric tone adjustment function
WO2021143300A1 (en) Image processing method and apparatus, electronic device and storage medium
CN111885312B (en) HDR image imaging method, system, electronic device and storage medium
CN109686342B (en) Image processing method and device
CN114511479A (en) Image enhancement method and device
US10445865B1 (en) Method and apparatus for converting low dynamic range video to high dynamic range video
CN114501023B (en) Video processing method, device, computer equipment and storage medium
WO2017024901A1 (en) Video transcoding method and device
US20210042895A1 (en) Field Programmable Gate Array (FPGA) Implementation and Optimization of Augmented Contrast Limited Adaptive Histogram Equalization
US10079981B2 (en) Image dynamic range adjustment method, terminal, and storage medium
TWI505717B (en) Joint scalar embedded graphics coding for color images
CN113225620B (en) Video processing method and video processing device
WO2017103399A1 (en) Method of processing a digital image, device, terminal equipment and computer program associated therewith
CN114331818A (en) Video image processing method and video image processing apparatus
CN114051126A (en) Video processing method and video processing device
FR3045903A1 (en) METHOD FOR PROCESSING DIGITAL IMAGE, DEVICE, TERMINAL EQUIPMENT AND COMPUTER PROGRAM THEREOF
CN111970564A (en) Optimization method and device for HDR video display processing, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant