WO2023143233A1 - 视频噪声检测方法、装置、设备及介质 - Google Patents

视频噪声检测方法、装置、设备及介质 Download PDF

Info

Publication number
WO2023143233A1
WO2023143233A1 (PCT/CN2023/072550)
Authority
WO
WIPO (PCT)
Prior art keywords
video frame
video
frame
noise
detection method
Prior art date
Application number
PCT/CN2023/072550
Other languages
English (en)
French (fr)
Inventor
陈秋伯
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2023143233A1 publication Critical patent/WO2023143233A1/zh

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/147Scene change detection

Definitions

  • the present disclosure relates to the technical field of video processing, and in particular to a video noise detection method, device, equipment and medium.
  • the present disclosure provides a video noise detection method, device, equipment and medium.
  • an embodiment of the present disclosure provides a video noise detection method, including:
  • an embodiment of the present disclosure provides a video noise detection device, including:
  • An extracting unit configured to extract a first video frame and a second video frame in the target video, where the first video frame and the second video frame are adjacent video frames;
  • a processing unit configured to perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame;
  • an intersection determining unit configured to perform flat region detection on the first video frame and the second video frame, and obtain the intersection of flat regions in the first video frame and the second video frame;
  • a calculation unit configured to calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat area in the inter-frame difference image.
  • an embodiment of the present disclosure provides a computing device, including: a processor; and a memory configured to store executable instructions, wherein the processor is configured to read the executable instructions from the memory and execute them to implement the aforementioned video noise detection method.
  • an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the video noise detection method described above.
  • the present disclosure provides a computer program, including: instructions, which when executed by a processor cause the processor to execute any video noise detection method provided by the embodiments of the present disclosure.
  • FIG. 1 is a flowchart of a video noise detection method provided by some embodiments of the present disclosure
  • Fig. 2 is a flowchart of obtaining an inter-frame difference image provided by some embodiments of the present disclosure
  • Fig. 3 is a flowchart of determining a common minimum flat area provided by some embodiments of the present disclosure
  • Fig. 4 is a flow chart of determining a noise value in the time domain provided by some embodiments of the present disclosure
  • Fig. 5 is a flow chart of a video noise detection method provided by some other examples of the present disclosure.
  • FIG. 6 is a schematic diagram of a video noise detection device provided by an embodiment of the present disclosure.
  • Fig. 7 shows a schematic structural diagram of a computing device provided by an embodiment of the present disclosure.
  • the term "comprise" and its variations are open-ended, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • Fig. 1 is a flowchart of a video noise detection method provided by some embodiments of the present disclosure. As shown in FIG. 1 , the video noise detection method provided by the embodiment of the present disclosure includes steps S101-S104.
  • Computing devices include, but are not limited to, electronic devices such as smartphones, notebook computers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), in-vehicle terminals (such as in-vehicle navigation terminals) and wearable devices, and may also include servers.
  • Step S101: Extract a first video frame and a second video frame from the target video, where the first video frame and the second video frame are adjacent video frames.
  • the computing device may acquire the target video.
  • the computing device may capture images through a built-in camera to form a target video to be processed.
  • the computing device may receive the target video to be processed sent by the electronic device through the network.
  • the computing device can also read the target video stored locally and process the target video.
  • the target video may be a video in an original format or a video that has been encoded, which is not particularly limited in the embodiments of the present disclosure. For example, when the computing device is an electronic device and the target video is captured by that device, the target video is preferably a video in an original format; when the computing device is a server and the target video is sent over a network, the target video is preferably encoded in order to ensure real-time transmission.
  • After acquiring the target video, the computing device extracts video frames from it. Specifically, the computing device extracts the first video frame and the second video frame from the target video.
  • the first video frame and the second video frame are adjacent video frames.
  • the first video frame may be the later of the two adjacent video frames, and the second video frame the earlier one. It should be noted that the first video frame and the second video frame have the same horizontal and vertical resolution.
  • Step S102 Perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame.
  • the differential processing of the first video frame and the second video frame subtracts the gray values of the pixels at corresponding positions of the two frames to obtain pixel gray-level differences.
  • the inter-frame difference image of the first video frame and the second video frame is the image formed by these gray-level differences arranged in the order of the corresponding pixels in the two frames.
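  • As a minimal sketch of the differential processing described above, the following Python snippet computes a grayscale inter-frame difference image with OpenCV and NumPy; the function and variable names are illustrative and not taken from the disclosure:

```python
import cv2
import numpy as np

def inter_frame_difference(first_frame, second_frame):
    """Subtract the gray values of pixels at corresponding positions of two
    adjacent, equally sized video frames to obtain the inter-frame difference image."""
    gray_first = cv2.cvtColor(first_frame, cv2.COLOR_BGR2GRAY)
    gray_second = cv2.cvtColor(second_frame, cv2.COLOR_BGR2GRAY)
    # Signed difference keeps noise fluctuations in both directions;
    # cv2.absdiff could be used instead if only the magnitude is of interest.
    return gray_first.astype(np.int16) - gray_second.astype(np.int16)
```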
  • In some applications, the capturing device (for example, a smartphone with a built-in camera) moves quickly relative to the subject while the target video is being shot, so that the image content of the first video frame and the second video frame differs. If differential processing is applied directly to the two frames, the resulting difference image may fail to show the pixel gray-level difference of the subject (that is, the fast-moving object) between the two frames.
  • Fig. 2 is a flowchart of obtaining an inter-frame difference image provided by some embodiments of the present disclosure. As shown in FIG. 2, to solve the aforementioned problem, in some embodiments of the present disclosure, step S102 performed by the computing device may include steps S1021-S1022.
  • Step S1021 Globally align the first video frame and the second video frame to obtain the aligned first video frame and the second video frame.
  • global alignment of the first video frame and the second video frame takes one of the two frames as a reference and matches the image pixel content representing an object in the other frame with the image pixel content representing the same object in the reference frame.
  • In the embodiments of the present disclosure, the first video frame is the frame to be processed and the second video frame is the reference frame; therefore the second video frame is used as the reference, the first video frame is used as the frame to be aligned, and the first video frame is processed to obtain the aligned first video frame and second video frame.
  • Specifically, to perform global alignment on the first video frame and the second video frame, feature regions in the two frames may be selected and matched. Once the feature regions are matched, the coordinate transformation relationship of the target object represented by the feature regions in the two frames can be determined from the pixel coordinates of the feature regions. Affine transformation is then applied to the first image frame according to this coordinate transformation relationship, which realizes the global alignment of the first image frame with respect to the second image frame.
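  • One way to realize the feature-region matching and affine transformation described above is sketched below using ORB features; the disclosure does not prescribe a particular feature detector, so ORB and the RANSAC-based affine estimate are assumptions made for illustration:

```python
import cv2
import numpy as np

def globally_align(first_gray, second_gray):
    """Align the first (to-be-processed) frame to the second (reference) frame."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(first_gray, None)
    kp2, des2 = orb.detectAndCompute(second_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Coordinate transformation relationship (rotation, translation, scale).
    M, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = second_gray.shape
    # Affine transformation of the first frame realizes the global alignment.
    aligned_first = cv2.warpAffine(first_gray, M, (w, h))
    return aligned_first, second_gray
```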
  • step S1021 performs global alignment on the first video frame and the second video frame, and obtaining the aligned first video frame and the second video frame may specifically include steps S1021A-S1021B.
  • Step S1021A Perform brightness alignment processing on the first video frame and the second video frame to obtain the brightness-aligned first video frame and the second video frame.
  • In practice, because the ambient light intensity changes in real time, the light illuminating the target object may differ between frames, so the pixel brightness of the target object in the first video frame and the second video frame may not be the same; this introduces a systematic error into the inter-frame difference image of the two frames. To avoid this, brightness alignment may first be performed on the first video frame and the second video frame before the global phase alignment.
  • Aligning the brightness of the first video frame and the second video frame means adjusting the maximum brightness of the two frames to be consistent, or adjusting the black and white levels of the two frames to be consistent.
  • In some embodiments, the brightness alignment is performed by first computing the grayscale histograms of the pixels in the first video frame and the second video frame, and then adjusting the gray values of the pixels in the first video frame according to the two histograms and a preset adjustment rule, so that the brightness of each pixel in the first video frame is adjusted accordingly and the brightness of the two frames becomes aligned.
  • In a specific embodiment, brightness alignment of the first video frame and the second video frame may manifest as the grayscale histograms of the two frames having substantially the same distribution pattern.
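  • A minimal sketch of brightness alignment via grayscale-histogram matching is shown below; matching the cumulative histogram of the first frame to that of the second is one reasonable adjustment rule, not the only one the disclosure allows:

```python
import numpy as np

def match_brightness(first_gray, second_gray):
    """Adjust the gray values of the first frame so that its grayscale histogram
    has roughly the same distribution pattern as the second (reference) frame."""
    hist_first, _ = np.histogram(first_gray.ravel(), bins=256, range=(0, 256))
    hist_second, _ = np.histogram(second_gray.ravel(), bins=256, range=(0, 256))
    cdf_first = np.cumsum(hist_first).astype(np.float64) / first_gray.size
    cdf_second = np.cumsum(hist_second).astype(np.float64) / second_gray.size
    # For every gray level of the first frame, find the gray level of the
    # reference frame with the closest cumulative frequency.
    mapping = np.interp(cdf_first, cdf_second, np.arange(256))
    return mapping[first_gray].astype(np.uint8)
```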
  • Step S1021B Perform a phase alignment operation on the brightness-aligned first video frame and the second video frame to obtain a coordinate transformation relationship between the first video frame and the second video frame.
  • the computing device may select feature regions in the first video frame and the second video frame and match them. After the feature regions in the two frames have been matched, the computing device determines, from the pixel coordinates of the feature regions in the two frames, the coordinate transformation relationship of the target object represented by the feature regions.
  • In some embodiments of the present disclosure, in order to improve the processing speed, the phase alignment operation performed by the computing device on the first video frame and the second video frame may include steps B1-B4.
  • B1: Downsample the brightness-aligned first video frame and second video frame by a preset multiple to obtain the downsampled first video frame and the downsampled second video frame.
  • B2: Perform a phase alignment operation on the downsampled first video frame and the downsampled second video frame to obtain a rotation matrix and a downsampled translation vector.
  • B3: Multiply the downsampled translation vector by the preset multiple to obtain the original offset.
  • B4: Determine the coordinate transformation relationship according to the rotation matrix and the original offset.
  • After the coordinate transformation relationship between the first video frame and the second video frame has been determined in this way, step S1021C may be performed.
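  • Assuming a pure-translation case, steps B1-B3 can be sketched with OpenCV's phase correlation as below; recovering the rotation matrix as well would require a log-polar variant, which is omitted here for brevity, and the downsampling factor is an illustrative choice:

```python
import cv2
import numpy as np

def estimate_translation(first_gray, second_gray, scale=4):
    """B1: downsample both brightness-aligned frames by a preset multiple;
    B2: phase-correlate the downsampled frames;
    B3: multiply the downsampled translation by the multiple to get the original offset."""
    small_first = cv2.resize(first_gray, None, fx=1.0 / scale, fy=1.0 / scale,
                             interpolation=cv2.INTER_AREA).astype(np.float32)
    small_second = cv2.resize(second_gray, None, fx=1.0 / scale, fy=1.0 / scale,
                              interpolation=cv2.INTER_AREA).astype(np.float32)
    (dx, dy), _response = cv2.phaseCorrelate(small_second, small_first)
    return dx * scale, dy * scale  # original offset in full-resolution pixels
```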
  • Step S1021C Perform affine transformation on the brightness-aligned first video frame by using the coordinate transformation relationship to obtain the affine-transformed first video frame.
  • Step S1022 Perform a difference operation on the aligned first video frame and the second video frame to obtain an inter-frame difference image.
  • a differential operation is performed using pixels representing the same object in the aligned first video frame and the second video frame to obtain an inter-frame difference image.
  • It should be noted that when step S1022 is executed, some pixels of the first video frame or the second video frame may have no corresponding pixels for the difference operation; in that case the difference operation is performed only on the pixels of the first video frame and the second video frame that do correspond, to obtain the inter-frame difference image.
  • the alignment between the first video frame and the second video frame is global alignment.
  • the global alignment method is especially suitable for processing target videos formed by moving camera devices to capture stationary target objects.
  • the alignment of the first video frame and the second video frame may also be local alignment. That is, when aligning the first video frame and the second video frame, only some pixel areas in the first video frame are aligned. After the local alignment process is performed to obtain the aligned first video frame and the second video frame, a difference operation may be performed on the first video frame and the second video frame to obtain an inter-frame difference image.
  • Step S103 Perform flat area detection on the first video frame and the second video frame, and obtain the intersection of the flat areas in the first video frame and the second video frame.
  • the flat area in the embodiments of the present disclosure is an area of the first video frame and the second video frame in which the image pixel values change only gently.
  • the flat area may be an area with a specific color and brightness in the first video frame and the second video frame.
  • Fig. 3 is a flowchart of determining a common minimum flat area provided by some embodiments of the present disclosure. As shown in Fig. 3, in some embodiments of the present disclosure, step S103 of performing flat area detection on the first video frame and the second video frame and obtaining the intersection of the flat areas in the two frames may include steps S1031-S1032.
  • Step S1031 For any one of the first video frame and the second video frame, perform flat area extraction on the video frame to obtain the flat area of the video frame.
  • performing flat area extraction on any one of the first video frame and the second video frame means performing flat area extraction on the first video frame and on the second video frame respectively.
  • In some embodiments, performing flat area extraction on a video frame to obtain the flat area of the video frame may include steps S1031A-S1031C.
  • Step S1031A Perform image segmentation on the video frame to obtain multiple image regions of the video frame.
  • In some embodiments of the present disclosure, to segment the video frame, a gradient image of the video frame is first computed; the gradient image is an image composed of gradient values, determined from the gray-level differences between adjacent pixels of the video frame. The video frame is then segmented according to the gradient image: pixels of the gradient image whose values are larger than a preset value are selected as the edges of block images, and the multiple image regions of the video frame are determined based on those edges.
  • Step S1031B Determine the texture parameters of each image region.
  • each image region may be processed separately to obtain texture parameters of the image regions.
  • the texture parameters of the image area are parameters that characterize the texture features of the image area.
  • the texture feature parameter of an image region may be represented by the trace of the intra-block gradient covariance matrix of the region. Specifically, the eigenvalues of the covariance matrix are first obtained from the intra-block gradient covariance matrix of each image region; the eigenvalues are then summed to obtain the trace of the matrix, which is used as the texture parameter of the image region.
  • Step S1031C Taking the image area whose texture parameter is smaller than the preset parameter threshold as a flat area of the video frame.
  • a flat area is an image area in which texture features are less abundant, so an image area whose texture parameter is smaller than a preset parameter threshold can be regarded as a flat area of the video frame.
  • Step S1032: Perform an AND operation on the flat area of the first video frame and the flat area of the second video frame to obtain the intersection of the flat areas.
  • After the flat areas of the two frames have been obtained, an AND operation on them yields the intersection of the flat areas. That is, if the pixel area at a given position is a flat area in both the first video frame and the second video frame, the pixel area at that position belongs to the intersection of the flat areas.
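  • The block-wise flat area extraction (steps S1031A-S1031C) and the AND operation of step S1032 can be sketched as follows; here the image segmentation is simplified to a regular grid of blocks, and the block size and parameter threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def flat_area_mask(gray, block=16, threshold=50.0):
    """Mark a block as flat when the trace of its intra-block gradient covariance
    matrix (the sum of the matrix's eigenvalues) is below a preset parameter threshold."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            bx = gx[y:y + block, x:x + block].ravel()
            by = gy[y:y + block, x:x + block].ravel()
            cov = np.cov(np.stack([bx, by]))   # 2x2 intra-block gradient covariance
            if np.trace(cov) < threshold:      # trace = sum of eigenvalues
                mask[y:y + block, x:x + block] = True
    return mask

def flat_area_intersection(first_gray, second_gray):
    """Step S1032: AND operation on the flat areas of the two frames."""
    return np.logical_and(flat_area_mask(first_gray), flat_area_mask(second_gray))
```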
  • the computing device may execute step S104.
  • Step S104 Calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of flat areas in the inter-frame difference image.
  • After obtaining the intersection of the flat areas, the computing device may use it to filter the pixel information of the inter-frame difference image and determine the sub-image of the inter-frame difference image used to compute the temporal noise value. The temporal noise value of the first video frame can then be calculated from the pixel information of this sub-image.
  • Fig. 4 is a flow chart of determining a time-domain noise value provided by some embodiments of the present disclosure.
  • the pixel information in step S104 includes the pixel value of each pixel in the inter-frame difference image at the intersection of the flat area.
  • step S104 includes steps S1041-S1042.
  • Step S1041 Calculate the weighted average of the pixel values of each pixel.
  • Step S1042 Use the weighted average as the time-domain noise value corresponding to the target timestamp.
  • In some embodiments of the present disclosure, different pixel values (that is, different gray values) correspond to different weights. In this case, to obtain the temporal noise value corresponding to the target timestamp, the weighted average of the pixel values is first calculated and then used as the temporal noise value corresponding to the target timestamp.
  • In a specific implementation, before performing step S1041, the electronic device may further perform step S1043.
  • Step S1043 Based on the preset correspondence relationship between the pixel value and the weight value, determine the weight value corresponding to each pixel value.
  • the correspondence between each pixel value and the weight is stored in the electronic device. After obtaining the pixel information of the intersection of the flat area in the inter-frame difference image, the respective weight values corresponding to each pixel value may be determined based on the foregoing correspondence relationship.
  • step S1041 may be executed.
  • step S1041 may include steps S1041A-S1041B.
  • Step S1041A For each pixel value, calculate the product of the pixel value and the weight value corresponding to the pixel value to obtain a weighted pixel value corresponding to the pixel value.
  • Step S1041B Carry out weighted average according to the weighted pixel values corresponding to each pixel value to obtain the weighted average value.
  • For example, if the pixel values of the intersection of the flat areas in the inter-frame difference image are x1, x2, ..., xn and the corresponding weight values are ω1, ω2, ..., ωn, then steps S1041A-S1041B compute the weighted average as (ω1·x1 + ω2·x2 + ... + ωn·xn) / (ω1 + ω2 + ... + ωn).
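  • A sketch of steps S1041-S1043 is given below; the weight lookup table mapping a difference magnitude to a weight is an assumption, since the disclosure only states that a preset pixel-value-to-weight correspondence is stored:

```python
import numpy as np

def temporal_noise_value(diff_image, flat_intersection, weight_table):
    """Weighted average of the difference-image pixels inside the flat-area intersection.

    weight_table: 1-D array of length 256 where weight_table[v] is the preset
    weight associated with pixel value v."""
    values = np.abs(diff_image[flat_intersection]).astype(np.int64)
    weights = weight_table[values]                      # step S1043: look up each weight
    weighted = weights * values                         # step S1041A: weighted pixel values
    return weighted.sum() / max(weights.sum(), 1e-12)   # step S1041B: weighted average
```

  • The result is used directly as the temporal noise value for the timestamp of the first video frame (step S1042).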
  • In the video noise detection method provided by the embodiments of the present disclosure, the inter-frame difference image and the intersection of the flat areas are determined from the adjacent first and second video frames of the target video, and the temporal noise value corresponding to the first video frame is then calculated from the pixel information of the intersection of the flat areas in the difference image. Because the inter-frame difference image characterizes the noise fluctuation of a video frame in the time domain, and the intersection of the flat areas is the image region in which noise characteristics are most apparent, the noise characteristics of the intersection of the flat areas in the inter-frame difference image are pronounced.
  • The temporal noise value calculated from the pixel information of the intersection of the flat areas in the inter-frame difference image therefore evaluates video frame noise in the time domain more accurately, improving the accuracy of noise evaluation.
  • Because the video noise detection method needs only a small amount of processing to identify the temporal noise value of a video frame, it can meet the requirement of processing captured video in real time.
  • In some embodiments of the present disclosure, after the temporal noise value corresponding to the first video frame has been calculated in step S104, the video noise detection method may further include step S105.
  • Step S105 Determine whether to perform noise reduction processing on the first video frame according to the temporal noise value.
  • By deciding according to the temporal noise value whether to perform noise reduction on the first video frame, noise reduction processing can be applied to the first video frame when the temporal noise value exceeds a set value, and skipped when it does not.
  • Fig. 5 is a flowchart of a video noise detection method provided by some other examples of the present disclosure. As shown in FIG. 5, in some embodiments of the present disclosure, the video noise detection method includes steps S501-S506.
  • Step S501 Extracting a first video frame and a second video frame from the target video, where the first video frame and the second video frame are adjacent video frames.
  • Step S502 Perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame.
  • Step S503 Perform flat area detection on the first video frame and the second video frame, and obtain the intersection of the flat areas in the first video frame and the second video frame.
  • Step S504 Calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat area in the inter-frame difference image.
  • Step S505: Perform noise sensitivity impact evaluation on the first video frame or the second video frame to obtain a noise sensitivity influence coefficient of at least one dimension.
  • Performing noise sensitivity impact evaluation on the first video frame or the second video frame means processing the first video frame or the second video frame with one or more pre-selected image evaluation methods, each corresponding to one dimension, to obtain at least one noise sensitivity evaluation coefficient.
  • Because the second video frame is the reference video frame, step S505 preferably performs the noise sensitivity impact evaluation on the second video frame to obtain the noise sensitivity influence coefficient.
  • the noise sensitivity influence coefficient may include at least one of a detail richness influence coefficient, a displacement rate influence coefficient, and a brightness influence coefficient.
  • the detail richness influence coefficient is an evaluation coefficient that characterizes the influence of the detail richness of an image frame on noise sensitivity. Detail strength detection is performed on the image frame to obtain the detail richness influence coefficient.
  • In some embodiments of the present disclosure, the Laplacian of the image frame may be computed to obtain the detail information of the image frame, and the detail information is used to obtain the detail richness influence coefficient.
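  • A common way to turn the Laplacian response into a single detail-richness figure is its variance over the frame; using the variance as the coefficient is an illustrative assumption, not a rule stated by the disclosure:

```python
import cv2
import numpy as np

def detail_richness_coefficient(gray):
    """Higher Laplacian variance indicates richer detail, which tends to mask noise."""
    detail = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    return float(np.var(detail))
```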
  • the displacement rate influence coefficient is an evaluation coefficient used to evaluate the influence of the object displacement rate on the noise sensitivity in two video frames.
  • In some embodiments of the present disclosure, image displacement detection may be performed on the first video frame and the second video frame to obtain a displacement amount. Image displacement detection determines, through feature extraction and analysis of the two frames, the displacement of the object between the first video frame and the second video frame; the displacement may include a displacement in the length direction and a displacement in the width direction. The displacement rate influence coefficient is then obtained from the displacement amount and one of the first video frame or the second video frame.
  • Taking the second video frame as an example, the length displacement may be divided by the image length of the second video frame to obtain the length-direction rate influence coefficient, and the width displacement may be divided by the image width of the second video frame to obtain the width-direction rate influence coefficient.
  • The displacement rate influence coefficient is then obtained from the length-direction and width-direction rate influence coefficients; specifically, the root mean square of the two may be taken as the displacement rate influence coefficient.
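  • Combining the normalized length- and width-direction displacements by their root mean square, as described above, might look like the following sketch; dx and dy would come from the image displacement detection (for example, the phase-correlation sketch earlier), and the frame dimensions are those of the reference frame:

```python
import numpy as np

def displacement_rate_coefficient(dx, dy, frame_width, frame_height):
    """Normalize the detected displacement by the reference-frame size and take the RMS."""
    rate_x = dx / frame_width    # width-direction rate influence coefficient
    rate_y = dy / frame_height   # length-direction rate influence coefficient
    return float(np.sqrt((rate_x ** 2 + rate_y ** 2) / 2.0))
```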
  • the brightness influence coefficient is an evaluation coefficient used to evaluate the influence of video frame brightness on noise sensitivity.
  • In some embodiments of the present disclosure, the number of pixels in the video frame whose gray value exceeds a set value may be counted, and the counted number divided by the total number of pixels in the video frame to obtain the area proportion of the highlighted region, which is used as the brightness influence coefficient.
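  • The highlight-area proportion can be computed directly as below; the threshold of 200 is an illustrative set value:

```python
import numpy as np

def brightness_coefficient(gray, highlight_threshold=200):
    """Proportion of highlighted pixels (gray value above a set value) in the frame."""
    return float(np.count_nonzero(gray > highlight_threshold)) / gray.size
```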
  • Step S506 Obtain the noise perception score of the first video frame according to the temporal noise value and the noise perception influence coefficient of at least one dimension.
  • After the temporal noise value and the noise sensitivity influence coefficient of at least one dimension have been obtained, the noise perception score of the first video frame can be obtained according to a pre-specified scoring rule.
  • In some embodiments of the present disclosure, the temporal noise value is multiplied by each noise sensitivity influence coefficient in turn, and the resulting product is taken as the noise perception score of the first video frame.
  • In some other embodiments of the present disclosure, the temporal noise value is multiplied by each noise sensitivity influence coefficient separately to obtain individual products, and the products are then added together to obtain the noise perception score of the first video frame.
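  • The two simple combination rules described above, a running product or a sum of pairwise products, could be sketched as follows; the choice between them, and any further weighting, is left open by the disclosure:

```python
def noise_perception_score(temporal_noise, coefficients, rule="product"):
    """coefficients: iterable of noise sensitivity influence coefficients (one per dimension)."""
    if rule == "product":
        score = temporal_noise
        for c in coefficients:
            score *= c
        return score
    # "sum": add the products of the temporal noise value with each coefficient.
    return sum(temporal_noise * c for c in coefficients)
```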
  • In other embodiments of the present disclosure, the temporal noise value and the noise sensitivity influence coefficients of the various dimensions may also be input into a pre-trained deep learning scoring model, which processes them jointly to obtain the noise perception score of the first video frame.
  • The deep learning scoring model is trained on sample images, the evaluation coefficients of each dimension corresponding to the sample images, and manually annotated scores.
  • Obtaining the noise perception score of the first video frame from the temporal noise value and the noise sensitivity influence coefficient of at least one dimension makes it possible to evaluate the visual perception of noise in the first video frame, or to determine whether to perform noise reduction processing on the first video frame.
  • the embodiment of the present disclosure also provides a video noise detection device.
  • the video noise detection device can be set in the aforementioned computing device to realize the detection of the target video noise.
  • FIG. 6 is a schematic diagram of a video noise detection device 600 provided by an embodiment of the present disclosure. As shown in FIG. 6, the video noise detection device 600 may include an extraction unit 601, a processing unit 602, an intersection determination unit 603 and a calculation unit 604.
  • the extraction unit 601 is used for extracting a first video frame and a second video frame in the target video, where the first video frame and the second video frame are adjacent video frames.
  • the processing unit 602 is configured to perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame.
  • the intersection determination unit 603 is configured to perform flat region detection on the first video frame and the second video frame, to obtain the intersection of the flat regions in the first video frame and the second video frame.
  • the calculating unit 604 is configured to calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat area in the inter-frame difference image.
  • the pixel information includes the pixel value of each pixel in the inter-frame difference image at the intersection of the flat area.
  • the calculation unit 604 includes a weighted average calculation subunit and a noise value calculation subunit.
  • the weighted average calculation subunit is used to calculate the weighted average of the pixel values of each pixel, and the noise value calculation subunit is used to use the weighted average as the temporal noise value corresponding to the target time stamp.
  • the calculation unit 604 further includes a weight value acquisition subunit, which is configured to determine the weight value corresponding to each pixel value based on a preset correspondence between pixel values and weight values.
  • the weighted average calculation subunit first calculates the product of the pixel value and the weight value corresponding to the pixel value for each pixel value to obtain the weighted pixel value corresponding to the pixel value; then, according to the weighted pixel value corresponding to each pixel value A weighted average is performed to obtain a weighted average.
  • the video noise detection apparatus 600 further includes a noise perception influence coefficient acquisition unit and a noise perception score calculation unit.
  • the noise sensitivity influence coefficient acquiring unit is configured to evaluate the noise sensitivity influence on the first video frame and/or the second video frame, and obtain the noise sensitivity influence coefficient of at least one dimension.
  • the noise perception score calculation unit is used to obtain the noise perception score of the first video frame according to the temporal noise value and the noise sensitivity influence coefficient of at least one dimension; the noise perception score is used to evaluate the visual perception of noise in the first video frame and/or to determine whether to perform noise reduction processing on the first video frame.
  • In some embodiments, the noise sensitivity influence coefficient of at least one dimension includes the detail richness influence coefficient, and performing noise sensitivity evaluation on the first video frame or the second video frame includes: performing detail strength detection on the first video frame or the second video frame to obtain the detail richness influence coefficient.
  • In some embodiments, the evaluation coefficient of at least one dimension includes the displacement rate influence coefficient, and performing noise sensitivity evaluation on the first video frame or the second video frame includes: performing image displacement detection on the first video frame and the second video frame to obtain the displacement rate influence coefficient.
  • In some embodiments, the evaluation coefficient of at least one dimension includes the brightness influence coefficient, and performing noise sensitivity evaluation on the first video frame or the second video frame includes: performing highlight area detection on the first video frame or the second video frame to obtain the brightness influence coefficient.
  • the intersection determination unit 603 includes a flat area extraction subunit and an intersection determination subunit.
  • the flat area extracting subunit is used for extracting the flat area of the video frame for any one of the first video frame and the second video frame to obtain the flat area of the video frame.
  • the intersection determination subunit is configured to perform an AND operation on the flat area of the first video frame and the flat area of the second video frame to obtain the intersection of the flat areas.
  • the flat area extraction subunit includes an image area segmentation module, a texture parameter determination module and a flat area selection module.
  • the image area segmentation module is used for image segmentation of the video frame to obtain multiple image areas of the video frame.
  • the texture parameter determination module is used to determine the texture parameters of each image region.
  • the flat area selection module is used to use the image area whose texture parameter is smaller than the preset parameter threshold as the flat area of the video frame.
  • the video noise detection apparatus 600 further includes an alignment unit.
  • the alignment unit is configured to perform global alignment on the first video frame and the second video frame to obtain the aligned first video frame and the second video frame.
  • the processing unit 602 performs a difference operation on the aligned first video frame and the second video frame to obtain an inter-frame difference image.
  • the alignment unit includes a brightness alignment subunit, a coordinate transformation relationship determination subunit, an affine transformation subunit, and an aligned video frame determination subunit.
  • the brightness alignment subunit is configured to perform brightness alignment processing on the first video frame and the second video frame to obtain the brightness-aligned first video frame and second video frame.
  • the coordinate conversion relationship determining subunit is used to perform phase alignment operation on the brightness-aligned first video frame and the second video frame to obtain the coordinate conversion relationship between the first video frame and the second video frame.
  • the affine transformation sub-unit is used to perform affine transformation on the first video frame after brightness alignment by using the coordinate transformation relationship to obtain the first video frame after affine transformation.
  • the aligned video frame determining subunit is used to use the affine transformed first video frame as the aligned first video frame, and use the luminance-aligned second video frame as the aligned second video frame.
  • the coordinate conversion relationship determination subunit includes a downsampling module, a phase alignment calculation module, an original offset calculation module and a coordinate conversion relationship determination module.
  • the down-sampling module is used for down-sampling the brightness-aligned first video frame and the second video frame by a preset multiple to obtain the down-sampled first video frame and the down-sampled second video frame.
  • the phase alignment operation module is used for performing phase alignment operation on the downsampled first video frame and the downsampled second video frame to obtain a rotation matrix and a downsampled translation vector.
  • the original offset calculation module is used to multiply the downsampled translation vector with a preset multiple to obtain the original offset.
  • the coordinate conversion relationship determination module is used to determine the coordinate conversion relationship according to the rotation matrix and the original offset.
  • the video noise detection device further includes a noise reduction processing unit.
  • the noise reduction processing unit is configured to determine whether to perform noise reduction processing on the first video frame according to the time domain noise value.
  • the video noise detection apparatus 600 shown in FIG. 6 can execute each step in the method embodiments shown in FIG. 1 to FIG. 5 and realize each process and effect of those method embodiments, which will not be described again here.
  • An embodiment of the present disclosure also provides a computing device, which may include a processor and a memory, and the memory may be used to store executable instructions.
  • the processor can be used to read executable instructions from the memory and execute the executable instructions to implement the video noise detection method in the above embodiments.
  • Fig. 7 shows a schematic structural diagram of a computing device 700 suitable for implementing an embodiment of the present disclosure.
  • the computing device 700 in the embodiment of the present disclosure may be an electronic device or a server.
  • The electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as vehicle navigation terminals) and wearable devices, and fixed terminals such as digital TVs, desktop computers and smart home devices.
  • computing device 700 shown in FIG. 7 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • the computing device 700 may include a processing device (such as a central processing unit or a graphics processing unit) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the computing device 700.
  • the processing device 701, ROM 702, and RAM 703 are connected to each other through a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704 .
  • The following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output devices 707 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage devices 708 including, for example, a magnetic tape and a hard disk; and a communication device 709.
  • Communication means 709 may allow computing device 700 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 7 shows computing device 700 having various means, it is to be understood that implementing or possessing all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • An embodiment of the present disclosure also provides a computer-readable storage medium; the storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the video noise detection method in the above-mentioned embodiments.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 709, or from storage means 708, or from ROM 702.
  • When the computer program is executed by the processing device 701, the above-mentioned functions defined in the video noise detection method of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any of the above combination. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • clients and servers can communicate using any currently known or future developed network protocol, such as HTTP, and can be interconnected with any form or medium of digital data communication (eg, a communication network).
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned computing device, or may exist independently without being assembled into the computing device.
  • The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the computing device, the computing device is caused to: extract a first video frame and a second video frame from the target video, where the first video frame and the second video frame are adjacent video frames; perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame; perform flat area detection on the first video frame and the second video frame to obtain the intersection of the flat areas in the first video frame and the second video frame; and calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat areas in the inter-frame difference image.
  • the computer program codes for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, the above-mentioned programming languages include but not limited to object-oriented programming languages - such as Java, Smalltalk, C++, and also conventional procedural programming languages - such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by a dedicated hardware-based system that performs the specified functions or operations , or may be implemented by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of a unit does not constitute a limitation of the unit itself under certain circumstances.
  • Exemplary types of hardware logic components include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs) and Complex Programmable Logic Devices (CPLDs).
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a video noise detection method, apparatus, device and medium. The video noise detection method includes: extracting a first video frame and a second video frame from a target video, the first video frame and the second video frame being adjacent video frames; performing differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame; performing flat area detection on the first video frame and the second video frame to obtain the intersection of the flat areas in the first video frame and the second video frame; and calculating the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat areas in the inter-frame difference image. The method can evaluate video frame noise in the time domain relatively accurately and can meet the requirement of processing captured video in real time.

Description

Video noise detection method, apparatus, device and medium

CROSS-REFERENCE TO RELATED APPLICATION

This application is based on, and claims priority to, Chinese application No. 202210102693.X, filed on January 27, 2022 and entitled "Video noise detection method, apparatus, device and medium", the disclosure of which is hereby incorporated into this application in its entirety.

TECHNICAL FIELD

The present disclosure relates to the technical field of video processing, and in particular to a video noise detection method, apparatus, device and medium.

BACKGROUND

At present, users are accustomed to shooting video with the built-in camera of a smartphone. However, owing to the performance limitations of such cameras, certain types of video shot with a smartphone's built-in camera may contain strong noise, which seriously degrades the viewing experience.

To reduce the impact of video noise on the viewing experience, a noise detection algorithm is needed to identify the noise in the captured video before the noise is removed. However, the noise detection algorithms currently available in the industry cannot provide both accuracy and real-time performance, and therefore cannot meet the requirement of processing video noise in real time.
SUMMARY

To solve the above technical problem, or at least partially solve it, the present disclosure provides a video noise detection method, apparatus, device and medium.

In a first aspect, an embodiment of the present disclosure provides a video noise detection method, including:

extracting a first video frame and a second video frame from a target video, where the first video frame and the second video frame are adjacent video frames;

performing differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame;

performing flat area detection on the first video frame and the second video frame to obtain the intersection of the flat areas in the first video frame and the second video frame; and

calculating the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat areas in the inter-frame difference image.

In a second aspect, an embodiment of the present disclosure provides a video noise detection apparatus, including:

an extraction unit configured to extract a first video frame and a second video frame from a target video, where the first video frame and the second video frame are adjacent video frames;

a processing unit configured to perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame;

an intersection determination unit configured to perform flat area detection on the first video frame and the second video frame to obtain the intersection of the flat areas in the first video frame and the second video frame; and

a calculation unit configured to calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat areas in the inter-frame difference image.

In a third aspect, an embodiment of the present disclosure provides a computing device, including: a processor; and a memory configured to store executable instructions, where the processor is configured to read the executable instructions from the memory and execute them to implement the video noise detection method described above.

In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the video noise detection method described above.

In a fifth aspect, the present disclosure provides a computer program, including instructions which, when executed by a processor, cause the processor to perform any of the video noise detection methods provided by the embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features, advantages and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, identical or similar reference signs denote identical or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.

Fig. 1 is a flowchart of a video noise detection method provided by some embodiments of the present disclosure;

Fig. 2 is a flowchart of obtaining an inter-frame difference image provided by some embodiments of the present disclosure;

Fig. 3 is a flowchart of determining a common minimum flat area provided by some embodiments of the present disclosure;

Fig. 4 is a flowchart of determining a temporal noise value provided by some embodiments of the present disclosure;

Fig. 5 is a flowchart of a video noise detection method provided by some other examples of the present disclosure;

Fig. 6 is a schematic diagram of a video noise detection apparatus provided by an embodiment of the present disclosure;

Fig. 7 is a schematic structural diagram of a computing device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION

Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as being limited to the embodiments set forth here; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.

It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the steps shown. The scope of the present disclosure is not limited in this respect.

As used herein, the term "comprise" and its variations are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that references to "first", "second" and the like in the present disclosure are only used to distinguish different apparatuses, modules or units, and are not intended to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules or units.

It should be noted that references to "a" or "a plurality of" in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".

The names of the messages or information exchanged between the apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of those messages or information.
Fig. 1 is a flowchart of a video noise detection method provided by some embodiments of the present disclosure. As shown in Fig. 1, the video noise detection method provided by the embodiment of the present disclosure includes steps S101-S104.

It should be noted that the video noise detection method provided by the embodiments of the present disclosure may be performed by a computing device. Computing devices include, but are not limited to, electronic devices such as smartphones, notebook computers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), in-vehicle terminals (such as in-vehicle navigation terminals) and wearable devices, and may also include servers.

Step S101: Extract a first video frame and a second video frame from a target video, where the first video frame and the second video frame are adjacent video frames.

In the embodiments of the present disclosure, the computing device may acquire the target video. For example, when the computing device is an electronic device such as a smartphone, it may capture images through a built-in camera to form the target video to be processed. As another example, when the computing device is a server, it may receive the target video to be processed sent by an electronic device over a network. Of course, the computing device may also read a locally stored target video and process it.

The target video may be a video in an original format or a video that has been encoded, which is not particularly limited in the embodiments of the present disclosure. For example, when the computing device is an electronic device and the target video is captured by that device, the target video is preferably a video in an original format. When the computing device is a server and the target video is a video to be processed that is sent over a network, the target video is preferably an encoded video, in order to ensure real-time transmission.

After acquiring the target video, the computing device extracts video frames from it. Specifically, the computing device extracts a first video frame and a second video frame from the target video. The first video frame and the second video frame are adjacent video frames. In practice, the first video frame may be the later of the two adjacent video frames and the second video frame the earlier one. It should be noted that the first video frame and the second video frame have the same horizontal and vertical resolution.
Step S102: Perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame.

In some embodiments of the present disclosure, the differential processing of the first video frame and the second video frame subtracts the gray values of the pixels at corresponding positions of the two frames to obtain pixel gray-level differences. In other words, the inter-frame difference image of the first video frame and the second video frame is the image formed by these gray-level differences arranged in the order of the corresponding pixels in the two frames.

In some specific applications, while the target video is being shot, the capturing device (for example, a smartphone with a built-in camera) moves quickly relative to the subject, so that the image content of the captured first video frame and second video frame is not the same. If differential processing is applied directly to the first video frame and the second video frame, the resulting difference image may fail to show the pixel gray-level difference of the subject (that is, the fast-moving object) between the two frames.

Fig. 2 is a flowchart of obtaining an inter-frame difference image provided by some embodiments of the present disclosure. As shown in Fig. 2, to solve the above problem, in some embodiments of the present disclosure, step S102 performed by the computing device may include steps S1021-S1022.
Step S1021: Perform global alignment on the first video frame and the second video frame to obtain the aligned first video frame and second video frame.

Global alignment of the first video frame and the second video frame takes one of the two frames as a reference and matches the image pixel content representing an object in the other frame with the image pixel content representing the same object in the reference frame.

In the embodiments of the present disclosure, because the first video frame is the frame to be processed and the second video frame is the reference frame, the second video frame is used as the reference, the first video frame is used as the frame to be aligned, and the first video frame is processed to obtain the aligned first video frame and second video frame.

Specifically, to perform global alignment on the first video frame and the second video frame, feature regions in the two frames may be selected and matched. After the feature regions in the first video frame and the second video frame have been matched, the coordinate transformation relationship of the target object represented by the feature regions in the two frames can be determined from the pixel coordinates of the feature regions in the two frames.

After the coordinate transformation relationship of the target object between the first image frame and the second image frame has been determined, affine transformation is applied to the first image frame according to that coordinate transformation relationship, which realizes global alignment of the first image frame with respect to the second image frame.
In some embodiments of the present disclosure, step S1021 of performing global alignment on the first video frame and the second video frame to obtain the aligned first video frame and second video frame may specifically include steps S1021A-S1021B.

Step S1021A: Perform brightness alignment on the first video frame and the second video frame to obtain the brightness-aligned first video frame and second video frame.

In practice, because the ambient light intensity changes in real time, the intensity of the light illuminating the target object may differ, so that the pixel brightness of the target object represented in the first video frame and in the second video frame is not the same, which introduces a systematic error into the inter-frame difference image of the two frames.

To avoid this problem, in some embodiments of the present disclosure, brightness alignment may first be performed on the first video frame and the second video frame before the global phase alignment. Aligning the brightness of the first video frame and the second video frame means adjusting the maximum brightness of the two frames to be consistent, or adjusting the black and white levels of the two frames to be consistent.

In some embodiments of the present disclosure, the brightness alignment is performed by first obtaining the grayscale histograms of the pixels in the first video frame and the second video frame, and then adjusting the gray values of the pixels in the first video frame according to the grayscale histograms of the two frames and a preset co-directional adjustment rule, so that the brightness of each pixel in the first video frame is adjusted according to the rule and the brightness of the first video frame and the second video frame becomes aligned. In a specific embodiment, brightness alignment of the first video frame and the second video frame may manifest as the grayscale histograms of the two frames having substantially the same distribution pattern.
Step S1021B: Perform a phase alignment operation on the brightness-aligned first video frame and second video frame to obtain the coordinate transformation relationship between the first video frame and the second video frame.

The computing device may select feature regions in the first video frame and the second video frame and match the feature regions. After the feature regions in the two frames have been matched, the computing device determines, from the pixel coordinates of the feature regions in the two frames, the coordinate transformation relationship of the target object represented by the feature regions in the two frames.

In some embodiments of the present disclosure, in order to increase the processing speed, the phase alignment operation performed by the computing device on the first video frame and the second video frame may include steps B1-B4.

B1: Downsample the brightness-aligned first video frame and second video frame by a preset multiple to obtain the downsampled first video frame and the downsampled second video frame.

B2: Perform a phase alignment operation on the downsampled first video frame and the downsampled second video frame to obtain a rotation matrix and a downsampled translation vector.

B3: Multiply the downsampled translation vector by the preset multiple to obtain the original offset.

After the coordinate transformation relationship between the first video frame and the second video frame has been determined in this way, step S1021C may be performed.
Step S1021C: Perform affine transformation on the brightness-aligned first video frame by using the coordinate transformation relationship to obtain the affine-transformed first video frame.

After the coordinate transformation relationship has been obtained, affine transformation, that is, rotation and/or translation, is applied to the brightness-aligned first video frame according to that relationship, yielding the affine-transformed first video frame.

Step S1022: Perform a difference operation on the aligned first video frame and second video frame to obtain the inter-frame difference image.

After the aligned first video frame and second video frame have been obtained, a difference operation is performed on the pixels of the two aligned frames that represent the same object, yielding the inter-frame difference image.

It should be noted that when step S1022 is executed, some pixels of the first video frame or the second video frame may have no corresponding pixels for the difference operation; in that case the difference operation is performed only on part of the pixels of the first video frame and the second video frame to obtain the inter-frame difference image.

In the foregoing embodiments of the present disclosure, the alignment of the first video frame and the second video frame is global alignment. Global alignment is particularly suitable for processing a target video formed by a moving capturing device shooting a stationary target object.

In other embodiments of the present disclosure, the alignment of the first video frame and the second video frame may also be local alignment, that is, only some pixel areas of the first video frame are aligned. After the local alignment has been performed and the aligned first video frame and second video frame obtained, a difference operation may be performed on the first video frame and the second video frame to obtain the inter-frame difference image.
Step S103: Perform flat area detection on the first video frame and the second video frame to obtain the intersection of the flat areas in the first video frame and the second video frame.

The flat area in the embodiments of the present disclosure is an area of the first video frame and the second video frame in which the image pixel values change relatively gently. For example, the flat area may be an area of the first video frame and the second video frame with a specific color and brightness.

Fig. 3 is a flowchart of determining a common minimum flat area provided by some embodiments of the present disclosure. As shown in Fig. 3, in some embodiments of the present disclosure, step S103 of performing flat area detection on the first video frame and the second video frame and obtaining the intersection of the flat areas in the two frames may include steps S1031-S1032.

Step S1031: For either of the first video frame and the second video frame, perform flat area extraction on the video frame to obtain the flat area of the video frame.

In the embodiments of the present disclosure, performing flat area extraction on either of the first video frame and the second video frame means performing flat area extraction on the first video frame and on the second video frame respectively.

In some embodiments, performing flat area extraction on the first video frame and the second video frame to obtain the flat area of a video frame may include steps S1031A-S1031C.

Step S1031A: Perform image segmentation on the video frame to obtain multiple image regions of the video frame.

In some embodiments of the present disclosure, to segment the video frame, a gradient image of the video frame may be computed; the gradient image is an image composed of gradient values, determined from the gray-level differences between adjacent pixels of the video frame. The video frame is then segmented according to the gradient image: pixels of the gradient image whose values are larger than a preset value are selected as the edges of block images, and the multiple image regions of the video frame are determined based on those edges.
Step S1031B: Determine the texture parameter of each image region.

After the multiple image regions have been obtained, each image region may be processed separately to obtain its texture parameter. The texture parameter of an image region is a parameter that characterizes the texture features of that region.

In some embodiments of the present disclosure, the texture feature parameter of an image region may be represented by the trace of the intra-block gradient covariance matrix of the region. Specifically, the eigenvalues of the covariance matrix are first obtained from the intra-block gradient covariance matrix of each image region; the eigenvalues are then summed to obtain the trace of the matrix, which is used as the texture parameter of the image region.

Step S1031C: Take the image regions whose texture parameter is smaller than a preset parameter threshold as the flat area of the video frame.

The larger the texture parameter, the richer the texture information in the image region. A flat area is an image region in which texture features are comparatively sparse, so the image regions whose texture parameter is smaller than the preset parameter threshold can be taken as the flat area of the video frame.

Step S1032: Perform an AND operation on the flat area of the first video frame and the flat area of the second video frame to obtain the intersection of the flat areas.

After the flat area of the first video frame and the flat area of the second video frame have been obtained, an AND operation on the flat areas of the two frames yields the intersection of the flat areas. That is, if the pixel area at a given position is a flat area in both the first video frame and the second video frame, the pixel area at that position belongs to the intersection of the flat areas.

After the common minimum flat area has been obtained, the computing device may execute step S104.
Step S104: Calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat areas in the inter-frame difference image.

In the embodiments of the present disclosure, after obtaining the intersection of the flat areas, the computing device may use it to filter the pixel information of the inter-frame difference image and determine the sub-image of the inter-frame difference image used to compute the temporal noise value. The temporal noise value of the first video frame can then be calculated from the pixel information of that sub-image.

Fig. 4 is a flowchart of determining a temporal noise value provided by some embodiments of the present disclosure. As shown in Fig. 4, in some embodiments of the present disclosure, the pixel information in step S104 includes the pixel value of each pixel of the intersection of the flat areas in the inter-frame difference image. In this case, step S104 includes steps S1041-S1042.

Step S1041: Calculate the weighted average of the pixel values of the pixels.

Step S1042: Use the weighted average as the temporal noise value corresponding to the target timestamp.

In some embodiments of the present disclosure, different pixel values (that is, different gray values) correspond to different weights. In this case, to obtain the temporal noise value corresponding to the target timestamp, the weighted average of the pixel values of the pixels is first calculated, and the weighted average is then used as the temporal noise value corresponding to the target timestamp.
具体实施中,在执行步骤S1041之前,电子设备还可以包括步骤S1043。
步骤S1043:基于预设的像素值与权重值的对应关系,确定各个像素值各自对应的权重值。
本公开实施例中,电子设备中存储有各个像素值与权重之的对应关系。在获取到平坦区域的交集在帧间差分图像中的像素信息后,可以基于前述的对应关系,确定各个像素值各自对应的权重值。
在确定权重值后,可以执行步骤S1041。具体实施中,步骤S1041可以包括步骤S1041A-S1042B。
步骤S1041A:针对每一像素值,计算像素值与像素值对应的权重值的乘积,得到像素值对应的加权像素值。
步骤S1041B:根据各个像素值各自对应的加权像素值进行加权平均,得到加权平均值。
例如,平坦区域的交集在帧间差分图像中的像素信息分别为x1,x2,…,xn,对应的权重值为ω12,…,ωn,则采用步骤S1041A–S1041B可以采用公式计算得到加权平均值。
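A minimal sketch of steps S1043 and S1041A-S1042 follows, assuming `diff_image` and `flat_intersection` from the earlier sketches and a hypothetical uniform weight look-up table; the real correspondence between pixel values and weights is preset by the implementer and is not specified here.

```python
import numpy as np

# Pixel values of the flat-region intersection within the difference image.
diff_vals = diff_image[flat_intersection].astype(np.int64)

# S1043: preset pixel-value -> weight correspondence (placeholder: all ones).
weight_lut = np.ones(256, dtype=np.float64)
weights = weight_lut[diff_vals]

# S1041A-S1041B: weighted pixel values, then their weighted average,
# which serves as the temporal noise value of the target timestamp.
temporal_noise = float(np.sum(weights * diff_vals) / np.sum(weights))
```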
In the video noise detection method provided by the embodiments of the present disclosure, the inter-frame difference image and the intersection of the flat regions are determined from the adjacent first and second video frames of the target video, and the temporal noise value corresponding to the first video frame is then calculated from the intersection of the flat regions and the pixel information of the inter-frame difference image. Because the inter-frame difference image can evaluate the noise fluctuation characteristics of the video frames in the temporal domain, and the intersection of the flat regions is an image region in which the noise characteristics within the frames are relatively obvious, the noise characteristics of the intersection of the flat regions in the inter-frame difference image are pronounced. The temporal noise value calculated from the pixel information of the intersection of the flat regions in the inter-frame difference image therefore evaluates the video frame noise relatively accurately in the temporal domain, improving the accuracy of the noise evaluation. In addition, because the method can determine the temporal noise value of a video frame with only a small amount of processing, it can meet the requirement of real-time processing of captured video.
In some embodiments of the present disclosure, after the temporal noise value corresponding to the first video frame is calculated in step S104, the video noise detection method may further include step S105.
Step S105: determining, according to the temporal noise value, whether to perform noise reduction processing on the first video frame.
By determining whether to perform noise reduction processing on the first video frame according to the temporal noise value, noise reduction processing can be performed on the first video frame when the temporal noise value exceeds a set value, and is not performed when the temporal noise value does not exceed the set value.
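A short sketch of the decision in step S105; the threshold value and the choice of OpenCV's non-local-means denoiser are illustrative assumptions, not requirements of the disclosure.

```python
import cv2

NOISE_THRESHOLD = 2.5  # assumed application-specific set value

if temporal_noise > NOISE_THRESHOLD:
    # Denoise the first video frame only when its temporal noise is high.
    frame1_denoised = cv2.fastNlMeansDenoising(frame1, None, 10)
else:
    frame1_denoised = frame1
```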
FIG. 5 is a flowchart of a video noise detection method according to some other examples of the present disclosure. As shown in FIG. 5, in some embodiments of the present disclosure, the video noise detection method includes steps S501-S506.
Step S501: extracting a first video frame and a second video frame from the target video, the first video frame and the second video frame being adjacent video frames.
Step S502: performing differential processing on the first video frame and the second video frame to obtain the inter-frame difference image between the first video frame and the second video frame.
Step S503: performing flat region detection on the first video frame and the second video frame to obtain the intersection of the flat regions in the first video frame and the second video frame.
Step S504: calculating the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat regions in the inter-frame difference image.
The aforementioned steps S501-S504 are the same as steps S101-S104 in the foregoing embodiments; for details, refer to the foregoing description, which will not be repeated here.
Step S505: performing noise perceptibility influence evaluation on the first video frame or the second video frame to obtain noise perceptibility influence coefficients of at least one dimension.
In the embodiments of the present disclosure, performing noise perceptibility influence evaluation on the first video frame or the second video frame to obtain noise perceptibility influence coefficients of at least one dimension means processing the first video frame or the second video frame with image evaluation methods of a plurality of preselected dimensions to obtain noise perceptibility evaluation coefficients of at least one dimension. In the embodiments of the present disclosure, because the second video frame is the reference video frame, the noise perceptibility influence evaluation in step S505 is preferably performed on the second video frame to obtain the noise perceptibility influence coefficients.
In some embodiments of the present disclosure, the noise perceptibility influence coefficients may include at least one of a detail richness influence coefficient, a displacement rate influence coefficient, and a brightness influence coefficient.
The detail richness influence coefficient is an evaluation coefficient used to characterize the influence of the richness of detail in an image frame on noise perceptibility. Performing detail strength detection on the image frame yields the detail richness influence coefficient. In some embodiments of the present disclosure, the image frame may be processed with a Laplacian transform to obtain the detail information of the image frame, and the detail richness influence coefficient is obtained from the detail information.
The displacement rate influence coefficient is an evaluation coefficient used to evaluate the influence of the displacement rate of objects between the two video frames on noise perceptibility. In some embodiments of the present disclosure, image displacement detection may be performed on the first video frame and the second video frame to obtain a displacement amount. Performing image displacement detection on the two frames means determining, through feature extraction and analysis of the two frames, the displacement of objects between the first video frame and the second video frame; the displacement may include a displacement in the length direction and a displacement in the width direction. The displacement rate influence coefficient can then be obtained based on the displacement and either the first video frame or the second video frame. Taking the second video frame as an example, the length-direction displacement is divided by the image length of the second video frame to obtain a length-direction rate influence coefficient, and the width-direction displacement is divided by the image width of the second video frame to obtain a width-direction rate influence coefficient. Finally, the displacement rate influence coefficient is obtained from the length-direction and width-direction rate influence coefficients; specifically, the root mean square of the length-direction and width-direction rate influence coefficients may be taken as the displacement rate influence coefficient.
The brightness influence coefficient is an evaluation coefficient used to evaluate the influence of the brightness of a video frame on noise perceptibility. In some embodiments of the present disclosure, the number of pixels in the video frame whose grayscale exceeds a set value may be counted, and the counted number of pixels divided by the total number of pixels of the video frame to obtain the area proportion of the highlight region, which serves as the brightness influence coefficient.
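Below is a sketch of one way to compute the three influence coefficients described above for the reference grayscale frame `frame2`. The Laplacian-mean detail measure, the phase-correlation displacement estimate, and the brightness threshold of 200 are illustrative choices, not values prescribed by the disclosure.

```python
import cv2
import numpy as np

h, w = frame2.shape

# Detail richness: mean absolute Laplacian response of the reference frame.
detail_coef = float(np.mean(np.abs(cv2.Laplacian(frame2, cv2.CV_32F))))

# Displacement rate: global shift between the two frames, normalised by the
# frame dimensions, then combined as a root mean square.
(dx, dy), _ = cv2.phaseCorrelate(np.float32(frame1), np.float32(frame2))
rate_x, rate_y = abs(dx) / w, abs(dy) / h
motion_coef = float(np.sqrt((rate_x ** 2 + rate_y ** 2) / 2))

# Brightness: proportion of highlight pixels above a set grayscale value.
bright_coef = float(np.mean(frame2 > 200))
```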
Step S506: obtaining the noise perception score of the first video frame according to the temporal noise value and the noise perceptibility influence coefficients of at least one dimension.
After the temporal noise value and the noise perceptibility influence coefficients of at least one dimension are obtained, the noise perception score of the first video frame can be obtained according to a pre-specified scoring rule.
In some embodiments of the present disclosure, to obtain the video frame score from the temporal noise value and the noise perceptibility influence coefficients of at least one dimension, the temporal noise value and the respective noise perceptibility influence coefficients may be multiplied together in turn, and the resulting product is taken as the noise perception score of the first video frame.
In other embodiments of the present disclosure, to obtain the noise perception score of the first video frame from the temporal noise value and the noise perceptibility influence coefficients of at least one dimension, the temporal noise value may be multiplied by each noise perceptibility influence coefficient separately, and the resulting products are then summed to obtain the noise perception score of the first video frame.
In still other embodiments of the present disclosure, the temporal noise value and the noise perceptibility influence coefficients of each dimension may be input into a pre-trained deep learning scoring model, and the deep learning scoring model processes the temporal noise value and the noise perceptibility influence coefficients of each dimension comprehensively to obtain the noise perception score of the first video frame. The deep learning scoring model is trained with sample images, the evaluation coefficients of each dimension corresponding to the sample images, and manually annotated scores.
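A short sketch of the two simple scoring rules described above (the deep-learning variant is omitted); the variable names follow the earlier sketches.

```python
coeffs = (detail_coef, motion_coef, bright_coef)

# Rule 1: multiply the temporal noise value by every coefficient in turn.
score_product = temporal_noise
for c in coeffs:
    score_product *= c

# Rule 2: multiply the temporal noise value by each coefficient, then sum.
score_sum = sum(temporal_noise * c for c in coeffs)
```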
By obtaining the noise perception score of the first video frame from the temporal noise value and the noise perceptibility influence coefficients of at least one dimension, the visual perceptibility of the noise in the first video frame can be evaluated, or whether to perform noise reduction processing on the first video frame can be determined.
The embodiments of the present disclosure also provide a video noise detection device. The video noise detection device may be provided in the aforementioned computing device to detect the noise of the target video.
FIG. 6 is a schematic diagram of a video noise detection device 600 according to an embodiment of the present disclosure. As shown in FIG. 6, the video noise detection device 600 provided by the embodiments of the present disclosure may include an extracting unit 601, a processing unit 602, an intersection determining unit 603, and a calculation unit 604.
The extracting unit 601 is configured to extract a first video frame and a second video frame from the target video, the first video frame and the second video frame being adjacent video frames.
The processing unit 602 is configured to perform differential processing on the first video frame and the second video frame to obtain the inter-frame difference image between the first video frame and the second video frame.
The intersection determining unit 603 is configured to perform flat region detection on the first video frame and the second video frame to obtain the intersection of the flat regions in the first video frame and the second video frame.
The calculation unit 604 is configured to calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat regions in the inter-frame difference image.
In some embodiments of the present disclosure, the pixel information includes the pixel value of each pixel of the intersection of the flat regions in the inter-frame difference image. Correspondingly, the calculation unit 604 includes a weighted average calculation subunit and a noise value calculation subunit. The weighted average calculation subunit is configured to calculate the weighted average of the pixel values of the pixels, and the noise value calculation subunit is configured to take the weighted average as the temporal noise value corresponding to the target timestamp.
In some embodiments of the present disclosure, the calculation unit 604 further includes a weight value acquisition subunit configured to determine the weight value corresponding to each pixel value based on the preset correspondence between pixel values and weight values. Correspondingly, the weighted average calculation subunit first calculates, for each pixel value, the product of the pixel value and its corresponding weight value to obtain the weighted pixel value corresponding to the pixel value, and then performs weighted averaging according to the weighted pixel values corresponding to the pixel values to obtain the weighted average.
In some embodiments of the present disclosure, the video noise detection device 600 further includes a noise perceptibility influence coefficient acquisition unit and a noise perception score calculation unit. The noise perceptibility influence coefficient acquisition unit is configured to perform noise perceptibility influence evaluation on the first video frame and/or the second video frame to obtain noise perceptibility influence coefficients of at least one dimension. The noise perception score calculation unit is configured to obtain the noise perception score of the first video frame according to the temporal noise value and the noise perceptibility influence coefficients of at least one dimension; the noise perception score is used to evaluate the visual perceptibility of the noise in the first video frame and/or to determine whether to perform noise reduction processing on the first video frame.
In some embodiments of the present disclosure, the noise perceptibility influence coefficients of at least one dimension include the detail richness influence coefficient, and performing noise perceptibility evaluation on the first video frame or the second video frame to obtain noise perceptibility evaluation coefficients of at least one dimension includes: performing detail strength detection on the first video frame or the second video frame to obtain the detail richness influence coefficient.
In some embodiments of the present disclosure, the evaluation coefficients of at least one dimension include the displacement rate influence coefficient, and performing noise perceptibility evaluation on the first video frame or the second video frame to obtain noise perceptibility evaluation coefficients of at least one dimension includes: performing image displacement detection according to the first video frame and the second video frame to obtain the displacement rate influence coefficient.
In some embodiments of the present disclosure, the evaluation coefficients of at least one dimension include the brightness influence coefficient, and performing noise perceptibility evaluation on the first video frame or the second video frame to obtain noise perceptibility evaluation coefficients of at least one dimension includes: performing highlight region detection on the first video frame or the second video frame to obtain the brightness influence coefficient.
In some embodiments of the present disclosure, the intersection determining unit 603 includes a flat region extraction subunit and an intersection determining subunit. The flat region extraction subunit is configured to perform, for either of the first video frame and the second video frame, flat region extraction on the video frame to obtain the flat region of the video frame. The intersection determining subunit is configured to perform an AND operation on the flat region of the first video frame and the flat region of the second video frame to obtain the intersection of the flat regions.
In some embodiments of the present disclosure, the flat region extraction subunit includes an image region segmentation module, a texture parameter determination module, and a flat region selection module. The image region segmentation module is configured to perform image segmentation on the video frame to obtain a plurality of image regions of the video frame. The texture parameter determination module is configured to determine the texture parameter of each image region. The flat region selection module is configured to take the image regions whose texture parameters are smaller than the preset parameter threshold as the flat regions of the video frame.
In some embodiments of the present disclosure, the video noise detection device 600 further includes an alignment unit. The alignment unit is configured to perform global alignment on the first video frame and the second video frame to obtain the aligned first video frame and second video frame. Correspondingly, the processing unit 602 performs a difference operation on the aligned first video frame and second video frame to obtain the inter-frame difference image.
In some embodiments of the present disclosure, the alignment unit includes a brightness alignment subunit, a coordinate transformation relationship determination subunit, an affine transformation subunit, and an aligned video frame determination subunit. The brightness alignment subunit is configured to perform brightness alignment processing on the first video frame and the second video frame to obtain the brightness-aligned first and second video frames. The coordinate transformation relationship determination subunit is configured to perform a phase alignment operation on the brightness-aligned first and second video frames to obtain the coordinate transformation relationship between the first video frame and the second video frame. The affine transformation subunit is configured to perform an affine transformation on the brightness-aligned first video frame by using the coordinate transformation relationship to obtain the affine-transformed first video frame. The aligned video frame determination subunit is configured to take the affine-transformed first video frame as the aligned first video frame and the brightness-aligned second video frame as the aligned second video frame.
In some embodiments of the present disclosure, the coordinate transformation relationship determination subunit includes a downsampling module, a phase alignment operation module, an original offset calculation module, and a coordinate transformation relationship determination module. The downsampling module is configured to downsample the brightness-aligned first and second video frames by a preset factor to obtain the downsampled first and second video frames. The phase alignment operation module is configured to perform a phase alignment operation on the downsampled first and second video frames to obtain the rotation matrix and the downsampled translation vector. The original offset calculation module is configured to multiply the downsampled translation vector by the preset factor to obtain the original offset. The coordinate transformation relationship determination module is configured to determine the coordinate transformation relationship according to the rotation matrix and the original offset.
In some embodiments of the present disclosure, the video noise detection device further includes a noise reduction processing unit configured to determine, according to the temporal noise value, whether to perform noise reduction processing on the first video frame.
It should be noted that the video noise detection device 600 shown in FIG. 6 can perform the steps in the method embodiments shown in FIGS. 1 to 5 and achieve the processes and effects of those method embodiments, which will not be repeated here.
The embodiments of the present disclosure also provide a computing device, which may include a processor and a memory, the memory being usable to store executable instructions. The processor may be used to read the executable instructions from the memory and execute the executable instructions to implement the video noise detection method in the above embodiments.
FIG. 7 shows a schematic structural diagram of a computing device according to an embodiment of the present disclosure. Referring now specifically to FIG. 7, it shows a schematic structural diagram of a computing device 700 suitable for implementing the embodiments of the present disclosure.
The computing device 700 in the embodiments of the present disclosure may be an electronic device or a server. The electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (for example, vehicle navigation terminals), and wearable devices, as well as fixed terminals such as digital TVs, desktop computers, and smart home devices.
It should be noted that the computing device 700 shown in FIG. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the computing device 700 may include a processing apparatus (for example, a central processing unit or a graphics processor) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage apparatus 708 into a random access memory (RAM) 703. Various programs and data required for the operation of the computing device 700 are also stored in the RAM 703. The processing apparatus 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following apparatuses can be connected to the I/O interface 705: an input apparatus 706 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 707 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 708 including, for example, a magnetic tape and a hard disk; and a communication apparatus 709. The communication apparatus 709 may allow the computing device 700 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 shows the computing device 700 with various apparatuses, it should be understood that it is not required to implement or provide all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
The embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the processor is caused to implement the video noise detection method in the above embodiments.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. For example, the embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication apparatus 709, or installed from the storage apparatus 708, or installed from the ROM 702. When the computer program is executed by the processing apparatus 701, the above functions defined in the video noise detection method of the embodiments of the present disclosure are executed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program usable by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), and the like, or any suitable combination of the above.
In some implementations, clients and servers can communicate using any currently known or future-developed network protocol such as HTTP, and can be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The above computer-readable medium may be contained in the above computing device; it may also exist alone without being assembled into the computing device.
The above computer-readable medium carries one or more programs; when the one or more programs are executed by the computing device, the computing device is caused to: extract a first video frame and a second video frame from a target video, the first video frame and the second video frame being adjacent video frames; perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame; perform flat region detection on the first video frame and the second video frame to obtain the intersection of the flat regions in the first video frame and the second video frame; and calculate the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat regions in the inter-frame difference image.
In the embodiments of the present disclosure, the computer program code for executing the operations of the present disclosure can be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functions, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code that contains one or more executable instructions for implementing specified logical functions. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure can be implemented in software or in hardware. The name of a unit does not in some cases constitute a limitation of the unit itself.
The functions described above herein can be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the above. More specific examples of machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring these operations to be executed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be interpreted as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely example forms of implementing the claims.

Claims (19)

  1. A video noise detection method, comprising:
    extracting a first video frame and a second video frame from a target video, the first video frame and the second video frame being adjacent video frames;
    performing differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame;
    performing flat region detection on the first video frame and the second video frame to obtain an intersection of flat regions in the first video frame and the second video frame;
    calculating a temporal noise value corresponding to the first video frame by using pixel information of the intersection of the flat regions in the inter-frame difference image.
  2. The video noise detection method according to claim 1, wherein the pixel information comprises a pixel value of each pixel of the intersection of the flat regions in the inter-frame difference image, and calculating a temporal noise value corresponding to a target timestamp by using the pixel information of the intersection of the flat regions in the inter-frame difference image comprises:
    calculating a weighted average of the pixel values of the pixels;
    taking the weighted average as the temporal noise value corresponding to the target timestamp.
  3. The video noise detection method according to claim 2, further comprising:
    before calculating the weighted average of the pixel values of the pixels, determining, based on a preset correspondence between pixel values and weight values, a weight value corresponding to each of the pixel values.
  4. The video noise detection method according to claim 3, wherein the calculating the weighted average of the pixel values of the pixels comprises:
    for each pixel value, calculating a product of the pixel value and the weight value corresponding to the pixel value to obtain a weighted pixel value corresponding to the pixel value;
    performing weighted averaging according to the weighted pixel values corresponding to the respective pixel values to obtain the weighted average.
  5. The video noise detection method according to any one of claims 1 to 4, further comprising:
    performing noise perceptibility influence evaluation on at least one of the first video frame or the second video frame to obtain a noise perceptibility influence coefficient of at least one dimension.
  6. The video noise detection method according to claim 5, further comprising:
    after the calculating the temporal noise value corresponding to the first video frame by using the pixel information of the intersection of the flat regions in the inter-frame difference image, obtaining a noise perception score of the first video frame according to the temporal noise value and the noise perceptibility influence coefficient of at least one dimension, the noise perception score being used to evaluate the visual perceptibility of noise in the first video frame, and/or whether to perform noise reduction processing on the first video frame.
  7. The video noise detection method according to claim 5 or 6, wherein:
    the noise perceptibility influence coefficient of at least one dimension comprises a detail richness influence coefficient, and the performing noise perceptibility evaluation on the first video frame or the second video frame to obtain a noise perceptibility evaluation coefficient of at least one dimension comprises: performing detail strength detection on the first video frame or the second video frame to obtain the detail richness influence coefficient; and/or
    the evaluation coefficient of at least one dimension comprises a displacement rate influence coefficient, and the performing noise perceptibility evaluation on the first video frame or the second video frame to obtain a noise perceptibility evaluation coefficient of at least one dimension comprises: performing image displacement detection according to the first video frame and the second video frame to obtain the displacement rate influence coefficient; and/or
    the evaluation coefficient of at least one dimension comprises a brightness influence coefficient, and the performing noise perceptibility evaluation on the first video frame or the second video frame to obtain a noise perceptibility evaluation coefficient of at least one dimension comprises: performing highlight region detection on the first video frame or the second video frame to obtain the brightness influence coefficient.
  8. The video noise detection method according to any one of claims 1 to 7, wherein the performing flat region detection on the first video frame and the second video frame to obtain the intersection of the flat regions in the first video frame and the second video frame comprises:
    for either of the first video frame and the second video frame, performing flat region extraction on the video frame to obtain a flat region of the video frame;
    performing an AND operation on the flat region of the first video frame and the flat region of the second video frame to obtain the intersection of the flat regions.
  9. The video noise detection method according to claim 8, wherein the performing flat region extraction on the video frame to obtain the flat region of the video frame comprises:
    performing image segmentation on the video frame to obtain a plurality of image regions of the video frame;
    determining a texture parameter of each of the image regions;
    taking image regions whose texture parameters are smaller than a preset parameter threshold as the flat regions of the video frame.
  10. The video noise detection method according to any one of claims 1 to 9, further comprising:
    before the performing differential processing on the first video frame and the second video frame to obtain the inter-frame difference image between the first video frame and the second video frame, performing global alignment on the first video frame and the second video frame to obtain an aligned first video frame and an aligned second video frame.
  11. The video noise detection method according to claim 10, wherein the performing differential processing on the first video frame and the second video frame to obtain the inter-frame difference image between the first video frame and the second video frame comprises:
    performing a difference operation on the aligned first video frame and the aligned second video frame to obtain the inter-frame difference image.
  12. The video noise detection method according to claim 10 or 11, wherein the performing global alignment on the first video frame and the second video frame to obtain the aligned first video frame and the aligned second video frame comprises:
    performing brightness alignment processing on the first video frame and the second video frame to obtain a brightness-aligned first video frame and a brightness-aligned second video frame;
    performing a phase alignment operation on the brightness-aligned first video frame and the brightness-aligned second video frame to obtain a coordinate transformation relationship between the first video frame and the second video frame;
    performing an affine transformation on the brightness-aligned first video frame by using the coordinate transformation relationship to obtain an affine-transformed first video frame;
    taking the affine-transformed first video frame as the aligned first video frame, and taking the brightness-aligned second video frame as the aligned second video frame.
  13. The video noise detection method according to claim 12, wherein the performing the phase alignment operation on the brightness-aligned first video frame and the brightness-aligned second video frame to obtain the coordinate transformation relationship between the first video frame and the second video frame comprises:
    downsampling the brightness-aligned first video frame and the brightness-aligned second video frame by a preset factor to obtain a downsampled first video frame and a downsampled second video frame;
    performing a phase alignment operation on the downsampled first video frame and the downsampled second video frame to obtain a rotation matrix and a downsampled translation vector;
    multiplying the downsampled translation vector by the preset factor to obtain an original offset;
    determining the coordinate transformation relationship according to the rotation matrix and the original offset.
  14. The video noise detection method according to any one of claims 1 to 13, further comprising:
    after calculating the temporal noise value corresponding to the first video frame, determining, according to the temporal noise value, whether to perform noise reduction processing on the first video frame.
  15. The video noise detection method according to any one of claims 1 to 14, wherein the performing differential processing on the first video frame and the second video frame to obtain the inter-frame difference image between the first video frame and the second video frame comprises:
    subtracting the grayscale values of the pixels at corresponding positions in the first video frame and the second video frame to obtain pixel grayscale differences;
    taking an image composed of the pixel grayscale differences, arranged in the order of the corresponding pixels in the first video frame and the second video frame, as the inter-frame difference image of the first video frame and the second video frame.
  16. A video noise detection device, comprising:
    an extracting unit configured to extract a first video frame and a second video frame from a target video, the first video frame and the second video frame being adjacent video frames;
    a processing unit configured to perform differential processing on the first video frame and the second video frame to obtain an inter-frame difference image between the first video frame and the second video frame;
    an intersection determining unit configured to perform flat region detection on the first video frame and the second video frame to obtain an intersection of flat regions in the first video frame and the second video frame;
    a calculation unit configured to calculate a temporal noise value corresponding to the first video frame by using pixel information of the intersection of the flat regions in the inter-frame difference image.
  17. A computing device, comprising:
    a processor;
    a memory configured to store executable instructions;
    wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the video noise detection method according to any one of claims 1-15.
  18. A computer-readable storage medium, wherein the storage medium stores a computer program, and when the computer program is executed by a processor, the processor is caused to implement the video noise detection method according to any one of claims 1-15.
  19. A computer program, comprising:
    instructions which, when executed by a processor, cause the processor to execute the video noise detection method according to any one of claims 1-15.
PCT/CN2023/072550 2022-01-27 2023-01-17 视频噪声检测方法、装置、设备及介质 WO2023143233A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210102693.X 2022-01-27
CN202210102693.XA CN116567196A (zh) 2022-01-27 2022-01-27 视频噪声检测方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2023143233A1 true WO2023143233A1 (zh) 2023-08-03

Family

ID=87470658

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072550 WO2023143233A1 (zh) 2022-01-27 2023-01-17 视频噪声检测方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN116567196A (zh)
WO (1) WO2023143233A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120162526A1 (en) * 2010-12-22 2012-06-28 Jaewon Shin Method and System for Detecting Analog Noise in the Presence of Mosquito Noise
US20120275655A1 (en) * 2011-04-27 2012-11-01 Sony Corporation Image processing apparatus, image processing method, and program
CN104717402A (zh) * 2015-04-01 2015-06-17 中国科学院自动化研究所 一种空时域联合噪声估计系统
WO2016185708A1 (ja) * 2015-05-18 2016-11-24 日本電気株式会社 画像処理装置、画像処理方法、および、記憶媒体
US10062154B1 (en) * 2015-02-11 2018-08-28 Synaptics Incorporated System and method for adaptive contrast enhancement
CN108805851A (zh) * 2017-04-26 2018-11-13 杭州海康威视数字技术股份有限公司 一种图像时域噪声的评估方法及装置
CN112085682A (zh) * 2020-09-11 2020-12-15 成都国科微电子有限公司 图像降噪方法、装置、电子设备及存储介质


Also Published As

Publication number Publication date
CN116567196A (zh) 2023-08-08

Similar Documents

Publication Publication Date Title
CN114584849B (zh) 视频质量评估方法、装置、电子设备及计算机存储介质
WO2019233244A1 (zh) 图像处理方法、装置、计算机可读介质及电子设备
CN112419151B (zh) 图像退化处理方法、装置、存储介质及电子设备
US20150215590A1 (en) Image demosaicing
CN112733820B (zh) 障碍物信息生成方法、装置、电子设备和计算机可读介质
CN110796664B (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
CN112561839B (zh) 视频裁剪方法、装置、存储介质及电子设备
CN113222983A (zh) 图像处理方法、装置、可读介质和电子设备
CN115375536A (zh) 图像处理方法及设备
CN112800276B (zh) 视频封面确定方法、装置、介质及设备
CN113038176A (zh) 视频抽帧方法、装置和电子设备
WO2023143233A1 (zh) 视频噪声检测方法、装置、设备及介质
WO2023138540A1 (zh) 边缘提取方法、装置、电子设备及存储介质
CN111310595A (zh) 用于生成信息的方法和装置
WO2023020268A1 (zh) 一种手势识别方法、装置、设备及介质
WO2022116947A1 (zh) 视频裁剪方法、装置、存储介质及电子设备
CN112801997B (zh) 图像增强质量评估方法、装置、电子设备及存储介质
CN111737575B (zh) 内容分发方法、装置、可读介质及电子设备
CN117115070A (zh) 图像评估方法、装置、设备、存储介质和程序产品
CN114841870A (zh) 图像处理方法、相关装置和系统
CN110290381B (zh) 视频质量评估方法、装置、电子设备及计算机存储介质
CN112418233A (zh) 图像处理方法、装置、可读介质及电子设备
CN114037716A (zh) 图像分割方法、装置、设备及存储介质
WO2023024986A1 (zh) 一种视频流畅度确定方法、装置、设备及介质
WO2023093481A1 (zh) 基于傅里叶域的超分图像处理方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23746115

Country of ref document: EP

Kind code of ref document: A1