CN116055710A - Video time domain noise evaluation method, device and system

Video time domain noise evaluation method, device and system

Info

Publication number
CN116055710A
CN116055710A (application CN202210957168.6A; granted as CN116055710B)
Authority
CN
China
Prior art keywords
video
value
frame
color block
color
Prior art date
Legal status
Granted
Application number
CN202210957168.6A
Other languages
Chinese (zh)
Other versions
CN116055710B (en)
Inventor
黎昕
陈祥
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202210957168.6A
Publication of CN116055710A
Application granted
Publication of CN116055710B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application provides a method, a device, and a system for evaluating video temporal noise. When the temporal noise of video shot by an electronic device is evaluated, the method calculates, with a preset algorithm, a first evaluation value characterizing the temporal noise of each color block group in the video from the L, a, and b values of the pixels in a video of a standard color card shot by the device. A weighted sum of the first evaluation values of the color block groups then yields a second evaluation value, which characterizes the video temporal noise of the electronic device. In this way, the video temporal noise of different electronic devices can be evaluated, providing a unified evaluation standard for video temporal noise across devices, and the second evaluation value obtained by this method characterizes the device's video temporal noise objectively.

Description

Video time domain noise evaluation method, device and system
Technical Field
The embodiment of the application relates to the field of electronic equipment, in particular to a method, a device and a system for evaluating video time domain noise.
Background
When an electronic device shoots video, interference with the video signal of each frame produces noise. If the noise differs between consecutive frames, the video contains temporal noise. When a video with temporal noise is played, visibly flickering noise points appear in the picture.
When the video capturing performance of an electronic device is evaluated, its video temporal noise may serve as the evaluation criterion: the capturing performance is judged by evaluating the temporal noise of the videos the device shoots. For example, if temporal noise is present and strong, the device's video capturing performance is poor; if temporal noise is absent, or present but weak, the performance is strong.
At present there is no unified evaluation standard for the video temporal noise of electronic devices; evaluation mainly relies on the playing effect of videos shot by the device. For example, the temporal noise is judged by visually observing whether flickering noise points appear in the video picture, how frequently they flicker, how bright they are, and so on. Such subjective evaluation cannot accurately reflect whether temporal noise exists or how strong it is, so the resulting assessment of the device's video capturing performance is likewise inaccurate.
Disclosure of Invention
The embodiments of the present application provide a method, a device, and a system for evaluating video temporal noise, which provide a unified and objective evaluation standard for the video temporal noise of electronic devices.
In a first aspect, an embodiment of the present application provides a method for evaluating video temporal noise, where the method includes:
Acquire a first video, where the first video is obtained by shooting a standard color card with the electronic device to be evaluated. The standard color card comprises at least two color blocks, where different color blocks correspond to different gray values. Acquire a second video from the first video, where the second video comprises the Lab images corresponding to designated frames of the first video. Obtain the L value, a value, and b value of each pixel in each frame of Lab image in the second video. Calculate a first evaluation value of each color block group in the second video from the L value, a value, and b value of each pixel in each frame of Lab image, where the first evaluation value is a weighted value of the variance of the L values, the variance of the a values, and the variance of the b values of the pixels in the same color block across the frames of Lab images in the second video, and the same color block is the color block located at the same position in each frame of Lab image. The first evaluation value characterizes the temporal noise of the corresponding color block group in the second video. Calculate a second evaluation value of the second video from the first evaluation values of the color block groups in the second video, where the second evaluation value is a weighted value of the first evaluation values of the color block groups and characterizes the video temporal noise of the electronic device.
According to this method, when the video temporal noise of an electronic device is evaluated, an evaluation value characterizing that noise is calculated by a preset algorithm from the L, a, and b values of the pixels in a video of a standard color card shot by the device, and the evaluation value is used to characterize the device's video temporal noise. Since the same preset algorithm can be applied to different electronic devices, the evaluation standard is uniform across devices; and since the evaluation value is computed algorithmically rather than judged visually, the evaluation result is objective.
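In symbols (the notation below is ours, not the patent's): for a color block group $g$ with pixels $p$ and Lab frames indexed by $t$,

$$S_g = w_L\,\overline{\operatorname{Var}_t L_{p,t}} + w_a\,\overline{\operatorname{Var}_t a_{p,t}} + w_b\,\overline{\operatorname{Var}_t b_{p,t}},\qquad S=\sum_g W_g\,S_g,$$

where the overline denotes the mean over the pixels $p$ of the group, $\operatorname{Var}_t$ is the variance over frames, the channel weights satisfy $w_L > w_a$ and $w_L > w_b$, and the group weight $W_g$ grows as the gray value of the block decreases. $S_g$ is the first evaluation value and $S$ the second.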
In one implementation, the first video comprises a first video segment obtained by the electronic device shooting the standard color card under at least two light source parameters. Alternatively, the first video comprises at least two second video segments, where each second video segment is obtained by the electronic device shooting the standard color card under a single light source parameter and the at least two second video segments correspond to different light source parameters. The Lab images corresponding to the designated frames include images of the standard color card shot by the electronic device under each light source parameter. According to this method, if the video temporal noise of the electronic device under different light source parameters is to be tested, the device switches between at least two light source parameters while shooting a single first video segment; if the video temporal noise under one light source parameter is to be tested, the device shoots each second video segment under only that parameter. Different shooting modes thus yield different types of video, so that the video temporal noise of the electronic device can be evaluated in different shooting scenarios.
In one implementation, the light source parameters include the number of light sources, the brightness of the light sources, and the color temperature of the light sources. If there are multiple light sources, the light sources are identical and are dispersed around the standard color card so that their light is distributed uniformly over the card. In this way, the video temporal noise of the electronic device can be evaluated under different light source parameters, making the evaluation more comprehensive. Uniform illumination of the standard color card also prevents brightness differences between color blocks from disturbing the evaluation, improving its accuracy.
In one implementation, obtaining the second video from the first video includes: extracting the RGB images corresponding to the designated frames from the first video; converting the RGB image of each designated frame into a Lab image; and arranging the Lab images of the designated frames according to the order of the designated frames among all frames of the first video, to obtain the second video. Converting each designated-frame RGB image of the first video into a Lab image effectively highlights the luminance characteristics of each frame. Because temporal noise is related to luminance, analyzing the video temporal noise from Lab images is more convenient and gives a more accurate result.
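A minimal conversion sketch, assuming OpenCV and NumPy (note that OpenCV works in BGR channel order, and that for float32 input scaled to [0, 1] cv2.cvtColor returns L in [0, 100] and a, b in roughly [-127, 127]):

```python
import cv2
import numpy as np

def frames_to_lab(rgb_frames):
    """Convert the designated RGB frames to Lab, preserving their order."""
    lab_frames = []
    for rgb in rgb_frames:
        bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
        bgr = bgr.astype(np.float32) / 255.0      # float input -> true Lab ranges
        lab_frames.append(cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab))
    return np.stack(lab_frames)                   # the "second video": (T, H, W, 3)
```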
In one implementation, the designated frames include all frames of the first video. Alternatively, the designated frames include part of the frames of the first video, located within a designated period of the first video, where the number of frames between adjacent designated frames is a prime number. Evaluation can thus be performed on all frame images of the first video to ensure accuracy, or on part of them to improve efficiency. Keeping the gap between designated frames prime ensures that the frame-extracted second video does not lock onto the periodic structure introduced by the encoding process, improving the quality of the evaluation.
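One way to realize the prime-gap sampling; the concrete gap value below is an illustrative assumption:

```python
def designated_frame_indices(start, end, gap=7):
    """Indices of designated frames in [start, end), with a prime number of
    frames skipped between adjacent picks (stride = gap + 1)."""
    assert gap in (2, 3, 5, 7, 11, 13), "gap must be a prime number"
    return list(range(start, end, gap + 1))
```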
In one implementation, after the L value, a value, and b value of each pixel in the Lab images are obtained, the method further includes: removing invalid pixels or invalid Lab images according to the L value of each pixel in the Lab images. An invalid pixel is a pixel whose L value differs, in absolute value, by more than a preset threshold from the L value of the pixel at the same position in both the previous and the next frame of Lab image. An invalid Lab image is a Lab image that contains invalid pixels. Removing invalid pixels or invalid Lab images ensures the validity of the pixels that take part in the evaluation, prevents invalid pixels from distorting the evaluation result, and thereby effectively safeguards its accuracy.
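A sketch of the validity test with NumPy; the threshold value is an assumption (the patent leaves it as a preset parameter):

```python
import numpy as np

def invalid_pixel_mask(L, threshold=8.0):
    """L: luminance channel of the second video, shape (T, H, W).
    A pixel in frame t is invalid when |L[t] - L[t-1]| and |L[t] - L[t+1]|
    both exceed the threshold (threshold value is an assumption)."""
    mask = np.zeros(L.shape, dtype=bool)
    jump_prev = np.abs(L[1:-1] - L[:-2]) > threshold
    jump_next = np.abs(L[1:-1] - L[2:]) > threshold
    mask[1:-1] = jump_prev & jump_next   # first/last frame have no both-side test
    return mask
```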
In one implementation, before the first evaluation value of each color block group in the second video is calculated from the L value, a value, and b value of each pixel in each frame of Lab image, the method further includes: identifying the mark points in the first frame of Lab image of the second video, the mark points being preset on the standard color card; determining the position information of each color block in the first frame of Lab image from the mark points; determining the position information of each color block in the other Lab images of the second video from the position information in the first frame, the positions being identical across frames; and determining each color block group of the second video from the position information of each color block in each frame of Lab image, where each color block group comprises the corresponding color block in every frame of Lab image. Because the Lab frames of the second video are continuous, identifying the color block positions in the first frame alone suffices to obtain the positions in every frame, so that the color blocks at the same position in each frame can be clustered quickly and accurately into the color block groups of the second video, which in turn ensures the efficiency and accuracy of calculating the first evaluation values from those groups.
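A sketch of the grouping step, assuming the ROIs (rectangular bounding boxes, our representation) have already been derived from the mark points in the first frame:

```python
def color_block_groups(lab_video, first_frame_rois):
    """lab_video: (T, H, W, 3) Lab frames. first_frame_rois: one
    (y0, y1, x0, x1) box per color block, found in the first frame only.
    Because camera and chart are static, the same boxes apply to every
    frame, so a group is simply the same rectangle cut from all T frames."""
    return [lab_video[:, y0:y1, x0:x1, :]            # shape (T, h, w, 3)
            for (y0, y1, x0, x1) in first_frame_rois]
```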
In one implementation, calculating the first evaluation value of each color block group in the second video from the L value, a value, and b value of each pixel in each frame of Lab image includes: calculating the variance of the L values, the variance of the a values, and the variance of the b values of each same-pixel group in each color block group of the second video, where a same-pixel group comprises the pixels located at the same position in every frame of Lab image; calculating, for each color block group, the mean of the variances of the L values, the mean of the variances of the a values, and the mean of the variances of the b values over its same-pixel groups; and calculating the weighted value of these three means according to the weights of the L value, a value, and b value, to obtain the first evaluation value of each color block group in the second video. Computing the variances of the L, a, and b values of each same-pixel group correlates the Lab frames of the second video and characterizes how the temporal noise varies at each pixel; the first evaluation value of a color block group then characterizes how the temporal noise varies over that group.
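A minimal NumPy sketch of this computation; the concrete channel weights are assumptions (the patent only requires the L weight to dominate, as noted below):

```python
import numpy as np

W_L, W_A, W_B = 0.8, 0.1, 0.1    # hypothetical weights with W_L dominant

def first_evaluation_value(group):
    """group: one color block group, shape (T, h, w, 3), channels (L, a, b).
    A 'same-pixel group' is the T samples at one (y, x) position."""
    var_t = group.var(axis=0)             # temporal variance per pixel: (h, w, 3)
    mean_var = var_t.mean(axis=(0, 1))    # mean of the variances per channel: (3,)
    return W_L * mean_var[0] + W_A * mean_var[1] + W_B * mean_var[2]
```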
In one implementation, the weight of the L value is greater than the weight of the a value and greater than the weight of the b value. The weighting thus emphasizes the L values of the second video, i.e., further highlights its luminance characteristics, which effectively improves the accuracy of evaluating the temporal noise of each color block group in the second video.
In one implementation, calculating the second evaluation value of the second video from the first evaluation values of the color block groups includes: calculating the weighted value of the first evaluation values of the color block groups according to the weights of the groups. The weight of a color block group is related to the gray value of the corresponding color block: the smaller the gray value, the higher the weight of the group. The second evaluation value obtained in this way correlates the temporal noise of the color block groups, i.e., it characterizes how the temporal noise varies over the Lab frames of the second video, and thus characterizes the temporal noise of the second video objectively and accurately. Moreover, because a smaller gray value means higher sensitivity to illumination, i.e., a more pronounced luminance characteristic, raising the weight of groups with smaller gray values highlights those groups and effectively improves the accuracy of the evaluation.
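A sketch of the group weighting; the inverse-proportional, normalized scheme below is one possible choice consistent with the stated rule, not a scheme mandated by the patent:

```python
import numpy as np

def second_evaluation_value(first_values, gray_values):
    """first_values: first evaluation value of each color block group.
    gray_values: gray value of each group's color block. Weights rise as
    the gray value falls (smaller gray value -> higher weight)."""
    w = 1.0 / (np.asarray(gray_values, dtype=float) + 1.0)  # avoid divide-by-zero
    w /= w.sum()                                            # normalize weights
    return float(np.dot(w, np.asarray(first_values, dtype=float)))
```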
In a second aspect, an embodiment of the present application provides an apparatus for evaluating video temporal noise, comprising: a first acquisition unit configured to acquire a first video, the first video being obtained by shooting a standard color card with the electronic device to be evaluated, where the standard color card comprises at least two color blocks and different color blocks correspond to different gray values; a second acquisition unit configured to acquire a second video from the first video, the second video comprising the Lab images corresponding to designated frames of the first video; a third acquisition unit configured to obtain the L value, a value, and b value of each pixel in each frame of Lab image in the second video; a first evaluation unit configured to calculate a first evaluation value of each color block group in the second video from the L value, a value, and b value of each pixel in each frame of Lab image, where the first evaluation value is a weighted value of the variance of the L values, the variance of the a values, and the variance of the b values of the pixels in the same color block across the frames of Lab images, the same color block is the color block located at the same position in each frame of Lab image, and the first evaluation value characterizes the temporal noise of the corresponding color block group in the second video; and a second evaluation unit configured to calculate a second evaluation value of the second video from the first evaluation values of the color block groups, where the second evaluation value is a weighted value of those first evaluation values and characterizes the video temporal noise of the electronic device.
With this evaluation apparatus, when the video temporal noise of an electronic device is evaluated, an evaluation value characterizing that noise is calculated by a preset algorithm from the L, a, and b values of the pixels in a video of a standard color card shot by the device, and the evaluation value is used to characterize the device's video temporal noise. Since the same preset algorithm can evaluate different electronic devices, the evaluation standard is uniform across devices, and since the evaluation value is computed algorithmically, the evaluation result is objective.
In one implementation, the first video comprises a first video segment obtained by the electronic device shooting the standard color card under at least two light source parameters. Alternatively, the first video comprises at least two second video segments, where each second video segment is obtained by the electronic device shooting the standard color card under a single light source parameter and the at least two second video segments correspond to different light source parameters. The Lab images corresponding to the designated frames include images of the standard color card shot under each light source parameter. Thus, if the video temporal noise under different light source parameters is to be tested, the device switches between at least two light source parameters while shooting a single first video segment; if the video temporal noise under one light source parameter is to be tested, the device shoots each second video segment under only that parameter. Different shooting modes yield different types of video, so that the video temporal noise of the electronic device can be evaluated in different shooting scenarios.
In one implementation, the light source parameters include the number of light sources, the brightness of the light sources, and the color temperature of the light sources. If there are multiple light sources, the light sources are identical and are dispersed around the standard color card so that their light is distributed uniformly over the card. The video temporal noise of the electronic device can therefore be evaluated under different light source parameters, making the evaluation more comprehensive, and the uniform illumination of the card prevents brightness differences between color blocks from disturbing the evaluation, improving its accuracy.
In one implementation, the second acquisition unit is configured to extract the RGB images corresponding to the designated frames from the first video, to convert the RGB image of each designated frame into a Lab image, and to arrange the Lab images of the designated frames according to the order of the designated frames among all frames of the first video, to obtain the second video. Converting each designated-frame RGB image into a Lab image effectively highlights the luminance characteristics of each frame; because temporal noise is related to luminance, analyzing the video temporal noise from Lab images is more convenient and gives a more accurate result.
In one implementation, the designated frames include all frames of the first video. Alternatively, the designated frames include part of the frames of the first video, located within a designated period of the first video, where the number of frames between adjacent designated frames is a prime number. Evaluation can thus be performed on all frame images to ensure accuracy, or on part of them to improve efficiency, and the prime gap between designated frames ensures that the frame-extracted second video is not affected by the periodic structure of the encoding process, improving the quality of the evaluation.
In one implementation, after obtaining the L value, a value, and b value of each pixel in the Lab images, the third acquisition unit is further configured to remove invalid pixels or invalid Lab images according to the L value of each pixel. An invalid pixel is a pixel whose L value differs, in absolute value, by more than a preset threshold from the L value of the pixel at the same position in both the previous and the next frame of Lab image; an invalid Lab image is a Lab image containing invalid pixels. Removing them ensures the validity of the pixels that take part in the evaluation, prevents invalid pixels from distorting the result, and thereby effectively safeguards its accuracy.
In one implementation, before the first evaluation value of each color block group in the second video is calculated, the first evaluation unit is further configured to identify the mark points in the first frame of Lab image of the second video, the mark points being preset on the standard color card, and to determine the position information of each color block in the first frame of Lab image from the mark points. The first evaluation unit is further configured to determine the position information of each color block in the other Lab images of the second video from the positions in the first frame, the positions being identical across frames, and to determine each color block group of the second video from the position information of each color block in each frame, where each group comprises the corresponding color block in every frame of Lab image. Because the Lab frames of the second video are continuous, identifying the color block positions in the first frame alone suffices to obtain the positions in every frame, so that the color blocks at the same position can be clustered quickly and accurately into the color block groups, ensuring the efficiency and accuracy of calculating the first evaluation values.
In one implementation, when calculating the first evaluation value of each color block group in the second video, the first evaluation unit is configured to calculate the variance of the L values, the variance of the a values, and the variance of the b values of each same-pixel group in each color block group, where a same-pixel group comprises the pixels located at the same position in every frame of Lab image; to calculate, for each color block group, the mean of the variances of the L values, the mean of the variances of the a values, and the mean of the variances of the b values over its same-pixel groups; and to calculate the weighted value of these three means according to the weights of the L value, a value, and b value, obtaining the first evaluation value of each color block group. Computing these variances correlates the Lab frames of the second video and characterizes how the temporal noise varies at each pixel, and the first evaluation value of a group characterizes how the temporal noise varies over that group.
In one implementation, the weight of the L value is greater than the weight of the a value and greater than the weight of the b value. The weighting thus emphasizes the L values of the second video, i.e., its luminance characteristics, which effectively improves the accuracy of evaluating the temporal noise of each color block group.
In one implementation, when calculating the second evaluation value of the second video from the first evaluation values of the color block groups, the second evaluation unit is configured to calculate the weighted value of the first evaluation values according to the weights of the groups. The weight of a color block group is related to the gray value of the corresponding color block: the smaller the gray value, the higher the weight. The resulting second evaluation value correlates the temporal noise of the color block groups, i.e., it characterizes how the temporal noise varies over the Lab frames of the second video and thus characterizes the temporal noise of the second video objectively and accurately. Because a smaller gray value means higher sensitivity to illumination, i.e., a more pronounced luminance characteristic, raising the weight of groups with smaller gray values highlights those groups and effectively improves the accuracy of the evaluation.
In a third aspect, an embodiment of the present application provides a system for evaluating video temporal noise, comprising: a mounting device for the electronic equipment, a standard color card, a light source, and a terminal device. The terminal device includes a memory and a processor. The mounting device is used to hold the electronic device to be evaluated. The standard color card comprises at least two color blocks, where different color blocks correspond to different gray values. The light source provides uniform illumination for the standard color card. The memory stores program instructions that, when executed by the processor, cause the terminal device to perform the methods of the above aspects and their respective implementations.
In a fourth aspect, an embodiment of the present application provides a terminal device, comprising a processor and a memory; the memory stores program instructions that, when executed by the processor, cause the terminal device to perform the methods of the above aspects and their respective implementations.
In a fifth aspect, an embodiment of the present application further provides a chip system, comprising a processor and a memory; the memory stores program instructions that, when executed by the processor, cause the chip system to perform the methods of the above aspects and their respective implementations, for example to generate or process information involved in those methods.
In a sixth aspect, embodiments of the present application further provide a computer readable storage medium having stored therein program instructions that, when executed on a computer, cause the computer to perform the methods of the above aspects and implementations thereof.
Drawings
Fig. 1 is a schematic structural diagram of a video temporal noise evaluation system provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of one frame of image in a video obtained by shooting a standard color card with an electronic device according to an embodiment of the present application;
Fig. 3 is another schematic structural diagram of a video temporal noise evaluation system according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
Fig. 5 is a flowchart of a method 500 for evaluating video temporal noise provided in an embodiment of the present application;
Fig. 6 is a flowchart of a method 600 for acquiring a second video in the method 500 for evaluating video temporal noise according to an embodiment of the present application;
Fig. 7 is a flowchart of a method 700 for determining each color block group in the second video in the method 500 for evaluating video temporal noise provided in an embodiment of the present application;
Fig. 8 is a flowchart of a method 800 for calculating the first evaluation value of each color block group in the second video in the method 500 for evaluating video temporal noise according to an embodiment of the present application;
Fig. 9 is an exemplary diagram of the method 500 for evaluating video temporal noise provided by an embodiment of the present application;
Fig. 10 is a schematic diagram of building a two-dimensional coordinate system on a color patch provided by an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a video temporal noise evaluation device according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of another video temporal noise evaluation device according to an embodiment of the present application.
Detailed Description
The terms "first", "second", "third", and the like in the description, claims, and drawings are used to distinguish different objects, not to limit a particular order.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The terminology used in the description of the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. The embodiments of the application are described in detail below with reference to the accompanying drawings.
When the video capturing performance of an electronic device is evaluated, its video temporal noise may serve as the evaluation criterion: the capturing performance is judged by evaluating the temporal noise of the videos the device shoots. For example, if temporal noise is present and strong, the device's video capturing performance is poor; if temporal noise is absent, or present but weak, the performance is strong.
At present there is no unified evaluation standard for the video temporal noise of electronic devices; evaluation mainly relies on the playing effect of videos shot by the device. For example, the temporal noise is judged by visually observing whether flickering noise points appear in the video picture, how frequently they flicker, how bright they are, and so on. Such subjective evaluation cannot accurately reflect whether temporal noise exists or how strong it is, so the resulting assessment of the device's video capturing performance is likewise inaccurate.
To solve the above problems, the present application provides a video temporal noise evaluation system for evaluating the video temporal noise of electronic devices. Fig. 1 is a schematic diagram of a video temporal noise evaluation system 100 according to an embodiment of the present application. As shown in fig. 1, the evaluation system 100 includes a mounting device 12 for an electronic device 11, a standard color card 13, a light source 14, and a terminal device 15.
The electronic device 11 has a video capturing function, and the electronic device 11 may be a mobile phone, a video camera, a digital camera, or the like.
The mounting device 12 may be a stand, a platform, or the like. The mounting device 12 holds the electronic device 11 to be evaluated, and its height can be adjusted manually or automatically to set the height of the electronic device 11 above the horizontal plane. Its distance to the standard color card 13 can likewise be adjusted manually or automatically to set the distance between the electronic device 11 and the standard color card 13, and its angle to the standard color card 13 can be adjusted manually or automatically to set the angle between the electronic device 11 and the standard color card 13. By adjusting the height of the mounting device 12 and its distance and angle to the standard color card 13, the electronic device 11 is focused on the standard color card 13, and the entire content of the standard color card 13 is displayed in the viewfinder of the electronic device 11. Using the mounting device 12, electronic devices 11 of different models and types can all be focused on the standard color card 13 with the card's entire content displayed in the viewfinder, so that the video temporal noise of different electronic devices 11 can be evaluated.
The standard color card 13 includes at least two color patches, where different color patches correspond to different gray values, so that the electronic device 11 can shoot objects of different gray values at the same time, which in turn allows its ability to shoot objects of different gray values to be tested. In some embodiments, the standard color card 13 includes mark points for identifying the position information, such as position coordinates and angle, of the standard color card 13 within the viewfinder of the electronic device 11. The mark points are also used to verify that the entire content of the standard color card 13 is displayed in the viewfinder; accordingly, the mark points lie at least on the outer contour of the card's content. Take as an example a standard color card 13 that includes a first color patch 131, a second color patch 132, and a third color patch 133 (the higher the gray value, the darker the color), together with an upper-left mark point 134, a lower-left mark point 135, an upper-right mark point 136, and a lower-right mark point 137, located at the four corners of the card and outside the three color patches. Fig. 2 is a schematic diagram of one frame of image in a video obtained by the electronic device 11 shooting the standard color card 13. By checking whether all four mark points lie in the viewfinder 201 of the electronic device 11, it is determined whether all three color patches of the standard color card 13 lie in the viewfinder 201. As shown in fig. 2, if all four mark points are in the viewfinder 201, all three color patches are in the viewfinder 201. By identifying the position information of the four mark points in the viewfinder 201, the position information of the three color patches in the viewfinder 201 is determined. For example: the upper-left mark point 134 is in the upper-left corner of the viewfinder 201, the lower-left mark point 135 in the lower-left corner, the upper-right mark point 136 in the upper-right corner, and the lower-right mark point 137 in the lower-right corner. From the positional relationship between the four mark points and the three color patches, the position information of the three color patches in the viewfinder 201, that is, in the shot image, can be determined; for example, the first color patch 131, the second color patch 132, and the third color patch 133 in order from left to right.
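A sketch of the visibility check, assuming the four mark points have already been detected as pixel coordinates (the detection method itself is not specified here):

```python
def all_patches_in_frame(mark_points, frame_width, frame_height):
    """mark_points: the four detected corner mark points as (x, y) pixels.
    Because the mark points enclose the color patches, all patches are in
    the viewfinder whenever every mark point lies inside the frame."""
    return all(0 <= x < frame_width and 0 <= y < frame_height
               for (x, y) in mark_points)
```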
The light source 14 provides illumination for the standard color card 13. The lighting on the standard color card 13 can be adjusted through the light source parameters of the light source 14, which may include the number of light sources 14, their brightness, and their color temperature. For example, at equal brightness, the more light sources 14 there are, the brighter the standard color card 13; at an equal number of light sources, the brighter each light source 14, the brighter the card. One or more light sources 14 may be used, and the light sources 14 provide uniform illumination for the standard color card 13, which prevents brightness differences across the card (and its color patches) from disturbing the subsequent evaluation and thus improves its accuracy. As shown in fig. 1, the video temporal noise evaluation system 100 includes only one light source 14, which illuminates the standard color card 13 uniformly. If there are multiple light sources 14, they are identical, i.e., their brightness, color temperature, and other parameters are the same, and they are dispersed uniformly around the standard color card 13 so that their light is distributed evenly over the card; that is, the light sources 14 jointly provide uniform illumination. Taking two light sources 14 as an example, as shown in fig. 3, the evaluation system 100 includes two light sources 14 distributed uniformly around the standard color card 13, providing it with uniform illumination.
In some embodiments, as shown in figs. 1 and 3, the evaluation system may also include a background plate 16. While the electronic device 11 shoots the standard color card 13, the background plate 16 serves as the backdrop of the card: apart from the standard color card 13 itself, everything in the shot images is the background plate 16. The background plate 16 thus gives every frame of the first video a uniform background and prevents the background from affecting the subsequent evaluation.
The terminal device 15 is used for evaluating the video capturing capability of the electronic device 11 based on the video captured by the electronic device 11. In the embodiment of the present application, the terminal device 15 may be a computer, a camera, or the like. Fig. 4 is a schematic hardware structure of the terminal device 15 according to the embodiment of the present application. As shown in fig. 4, the terminal device 15 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a SIM card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It will be appreciated that the structure illustrated in fig. 4 does not constitute a specific limitation on the terminal device 15. In other embodiments of the present application, the terminal device 15 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (Application Processor, AP), a Modem (Modem), a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a Neural network processor (Neural-network Processing Unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The charge management module 140 is configured to receive a charge input from a charger. The charger may be a wireless charger or a wired charger.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the terminal device 15 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the terminal device 15 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G etc. applied on the terminal device 15.
The wireless communication module 160 may provide solutions for wireless communication including WLAN (e.g., wi-Fi network), BT, global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field wireless communication technology (Near Field Communication, NFC), infrared technology, etc. applied on the terminal device 15. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate and amplify the signal, and convert the signal into electromagnetic waves to radiate the electromagnetic waves through the antenna 2.
In some embodiments, where the wireless communication module 160 provides Bluetooth communication, the wireless communication module 160 may specifically be a Bluetooth chip. The Bluetooth chip may include one or more memories, one or more processors, and the like. The processor in the Bluetooth chip can frequency-modulate, filter, operate on, and judge the electromagnetic waves received via the antenna 2, and convert the processed signal into electromagnetic waves for radiation, i.e., without the signal passing through the processor 110.
The terminal device 15 realizes a display function through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor.
The display screen 194 is used for displaying images, videos, or the like. A series of graphical user interfaces (Graphical User Interface, GUI) may be displayed on the display 194 of the terminal device 15.
The terminal device 15 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The camera 193 is used to capture still images or video.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the terminal device 15.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the terminal device 15 and data processing by executing instructions stored in the internal memory 121.
The terminal device 15 may implement audio functions such as music playing, recording, etc. through the audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone interface 170D, and application processor, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195 or withdrawn from it, so as to be brought into contact with or separated from the terminal device 15. The terminal device 15 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. The same SIM card interface 195 may be used to insert multiple cards simultaneously. The SIM card interface 195 may also be compatible with external memory cards. The terminal device 15 interacts with the network through the SIM card to realize functions such as calls and data communication.
On top of the above components runs an operating system, such as the iOS operating system, the Android operating system, or the Windows operating system. Applications may be installed and run on the operating system. In other embodiments, multiple operating systems may run within the terminal device 15.
The terminal device 15 is electrically/communicatively connected to the electronic device 11, the mount of the standard color card 13, and the light source 14, so that the terminal device 15 can control them. For example, the terminal device 15 controls the electronic device 11 to capture a video of the standard color card 13 and acquires that video; it controls the mount holding the standard color card 13 to adjust the card's position information; and it controls the light source parameters of the light source 14 to meet the evaluation requirements.
In order to more uniformly and objectively evaluate the video time domain noise of the electronic device 11, the embodiment of the application provides an evaluation method of the video time domain noise, and the terminal device 15 may evaluate the video time domain noise of the electronic device 11 according to the evaluation method. Fig. 5 is a flow chart of a method 500 provided by an embodiment of the present application. As shown in fig. 5, the method 500 includes the following steps S501-S505:
In step S501, a first video is acquired.
The first video refers to an original video obtained by photographing the standard color card 13 by the electronic device 11 to be evaluated. Each frame image in the first video is a three primary color RGB image. After the electronic device 11 starts the image capturing function, a video of a specified duration is captured for the standard color card 13 under each light source parameter, and a first video is obtained.
The type of first video used for the evaluation differs with the requirements placed on evaluating the electronic device's video temporal noise, and the manner in which the electronic device 11 collects the first video differs correspondingly.
In one implementation, after starting its image capturing function, the electronic device 11 captures one video of the standard color card 13, i.e., one first video segment, and then turns the image capturing function off, obtaining the first video. During the capture of this first video segment, the electronic device 11 switches between at least two light source parameters, capturing video of a specified duration of the standard color card 13 under each light source parameter. The first video therefore contains images of the standard color card 13 shot continuously under different light source parameters and can reflect how the video temporal noise of the electronic device 11 changes under the influence of the light source parameters. Such a first video can be used to evaluate the video temporal noise of the electronic device 11 under different light source parameters. Combining the video capture of color blocks of different gray values in the standard color card 13 avoids the chance results that shooting only a single gray value would produce, improving the reference value of the video shot by the electronic device 11.
In another implementation, after the electronic device 11 starts the image capturing function, a video is captured of the standard color card 13 under the first light source parameter (that is, the first of the second video clips is captured), and the image capturing function is then turned off. The first light source parameter is switched to the second light source parameter, the image capturing function of the electronic device 11 is started again, a video is captured of the standard color card 13 under the second light source parameter (that is, the second of the second video clips is captured), and the image capturing function is turned off. This shooting process is repeated to obtain N second video clips (N is a positive integer equal to the number of preset light source parameters), and the N second video clips together constitute the first video. The first video is composed of N mutually independent second video clips; each second video clip corresponds to only one light source parameter and mainly reflects the video time domain noise of the electronic device 11 for different gray values. Thus, each second video clip of the first video may be used to evaluate the video time domain noise of the electronic device 11 for different gray values. Moreover, combining the videos shot under multiple light source parameters avoids the chance results that would arise if the electronic device 11 shot under a single light source parameter, and thus improves the reference value of the videos captured by the electronic device 11.
Step S502, obtaining a second video according to the first video. The second video comprises the Lab images corresponding to the designated frames in the first video.
An RGB image characterizes the color value of each pixel point through a red R channel, a green G channel, and a blue B channel, that is, an RGB image mainly highlights the color characteristics of the pixel points. If a video has time domain noise, obviously flickering noise points (pixel points where time domain noise occurs) appear in the video. As this manifestation shows, time domain noise is strongly correlated with the luminance of a pixel point. To characterize the time domain noise of the video more accurately, the luminance characteristics of the pixel points can be emphasized; thus, in some embodiments, the RGB images may be converted into Lab images and the time domain noise evaluated from the Lab images. A Lab image is an image under the Lab color model, which consists of three elements: a brightness L channel and two color channels a and b. The a channel covers colors from dark green (low brightness value) through gray (medium brightness value) to bright pink (high brightness value); the b channel covers colors from bright blue (low brightness value) through gray (medium brightness value) to yellow (high brightness value). The color values of a Lab image on the L channel, a channel, and b channel (the L value, a value, and b value, respectively) can therefore characterize the brightness characteristics, and the time domain noise can be evaluated more accurately from the Lab image.
The designated frame RGB images in the first video are converted to Lab images to evaluate temporal noise from the designated frame Lab images.
Fig. 6 is a flowchart of a method 600 for acquiring a second video according to an embodiment of the present application. As shown in fig. 6, the method 600 includes the following steps S601-S603:
Step S601, extracting the RGB images corresponding to the designated frames from the first video.
In one implementation, all frames in the first video may be used as designated frames, that is, all frame RGB images in the first video are converted into Lab images, so as to obtain the second video. Thus, the complete video is taken as an evaluation base of time domain noise, and the accuracy of evaluation can be improved by improving the quantity and the integrity of data participating in the evaluation.
In another implementation, a part of the frames in the first video may be used as the designated frames, that is, the RGB images of part of the frames in the first video are converted into Lab images, and the Lab images form the second video according to the order of the corresponding frames. The designated frames may be frames in a specified order among all frames of the first video, such as: the first video includes 100 frames of images, and the designated frames include the 1st, 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, 90th, and 100th frames. The designated frames may also be frames located within a designated time period of the first video, with a specified number of frames as the interval between adjacent designated frames. For example: the first video is 10 seconds long in total, the designated frames are located within seconds 3-6 of the first video, and assuming that seconds 3-6 correspond to frames 30-60 of the first video, if the interval between adjacent designated frames is 3 frames, the designated frames include the 30th, 33rd, 36th, 39th, 42nd, 45th, 48th, 51st, 54th, 57th, and 60th frames. In some embodiments, the number of frames in the interval between adjacent designated frames is a prime number, so that the frame extraction does not align with the periodic structure of the video encoding.
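To make the frame-selection rule concrete, the following sketch (in Python, not part of the patent text) selects designated frame indices with a fixed interval between adjacent frames; the helper name and parameters are illustrative only.

```python
def select_designated_frames(first, last, step):
    """Return the designated frame indices from `first` to `last` (inclusive),
    taking every `step`-th frame; a prime `step` helps avoid aligning the
    extraction with the periodic structure of the video encoding."""
    return list(range(first, last + 1, step))

# Reproduces the example above: frames 30-60 with an interval of 3 frames.
assert select_designated_frames(30, 60, 3) == [30, 33, 36, 39, 42, 45, 48, 51, 54, 57, 60]
```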
Step S602, converting the RGB image corresponding to each designated frame into a Lab image.
An RGB image can be converted into a Lab image by way of the XYZ color model. For example: the RGB image corresponding to a designated frame can be decomposed into three channels, yielding an image corresponding to the R channel, an image corresponding to the G channel, and an image corresponding to the B channel; the color values of these three channel images form the feature vector of the RGB image. According to the relationship between the RGB color model and the XYZ color model, the product of the feature vector of the RGB image and a preset coefficient matrix equals the feature vector of the corresponding XYZ image, where the sum of the coefficients of each row of the preset coefficient matrix is approximately equal to 1. The feature vector of the XYZ image can thus be obtained. According to the preset conversion formulas from the XYZ color model to the Lab color model, the values on the L channel, a channel, and b channel can then be calculated from the feature vector of the XYZ image, completing the conversion from the RGB image to the Lab image.
The RGB image corresponding to each specified frame may be converted into a Lab image according to the above procedure.
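As an illustration of the conversion path just described, the sketch below assumes the widely used sRGB-to-XYZ coefficient matrix with a D65 reference white; the patent itself does not fix a particular matrix, so this is one plausible instantiation rather than the method's mandated coefficients.

```python
import numpy as np

# Assumed sRGB -> XYZ matrix (D65 white point); each row sums to roughly 1,
# matching the remark above about the preset coefficient matrix.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

def rgb_to_lab(rgb):
    """Convert an H x W x 3 RGB image (uint8) to an H x W x 3 Lab image."""
    srgb = rgb.astype(np.float64) / 255.0
    # Linearize sRGB (remove gamma) before applying the coefficient matrix.
    linear = np.where(srgb <= 0.04045, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)
    xyz = linear @ RGB_TO_XYZ.T / WHITE_D65
    # Piecewise cube-root mapping of the XYZ -> Lab conversion.
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```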
Step S603, arranging Lab images corresponding to the designated frames according to the sequence of the designated frames in all frames of the first video to obtain a second video.
Because time domain noise relates to the change of noise between preceding and following frames, the temporal order of the frame Lab images in the second video must be strictly preserved, thereby ensuring the validity of the time domain noise evaluation.
The Lab images corresponding to the designated frames can be arranged according to the order of the designated frames among all frames of the first video, so as to preserve the temporal order among the frame Lab images. For example: if the designated frames include the 1st, 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, 90th, and 100th frames of the first video, the corresponding Lab images are arranged in that same order to obtain the second video.
Step S503, obtaining an L value, an a value, and a b value corresponding to each pixel point in each Lab image frame in the second video.
Each frame of Lab image can be decomposed according to the L channel, the a channel, and the b channel, yielding the images corresponding to the three channels. The image corresponding to the L channel reflects the L values of the Lab image, the image corresponding to the a channel reflects the a values, and the image corresponding to the b channel reflects the b values. Decomposing each channel image by pixel point then yields the L value, the a value, and the b value of each pixel point.
In some embodiments, after the L value corresponding to each pixel point in the Lab image is obtained, invalid pixels or invalid Lab images can be removed according to the L value corresponding to each pixel point, so as to perform optimization processing on the Lab images, ensure validity of the Lab images after the optimization processing, and further ensure validity of time domain noise evaluation according to the Lab images after the optimization processing.
An invalid pixel point is a pixel point in a Lab image whose L value differs, in absolute value, from the L values of the pixel points at the same position in both the previous frame Lab image and the next frame Lab image by more than a preset threshold. Take the i-th frame Lab image in the second video as an example, where each frame Lab image in the second video includes 10 x 10 pixel points; the previous frame of the i-th frame Lab image is the (i-1)-th frame Lab image in the second video, and the next frame is the (i+1)-th frame Lab image. The position of each pixel point in a Lab image may be represented in two-dimensional coordinates, with every pixel point located in the first quadrant and the pixel point at the lower left corner of the Lab image at (1, 1). Thus, the pixel points in the (i-1)-th and (i+1)-th frame Lab images located at the same positions as the pixel points in the i-th frame Lab image can be determined. For example: the pixel point at (1, 1) in the (i-1)-th frame Lab image and the pixel point at (1, 1) in the (i+1)-th frame Lab image are at the same position as the pixel point at (1, 1) in the i-th frame Lab image. For each pixel point in the i-th frame image, the difference between its L value and that of the pixel point at the same position in the (i-1)-th frame image, and the corresponding difference with respect to the (i+1)-th frame image, are calculated, and the absolute values of these differences are taken. The absolute values represent how the L value changes between the i-th frame image and its neighboring frames at each pixel point. If both absolute values are larger than the preset threshold, the change of the L value is abnormally large, and the pixel point is not suitable for evaluating the time domain noise, that is, it is an invalid pixel point.
According to the above manner, invalid pixel points in each frame of Lab image in the second video can be identified.
In one implementation, the invalid pixel points in each frame Lab image can be removed, and only the remaining pixel points (valid pixel points) are retained for subsequent evaluation. In this way, invalid pixel points are removed precisely while a large number of valid pixel points is preserved, ensuring the accuracy of the subsequent time domain noise evaluation.
In another implementation, a Lab image containing invalid pixel points is called an invalid Lab image; invalid Lab images are removed, and only the remaining Lab images (those in which all pixel points are valid) are retained for the subsequent evaluation process. In this way, the influence of invalid pixel points on surrounding pixel points is avoided entirely, further ensuring the validity of the pixel points participating in the time domain noise evaluation.
In another implementation, if the number of invalid pixel points in a frame Lab image is less than or equal to a number threshold, the frame Lab image is retained and only its invalid pixel points are removed; if the number of invalid pixel points in a frame Lab image is greater than the number threshold, the whole invalid Lab image is removed. Judging by the number of invalid pixel points whether a frame Lab image must be removed avoids two problems: retaining a frame whose remaining pixel points are of doubtful validity, and discarding a frame that is still accurately represented once its few invalid pixel points are removed. Keeping the Lab images that contain more valid pixel points also preserves the number of Lab images participating in the time domain noise evaluation and the continuity of the frame Lab images in the second video that finally participate in the evaluation.
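A minimal sketch of the invalid-pixel screening, assuming the L values of the second video are stacked as a p x H x W array; the threshold of 45 matches the worked example later in the text, and the count limit is a hypothetical value.

```python
import numpy as np

def invalid_pixel_mask(L, threshold=45.0):
    """L: p x H x W array of L values for the p frames of the second video.
    Returns a boolean p x H x W mask in which True marks an invalid pixel,
    i.e. one whose L value differs from BOTH neighbouring frames by more
    than the threshold; the first and last frames have only one neighbour
    and are never marked here."""
    invalid = np.zeros(L.shape, dtype=bool)
    diff_prev = np.abs(L[1:-1] - L[:-2])  # |frame i - frame i-1|
    diff_next = np.abs(L[1:-1] - L[2:])   # |frame i - frame i+1|
    invalid[1:-1] = (diff_prev > threshold) & (diff_next > threshold)
    return invalid

def frames_to_keep(invalid, count_limit=10):
    """Third strategy above: drop a frame only when it contains more invalid
    pixels than the count limit; kept frames have their invalid pixels
    excluded from the later statistics."""
    return invalid.reshape(invalid.shape[0], -1).sum(axis=1) <= count_limit
```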
In step S504, a first evaluation value of each color block group in the second video is calculated according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image.
The first evaluation value refers to a weighted value of the variance of the L value, the variance of the a value, and the variance of the b value corresponding to each pixel point in the same color block in each frame Lab image in the second video. The same color patch refers to a color patch located at the same position in each frame of Lab image.
Fig. 7 is a flowchart of a method 700 for determining each group of color blocks in a second video provided in an embodiment of the present application. As shown in fig. 7, the method 700 includes the following steps S701-S704:
Step S701, identifying the mark points in the first frame Lab image in the second video.
The mark points in the first frame Lab image are the images of the mark points on the standard color card 13 described above, and can be identified by image recognition or similar techniques. The standard color card 13 comprises m color blocks, with a mark point at each of its four corners: the mark point at the upper left corner of the 1st color block is the upper-left mark point, the one at the lower left corner of the 1st color block is the lower-left mark point, the one at the upper right corner of the m-th color block is the upper-right mark point, and the one at the lower right corner of the m-th color block is the lower-right mark point. Taking the Lab image corresponding to the first designated frame RGB image obtained by the electronic device 11 shooting the standard color card 13 as the first frame Lab image in the second video, the four mark points in this frame Lab image can be identified by image recognition techniques. For example: the upper-left mark point is located at the upper left corner of the frame Lab image, the lower-left mark point at the lower left corner, the upper-right mark point at the upper right corner, and the lower-right mark point at the lower right corner.
Step S702, determining the position information of each color block in the Lab image of the first frame according to the mark points.
The position information of each color block in the first frame Lab image can be determined by combining the recognized mark points with the positional relationship between the mark points and the color blocks in the standard color card 13. In the above example, the m color blocks in the frame Lab image can be accurately determined to be the 1st color block, the 2nd color block, ..., and the m-th color block in order from left to right.
In step S703, the position information of each color block in the other Lab images in the second video is determined according to the position information of each color block in the first frame Lab image.
During the shooting of the standard color card 13, the pattern of the standard color card 13 does not change, that is, the positions of the color blocks in the standard color card 13 do not change, so the position information of the color blocks in each frame image of the video does not change either. For example, if the second video includes p frames of Lab images, then based on the position information of the m color blocks in the first frame Lab image, the position information of the m color blocks in each of the remaining frame Lab images can be determined to be, likewise, the 1st color block, the 2nd color block, ..., and the m-th color block in order from left to right. Therefore, the position information of the color blocks in the other frame Lab images can be determined quickly and accurately by determining the position information of the color blocks only in the first frame Lab image, without identifying it in every frame.
Step S704, determining each color block group of the second video according to the position information of each color block in each frame Lab image in the second video.
Each color block group of the second video includes the corresponding color block in each frame Lab image, that is, each color block group of the second video is a set of color blocks located at the same position in the frame Lab images. Continuing the example above: if the position information of the m color blocks in each frame Lab image is, from left to right, the 1st color block, the 2nd color block, ..., and the m-th color block, then the 1st color block group of the second video comprises the 1st color block in each of the p frame Lab images, the 2nd color block group comprises the 2nd color block in each of the p frame Lab images, ..., and the m-th color block group comprises the m-th color block in each of the p frame Lab images.
In some embodiments, the positional information of each color patch in each frame of Lab image may be represented by coordinates. In one implementation, a two-dimensional coordinate system may be established with one marker point as the origin. Based on the positional relationship between the four vertices of each color block and the marking point, the position coordinates of the four vertices of each color block are determined so as to represent the position coordinates of the corresponding color block through the position coordinates of the four vertices of each color block. In another implementation, a two-dimensional coordinate system may be established with one vertex of one color block as an origin, and based on the position relationship between the remaining three vertices of the color block and the four vertices of the remaining color block and the vertex of the color block, position coordinates of the four vertices of each color block are determined, so as to represent the position coordinates of the corresponding color block by the position coordinates of the four vertices of each color block. For example: and establishing a coordinate system by taking the left lower corner of the color block positioned at the leftmost side of each color block as an origin, wherein the rest color blocks are positioned in a first quadrant of the coordinate system. The position information of the color patch can be more accurately represented based on the position coordinates of the four vertices of each color patch.
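As one possible realization of this coordinate bookkeeping, assuming an axis-aligned shot with the m color blocks laid out left to right between the corner mark points, the block regions can be derived by even subdivision; a real implementation may instead use the known geometry of the standard color card.

```python
def color_block_boxes(top_left, bottom_right, m):
    """Split the rectangle spanned by the upper-left and lower-right mark
    points into m equal-width boxes (x0, y0, x1, y1), one per color block."""
    x0, y0 = top_left
    x1, y1 = bottom_right
    width = (x1 - x0) / m
    return [(x0 + i * width, y0, x0 + (i + 1) * width, y1) for i in range(m)]

# For a 30 x 10 pixel card region with 3 color blocks:
print(color_block_boxes((0, 0), (30, 10), 3))
# [(0.0, 0, 10.0, 10), (10.0, 0, 20.0, 10), (20.0, 0, 30.0, 10)]
```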
Fig. 8 is a flowchart of a method 800 for calculating a first evaluation value of each color block group in a second video according to an embodiment of the present application. As shown in fig. 8, the method 800 includes the following steps S801 to S803:
In step S801, the variance of the L value, the variance of the a value, and the variance of the b value corresponding to each same pixel point group in each color block group in the second video are calculated.
The same pixel group of the second video includes the pixel points located at the same position in each frame Lab image, that is, the same pixel group of the second video is a set of the pixel points located at the same position in each frame Lab image. The same pixel groups in the second video are divided according to color blocks, and the same pixel groups in the same color block are used as a unit for calculating the first evaluation value.
After determining the position information of each group of color blocks in the second video, the position information of each pixel point in each color block may be further determined. If the position information of each color block group in the second video is represented by the position coordinates, the position coordinates of each pixel point can also be determined according to the position relation of the pixel point in the corresponding color block. Thus, based on the position coordinates of the pixel points in the Lab image of each frame, the same pixel point groups in each color block can be determined, and the process can refer to the process of determining the pixel points in the same position in the adjacent frame when determining the invalid pixel point in step S503, which will not be described herein.
The variance of the L value of a same pixel group is the variance of the L values of the pixel points located at the same position in each frame Lab image, and satisfies the following formula (1):

$$\sigma_{L,k}^{2}=\frac{1}{p}\sum_{j=1}^{p}\left(L_{j}-M_{L}\right)^{2} \tag{1}$$

where $\sigma_{L,k}^{2}$ represents the variance of the L values of the k-th same pixel group in the second video, $1\le k\le K$, K is the total number of same pixel groups in the corresponding color block group, $M_{L}$ represents the average of the L values of the k-th same pixel group over the p frames of Lab images of the second video, $L_{1},L_{2},\ldots,L_{p}$ represent the L values of the k-th same pixel group in each frame Lab image of the second video, and p is the total number of frames of the second video.
The variance of the a value of a same pixel group is the variance of the a values of the pixel points located at the same position in each frame Lab image, and satisfies the following formula (2):

$$\sigma_{a,k}^{2}=\frac{1}{p}\sum_{j=1}^{p}\left(a_{j}-M_{a}\right)^{2} \tag{2}$$

where $\sigma_{a,k}^{2}$ represents the variance of the a values of the k-th same pixel group in the second video, $1\le k\le K$, K is the total number of same pixel groups in the corresponding color block group, $M_{a}$ represents the average of the a values of the k-th same pixel group over the p frames of Lab images of the second video, $a_{1},a_{2},\ldots,a_{p}$ represent the a values of the k-th same pixel group in each frame Lab image of the second video, and p is the total number of frames of the second video.
The variance of the b value of a same pixel group is the variance of the b values of the pixel points located at the same position in each frame Lab image, and satisfies the following formula (3):

$$\sigma_{b,k}^{2}=\frac{1}{p}\sum_{j=1}^{p}\left(b_{j}-M_{b}\right)^{2} \tag{3}$$

where $\sigma_{b,k}^{2}$ represents the variance of the b values of the k-th same pixel group in the second video, $1\le k\le K$, K is the total number of same pixel groups in the corresponding color block group, $M_{b}$ represents the average of the b values of the k-th same pixel group over the p frames of Lab images of the second video, $b_{1},b_{2},\ldots,b_{p}$ represent the b values of the k-th same pixel group in each frame Lab image of the second video, and p is the total number of frames of the second video.
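Formulas (1)-(3) can be evaluated in one vectorized step. The sketch below assumes the second video is stacked as a p x H x W x 3 Lab array with invalid pixels already handled; since the text does not state whether the variance divides by p or p-1, the population variance (divide by p) is assumed, matching the reconstruction above.

```python
import numpy as np

def temporal_variances(lab_video):
    """lab_video: p x H x W x 3 array of the Lab frames of the second video.
    Returns an H x W x 3 array whose channels are the per-pixel temporal
    variances of the L, a and b values (formulas (1)-(3)): the mean squared
    deviation from the per-pixel temporal mean (M_L, M_a, M_b) over p frames."""
    mean = lab_video.mean(axis=0)                 # M_L, M_a, M_b per pixel
    return ((lab_video - mean) ** 2).mean(axis=0)
```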
Step S802, calculating the mean of the variance of the L value, the mean of the variance of the a value and the mean of the variance of the b value of each identical pixel point group in each color block group in the second video.
After calculating the variance of the L value, the variance of the a value, and the variance of the b value of each same pixel group in each color block group, the mean of the variances of the L values of the same pixel groups in each color block group is calculated, satisfying the following formula (4):

$$\overline{\sigma_{L}^{2}}=\frac{1}{K}\sum_{k=1}^{K}\sigma_{L,k}^{2} \tag{4}$$

where $\overline{\sigma_{L}^{2}}$ represents the mean of the variances of the L values of the K same pixel groups within each color block group.
The mean of the variances of the a values of the same pixel groups in each color block group is calculated analogously, satisfying the following formula (5):

$$\overline{\sigma_{a}^{2}}=\frac{1}{K}\sum_{k=1}^{K}\sigma_{a,k}^{2} \tag{5}$$

where $\overline{\sigma_{a}^{2}}$ represents the mean of the variances of the a values of the K same pixel groups within each color block group.
The mean of the variances of the b values of the same pixel groups in each color block group is calculated likewise, satisfying the following formula (6):

$$\overline{\sigma_{b}^{2}}=\frac{1}{K}\sum_{k=1}^{K}\sigma_{b,k}^{2} \tag{6}$$

where $\overline{\sigma_{b}^{2}}$ represents the mean of the variances of the b values of the K same pixel groups within each color block group.
Step S803, calculating, according to the weights of the L value, the a value, and the b value, a weighted value of the mean of the variances of the L values, the mean of the variances of the a values, and the mean of the variances of the b values within each color block group in the second video, to obtain the first evaluation value of each color block group in the second video.
The first evaluation value of each color block group satisfies the following formula (7):

$$Q_{i1}=q_{L}\cdot\overline{\sigma_{L}^{2}}+q_{a}\cdot\overline{\sigma_{a}^{2}}+q_{b}\cdot\overline{\sigma_{b}^{2}} \tag{7}$$

where $Q_{i1}$ represents the first evaluation value of the i-th color block group, $1\le i\le m$, m represents the total number of color block groups in the second video, and $q_{L}$, $q_{a}$, and $q_{b}$ represent the weights of the L value, the a value, and the b value respectively, with $q_{L}+q_{a}+q_{b}=1$.
The first evaluation value is used to characterize the time domain noise of the corresponding color block group in the second video. Calculating the variance of the L value, the variance of the a value, and the variance of the b value corresponding to each same pixel point group in each color block group correlates the frame Lab images of the second video, so as to reflect the variation of the time domain noise at each pixel point. In turn, the variation of the time domain noise on each color block group is reflected by the first evaluation value of that group: the smaller the first evaluation value, the smaller the time domain noise on the corresponding color block group; the larger the first evaluation value, the larger the time domain noise.
In some embodiments, the weight of the L value is greater than the weight of the a value and the weight of the L value is greater than the weight of the b value. In this way, the L value of the second video can be emphasized by the weight calculation, i.e., the luminance feature of the second video is emphasized more. Therefore, the accuracy of evaluating the time domain noise of each color block group in the second video can be effectively improved.
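Putting formulas (4)-(7) together for one color block group; the weights below are hypothetical, chosen only to satisfy q_L + q_a + q_b = 1 with the L value weighted most heavily, as suggested above. With channel means of 1, 0.5, and 0.2 it returns 0.69, consistent with the worked example later in the text.

```python
import numpy as np

def first_evaluation_value(block_var, q_L=0.5, q_a=0.3, q_b=0.2):
    """block_var: h x w x 3 temporal variances of the pixel points of one
    color block group (output of temporal_variances, cropped to the block).
    Averages the variances over the K same pixel point groups (formulas
    (4)-(6)), then weights the three channel means (formula (7))."""
    mean_var = block_var.reshape(-1, 3).mean(axis=0)  # means of the L, a, b variances
    return float(q_L * mean_var[0] + q_a * mean_var[1] + q_b * mean_var[2])
```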
In step S505, a second evaluation value of the second video is calculated based on the first evaluation value of each color block group in the second video.
The second evaluation value refers to a weighted value of the first evaluation values of the color block groups in the second video, and satisfies the following formula (8):

$$Q_{2}=\sum_{i=1}^{m}q_{i}\cdot Q_{i1} \tag{8}$$

where $Q_{2}$ represents the second evaluation value, $q_{i}$ represents the weight of the i-th color block group, $1\le i\le m$, and m represents the total number of color block groups in the second video.
The second evaluation value is used to characterize the video time domain noise of the electronic device. Obtained through weighting, the second evaluation value correlates the time domain noise of the color block groups in the second video and thus represents the variation of the time domain noise across the frame Lab images of the second video objectively and accurately. The video time domain noise of the electronic device can therefore be evaluated based on the second evaluation value.
In some embodiments, the weights of the respective color patch groups are set according to the gray values of the color patches, for example: the smaller the gray value of a color patch, the higher the weight of the corresponding color patch group. Since the smaller the gray value is, the higher the sensitivity to illumination is, namely, the more remarkable brightness characteristic is provided, the weight of the color block group with the smaller gray value can be increased to highlight the color block group with the more remarkable brightness characteristic, and therefore the accuracy of evaluating the time domain noise of the second video is effectively improved.
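Formula (8) then reduces to a single weighted sum over the color block groups; the assumption that the block weights sum to 1 follows the worked example below and is not stated as a general requirement.

```python
def second_evaluation_value(first_values, weights):
    """Formula (8): weighted sum of the first evaluation values Q_i1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "block weights assumed to sum to 1"
    return sum(q * Q for q, Q in zip(weights, first_values))

# Numbers from the worked example below: yields 0.473.
print(second_evaluation_value([0.69, 0.45, 0.40], [0.2, 0.3, 0.5]))
```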
The method for evaluating the video time domain noise provided in the embodiment of the present application is further described below with reference to an example.
Fig. 9 is an exemplary diagram of a method 500 provided by an embodiment of the present application.
The steps of the method 500 performed by the terminal device 15 will be exemplarily described below with reference to the evaluation system shown in fig. 3, taking the electronic device 11 as a mobile phone and taking the standard color card 13 as shown in fig. 2 as an example. As shown in fig. 9, this example includes steps S901 to S905, which correspond to steps S501 to S505, respectively, in the method 500 shown in fig. 5.
In step S901, the terminal device 15 acquires the first video obtained by the mobile phone photographing the standard color card 13.
The mobile phone is mounted on the mounting device 12, and the mounting device 12 is adjusted so that the whole content (mark points and color blocks) of the standard color card 13 lies within the viewfinder frame of the mobile phone (see fig. 2). The light source 14 is turned on to provide uniform illumination for the standard color card 13. The terminal device 15 is turned on to control the light source parameters of the light source 14 and to evaluate the video shooting function of the mobile phone.
Taking two preset light source parameters as an example, the first video shot by the mobile phone comprises two video segments, each corresponding to one light source parameter. The shooting function of the mobile phone is started, and the standard color card 13 is shot under the first light source parameter to obtain a first 10-second video segment; the first light source parameter is then switched to the second light source parameter, and the standard color card 13 is shot under the second light source parameter to obtain a second 10-second video segment. The shooting function of the mobile phone is then closed, and the two 10-second segments form the first video. If the frame rate of the shooting function of the mobile phone is 30 frames/second, the first video includes 600 frames of RGB images.
The mobile phone transmits the captured first video to the terminal device 15, and the terminal device 15 evaluates the video shooting function of the mobile phone according to the first video.
In step S902, the terminal device 15 acquires a second video from the first video.
After the terminal device 15 acquires the first video from the mobile phone, it extracts the RGB images of the designated frames from the first video to generate the second video. Take as an example designated frames located within the 5th-10th second of the first video, with an interval of 5 frames between adjacent designated frames: the designated frames then include the 151st, 156th, 161st, ..., 296th frames of the first video, 30 frames in total. The 30 frames of RGB images are converted into Lab images respectively, and the 30 frames of Lab images are arranged in order to obtain the second video.
In step S903, the terminal device 15 acquires an L value, an a value, and a b value corresponding to each pixel point in each frame Lab image in the second video.
Taking an example in which each color block includes 100 pixels arranged in 10×10, the terminal device 15 acquires L values, a values, and b values corresponding to 300 pixels in each frame of Lab image.
And identifying invalid pixel points in each frame Lab image according to the absolute value of the difference value of the L value of each pixel point in each frame Lab image and the pixel point at the same position in the front frame Lab image and the rear frame Lab image. For example: the L value of the pixel point located at the lower left corner in the second frame Lab image in the second video is 20, while the L value of the pixel point located at the same position in the first frame Lab image in the second video is 85, and the L value of the pixel point located at the same position in the third frame Lab image in the second video is 84. It can be seen that the absolute value of the difference between the L value of the pixel point located at the lower left corner in the second frame Lab image and the L value of the pixel point located at the lower left corner in the first frame Lab image is 65, and the absolute value of the difference between the L value of the pixel point located at the lower left corner in the second frame Lab image and the L value of the pixel point located at the lower left corner in the third frame Lab image is 64. If the preset threshold value is 45, the pixel point positioned at the lower left corner in the Lab image of the second frame is the invalid pixel point. And removing the invalid pixel points from the second frame Lab image, and reserving the rest pixel points as pixel point data for subsequent evaluation, namely taking all the pixel points except the pixel points at the lower left corner in the second frame Lab image as pixel point data for evaluating the video time domain noise of the mobile phone.
In step S904, the terminal device 15 calculates a first evaluation value for each color block group in the second video from the L value, the a value, and the b value corresponding to each pixel point in each frame Lab image.
The terminal device 15 recognizes the mark points in the first frame Lab image of the second video, and determines the position information of each color block in the first frame Lab image based on the recognized mark points. As shown in fig. 2, the terminal device 15 recognizes that the upper left corner mark point 134 is located at the upper left corner of the first frame Lab image, the lower left corner mark point 135 at the lower left corner, the upper right corner mark point 136 at the upper right corner, and the lower right corner mark point 137 at the lower right corner. From the positional relationship between the four mark points and the three color blocks in the standard color card 13, the first color block 131, the second color block 132, and the third color block 133 can be determined, in this order from left to right, in the first frame Lab image. If the pixel point located at the lower left corner of the first color block 131 is taken as the origin, a two-dimensional coordinate system is established with a line parallel to the lower edge of each color block in the first frame Lab image as the x-axis and a line parallel to the left edge of the first color block 131 as the y-axis, so that the first color block 131, the second color block 132, and the third color block 133 are all located in the first quadrant (see the two-dimensional coordinate system shown in fig. 10). In this coordinate system, the position information of each color block can be represented by the position coordinates of its four vertices. For example: the position information of the first color block 131 is (0, 0), (0, 9), (10, 9), (10, 0); the position information of the second color block 132 is (11, 0), (11, 9), (20, 9), (20, 0); and the position information of the third color block 133 is (21, 0), (21, 9), (30, 9), (30, 0). The terminal device 15 can directly determine the position information of each color block in the remaining frame Lab images from the position information of each color block in the first frame Lab image, since the two are consistent. The terminal device 15 can thus determine the three color block groups of the second video, where each color block group comprises the color blocks with the same position information in each frame Lab image.
The terminal device 15 determines the same set of pixels in each color block of the second video. Taking the two-dimensional coordinate system shown in fig. 10 as an example, the pixel located on (0, 0) in each frame of Lab image is one same pixel group in the first color block group of the second video. Thus, the terminal device 15 can determine 100 identical pixel point groups within each color block group of the second video.
The terminal device 15 calculates the variance of the L value, the variance of the a value, and the variance of the b value corresponding to each same pixel point group in each color block group in the second video, according to formulas (1)-(3) in step S801. Here $1\le k\le 100$ and $p=30$; $M_{L}$, $M_{a}$, and $M_{b}$ represent the average values of the L values, a values, and b values of the k-th same pixel point group over the 30 frames of Lab images of the second video; and $L_{1},L_{2},\ldots,L_{30}$, $a_{1},a_{2},\ldots,a_{30}$, and $b_{1},b_{2},\ldots,b_{30}$ represent the L values, a values, and b values of the k-th same pixel point group in each frame Lab image of the second video, respectively. The pixel points participating in the calculation are those remaining after the invalid pixel points have been removed.
The terminal device 15 calculates the mean of the variances of the L values, the mean of the variances of the a values, and the mean of the variances of the b values of the same pixel point groups in each color block group in the second video, according to formulas (4)-(6) in step S802, where K = 100.
The terminal device 15 calculates a weighted value of the mean of the variances of the L values, the mean of the variances of the a values, and the mean of the variances of the b values of the same pixel point groups in each color block group in the second video, obtaining the first evaluation value of each color block group, according to formula (7) in step S803. Taking the first color block group of the second video as an example: if the mean of the variances of the L values is 1, the mean of the variances of the a values is 0.5, the mean of the variances of the b values is 0.2, the weight of the L value is 0.5, the weight of the a value is 0.3, and the weight of the b value is 0.2, then the first evaluation value of the first color block group of the second video is 1 x 0.5 + 0.5 x 0.3 + 0.2 x 0.2 = 0.69. The first evaluation values of the second and third color block groups of the second video can be calculated in the same way; for example, the first evaluation value of the second color block group is calculated to be 0.45, and that of the third color block group 0.40.
In step S905, the terminal device 15 calculates a second evaluation value of the second video from the first evaluation value of each color block group in the second video.
The terminal device 15 may perform the calculation according to formula (8) in step S505. If the weight of the first color block group is 0.2, the weight of the second color block group is 0.3, and the weight of the third color block group is 0.5, then the second evaluation value of the second video is 0.2 x 0.69 + 0.3 x 0.45 + 0.5 x 0.40 = 0.473.
The video time domain noise of the handset may be evaluated based on a second evaluation value of the second video. For example: when the second evaluation value is less than 0.5, the video capturing capability of the mobile phone is considered to be excellent. When the second evaluation value is greater than or equal to 0.5 and the second evaluation value is less than 3, the video shooting capability of the mobile phone is considered to be normal. When the second evaluation value is greater than or equal to 3, the video shooting capability of the mobile phone is considered to be poor. Thus, since the second evaluation value is 0.473, less than 0.5, the video capturing capability of the mobile phone can be considered to be excellent according to the above standard.
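The grading rule of this example can be expressed as a simple threshold mapping; the cut-off values 0.5 and 3 are the example's own values, not a universal standard.

```python
def grade_video_noise(q2):
    # Example thresholds from the text: < 0.5 excellent, [0.5, 3) normal, >= 3 poor.
    if q2 < 0.5:
        return "excellent"
    if q2 < 3:
        return "normal"
    return "poor"

print(grade_video_noise(0.473))  # excellent
```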
Fig. 11 is a schematic structural diagram of an apparatus for evaluating video temporal noise according to an embodiment of the present application.
In some embodiments, the terminal device 15 may implement the corresponding functions by means of the hardware arrangement shown in fig. 11. As shown in fig. 11, the apparatus for evaluating video temporal noise may include: a receiver 1101, a memory 1102 and a processor 1103.
In one implementation, the processor 1103 may include one or more processing units, such as: the processor 1103 may include an application processor, a modem processor, a graphics processor, an image signal processor, a controller, a video codec, a digital signal processor, a baseband processor, and/or a neural network processor, etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. A memory 1102 is coupled to the processor 1103 for storing various software programs and/or sets of instructions. In some embodiments, memory 1102 may include volatile memory and/or nonvolatile memory. The receiver 1101 may include radio frequency circuitry, a mobile communication module, a wireless communication module, etc. for enabling the terminal device 15 to receive the first video.
In one embodiment, the software program and/or sets of instructions in the memory 1102, when executed by the processor 1103, cause the terminal device 15 to perform the method steps of: and acquiring a first video, wherein the first video is acquired by shooting a standard color card by the electronic equipment to be evaluated. The standard color card comprises at least two color blocks, wherein different color blocks correspond to different gray values. And acquiring a second video according to the first video, wherein the second video comprises Lab images corresponding to the appointed frames in the first video. And obtaining an L value, an a value and a b value corresponding to each pixel point in each frame of Lab image in the second video. And calculating a first evaluation value of each color block group in the second video according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image, wherein the first evaluation value refers to a weighted value of the variance of the L value, the variance of the a value and the variance of the b value corresponding to each pixel point in the same color block in each frame Lab image in the second video, and the same color block refers to the color block at the same position in each frame Lab image. The first evaluation value is used to characterize the temporal noise of the corresponding set of color blocks in the second video. A second evaluation value of the second video is calculated based on the first evaluation value of each of the color block groups in the second video. The second evaluation value refers to a weighted value of the first evaluation value of each color block group in the second video, and the second evaluation value is used for representing the video time domain noise of the electronic device.
In this way, when the video time domain noise of the electronic equipment is evaluated, based on a preset algorithm, an evaluation value for representing the video time domain noise is calculated according to the L value, the a value and the b value of the pixel point in the video of the standard color card shot by the electronic equipment, so that the video time domain noise of the electronic equipment is represented by the evaluation value. The preset algorithm can be used for evaluating the video time domain noise of different electronic devices, so that the evaluation standard of the video time domain noise of each electronic device has uniformity. And the evaluation value obtained based on the preset algorithm has stronger objectivity, so that the evaluation result of the video time domain noise of each electronic device has stronger objectivity.
Optionally, the first video comprises a first video clip, the first video clip being obtained by the electronic device capturing the standard color chart under at least two light source parameters. Or, the first video comprises at least two second video segments, wherein each second video segment is obtained by the electronic device by shooting a standard color card under one light source parameter, and the at least two video segments correspond to different light source parameters. The Lab image corresponding to the designated frame comprises an image obtained by the electronic equipment shooting the standard color card under each light source parameter. Thus, if the electronic device needs to be tested for video time domain noise under different light source parameters, the electronic device switches at least two light source parameters in one first video segment to shoot. If the video time domain noise of the electronic device under the same light source parameter needs to be tested, the electronic device shoots in a second video clip under only one light source parameter. Therefore, different types of videos can be acquired by adopting different shooting modes so as to evaluate different video time domain noises of the electronic equipment.
Optionally, the light source parameters include: the number of light sources, the brightness of the light sources, the color temperature of the light sources. If a plurality of light sources exist, the light sources are the same, and the light sources are arranged around the standard color card in a dispersing way, so that the light rays of the light sources are uniformly distributed on the standard color card. Therefore, the video time domain noise of the electronic equipment under different light source parameters can be evaluated, and the comprehensiveness of the evaluation of the video time domain noise of the electronic equipment is improved. Meanwhile, the light rays of the light sources are uniformly distributed on the standard color card, so that the influence on the evaluation process due to different brightness on each color block is avoided, and the evaluation accuracy is improved.
Optionally, the software program and/or the plurality of sets of instructions in the memory 1102, when executed by the processor 1103, cause the terminal device 15 to perform the following method steps in acquiring the second video from the first video: and extracting the RGB image corresponding to the appointed frame from the first video. And converting the RGB image corresponding to each appointed frame into a Lab image. And arranging Lab images corresponding to the designated frames according to the sequence of the designated frames in all frames of the first video to obtain a second video. In this way, each specified frame RGB image in the first video is converted into Lab image, so that the brightness characteristic of each frame image can be effectively highlighted. Because the time domain noise is related to the brightness, the time domain noise of the video can be more conveniently analyzed according to the Lab image, and a more accurate analysis result can be obtained.
Optionally, the specified frames include all frames in the first video. Alternatively, the designated frame includes a portion of frames in the first video, the designated frame being located within a designated period of time of the first video, wherein a number of frames of the designated frame that are spaced between adjacent frames is a prime number. Therefore, the evaluation can be performed according to all frame images in the first video to ensure the evaluation accuracy, and the evaluation can be performed according to part of frame images in the first video to improve the evaluation efficiency. And the frame number of the interval between partial frames is ensured to be prime number, so that the second video obtained by frame extraction is ensured not to be influenced by the encoding process, and the evaluation quality is improved.
Optionally, when the software program and/or the plurality of sets of instructions in the memory 1102 are executed by the processor 1103, the terminal device 15 is further configured to perform the following method steps after performing the acquisition of the L value, the a value, and the b value corresponding to each pixel point in the Lab image: and removing invalid pixel points or invalid Lab images according to the L value corresponding to each pixel point in the Lab images. The invalid pixels are pixels in the Lab image, and the absolute values of the differences between the L values of the pixels in the same position in the previous frame of Lab image and the pixel in the next frame of Lab image are all larger than a preset threshold. The invalid Lab image refers to a Lab image including invalid pixels. Therefore, the validity of the pixel points participating in the evaluation is ensured by eliminating invalid pixel points or invalid Lab images, the influence of the invalid pixel points on the evaluation result is avoided, and the accuracy of the evaluation result is further effectively ensured.
Optionally, when the software program and/or the plurality of sets of instructions in the memory 1102 are executed by the processor 1103, the terminal device 15 is further configured to, before executing the calculation of the first evaluation value of each color block set in the second video according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image, execute the following method steps: identifying a mark point in a Lab image of a first frame in the second video, wherein the mark point is preset on a standard color card. And determining the position information of each color block in the Lab image of the first frame according to the mark points. And determining the position information of each color block in other Lab images in the second video according to the position information of each color block in the first frame Lab image, wherein the position information of each color block in the other Lab images is the same as the position information of each color block in the first frame Lab image. And determining each color block group of the second video according to the position information of each color block in each frame Lab image in the second video, wherein each color block group of the second video comprises the corresponding color block in each frame Lab image. Therefore, based on the continuity of each frame Lab image in the second video, the position information of each color block in each frame Lab image in the second video can be quickly obtained only by identifying the position information of each color block in the first frame Lab image, so that the color blocks positioned at the same position in each frame Lab image can be quickly and accurately clustered to obtain each color block group of the second video, and the efficiency and the accuracy of calculating the first evaluation value based on each color block group of the second video can be ensured.
Optionally, when the software program and/or the plurality of sets of instructions in the memory 1102 are executed by the processor 1103, the terminal device 15 is caused to perform the following method steps in calculating the first evaluation value of each color block set in the second video according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image: and calculating the variance of the L value, the variance of the a value and the variance of the b value corresponding to the same pixel point group in each color block group in the second video, wherein the same pixel point group comprises pixel points positioned at the same position in each frame Lab image. And calculating the mean value of the variances of the L values, the mean value of the variances of the a values and the mean value of the variances of the b values of the same pixel point groups in each color block group in the second video. And calculating the average value of the variances of the L values, the average value of the variances of the a values and the average value of the variances of the b values of the same pixel point groups in each color block group in the second video according to the weights of the L values, the a values and the b values, and obtaining a first evaluation value of each color block group in the second video. In this way, the variance of the L value, the variance of the a value and the variance of the b value corresponding to the same pixel point group in each color block group in the second video are calculated to correlate the Lab images of each frame in the second video so as to represent the change condition of the time domain noise on each pixel point in the second video. And, the change condition of the time domain noise on each color block group in the second video can be represented by the first evaluation value of each color block group.
Optionally, the weight of the L value is greater than the weight of the a value, and the weight of the L value is greater than the weight of the b value. In this way, the L value of the second video can be emphasized by the weight calculation, i.e., the luminance feature of the second video is emphasized more. Therefore, the accuracy of evaluating the time domain noise of each color block group in the second video can be effectively improved.
Optionally, when the software program and/or the plurality of sets of instructions in the memory 1102 are executed by the processor 1103, the terminal device 15 is caused to perform the following method steps when calculating the second evaluation value of the second video according to the first evaluation value of each color block group in the second video: calculating a weighted value of the first evaluation values of the color block groups in the second video according to the weight of each color block group, wherein the weight of a color block group is related to the gray value of the corresponding color block, and the smaller the gray value of the color block, the higher the weight of the corresponding color block group. The second evaluation value obtained by this weighting correlates the temporal noise of all color block groups in the second video, so it characterizes how the temporal noise changes across the Lab frames of the second video and thus represents the temporal noise of the second video objectively and accurately. Moreover, because a smaller gray value means higher sensitivity to illumination, that is, a more pronounced luminance feature, raising the weight of the color block groups with smaller gray values highlights the groups whose luminance feature is most pronounced, which effectively improves the accuracy of evaluating the temporal noise of the second video. A sketch of this aggregation follows.
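The text only requires that smaller gray values receive higher weights; the inverse-gray, normalized weighting in the sketch below is one illustrative way to satisfy that, not the scheme fixed by this application.

```python
import numpy as np

def second_evaluation_value(first_values, gray_values):
    """Aggregate per-color-block-group first evaluation values into the
    second evaluation value of the second video.

    first_values: first evaluation value of each color block group.
    gray_values:  nominal gray value of each corresponding color block.
    The inverse-gray weighting is an assumption chosen so that darker
    blocks (smaller gray values) get higher weights, as required.
    """
    first_values = np.asarray(first_values, dtype=float)
    gray_values = np.asarray(gray_values, dtype=float)
    raw = 1.0 / (gray_values + 1.0)  # darker block -> larger raw weight
    weights = raw / raw.sum()        # normalize so the weights sum to 1
    return float(np.dot(weights, first_values))
```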
In addition, in some embodiments, the terminal device 15 may implement the corresponding functions by means of software modules. As shown in fig. 12, the apparatus for evaluating video temporal noise that implements the functions of the terminal device 15 described above includes: a first acquisition unit 1201, a second acquisition unit 1202, a third acquisition unit 1203, a first evaluation unit 1204, and a second evaluation unit 1205.
The first acquisition unit 1201 is configured to acquire a first video, where the first video is obtained by the electronic device to be evaluated shooting a standard color card. The standard color card comprises at least two color blocks, and different color blocks correspond to different gray values. The second acquisition unit 1202 is configured to acquire a second video according to the first video, where the second video comprises the Lab images corresponding to specified frames in the first video. The third acquisition unit 1203 is configured to acquire the L value, the a value and the b value corresponding to each pixel point in each frame Lab image in the second video. The first evaluation unit 1204 is configured to calculate, according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image, a first evaluation value of each color block group in the second video, where the first evaluation value is a weighted value of the variance of the L value, the variance of the a value and the variance of the b value corresponding to each pixel point in the same color block in each frame Lab image, and the same color block refers to the color block located at the same position in every frame Lab image. The first evaluation value is used to characterize the temporal noise of the corresponding color block group in the second video. The second evaluation unit 1205 is configured to calculate a second evaluation value of the second video according to the first evaluation value of each color block group in the second video. The second evaluation value is a weighted value of the first evaluation values of the color block groups in the second video and is used to characterize the video temporal noise of the electronic device.
In this way, when the video temporal noise of an electronic device is evaluated, an evaluation value characterizing the temporal noise is calculated by a preset algorithm from the L, a and b values of the pixel points in a video of the standard color card shot by the device, so that the evaluation value represents the device's video temporal noise. Because the same preset algorithm can evaluate the video temporal noise of different electronic devices, it provides a uniform evaluation standard across devices; and because the evaluation value is computed rather than judged subjectively, the evaluation result of each device's video temporal noise is highly objective.
In one implementation, the first video comprises a first video segment obtained by the electronic device shooting the standard color card under at least two light source parameters; or, the first video comprises at least two second video segments, where each second video segment is obtained by the electronic device shooting the standard color card under a single light source parameter, and the second video segments correspond to different light source parameters. The Lab images corresponding to the specified frames include images of the standard color card shot under each light source parameter. Thus, to test the video temporal noise of the electronic device under changing light source parameters, the light source parameters are switched at least twice within one first video segment; to test the video temporal noise under a fixed light source parameter, each second video segment is shot under only one light source parameter. Different capture modes therefore yield different types of videos, allowing the video temporal noise of the electronic device to be evaluated in different shooting scenes.
In one implementation, the light source parameters include the number of light sources, the brightness of the light sources, and the color temperature of the light sources. If there are a plurality of light sources, the light sources are identical and are distributed around the standard color card so that their light falls uniformly on the card. In this way, the video temporal noise of the electronic device can be evaluated under different light source parameters, improving the comprehensiveness of the evaluation. Meanwhile, the uniform illumination of the standard color card prevents brightness differences between color blocks from distorting the evaluation process, improving its accuracy.
In one implementation, the second acquisition unit 1202 is configured to extract the RGB images corresponding to the specified frames from the first video, convert the RGB image corresponding to each specified frame into a Lab image, and arrange the Lab images according to the order of the specified frames among all frames of the first video to obtain the second video. Converting each specified RGB frame of the first video into a Lab image effectively isolates the luminance feature of each frame; because temporal noise is related to luminance, analyzing the noise on Lab images is more convenient and yields more accurate results. A sketch of this extraction-and-conversion step follows.
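A sketch using OpenCV; the function name, the video path argument and the random-seek strategy are illustrative (a real harness might decode sequentially and keep only the wanted indices). Note that OpenCV decodes frames as BGR, hence the BGR-to-Lab conversion code.

```python
import cv2

def build_second_video(video_path, frame_indices):
    """Extract the specified frames of the first video and convert each
    one to a Lab image, preserving the frames' order in the first video."""
    cap = cv2.VideoCapture(video_path)
    lab_frames = []
    for idx in sorted(frame_indices):  # keep the original frame ordering
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, bgr = cap.read()           # OpenCV decodes frames as BGR
        if not ok:
            break
        lab_frames.append(cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab))
    cap.release()
    return lab_frames
```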
In one implementation, the specified frames include all frames in the first video; alternatively, the specified frames include a portion of the frames in the first video, located within a designated time period of the first video, where the number of frames between adjacent specified frames is a prime number. Evaluating on all frames of the first video maximizes accuracy, while evaluating on a subset improves efficiency; keeping the inter-frame gap prime ensures that the frame-sampled second video does not lock onto the periodic structure introduced by the encoding process, which improves the quality of the evaluation. A sketch of such prime-gap sampling follows.
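A minimal sketch of prime-gap frame selection; the default gap of 5 is an illustrative choice, not a value fixed by this application.

```python
def specified_frame_indices(start, end, gap=5):
    """Pick part of the frames inside the designated period [start, end).

    Adjacent picks are `gap + 1` indices apart, i.e. `gap` frames lie
    between them; `gap` must be prime so the sampling pattern cannot
    lock onto the encoder's periodic GOP structure.
    """
    is_prime = gap >= 2 and all(gap % d for d in range(2, int(gap ** 0.5) + 1))
    if not is_prime:
        raise ValueError("gap must be a prime number")
    return list(range(start, end, gap + 1))
```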
In one implementation, after acquiring the L value, the a value and the b value corresponding to each pixel point in the Lab images, the third acquisition unit 1203 is further configured to reject invalid pixel points or invalid Lab images according to the L value of each pixel point. An invalid pixel point is one whose L value differs, in absolute value, by more than a preset threshold from the L values of the pixel points at the same position in both the previous and the next frame Lab image; an invalid Lab image is a Lab image containing invalid pixel points. Rejecting invalid pixel points or images ensures that only valid pixels take part in the evaluation, prevents invalid pixels from skewing the result, and thereby effectively safeguards the accuracy of the evaluation. A sketch of this rejection rule follows.
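A numpy sketch of the rejection rule; the threshold value 8.0 is an assumption, since the text leaves the threshold as a preset parameter.

```python
import numpy as np

def invalid_pixel_mask(l_stack, threshold=8.0):
    """Flag invalid pixel points from the L channel of the second video.

    l_stack: float array of shape (T, H, W) with the L value of every
    pixel in each of the T Lab frames. A pixel in frame t (0 < t < T-1)
    is invalid when its L value differs in absolute value by more than
    `threshold` from the L values at the same position in both frame
    t-1 and frame t+1.
    Returns a boolean (T, H, W) mask, True where a pixel is invalid.
    """
    mask = np.zeros(l_stack.shape, dtype=bool)
    jump_from_prev = np.abs(l_stack[1:-1] - l_stack[:-2]) > threshold
    jump_to_next = np.abs(l_stack[1:-1] - l_stack[2:]) > threshold
    mask[1:-1] = jump_from_prev & jump_to_next  # outlier vs both neighbors
    return mask
```

Frames containing any True entry can then be dropped wholesale when the invalid-Lab-image variant is used.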
In one implementation, before the first evaluation value of each color block group in the second video is calculated according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image, the first evaluation unit 1204 is further configured to identify the mark points in the first frame Lab image in the second video, the mark points being preset on the standard color card, and to determine the position information of each color block in the first frame Lab image according to the mark points. The first evaluation unit 1204 is further configured to determine the position information of each color block in the other Lab images in the second video according to the position information of each color block in the first frame Lab image, the positions being the same across frames, and to determine each color block group of the second video according to the position information of each color block in every frame Lab image, where each color block group comprises the corresponding color block in every frame. As explained above, the continuity of the Lab frames means that identifying the color block positions in the first frame alone yields the positions in every frame, so the color blocks at the same position can be clustered quickly and accurately into color block groups, ensuring the efficiency and accuracy of calculating the first evaluation value based on them.
In one implementation, when calculating the first evaluation value of each color block group in the second video according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image, the first evaluation unit 1204 is configured to calculate the variance of the L value, the variance of the a value and the variance of the b value corresponding to each same pixel point group in each color block group, where a same pixel point group comprises the pixel points located at the same position in every frame Lab image; to calculate the mean of the variances of the L values, the mean of the variances of the a values and the mean of the variances of the b values of the same pixel point groups in each color block group; and to calculate, according to the weights of the L value, the a value and the b value, a weighted value of these three means, obtaining the first evaluation value of each color block group in the second video. As before, the per-position variances correlate the Lab frames of the second video and characterize the temporal noise at each pixel position, and the first evaluation value characterizes the temporal noise on each color block group.
In one implementation, the weight of the L value is greater than the weight of the a value and greater than the weight of the b value, so that the weighting emphasizes the luminance feature of the second video and effectively improves the accuracy of evaluating the temporal noise of each color block group in the second video.
In one implementation, when calculating the second evaluation value of the second video according to the first evaluation value of each color block group in the second video, the second evaluation unit 1205 is configured to calculate a weighted value of the first evaluation values according to the weight of each color block group, where the weight of a color block group is related to the gray value of the corresponding color block, and the smaller the gray value, the higher the weight. As explained above, the weighted second evaluation value correlates the temporal noise of all color block groups and therefore represents the temporal noise of the second video objectively and accurately, while raising the weight of the darker, more illumination-sensitive color blocks highlights the groups with the most pronounced luminance feature and improves the accuracy of the evaluation.
The embodiments of the present application also provide a computer storage medium in which program instructions are stored; when the program instructions run on a computer, the computer is caused to perform the methods of the above aspects and their respective implementations.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the methods of the above aspects and their respective implementations.
The application also provides a chip system. The chip system comprises a processor configured to support the above apparatus or device in implementing the functions involved in the above aspects, for example, generating or processing the information involved in the above methods. In one possible design, the chip system further comprises a memory for storing the program instructions and data necessary for the above apparatus or device. The chip system may consist of chips, or may include chips together with other discrete devices.
The foregoing detailed description is provided for purposes of illustration and description only, and is not intended to limit the scope of the invention.

Claims (15)

1. A method for evaluating video temporal noise, the method comprising:
acquiring a first video, wherein the first video is obtained by an electronic device to be evaluated shooting a standard color card, and the standard color card comprises at least two color blocks, wherein different color blocks correspond to different gray values;
acquiring a second video according to the first video, wherein the second video comprises Lab images corresponding to specified frames in the first video;
acquiring an L value, an a value and a b value corresponding to each pixel point in each frame of Lab image in the second video;
calculating a first evaluation value of each color block group in the second video according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image, wherein the first evaluation value refers to a weighted value of the variance of the L value, the variance of the a value and the variance of the b value corresponding to each pixel point in the same color block in each frame Lab image in the second video, the same color block refers to a color block in the same position in each frame Lab image, and the first evaluation value is used for representing time domain noise of the corresponding color block group in the second video;
and calculating a second evaluation value of the second video according to the first evaluation value of each color block group in the second video, wherein the second evaluation value refers to a weighted value of the first evaluation value of each color block group in the second video, and the second evaluation value is used for representing video time domain noise of the electronic equipment.
2. The method of claim 1, wherein the first video comprises a first video segment, the first video segment being obtained by the electronic device shooting the standard color card under at least two light source parameters; or the first video comprises at least two second video segments, wherein each second video segment is obtained by the electronic device shooting the standard color card under one light source parameter, and the at least two second video segments correspond to different light source parameters;
the Lab images corresponding to the specified frames comprise images obtained by the electronic device shooting the standard color card under each light source parameter.
3. The method of claim 2, wherein the light source parameters comprise: the number of light sources, the brightness of the light sources, and the color temperature of the light sources; and if there are a plurality of light sources, the light sources are identical and are distributed around the standard color card, so that the light of the light sources is uniformly distributed on the standard color card.
4. The method of claim 1, wherein the acquiring the second video from the first video comprises:
extracting RGB images corresponding to the specified frames from the first video;
converting the RGB image corresponding to each specified frame into a Lab image;
and arranging the Lab images corresponding to the specified frames according to the order of the specified frames among all frames of the first video, to obtain the second video.
5. The method of any of claims 1-4, wherein the specified frames comprise all frames in the first video, or the specified frames comprise a portion of the frames in the first video and are located within a specified time period of the first video, wherein the number of frames between adjacent specified frames is a prime number.
6. The method of claim 1, further comprising, after acquiring the L value, the a value and the b value corresponding to each pixel point in the Lab images:
rejecting invalid pixel points or invalid Lab images according to the L value corresponding to each pixel point in the Lab images, wherein an invalid pixel point refers to a pixel point in a Lab image whose L value differs, in absolute value, by more than a preset threshold from the L values of the pixel points at the same position in both the previous frame Lab image and the next frame Lab image, and an invalid Lab image refers to a Lab image comprising invalid pixel points.
7. The method according to claim 1, wherein before calculating the first evaluation value of each color block group in the second video according to the L value, the a value, and the b value corresponding to each pixel point in the Lab image of each frame, the method further comprises:
identifying a mark point in a Lab image of a first frame in the second video, wherein the mark point is preset on the standard color card;
determining the position information of each color block in the Lab image of the first frame according to the mark points;
determining the position information of each color block in other Lab images in the second video according to the position information of each color block in the first frame Lab image, wherein the position information of each color block in the other Lab images is the same as the position information of each color block in the first frame Lab image;
and determining each color block group of the second video according to the position information of each color block in each frame Lab image in the second video, wherein each color block group of the second video comprises a corresponding color block in each frame Lab image.
8. The method according to claim 7, wherein calculating the first evaluation value of each color block group in the second video according to the L value, the a value, and the b value corresponding to each pixel point in the Lab image of each frame comprises:
calculating the variance of the L value, the variance of the a value and the variance of the b value corresponding to the same pixel point group in each color block group in the second video, wherein the same pixel point group comprises pixel points positioned at the same position in each frame Lab image;
calculating the mean value of the variance of the L value, the mean value of the variance of the a value and the mean value of the variance of the b value of each same pixel point group in each color block group in the second video;
and calculating, according to the weights of the L value, the a value and the b value, a weighted value of the mean of the variances of the L values, the mean of the variances of the a values and the mean of the variances of the b values of each same pixel point group in each color block group in the second video, to obtain the first evaluation value of each color block group in the second video.
9. The method of claim 8, wherein the weight of the L value is greater than the weight of the a value and the weight of the L value is greater than the weight of the b value.
10. The method of claim 1, wherein calculating the second evaluation value of the second video from the first evaluation value of each color block group in the second video comprises:
and calculating the weighted value of the first evaluation value of each color block group in the second video according to the weight of the color block group, wherein the weight of the color block group is related to the gray value of the corresponding color block, and the smaller the gray value of the color block, the higher the weight of the corresponding color block group.
11. An apparatus for evaluating video temporal noise, the apparatus comprising:
the first acquisition unit is configured to acquire a first video, wherein the first video is obtained by an electronic device to be evaluated shooting a standard color card, and the standard color card comprises at least two color blocks, wherein different color blocks correspond to different gray values;
the second acquisition unit is configured to acquire a second video according to the first video, wherein the second video comprises Lab images corresponding to specified frames in the first video;
the third acquisition unit is configured to acquire an L value, an a value and a b value corresponding to each pixel point in each frame Lab image in the second video;
the first evaluation unit is configured to calculate, according to the L value, the a value and the b value corresponding to each pixel point in each frame Lab image, a first evaluation value of each color block group in the second video, wherein the first evaluation value refers to a weighted value of the variance of the L value, the variance of the a value and the variance of the b value corresponding to each pixel point in the same color block in each frame Lab image in the second video, the same color block refers to a color block located at the same position in each frame Lab image, and the first evaluation value is used to characterize the temporal noise of the corresponding color block group in the second video;
and the second evaluation unit is configured to calculate a second evaluation value of the second video according to the first evaluation value of each color block group in the second video, wherein the second evaluation value refers to a weighted value of the first evaluation values of the color block groups in the second video, and the second evaluation value is used to characterize the video temporal noise of the electronic device.
12. A system for evaluating video temporal noise, the system comprising: an erection device, a standard color card, a light source and a terminal device, wherein the terminal device comprises a memory and a processor;
the erection device is used for mounting the electronic device to be evaluated;
the standard color card comprises at least two color blocks, wherein different color blocks correspond to different gray values;
the light source is used for providing uniform illumination for the standard color card;
the memory stores program instructions that, when executed by the processor, cause the terminal device to perform the method of any of claims 1-10.
13. A terminal device, comprising: a processor and a memory; the memory stores program instructions that, when executed by the processor, cause the terminal device to perform the method of any of claims 1-10.
14. A chip system, comprising: a memory and a processor; the memory stores program instructions that, when executed by the processor, cause the chip system to perform the method of any of claims 1-10.
15. A computer storage medium having stored therein program instructions which, when run on a computer, cause the computer to perform the method of any of claims 1-10.
CN202210957168.6A (filed 2022-08-10, priority date 2022-08-10): Video time domain noise evaluation method, device and system. Status: Active. Granted as CN116055710B (en).

Priority Applications (1)

CN202210957168.6A (granted as CN116055710B, priority date 2022-08-10, filing date 2022-08-10): Video time domain noise evaluation method, device and system


Publications (2)

CN116055710A (en): published 2023-05-02
CN116055710B (en): published 2023-10-20

Family

ID=86118810

Family Applications (1)

CN202210957168.6A (priority date 2022-08-10, filing date 2022-08-10): Video time domain noise evaluation method, device and system

Country Status (1)

CN: CN116055710B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050107982A1 (en) * 2003-11-17 2005-05-19 Zhaohui Sun Method and system for noise estimation from video sequence
US20070285580A1 (en) * 2006-06-07 2007-12-13 Arthur Mitchell Temporal noise analysis of a video signal
JP2009171162A (en) * 2008-01-15 2009-07-30 Olympus Corp Video signal processor, video signal processing program, video signal processing method, electronic device
US20100118203A1 (en) * 2008-11-12 2010-05-13 Chih-Yu Cheng Method and device for estimating video noise
CN101977311A (en) * 2010-11-03 2011-02-16 上海交通大学 Multi-characteristic analysis-based CG animation video detecting method
US20120163726A1 (en) * 2010-12-28 2012-06-28 Samsung Electronics Co., Ltd. Noise filtering method and apparatus considering noise variance and motion detection
US20130155193A1 (en) * 2011-12-15 2013-06-20 Canon Kabushiki Kaisha Image quality evaluation apparatus and method of controlling the same
CN103856727A (en) * 2014-03-24 2014-06-11 北京工业大学 Multichannel real-time video splicing processing system
CN106530248A (en) * 2016-10-28 2017-03-22 中国南方电网有限责任公司 Method for intelligently detecting scene video noise of transformer station
CN108805851A (en) * 2017-04-26 2018-11-13 杭州海康威视数字技术股份有限公司 A kind of appraisal procedure and device of image noise in time domain
CN112082738A (en) * 2020-08-24 2020-12-15 南京理工大学 Performance evaluation test system and test method for color night vision camera
CN112286255A (en) * 2020-09-29 2021-01-29 北京空间飞行器总体设计部 On-orbit noise evaluation method for high-stability temperature measurement and control system
US20210303919A1 (en) * 2018-08-23 2021-09-30 Hangzhou Hikvision Digital Technology Co., Ltd. Image processing method and apparatus for target recognition
CN113612996A (en) * 2021-07-30 2021-11-05 百果园技术(新加坡)有限公司 Video denoising method and device based on time domain filtering
CN113674159A (en) * 2020-05-15 2021-11-19 北京三星通信技术研究有限公司 Image processing method and device, electronic equipment and readable storage medium
CN113709453A (en) * 2021-09-13 2021-11-26 北京车和家信息技术有限公司 Video quality evaluation method, device, equipment and medium
CN113705665A (en) * 2021-08-26 2021-11-26 荣耀终端有限公司 Training method of image transformation network model and electronic equipment
CN113706414A (en) * 2021-08-26 2021-11-26 荣耀终端有限公司 Training method of video optimization model and electronic equipment
CN113781334A (en) * 2021-08-27 2021-12-10 苏州浪潮智能科技有限公司 Method, device, terminal and storage medium for comparing difference between images based on colors

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Yirong; Li Wenlong; Liu Chuanjie: "Video image denoising algorithm based on a two-stream network", China New Technology and New Products, no. 16 *

Also Published As

Publication number Publication date
CN116055710B (en) 2023-10-20

Similar Documents

Publication Title
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN111050269B (en) Audio processing method and electronic equipment
CN108307125B (en) Image acquisition method, device and storage medium
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
KR100657522B1 (en) Apparatus and method for out-focusing photographing of portable terminal
CN112449120A (en) High dynamic range video generation method and device
CN116055712B (en) Method, device, chip, electronic equipment and medium for determining film forming rate
CN108718388B (en) Photographing method and mobile terminal
CN114422340B (en) Log reporting method, electronic equipment and storage medium
CN113709464A (en) Video coding method and related device
CN114610193A (en) Content sharing method, electronic device, and storage medium
CN113052056A (en) Video processing method and device
CN112188094B (en) Image processing method and device, computer readable medium and terminal equipment
CN116055710B (en) Video time domain noise evaluation method, device and system
KR101750058B1 (en) Apparatus and method for generating high dynamic range image
CN113096022B (en) Image blurring processing method and device, storage medium and electronic device
CN116055894B (en) Image stroboscopic removing method and device based on neural network
CN116091392B (en) Image processing method, system and storage medium
CN114724055A (en) Video switching method, device, storage medium and equipment
CN110321782B (en) System for detecting human body characteristic signals
CN111918047A (en) Photographing control method and device, storage medium and electronic equipment
CN115412678B (en) Exposure processing method and device and electronic equipment
CN115546514B (en) Picture noise calculation method and device and picture test system
CN115631250B (en) Image processing method and electronic equipment
CN113052815B (en) Image definition determining method and device, storage medium and electronic equipment

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant