CA2739189C - System and method for video quality parametric tests - Google Patents

System and method for video quality parametric tests

Info

Publication number
CA2739189C
CA2739189C CA2739189A CA2739189A CA2739189C CA 2739189 C CA2739189 C CA 2739189C CA 2739189 A CA2739189 A CA 2739189A CA 2739189 A CA2739189 A CA 2739189A CA 2739189 C CA2739189 C CA 2739189C
Authority
CA
Canada
Prior art keywords
pixels
signal
ratio
video
imaging device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA2739189A
Other languages
French (fr)
Other versions
CA2739189A1 (en)
Inventor
Vicente Miranda
Rafael Villanueva
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Contec LLC
Original Assignee
Contec LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Contec LLC filed Critical Contec LLC
Priority to CA2739189A priority Critical patent/CA2739189C/en
Publication of CA2739189A1 publication Critical patent/CA2739189A1/en
Application granted granted Critical
Publication of CA2739189C publication Critical patent/CA2739189C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems

Abstract

A method and system for automatically evaluating the quality of a signal transmitted by an imaging device. The amount of noise in the video signal is analyzed using an algorithm that does not require comparison to a template signal. The results of the algorithm determine whether the signal meets a pre-determined threshold value for signal quality.

Description

System and Method for Video Quality Parametric Tests

Field

[0001] The invention relates to a method and system for evaluating video image quality. Specifically, this invention relates to a means for determining video image quality without comparing the image to a reference pattern. Herein, "image" refers to still or motion images (i.e. video).

Background
[0002] Video providers seek to provide the clearest possible signal to an end user. While the clearest signal would be one that is sent from a nearby originating source directly to an individual user, it is cost prohibitive to send signals in such a manner.
Television signals are generally sent from a satellite to a cable television headend, or they may originate from digital files stored by a video server at the headend. The signal is then merged with locally generated content and advertising for distribution. In order to maximize available resources, signals are multiplexed so that multiple signals can be sent over a single communication channel. The multiplexed signal is received by an imaging device such as a set top box, demultiplexed and decoded and then sent to a video display device such as a television or computer screen. During this process, the video quality can degrade at one or more points resulting in an image that is not satisfactory for the viewer.
While the quality of the video image can be instantly evaluated by human viewers, subjective testing is time consuming and is not useful for operational monitoring, line testing, or other types of periodic or continual monitoring and testing that are required to ensure the fidelity of a transmission. Subjective testing also varies according to the user's point of view.
[0003] Automated image evaluation systems compare the transmitted and/or decoded signals to a template or other "golden" standard signal. Such systems measure things like the Peak Signal to Noise Ratio (PSNR) using metrics such as Mean Squared Error (MSE) calculations. Typical values for the PSNR in lossy image and video compression are between 30 and 50 dB, where higher is better. Acceptable values for wireless transmission quality loss are considered to be about 20 dB to 25 dB. However, PSNR does not always rank the quality of an image or video sequence in the same way that a person would. Furthermore, the original video or template is generally not available at an arbitrary place for evaluation.
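By way of comparison, a reference-based PSNR computation is straightforward when a template is available; the following is a minimal sketch, assuming 8-bit grayscale NumPy arrays of identical shape (it is not part of the claimed no-reference method):

```python
import numpy as np

def psnr(reference, test, max_value=255.0):
    """Peak Signal-to-Noise Ratio between a reference and a test image.

    Both inputs are assumed to be 8-bit grayscale images of identical shape.
    Returns infinity for identical images.
    """
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((reference - test) ** 2)   # Mean Squared Error
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_value ** 2) / mse)
```

The practical limitation noted above is precisely that the `reference` frame is generally unavailable at the point of test, which motivates the no-reference approach described below.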
[0004] There is therefore a need in the art for an automated means of identifying degradation in a video image without requiring comparison to a reference image.
Brief Description of the Drawings
[0005] Fig. 1 illustrates a system for evaluating video quality according to an embodiment of the invention.
[0006] Fig. 2 illustrates a process for testing a signal according to an embodiment of the invention.
[0007] Fig. 3 illustrates a process for evaluating the amount of noise in a signal according to an embodiment of the invention.
[0008] Fig. 4 illustrates a process for determining the acceptable amount of noise in a signal according to an embodiment of the invention.
[0009] Fig. 5 illustrates a process for evaluating the amount of noise in a signal according to an embodiment of the invention.

Detailed Description
[0010] Overview
[0011] References to "one embodiment" or "an embodiment" do not necessarily refer to the same embodiment, although they may.
[0012] Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to." Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words "herein," "above," "below" and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the claims use the word "or" in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.
[0013] In some embodiments the present invention evaluates video impairments such as "noise" or "snow" in a video image by evaluating differences between nearby pixels.
Such a process may be used on still or motion images produced, decoded, or transmitted by any type of imaging device (e.g. a set top box). In some embodiments, the evaluation is used to determine the functioning of the imaging device. The testing may be used with any type of imaging device such as a set top box or any other device that translates image signals and sends them to a display.
[0014] Detailed Description of Certain Embodiments
[0015] An exemplary imaging system 100 is shown in Fig. 1 where one or more video or image sources 110 send a signal to an imaging device 112 such as a set top box. The imaging device 112 comprises a processor 114, memory 116 and logic 118. The imaging device 112 may also comprise other components generally found in a set top box including, but not limited to, a receiver and a modem. In some embodiments, the processor 114 and memory 116 may be part of a dedicated processor. The memory may represent disk-based optical or magnetic storage units, electronic memories, as well as portions or combinations of these and other memory devices. In some embodiments, the memory may be in a separate device from the processor. The logic may be executed using an application-specific integrated circuit (ASIC), hardware, software or a combination thereof (see Implementations and Alternatives). The devices in system 100 may communicate by any means known to those skilled in the art, for example through a bus, a communication network, or one or more internal connections. In some embodiments, the communication may be wireless.
[0016] The term "processor" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
[0017] In some embodiments, the imaging device 112 may include an MPEG
decoding processor and/or QAM (quadrature amplitude modulation) decoder. The imaging device may also comprise a graphics processor. Graphics processors are useful in processing graphics intensive data, particularly 3-D data, associated with Internet browsing, gaming and multimedia applications such as those associated with MHEG (Multimedia and Hypermedia information coding Experts Group) set-top box applications.
[0018] The imaging device 112 decodes and reformats the signal and then sends the translated signal to a display device 120 and/or to a testing device 122. The display device 120 may be any sort of screen, monitor, television or other display medium generally used for displaying images. The testing device 122 may be any device used to monitor or test an imaging device such as a set top box. For example, the testing device 122 may be a QuickTest-based automated solution.
[0019] In addition to transmitting signals, the imaging device 112 may also receive signals from the display device 120 or the testing device 122. In some embodiments, the imaging device 112 may also send signals back to the image source such as a head end.
The testing device 122 may comprise a processor 124, logic 126 and memory 128.
The memory 128 may represent disk-based optical or magnetic storage units, electronic memories, as well as portions or combinations of these and other memory devices. Like the imaging device, in some embodiments the memory of the testing device may be in a separate device from the processor. The logic may be implemented in many ways, for example using an application-specific integrated circuit (ASIC), hardware, software or a combination thereof. The devices may communicate by any means known to those skilled in the art, for example through a bus, a communication network, or one or more internal connections.
[0020] In some embodiments, the testing device 122 is coupled to a computer for use as a diagnostic tool using a web interface. In other embodiments, the diagnostic system is comprised by the testing device 122. The testing device 122 comprises logic described in further detail below and makes a determination regarding the quality of the signal transmitted by the imaging device 112. If the quality of the signal meets pre-set standards, the imaging device 112 is determined to be properly functioning. If the imaging device fails to meet the preset standards, the imaging device 112 is determined to require repair or replacement.
[0021] Referring to Fig. 2, a signal 210 is received from a head end, via a network and a coaxial cable or other receiving mechanism, by an imaging device such as a set top box.
Generally, data streams are multiplexed together into a transport stream so that many video and audio channels are received simultaneously, together with various services.
The imaging device tunes a particular channel and service. Video data may be in an MPEG format (Moving Picture Experts Group) or other type of coded format and is decoded 214 into a signal that can be received by a testing or display device.
In this instance, the decoded signal is sent to the testing device 216, for example using S-Video or CVBS.
[0022] As shown in Fig. 3, the test device receives the signal 310 from an imaging device. In some embodiments, the video signal may comprise smooth areas in the background of the images. In other embodiments, the signal may be a color bar or other solid test pattern. The Y component of the video signal is extracted 312. The Y component represents the luma information; however, any component that conveys the brightness of an image may be used. Generally, the weighted values of R, G, and B are added together to produce a single Y signal, representing the overall brightness of a particular spot. A filter is applied 312 to identify noisy pixels. The filter is used to compute the second derivatives of an image, which measure the rate at which the first derivatives change. This determines if a change in adjacent pixel values is an edge or a continuous progression.
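A minimal sketch of the luma-extraction step, assuming an RGB frame held as a NumPy array; the BT.601 weights shown are an assumption for illustration, since the description only requires some weighting that conveys overall brightness:

```python
import numpy as np

def extract_luma(rgb_frame):
    """Return the Y (luma) component of an RGB frame.

    rgb_frame: uint8 array of shape (height, width, 3).
    The BT.601 weights used here are an assumption; any weighting that
    conveys the overall brightness of a spot could be substituted.
    """
    r = rgb_frame[..., 0].astype(np.float64)
    g = rgb_frame[..., 1].astype(np.float64)
    b = rgb_frame[..., 2].astype(np.float64)
    return 0.299 * r + 0.587 * g + 0.114 * b
```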
[0023] The filter can be any filter which can be used to determine if a change in adjacent pixel values is an edge or a continuous progression. For example, the filter may be a Laplace filter or directional filter or any other filter that recognizes small changes in contrast around pixels. In one embodiment the filter is a 2nd order Laplace filter for the 2nd spatial derivative of an image. It is omnidirectional (i.e. it can detect pixel change in any direction). The filter applies convolution with a kernel matrix of or similar to:

-1 -1 -1
-1  8 -1
-1 -1 -1

[0024] The ratio of noise pixels to total pixels is then determined 314 and compared to a pre-determined threshold 316. Results of the comparison may be used locally by the test device to identify the unit under test as faulty. In some embodiments the comparison may be sent to the head end. In other embodiments the comparison may result in a particular code appearing on the set top box or the testing device. In further embodiments, the ratio may be sent to a web interface for comparison to the pre-determined threshold for a particular model of imaging device.
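A minimal sketch of the filtering, binarization and ratio steps, assuming the standard 3x3 omnidirectional Laplacian kernel shown above; the binarization threshold is an illustrative value, not one taken from the patent:

```python
import numpy as np
from scipy.signal import convolve2d

# Standard omnidirectional 3x3 Laplacian kernel (assumed; the description
# calls for "a kernel matrix of or similar to" this form).
LAPLACE_KERNEL = np.array([[-1, -1, -1],
                           [-1,  8, -1],
                           [-1, -1, -1]], dtype=np.float64)

def noise_pixel_ratio(luma, binarize_threshold=32.0):
    """Ratio of noisy pixels to total pixels in the second derivative.

    luma: 2-D array of luma values.
    binarize_threshold: illustrative magnitude above which a filtered pixel
    is counted as noisy (an assumption, not a value from the patent).
    """
    second_derivative = convolve2d(luma, LAPLACE_KERNEL,
                                   mode="same", boundary="symm")
    noisy = np.abs(second_derivative) > binarize_threshold  # binarize
    return noisy.sum() / noisy.size
```

Note that the mean of the binarized image equals the ratio of noisy pixels to total pixels, which is consistent with the description's use of "(mean value)" for this quantity.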
[0025] As shown in Fig. 4, threshold values for excessive noise in a signal may be determined by sending a signal 410 to an imaging system. The signal is measured for attenuation 420 and then measured again 414 just prior to input into the device under test 416. The signal is provided to the testing device 418, the mean noise value is determined 420, and from this value a determination is made whether the noise is 'visible' 422.
[0026] In one embodiment a National Instruments™ PXI-1411 may be employed to acquire the image. The image may be displayed on a CRT monitor to evaluate the picture quality and determine when the noise is visible to human vision. The amount of noise that enters the device under test may be controlled via a programmable RF attenuator. The device under test's gain control may attempt to compensate for the noise. If the device under test is overstressed, the noise may become 'visible' as the device compensation is overwhelmed.
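The calibration procedure of Fig. 4 could be scripted roughly as follows. This sketch reuses the `noise_pixel_ratio` helper from the earlier example; the `attenuator`, `capture_frame` and `judge_visible` interfaces are hypothetical stand-ins for the actual programmable RF attenuator, frame-acquisition hardware and human visibility judgment, and the default attenuation steps mirror those used in the example measurements below:

```python
def calibrate_visibility_threshold(attenuator, capture_frame, judge_visible,
                                   attenuation_steps_db=(0, 6, 9, 12, 15, 18)):
    """Sweep RF attenuation and record the mean noise value at each step.

    attenuator:     hypothetical object with a set_attenuation_db() method.
    capture_frame:  callable returning a luma frame from the device under test.
    judge_visible:  callable (e.g. a human viewing a CRT monitor) returning
                    True when noise is visible in the displayed picture.
    Returns a list of (attenuation_db, mean_noise_value, visible) tuples.
    """
    results = []
    for attenuation_db in attenuation_steps_db:
        attenuator.set_attenuation_db(attenuation_db)
        mean_value = noise_pixel_ratio(capture_frame())
        results.append((attenuation_db, mean_value, judge_visible()))
    return results
```

The lowest mean value at which the noise is judged visible can then be taken as the pre-determined threshold for that model of imaging device.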
[0027] In Fig. 5, a signal is received 510 by a testing device from the imaging device or other device under test. The testing device extracts the Y signal from the color image 520. In some embodiments, an image in RGB format may be converted to other formats (e.g. HSL) before extraction of the luma component. A Laplace filter or other filter is applied 530 to recognize small changes in contrast around pixels. The image is then binarized and the noisy pixels are counted 540. The numbers of noisy and normal pixels are statistically analyzed and the ratio determined (mean value) 560. The mean value is compared to a threshold value determined by any means applicable, for example as in Fig. 4. If the mean value exceeds the pre-determined threshold, the signal is rejected 580. If the mean value is below the pre-determined threshold, the signal is accepted. One or more rejected signals from a particular imaging device under test indicate that the device under test is faulty. In some embodiments, a pre-determined number of signals must be rejected before an imaging device is determined to be faulty.
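Tying the Fig. 5 flow together, a minimal end-to-end sketch using the earlier `extract_luma` and `noise_pixel_ratio` helpers; the threshold and the number of rejections required before declaring a fault are model-specific parameters, not values fixed by the patent:

```python
def evaluate_device(frames, threshold, min_rejections=1):
    """Accept or reject an imaging device from a set of captured frames.

    frames:         iterable of RGB frames captured from the device under test.
    threshold:      pre-determined noise ratio above which a frame is rejected
                    (model-specific; determined e.g. as in Fig. 4).
    min_rejections: number of rejected frames required before the device is
                    reported faulty (an assumption; the description allows a
                    pre-determined number of rejections).
    Returns True if the device passes, False if it is reported faulty.
    """
    rejected = 0
    for frame in frames:
        luma = extract_luma(frame)
        if noise_pixel_ratio(luma) > threshold:
            rejected += 1
    return rejected < min_rejections
```

In practice the pass/fail result would then be surfaced as described above: a code on the testing device or set top box, a message to the head end, or a comparison performed through a web interface.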
[0028] Reporting that the device is faulty may result in a signal appearing in the testing device, a signal sent to the device under test, a signal sent to the head end or any other means of indicating that the imaging device under test is faulty.
[0029] Example Determination of Threshold Values

[0030] A signal was transmitted from the Contec headend to four SA 4200HD
devices (Scientific Atlanta, San Jose, CA) tuned to Channel 53. The RF signal attenuation was measured prior to receipt by the SA 4200HD devices. The attenuation level was measured using a spectrum analyzer at a test point at the input of the SA4200HD and the RF level reading from SA4200HD determined therefrom.
[0031] The signal was then processed by the SA 4200HD devices and outputted to an IMAQ PXI-1411 Video Image Acquire Card (National Instruments, Austin, TX). The mean value of the attenuation was determined and the visibility of the noise in the video image was evaluated.
[0032] As shown below, a mean value of 2.3 or higher resulted in visible noise in the video image.

Measure of Noise in Analog Video Channel

Unit #1 - SA 4200HD
RF Attenuation (dB)   DUT RF Input (dBmV)   Mean Value   Noise Visible?
 0                      0                    0.03         no
 6                     -5                    0.04         no
 9                     -9                    0.06         no
12                    -12                    0.15         no
15                    -14                    0.64         no
18                    -18                    2.5          yes

Unit #2 - SA 4200HD
RF Attenuation (dB)   DUT RF Input (dBmV)   Mean Value   Noise Visible?
 0                      1                    0.04         no
 6                     -4                    0.04         no
 9                     -8                    0.08         no
12                    -11                    0.17         no
15                    -13                    0.7          no
18                    -17                    2.8          yes

Unit #3 - SA 4200HD
RF Attenuation (dB)   DUT RF Input (dBmV)   Mean Value   Noise Visible?
 0                      1                    0.04         no
 6                     -4                    0.04         no
 9                     -8                    0.07         no
12                    -11                    0.14         no
15                    -14                    0.55         no
18                    -17                    2.3          yes

Unit #4 - SA 4200HD
RF Attenuation (dB)   DUT RF Input (dBmV)   Mean Value   Noise Visible?
 0                      0                    0.03         no
 6                     -6                    0.05         no
 9                     -9                    0.08         no
12                    -12                    0.18         no
15                    -15                    0.74         no
18                    -18                    2.8          yes

Implementations and Alternatives

In general, the techniques, procedures, devices and systems described herein may be implemented via logic distributed in one or more devices. The particular distribution and choice of logic is a design decision that will vary according to implementation.
[0033] "Logic" refers to physical embodiments of signals and/or information that may be applied to influence the operation of a device. Software, hardware, electronic and optical memory, and firmware are examples of physical structure that may embody logic.
Hardware logic may be embodied in circuits. In general, logic may comprise combinations of software, hardware, and/or firmware.
[0034] Those skilled in the art will appreciate that logic may be distributed throughout one or more devices, and/or may be comprised of combinations of instructions in memory, processing capability, circuits, and so on. Therefore, in the interest of clarity and correctness logic may not always be distinctly illustrated in drawings of devices and systems, although it is inherently present therein.
[0035] Those having skill in the art will appreciate that there are various logic implementations by which processes and/or systems described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a solely software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
Hence, there are several possible vehicles by which the processes described herein may be effected, none of which is inherently superior to the other in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations may involve optically-oriented hardware, software, and/or firmware.

[0036] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples.
Insofar as such block diagrams, flowcharts, and/or examples comprise one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. Several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, and computer memory; and transmission type media such as digital and analog communication links using TDM or IP based communication links (e.g., packet links).
[0037] In a general sense, those skilled in the art will recognize that the various aspects described herein which can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or any combination thereof can be viewed as being composed of various types of "electrical circuitry." Consequently, as used herein "electrical circuitry" includes, but is not limited to, electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, electrical circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), electrical circuitry forming a memory device (e.g., forms of random access memory), and/or electrical circuitry forming a communications device (e.g., a modem, communications switch, or optical-electrical equipment).
[0038] Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use standard engineering practices to integrate such described devices and/or processes into larger systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a network processing system via a reasonable amount of experimentation.
[0039] The foregoing described aspects depict different components comprised within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated"
such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected", or "operably coupled", to each other to achieve the desired functionality.

Claims (19)

We claim:
1. A method for measuring the amount of noise in a video image, the method comprising:
receiving a signal from an imaging device; extracting a luma component from a color image from the imaging device; applying a filter to compute the second derivatives of the extracted luma component; determining a ratio of noise pixels to total pixels in the second derivative; and comparing the ratio to a pre-determined ratio.
2. The method of claim 1, wherein the filter is a Laplace filter.
3. The method of claims 1 or 2, wherein pixels are binarized prior to determining the ratio of noise pixels to total pixels.
4. The method of any one of claims 1 to 3, wherein the signal comprises smooth areas in the background of the images.
5. The method of any one of claims 1 to 4, wherein the signal comprises a color bar pattern.
6. The method of any one of claims 1 to 5, wherein if the ratio exceeds the pre-determined ratio the signal is rejected.
7. The method of any one of claims 1 to 6, wherein if the ratio is less than the pre-determined ratio the signal is accepted.
8. The method of any one of claims 1 to 7, wherein if the ratio exceeds the pre-determined ratio, the imaging device is determined to be faulty.
9. An apparatus for determining the functioning of an imaging device comprising: a processor; a memory, coupled to the processor, wherein the memory is configured to allow the processor to: obtain a video signal; extract a luma component from the video signal; filter the luma component to determine a ratio of noise pixels to total pixels in a second derivative of the extracted luma component; and compare the ratio of noise pixels to total pixels to a pre-determined ratio;
wherein if the processor determines that the ratio of noise pixels to total pixels exceeds the pre-determined ratio, the processor emits an error code indicating that the imaging device is non-functioning.
10. The apparatus of claim 9, wherein the filter is a Laplace filter.
11. The apparatus of claim 9, wherein the filter is an omnidirectional filter.
12. The apparatus of any one of claims 9 to 11, wherein the filter recognizes small changes in contrast around pixels.
13. The apparatus of any one of claims 9 to 12, wherein the luma component is the Y signal of a color image.
14. A system for video quality assessment comprising: a video imaging device; a testing device for measuring the amount of noise in a video image from the video imaging device; wherein the testing device comprises logic that enables determination of the amount of noise in a video picture by evaluating the contrast between neighboring pixels and calculating the ratio of noisy pixels to total pixels in a second derivative of extracted luma components of the video image.
15. The system of claim 14, wherein the contrast between neighboring pixels is determined by applying a Laplace filter.
16. The system of claim 14, wherein the contrast between neighboring pixels is determined by applying an omnidirectional filter.
17. The system of any one of claims 14 to 16, wherein the video imaging device sends a signal to the testing device.
18. The system of any one of claims 14 to 17, wherein the contrast between pixels is determined in a Y signal.
19. The system of any one of claims 14 to 18, comprising logic to control an RF attenuation of a video image signal to induce noise in the video imaging device.
CA2739189A 2011-05-05 2011-05-05 System and method for video quality parametric tests Active CA2739189C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA2739189A CA2739189C (en) 2011-05-05 2011-05-05 System and method for video quality parametric tests

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA2739189A CA2739189C (en) 2011-05-05 2011-05-05 System and method for video quality parametric tests

Publications (2)

Publication Number Publication Date
CA2739189A1 CA2739189A1 (en) 2012-11-05
CA2739189C true CA2739189C (en) 2018-07-17

Family

ID=47143327

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2739189A Active CA2739189C (en) 2011-05-05 2011-05-05 System and method for video quality parametric tests

Country Status (1)

Country Link
CA (1) CA2739189C (en)

Also Published As

Publication number Publication date
CA2739189A1 (en) 2012-11-05


Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20160420