WO2009055318A1 - Method and system for processing videos - Google Patents


Info

Publication number: WO2009055318A1
Authority: WIPO (PCT)
Application number: PCT/US2008/080420
Other languages: French (fr)
Inventors: Ankit Rattan Arora, Mukul Chowdhary
Original assignee: Motorola, Inc.
Application filed by Motorola, Inc.

Classifications

    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (H — Electricity; H04 — Electric communication technique; H04N — Pictorial communication, e.g. television)
    • H04N19/467 — Embedding additional information in the video signal during the compression process, the embedded information being invisible, e.g. watermarking
    • H04N19/107 — Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/14 — Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/176 — Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/184 — Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N19/61 — Transform coding in combination with predictive coding


Abstract

A method and system (500) for processing a video are provided. The video includes a plurality of frames. The method includes receiving (604) a frame from a video source. The frame includes a plurality of sub-units. Further, the method includes initializing (608) each sub-unit with a validity metric corresponding to an invalid status when the first frame is received from the video source. Furthermore, the method includes assigning (612) a sub-unit a validity metric corresponding to a valid status when it is an intra-coded sub-unit. Moreover, the method includes assigning (704) a sub-unit a validity metric corresponding to a first validity status when it is a predicted sub-unit that has been derived by using motion compensation on a set of sub-units in the reference frames.

Description

METHOD AND SYSTEM FOR PROCESSING VIDEOS
FIELD OF INVENTION
[0001] The present invention relates, in general, to video processing and more specifically, to a method and system for processing a video for optimal rendering of the video.
BACKGROUND OF THE INVENTION
[0002] Video content has become an intrinsic form of data in recent years. Typically, a video comprises a plurality of temporally ordered images known as 'frames'. Each frame carries a substantial amount of information, which makes the frame large in size. Consequently, the video is also large in size, which significantly increases its storage requirements and transmission costs and also causes transmission delays. However, the disadvantages associated with the large size of the videos can be reduced by compressing the video by using various video-coding techniques known in the art.
[0003] Some known video-coding techniques compress the video by reducing its temporal redundancy, which exists because adjacent frames differ only slightly. Temporal redundancy can be reduced by registering only the changes from a first frame to a second frame while storing the second frame. Further, each frame is divided into sub-units called Macro-Blocks (MBs), which can be encoded individually. MBs that encode image information and are not dependent on other MBs for decoding are known as Intra-coded MBs (I-MBs). Those MBs that require references to other MBs for decoding are known as Predicted MBs (P-MBs). An Intra-coded frame (I-frame) is a frame that contains only image information and correspondingly only I-MBs, and a Predicted frame (P-frame) is a frame that can be derived by using references to other frames, and therefore, can contain P-MBs as well as I-MBs.
[0004] Generally, the first frame of a video is an I-frame, which acts as a reference frame for subsequent P-frames. The reference I-frame is required to correctly decode and clearly display the P-frames. However, often the reference I-frame can either be unavailable or corrupt. Examples of such scenarios include playing a video with a missing or corrupt first I-frame, joining a live video stream in the middle, rewinding or fast-forwarding a video, and other similar scenarios. In such scenarios, the display can be distorted to begin with, and can clear up gradually as I-MBs are received in P-frames. This distorted display can be quite objectionable and inconvenient for a user.
[0005] An existing technique attempts to reduce the inconvenience caused to the user due to the distorted display. The technique tracks the arrival of I-MBs in various parts of the frame. The arrival of I-MBs in a part of the frame clears the display in that part. This is known as 'refreshing' of that particular part of the frame. In this technique, a Scoreboard is maintained that is refreshed along with the frame. Thereafter, display of the video is initiated when the whole Scoreboard has been refreshed and the display is absolutely clear. However, the technique does not account for the scenario where a frame clear enough for display corresponds to a partially refreshed Scoreboard. Further, the technique does not account for the scenario where a fully refreshed Scoreboard does not correspond to a frame clear enough for display.

[0006] In one such scenario, underlying motion in the video can cause the display to clear faster because un-refreshed parts of the frame can be over-written by refreshed data due to the underlying motion. On the other hand, underlying motion in the video may also cause the display of the video to begin prematurely. In this case, even though the entire frame has been refreshed, the display may still be distorted because refreshed data may have been over-written by un-refreshed data due to the underlying motion. However, the technique mentioned above may not be able to account for these scenarios.
[0007] In light of the above, there is a need for a method and system for processing videos that is capable of optimally rendering the video in case the reference I-frame is unavailable. Further, the method and system should be able to account for the underlying motion in the video.
BRIEF DESCRIPTION OF THE FIGURES
[0008] The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate figures, and which, together with the detailed description below, are incorporated in and form part of the specification, serve to further illustrate various embodiments and explain various principles and advantages, all in accordance with the present invention.
[0009] FIG. 1 illustrates a technique for displaying a video based on a Scoreboard by using an exemplary 'Akiyo' video sequence, in accordance with the prior art;

[0010] FIG. 2 illustrates a scenario of delay in display of a video by using an exemplary 'Bus' video sequence, in accordance with the prior art;
[0011] FIG. 3 illustrates a scenario of premature display of a video by using an exemplary 'Foreman' video sequence, in accordance with the prior art;
[0012] FIG. 4 illustrates an exemplary system where various embodiments of the present invention can be practiced;
[0013] FIG. 5 illustrates a block diagram of an exemplary system for processing videos, in accordance with an embodiment of the present invention;
[0014] FIGs 6 and 7 illustrate a flow diagram of a method for processing a video, in accordance with an embodiment of the present invention;
[0015] FIGs 8, 9 and 10 illustrate a flow diagram of a method for processing a video, in accordance with another embodiment of the present invention;
[0016] FIG. 11 is a flow diagram illustrating a method for storing a validity metric of a sub-unit, in accordance with an embodiment of the present invention;
[0017] FIG. 12 is a flow diagram illustrating a method for a full-pixel motion-compensation technique for a predicted sub-unit, in accordance with an embodiment of the present invention;
[0018] FIG. 13 is a flow diagram illustrating a method for a modified sub-pixel motion compensation technique for a predicted sub-unit, in accordance with an embodiment of the present invention; and

[0019] FIG. 14 is a flow diagram illustrating a method for accounting for a corrupt sub-unit while processing a video, in accordance with an embodiment of the present invention.
[0020] Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated, relative to other elements, to help in improving an understanding of the embodiments of the present invention.
DETAILED DESCRIPTION
[0021] Before describing in detail the particular method and system for processing videos, in accordance with various embodiments of the present invention, it should be observed that the present invention resides primarily in combinations of method steps related to the method and system for processing videos. Accordingly, the apparatus components and method steps have been represented, where appropriate, by conventional symbols in the drawings, showing only those specific details that are pertinent for an understanding of the present invention, so as not to obscure the disclosure with details that will be readily apparent to those with ordinary skill in the art, having the benefit of the description herein.
[0022] In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements does not include only those elements but can include other elements not expressly listed or inherent to such a process, method, article or apparatus. An element preceded by "comprises ... a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article or apparatus that comprises the element. The term "another," as used in this document, is defined as at least a second or more. The term "includes", as used herein, is defined as comprising.
[0023] For one embodiment, a method for processing a video is provided. The video includes a plurality of frames. The method includes receiving a frame of the plurality of frames from a video source. Each frame of the plurality of frames includes a plurality of sub-units. Further, the method includes initializing each sub-unit of the plurality of sub-units with a validity metric corresponding to an invalid status. The initialization step is performed when the frame is a first frame of the plurality of frames received from the video source. Moreover, the method includes assigning a sub-unit the validity metric corresponding to a valid status when the sub-unit is an intra-coded sub-unit. Furthermore, the method includes assigning a sub-unit the validity metric corresponding to a first validity status when the sub-unit is a predicted sub-unit that is derived by using motion compensation on a set of sub-units in one or more reference frames. Furthermore, the method includes deciding when to display the frame on a display device, based on a set of predefined criteria.
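As a rough illustrative sketch of the bookkeeping described above (the function names, field names and data layout below are assumptions for illustration, not taken from the patent), the per-frame validity update might look like:

```python
# Hypothetical sketch of the validity-metric update; sub-units are modeled
# as dicts and the validity map as a dict of index -> status.
VALID, INVALID = 1, 0

def derive_validity(reference_statuses):
    # Majority vote over the referenced sub-units' statuses; a tie
    # defaults to VALID here (one possible predefined tie policy).
    valid_count = sum(1 for s in reference_statuses if s == VALID)
    return VALID if valid_count >= len(reference_statuses) - valid_count else INVALID

def process_frame(sub_units, validity, is_first_frame):
    """Update the per-sub-unit validity map for one received frame.

    sub_units -- list of dicts: {"intra": bool, "refs": [reference indices]}
    validity  -- dict mapping sub-unit index to VALID/INVALID
    """
    if is_first_frame:
        for i in range(len(sub_units)):
            validity[i] = INVALID          # initialization: start fully invalid
    for i, su in enumerate(sub_units):
        if su["intra"]:
            validity[i] = VALID            # intra-coded sub-unit => valid
        else:
            # predicted sub-unit inherits validity from the reference
            # sub-units used for motion compensation
            refs = [validity[r] for r in su["refs"]]
            validity[i] = derive_validity(refs)
    return validity
```

A display decision (based on the predefined criteria discussed later) would then be taken on the returned map after each frame.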
[0024] For another embodiment, a system capable of processing videos is provided. The system includes a receiver that is capable of receiving a frame of a plurality of frames of a video from a video source. Each frame of the plurality of frames includes a plurality of sub-units. Further, the system includes a processor that is configured to initialize each sub-unit of the plurality of sub-units with a validity metric corresponding to an invalid status. The sub-units are initialized when the frame is a first frame of the plurality of frames received from the video source. The processor is also configured to assign a sub-unit the validity metric corresponding to a valid status when the sub-unit is an intra-coded sub-unit. Moreover, the processor is configured to assign a sub-unit the validity metric corresponding to a first validity status when the sub-unit is a predicted sub-unit that is derived by using motion compensation on a set of sub-units in one or more reference frames. Furthermore, the processor is configured to decide when to display the frame, based on a set of predefined criteria.
[0025] FIG. 1 illustrates a technique for displaying a video, based on a Scoreboard, by using an exemplary 'Akiyo' video sequence, in accordance with the prior art. The Akiyo video sequence is a standard video test sequence that is used to analyze various characteristics of video-coding techniques. Other examples of standard video test sequences include, but are not limited to, the 'Bus' video sequence, the 'Foreman' video sequence and the 'Flower' video sequence. For one embodiment, the technique tracks the arrival of Intra-coded Macro Blocks (I-MBs) in various parts of a frame. The arrival of I-MBs in a part of the frame clears that part of the frame. This is known as refreshing of that particular part of the frame. In this technique, a Scoreboard is maintained that is refreshed along with the frame. The technique starts the display of the frame when the whole Scoreboard has been refreshed.

[0026] Referring to FIG. 1, various frames of the Akiyo video sequence and the corresponding Scoreboards maintained by the technique have been illustrated. For the purpose of describing the technique, frame 102, frame 104 and frame 106 of the Akiyo video sequence and corresponding Scoreboard 108, Scoreboard 110 and Scoreboard 112 have been shown. For one embodiment, the frame 102 can show frame number 8 of the Akiyo video sequence, which has been refreshed in very few parts. The Scoreboard 108 can be the Scoreboard corresponding to the frame 102. The parts in Scoreboard 108 that are white can be the parts that have been refreshed. These parts overlap with the clear parts in frame 102. Similarly, the frame 104 can show frame number 64 and the frame 106 can show frame number 132 of the Akiyo video sequence. The Scoreboard 110 can be the Scoreboard corresponding to the frame 104, and the Scoreboard 112 can be the Scoreboard corresponding to the frame 106. As seen in FIG. 1, the frame 104 is much clearer as compared to the frame 102, and the frame 106 is absolutely clear. Correspondingly, as also seen in FIG. 1, the Scoreboard 110 has many more refreshed parts than the Scoreboard 108, and the Scoreboard 112 is completely refreshed. Further, the technique can begin the display of the video on a display device at the reception of the frame 106, when the Scoreboard 112 is completely refreshed.
[0027] FIG. 2 illustrates a scenario of delay in display of a video by using an exemplary 'Bus' video sequence, in accordance with the prior art. The Bus video sequence is a standard video test sequence that is used to analyze various characteristics of video-coding techniques. For the purpose of describing the scenario of delay in the display of the video, a frame 202 of the Bus video sequence and corresponding Scoreboard 204 have been shown. The frame 202 can show frame number 20 of the video sequence. The Scoreboard 204 can be the Scoreboard corresponding to the frame 202. It can be seen in FIG. 2 that the Scoreboard 204 is not completely refreshed but the frame 202 appears absolutely clear. For one embodiment, this may occur due to the presence of underlying motion in the video. The underlying motion in the video may cause refreshed data to overwrite the un-refreshed data. As can be seen in FIG. 2, the frame 202 is clear enough to be displayed. However, the technique described in FIG. 1 starts the display at, for instance, frame number 128 of the video sequence when the entire Scoreboard has been refreshed. Consequently, a delay of 108 frames after the frame 202 is introduced in the display of the video, even though the frame 202 is clear enough to be displayed.
[0028] FIG. 3 illustrates a scenario of premature display of a video by using an exemplary 'Foreman' video sequence, in accordance with the prior art. The Foreman video sequence is a standard video test sequence that is used to analyze various characteristics of video-coding techniques. A frame 302 shows, for example, frame number 34 of the Foreman video sequence. Further, a Scoreboard 304 has been shown that corresponds to the frame 302. It can be seen in FIG. 3 that the Scoreboard 304 is completely refreshed but frame 302 is not completely clear and is distorted in some parts. These parts may appear distorted because the refreshed data in these parts has been overwritten by un-refreshed data. However, the technique described in FIG. 1 will start the display at frame number 34 because the Scoreboard 304 is fully refreshed.

[0029] FIG. 4 illustrates an exemplary system 400 where various embodiments of the present invention can be practiced. Examples of the system 400 can include, but are not limited to, a television system, a wireless communication system, a cell phone system, a computer system, and the Internet system. The system 400 can include a video source and one or more processors and display devices. For the purpose of this description, the system 400 is shown to include a video source 402, a processor 404 and a display device 406. Examples of the video source 402 can include, but are not limited to, a streaming video source, a stored video source, a Base Transceiver Station (BTS), an Internet server, and a television station. Examples of the display device 406 can include, but are not limited to, a television screen, a cell phone screen, a computer monitor, an LCD projector, and a Personal Digital Assistant (PDA) screen. The processor 404 is connected to the video source 402 and the display device 406.
[0030] For one embodiment, the video source 402 can send video data to the processor 404 via a link. The link can be a wired link or a wireless link. For one embodiment, the video data can be encoded video data. The encoding of the video data may be based on a video-coding standard. Examples of the video-coding standard can include a Moving Picture Experts Group (MPEG) standard, an International Telecommunications Union-Telecommunication (ITU-T) standard, an H.264 standard, a Video Codec (VC) standard, and the like. The processor 404 processes the received video data to enable the display device 406 to display the video. Processing of the video data can include various operations such as decoding the encoded video, determining when to display the video, reconstructing frames of the video, and the like. Thereafter, the processor 404 forwards the processed video data to the display device 406. The processed video data may be forwarded via a wired link or a wireless link. Further, the display device 406 displays the video by using the processed video data.
[0031] For one embodiment, the processor 404 and the display device 406 can be separate devices. For example, the processor 404 can be part of a set-top box and the display device can be a television. For another embodiment, the processor 404 and the display device 406 can be part of the same device. For example, both the processor 404 and the display device 406 can be present in a cell phone or a notebook computer.
[0032] FIG. 5 illustrates a block diagram of an exemplary system 500 for processing videos, in accordance with an embodiment of the present invention. The system 500 includes a receiver 502 and a processor 504. It will be apparent to those ordinarily skilled in the art that the system 500 can include all of, or fewer than, the components shown in FIG. 5. Further, those ordinarily skilled in the art will understand that the system 500 can include additional components that are not shown here, since they are not germane to the operation of the system 500, in accordance with the inventive arrangements. To describe the system 500, reference is made to FIG. 4, although it should be understood that the system 500 can also be implemented in any other suitable environment or network. In an exemplary scenario, the system 500 can be a subset of the system 400, where the processor 504 corresponds to the processor 404.
[0033] The system 500 can process a video to decide when to display a frame of the video. Examples of the system 500 can include, but are not limited to, a cell phone, a Personal Computer (PC), a notebook computer, a set-top box, and a television. The receiver 502 can receive a frame of a plurality of frames of the video from a video source. Examples of the video source can include, but are not limited to, a streaming video source, a stored video source, a Base Transceiver Station (BTS), an Internet server, and a television station. For one embodiment, the frame can include a plurality of sub-units. Examples of the sub-units can include, but are not limited to, a pixel, a macro-block and a block of pixels.
[0034] The processor 504 is configured to initialize each sub-unit of the plurality of sub-units with a validity metric corresponding to an invalid status. Initialization is performed when the frame is the first frame of the plurality of frames to be received from the video source. The validity metric of a sub-unit is a measure of the clarity of the sub-unit to be displayed. For example, the sub-unit is clear enough to be displayed when intra-coded information has been received for the sub-unit. In this case, the validity metric can correspond to a valid status, indicating that the sub-unit is suitable for display. In case the sub-unit is not clear enough to be displayed, the validity metric can correspond to the invalid status, indicating that the sub-unit is unsuitable for display.
[0035] For one embodiment, the validity metric can be a binary value where 'one' can represent a valid status and 'zero' can represent an invalid status, or vice versa. For another embodiment, the validity metric can be a numerical value indicating the degree of validity or clarity of the sub-unit. Examples of the numerical value can include a rank value, a percentage value, and the like. Further, the degree of validity can be based on various criteria such as a weighted average of other validity metrics.

[0036] The processor 504 is also configured to assign the validity metric corresponding to the valid status to a sub-unit when the sub-unit is an intra-coded sub-unit. Moreover, the processor 504 is configured to assign the validity metric corresponding to a first validity status to a sub-unit when the sub-unit is a predicted sub-unit. The predicted sub-unit is derived by using motion compensation on a set of sub-units in one or more reference frames. Examples of motion compensation can include, but are not limited to, a full-pixel motion compensation technique, a half-pixel motion compensation technique and a sub-pixel motion compensation technique. The reference frames can include frames that come before the current frame (previous frames) as well as frames that come after the current frame (subsequent frames) in the video. For one embodiment, a full-pixel motion compensation technique can be used for motion compensation. In the full-pixel motion compensation technique, the set of sub-units includes a single sub-unit. In this case, the first validity status corresponds to the valid status when the single sub-unit has a validity metric corresponding to the valid status. Conversely, the first validity status corresponds to the invalid status when the single sub-unit has a validity metric corresponding to the invalid status.
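In the full-pixel case, the propagation rule reduces to copying the validity of the single referenced sub-unit. A minimal sketch, under the assumption that validity is kept in a 2-D map indexed as [row][column] and that the motion vector (mv_x, mv_y) is in whole sub-unit positions (all names illustrative):

```python
def full_pixel_validity(ref_validity, mv_x, mv_y, x, y):
    """Full-pixel motion compensation references exactly one sub-unit,
    so the predicted sub-unit at (x, y) simply inherits the validity of
    the sub-unit its motion vector points at (bounds checks omitted)."""
    return ref_validity[y + mv_y][x + mv_x]
```

For example, with the reference map [[1, 0], [0, 1]], a sub-unit at (0, 0) with motion vector (1, 0) inherits the status at column 1, row 0, i.e. invalid.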
[0037] For another embodiment, a sub-pixel motion compensation technique can be used for motion compensation. In this case, the first validity status corresponds to the valid status when the number of sub-units of the set of sub-units with the validity metric corresponding to the valid status is more than the number of sub-units of the set of sub-units with the validity metric corresponding to the invalid status. Conversely, the first validity status corresponds to the invalid status when the number of sub-units of the set of sub-units with the validity metric corresponding to the invalid status is more than the number of sub-units of the set of sub-units with the validity metric corresponding to the valid status. Further, the first validity status can be based on a predefined technique when the number of sub-units of the set of sub-units with the validity metric corresponding to the valid status is equal to the number of sub-units of the set of sub-units with the validity metric corresponding to the invalid status. An example of the predefined technique can include assigning the first validity status corresponding to the valid status by default. Another example of the predefined technique can include assigning the first validity status corresponding to the validity metric of any sub-unit of the set of sub-units, chosen at random.
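The sub-pixel rule above is a majority vote over the validity metrics of the interpolation neighborhood, with an open tie policy. A sketch (function and parameter names are illustrative; statuses are modeled as 1 for valid, 0 for invalid):

```python
import random

def sub_pixel_validity(ref_statuses, tie_breaker="valid"):
    """Majority vote over the validity metrics of the set of reference
    sub-units used for sub-pixel interpolation."""
    valid = sum(ref_statuses)                   # count of valid (1) statuses
    invalid = len(ref_statuses) - valid         # count of invalid (0) statuses
    if valid > invalid:
        return 1
    if invalid > valid:
        return 0
    # Tie: the patent leaves the policy predefined but open; two examples
    # it gives are default-to-valid and picking a reference at random.
    return 1 if tie_breaker == "valid" else random.choice(ref_statuses)
```

For instance, a set with two valid and one invalid reference yields valid, while one valid and two invalid yields invalid.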
[0038] The processor 504 is also configured to decide when to display the frame. This decision is based on a set of predefined criteria, which can include, for example, a percentage value of a number of sub-units with their validity metric corresponding to the valid status, being more than a predetermined threshold value. For one embodiment, the frame can be displayed when the percentage value of the number of sub-units with their validity metric corresponding to the valid status, is more than 97 percent. Another exemplary criterion could be to decide to display the frame when it is an intra-coded frame. Yet another criterion can be to decide to display the frame, based on a pattern-matching technique that is applied to the validity metrics of the plurality of sub-units.
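The display decision could be sketched as follows, using the 97-percent threshold and intra-frame criteria mentioned above (the function signature and the flattened 2-D map are assumptions for illustration):

```python
def should_display(validity_map, threshold_pct=97.0, is_intra_frame=False):
    """Decide whether the frame is clear enough to display.

    Displays when the frame is intra-coded, or when the percentage of
    sub-units whose validity metric is valid (1) exceeds the threshold."""
    if is_intra_frame:
        return True                      # intra-coded frames are always shown
    statuses = [s for row in validity_map for s in row]
    pct_valid = 100.0 * sum(statuses) / len(statuses)
    return pct_valid > threshold_pct
```

A pattern-matching criterion (e.g., requiring the valid region to cover the center of the frame) could be substituted for the percentage test without changing the surrounding flow.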
[0039] For one embodiment, the system 500 can include a display device 508 to display the frame. Examples of the display device 508 can include, but are not limited to, a television screen, a cell phone screen, a computer monitor, an LCD projector, and a Personal Digital Assistant (PDA) screen. In an exemplary scenario, the display device 508 can correspond to the display device 406.
[0040] For one embodiment, the system 500 can also include a memory module 506 that can store the validity metric of the sub-unit. In an exemplary scenario, the validity metric can be stored as a distinct value. In another exemplary scenario, the validity metric can be stored by altering one or more Least Significant Bits (LSBs) of a characteristic value of the sub-unit. Examples of the characteristic value of the sub-unit can include, but are not limited to, a luminance value and a chrominance value. For example, the luminance value can be represented by an eight bit number. In this case, the LSB of the luminance value may be altered to store the validity metric. The LSB of the luminance value that is altered is also referred to as a validity bit. The validity bit can assume a value of either 'one' or 'zero'. The validity bit with the value 'one' represents the validity metric corresponding to the valid status and the value 'zero' represents the validity metric corresponding to the invalid status.
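Storing the validity bit in the LSB of an 8-bit luminance value, as described above, amounts to simple bit masking. A minimal sketch (helper names are illustrative):

```python
def set_validity_bit(luma, valid):
    """Store the validity metric in the least significant bit of an
    8-bit luminance value: clear the LSB, then set it if valid."""
    return (luma & 0xFE) | (1 if valid else 0)

def get_validity_bit(luma):
    """Recover the validity metric from the luminance value's LSB."""
    return luma & 0x01
```

The cost of this scheme is that the stored luminance may differ from the decoded value by at most one level, which is generally imperceptible.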
[0041] FIGs 6 and 7 illustrate a flow diagram of a method for processing a video, in accordance with an embodiment of the present invention. The video can be processed, for example, for optimal rendering of the video. To describe the flow diagram, reference will be made to FIG. 5, although it should be understood that the flow diagram can be implemented in any other suitable environment or network. Moreover, the invention is not limited to the order in which the steps have been listed in the flow diagram.
[0042] The method for processing the video is initiated at step 602. The video includes a plurality of frames. At step 604, a frame of the plurality of frames is received from a video source. For one embodiment, the receiver 502 can receive the frame from the video source. Examples of the video source can include, but are not limited to, a streaming video source, a stored video source, a Base Transceiver Station (BTS), an Internet server, a content server and a television station. The frame includes a plurality of sub-units. Examples of the sub-units can include, but are not limited to, a pixel, a macro-block and a block of pixels.
[0043] At step 606, it is determined whether the frame is a first frame of the plurality of frames received from the video source. For one embodiment, the processor 504 can be configured to determine whether the frame is the first frame of the plurality of frames received from the video source. In an exemplary scenario, the first frame can be the starting frame of the video. If it is determined that the frame is the first frame received from the video source, step 608 is performed.
[0044] At step 608, each sub-unit of the plurality of sub-units is initialized with a validity metric corresponding to an invalid status. For one embodiment, the processor 504 can be configured to initialize each sub-unit of the plurality of sub-units with a validity metric corresponding to an invalid status. The validity metric of a sub-unit is a measure of the clarity of the sub-unit to be displayed. For example, the sub-unit is clear enough to be displayed when intra-coded information has been received for it. In this case, the validity metric can correspond to a valid status, indicating that the sub-unit is suitable for display. In case the sub-unit is not clear enough to be displayed, the validity metric can correspond to the invalid status, indicating that the sub-unit is unsuitable for display. Thereafter, the method flow passes on to step 610. [0045] If it is determined at step 606 that the frame is not the first frame received from the video source, step 610 is performed. At step 610, it is determined whether a sub-unit is an intra-coded sub-unit. For one embodiment, the processor 504 can be configured to determine whether the sub-unit is an intra-coded sub-unit. If it is determined at step 610 that the sub-unit is an intra-coded sub-unit, step 612 is performed. At step 612, the sub-unit is assigned the validity metric corresponding to a valid status. For one embodiment, the processor 504 can be configured to assign the sub-unit the validity metric corresponding to the valid status. Thereafter, the method flow passes on to step 702.
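Steps 608 through 612 can be sketched as below. This is an illustrative assumption, not the claimed implementation: each sub-unit is represented only by a boolean intra-coded flag, and the metric values are a simple binary encoding.

```python
VALID, INVALID = 1, 0

def assign_initial_metrics(intra_flags):
    # Step 608: every sub-unit of the first frame starts with the
    # invalid status; step 612: intra-coded sub-units are immediately
    # assigned the validity metric corresponding to the valid status.
    return [VALID if is_intra else INVALID for is_intra in intra_flags]
```

For example, a frame whose first and third sub-units are intra-coded would yield the metrics `[1, 0, 1]`.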
[0046] If it is determined at step 610 that the sub-unit is not an intra-coded sub-unit, step 702 is performed. At step 702, it is determined whether the sub-unit is a predicted sub-unit that has been derived by using motion compensation on a set of sub-units in one or more reference frames. For one embodiment, the processor 504 determines whether the sub-unit is a predicted sub-unit that has been derived by using motion compensation on the set of sub-units in one or more reference frames. Examples of the motion compensation can include, but are not limited to, a full-pixel motion compensation technique, a half-pixel motion compensation technique, and a sub-pixel motion compensation technique. The reference frames can include frames that come before the current frame (previous frames) as well as frames that come after the current frame (subsequent frames) in the video. For one embodiment, the motion compensation can be modified to automatically carry forward the validity metrics of the plurality of sub-units to a subsequent frame. If it is determined at step 702 that the sub-unit is a predicted sub-unit that has been derived by using motion compensation on the set of sub-units in one or more reference frames, step 704 is performed.
[0047] At step 704, the sub-unit is assigned the validity metric corresponding to a first validity status. For one embodiment, the processor 504 can be configured to assign the sub-unit the validity metric corresponding to the first validity status. The first validity status is determined, based on the motion compensation used. Thereafter, the method flow proceeds to step 706.
[0048] If it is determined at step 702 that the sub-unit is not a predicted sub-unit, derived by using motion compensation on the set of sub-units in one or more reference frames, step 706 is performed. For one embodiment, the above-mentioned steps are repeated until all the sub-units in the frame have been analyzed and assigned the relevant validity metrics. At step 706, it is determined whether a set of predefined criteria has been satisfied. For one embodiment, the processor 504 can be configured to determine whether the set of predefined criteria has been satisfied. The set of predefined criteria can include, for example, a percentage value of a number of sub-units with their validity metric corresponding to the valid status being more than a predetermined threshold value.
[0049] If it is determined at step 706 that the set of predefined criteria has been satisfied, step 708 is performed. At step 708, it is decided to display the frame on the display device 508. For one embodiment, the processor 504 can be configured to decide to display the frame on the display device 508. Thereafter, the method flow proceeds to step 712. [0050] If it is determined at step 706 that the set of predefined criteria has not been satisfied, step 710 is performed. At step 710, it is decided not to display the frame on the display device 508. For one embodiment, the processor 504 can be configured to decide not to display the frame on the display device 508. Thereafter, the method is terminated at step 712. The method can be repeated for each frame of the plurality of frames of the video.
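The threshold criterion used at steps 706 through 710 can be sketched as follows; the function name and the default of 97 percent (taken from the example in paragraph [0038]) are illustrative assumptions:

```python
VALID = 1

def should_display(metrics, threshold_pct=97.0):
    # Step 706: display the frame only when the percentage of
    # sub-units with the validity metric corresponding to the valid
    # status exceeds the predetermined threshold.
    valid = sum(1 for m in metrics if m == VALID)
    return 100.0 * valid / len(metrics) > threshold_pct
```

A frame with 98 of 100 sub-units valid would be displayed under this criterion, while one with exactly 97 valid sub-units would not.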
[0051] FIGs 8, 9 and 10 illustrate a flow diagram of a method for processing a video, in accordance with another embodiment of the present invention. The video can be processed for its optimal rendering. To describe the flow diagram, reference is made to FIG. 5, although it should be understood that the flow diagram can also be implemented in any other suitable environment or network. Moreover, the invention is not limited to the order in which the steps have been listed in the flow diagram.
[0052] The method for processing a video is initiated at step 802. The video includes a plurality of frames. At step 804, it is determined whether a frame of the plurality of frames is available from a video source. For one embodiment, the receiver 502 can be configured to determine whether a frame of the plurality of frames is available from the video source. If it is determined at step 804 that a frame of the plurality of frames is available from the video source, step 808 is performed. In a scenario where the frame is not available from the video source, the method is terminated at step 806.
[0053] At step 808, the frame of the plurality of frames is received from the video source. For one embodiment, the receiver 502 can be configured to receive the frame from the video source. Further, the frame can be received via either a wired link or a wireless link. Examples of the video source can include, but are not limited to, a streaming video source, a stored video source, a Base Transceiver Station (BTS), an Internet server, and a television station. The frame includes a plurality of sub-units. Examples of the sub-units can include, but are not limited to, a pixel, a macro-block and a block of pixels.
[0054] At step 810, it is determined whether a display flag is active. For one embodiment, the processor 504 can be configured to determine whether the display flag is active. If it is determined at step 810 that the display flag is active, the method flow directly proceeds to step 1008. At step 1008, the frame is displayed on the display device 508. However, if it is determined at step 810 that the display flag is not active, step 812 is performed.
[0055] At step 812, it is determined whether the frame is a first frame of the plurality of frames received from the video source. For one embodiment, the processor 504 can be configured to determine whether the frame is the first frame received from the video source. If it is determined at step 812 that the frame is the first frame received from the video source, step 814 is performed.
[0056] At step 814, each sub-unit of the plurality of sub-units is initialized with a validity metric corresponding to an invalid status. For one embodiment, the processor 504 can be configured to initialize each sub-unit of the plurality of sub-units with a validity metric corresponding to an invalid status. For one embodiment, the initialization can be performed by storing the validity metric corresponding to the invalid status for each sub-unit of the plurality of sub-units at the memory module 506. The validity metric of a sub-unit is a measure of the clarity of the sub-unit to be displayed. For example, the sub-unit is clear enough for display when intra-coded information is received for it. In this case, the validity metric corresponds to a valid status, indicating that the sub-unit is suitable for display. In case the sub-unit is not clear enough to be displayed, the validity metric corresponds to the invalid status, indicating that the sub-unit is unsuitable for display.
[0057] For one embodiment, the validity metric can be a binary value where 'one' can represent the valid status and 'zero' can represent the invalid status, or vice versa. For another embodiment, the validity metric can be a numerical value representing the degree of validity or clarity of the sub-unit. Examples of the numerical value can include a rank value, a percentage value, and the like.
Further, the degree of validity can be based on various criteria such as a weighted average of other validity metrics.
[0058] For another embodiment, the validity metric can be stored on the memory module 506. In an exemplary scenario, the validity metric can be stored as a distinct value. In another exemplary scenario, the validity metric can be stored by modifying a characteristic value of the sub-unit. Examples of the characteristic value can include, but are not limited to, a luminance value and a chrominance value. In an exemplary scenario, the characteristic value can be modified by altering one or more of its Least Significant Bits (LSBs). For example, the LSB of the luminance value can be altered to store the validity metric. In this case, the luminance value is represented by an eight bit number. The LSB of the luminance value that is altered can be referred to as a validity bit. The validity bit can assume a value of either 'one' or 'zero'. The validity bit with the value 'one' can represent the validity metric corresponding to the valid status and the value 'zero' can represent the validity metric corresponding to the invalid status.
[0059] Further, if it is determined at step 812 that the frame is not the first frame received from the video source, step 902 is performed. At step 902, it is determined whether a sub-unit is an intra-coded sub-unit. For one embodiment, the processor 504 can be configured to determine whether the sub-unit is an intra-coded sub-unit. If it is determined at step 902 that the sub-unit is an intra-coded sub-unit, step 904 is performed.
[0060] At step 904, the sub-unit is assigned the validity metric corresponding to a valid status. For one embodiment, the processor 504 can be configured to assign the sub-unit the validity metric corresponding to the valid status. For one embodiment, the validity metric corresponding to the valid status may be assigned by storing the validity metric of the sub-unit at the memory module 506. Thereafter, the method flow proceeds to step 906.
[0061] If it is determined at step 902 that the sub-unit is not an intra-coded sub-unit, step 906 is performed. At step 906, it is determined whether the sub-unit is a predicted sub-unit that has been derived by using motion compensation on a set of sub-units in one or more reference frames. For one embodiment, the processor 504 determines whether the sub-unit is a predicted sub-unit that has been derived by using motion compensation on the set of sub-units in one or more reference frames. Motion compensation is a technique that is used to reduce temporal redundancy between frames while encoding the video. By using motion compensation, a sub-unit is represented by a set of sub-units known as reference sub-units and a residue component. The residue component is the difference between a characteristic value of the predicted sub-unit and a combination of the reference sub-units. Further, the difference in the positions of the predicted sub-unit and the combination of the reference sub-units is known as a motion vector. Examples of motion compensation techniques can include, but are not limited to, a full-pixel motion compensation technique, a half-pixel motion compensation technique, and a sub-pixel motion compensation technique. For one embodiment, motion compensation can be modified to preserve the quality of the video when the characteristic value is being modified. The reference frames can include frames that come before the current frame (previous frames) as well as frames that come after the current frame (subsequent frames) in the video. Further, if it is determined at step 906 that the sub-unit is a predicted sub-unit that has been derived by using motion compensation on the set of sub-units in one or more reference frames, step 908 is performed.
[0062] At step 908, the first validity status is determined. For one embodiment, the processor 504 can be configured to determine the first validity status. The first validity status is determined, based on the motion compensation being used. For one embodiment, a full-pixel motion compensation technique is used for motion compensation. In the full-pixel motion compensation technique, the set of sub-units includes a single sub-unit. In this case, the first validity status is determined as the valid status when the single sub-unit has a validity metric corresponding to the valid status. Conversely, the first validity status is determined as the invalid status when the single sub-unit has a validity metric corresponding to the invalid status. [0063] For another embodiment, a sub-pixel motion compensation technique is used for motion compensation. In this case, the first validity status is determined as the valid status when a number of sub-units of the set of sub-units, with the validity metric corresponding to the valid status, is more than the number of sub-units of the set of sub-units with the validity metric corresponding to the invalid status. Conversely, the first validity status is determined as the invalid status when a number of sub-units of the set of sub-units, with the validity metric corresponding to the invalid status, is more than the number of sub-units of the set of sub-units with the validity metric corresponding to the valid status. Further, the first validity status can be determined, based on a predefined technique, when a number of sub-units of the set of sub-units, with the validity metric corresponding to the valid status, is equal to a number of sub-units of the set of sub-units with the validity metric corresponding to the invalid status. An example of the predefined technique can be assigning the first validity status corresponding to the valid status by default. 
Another example of the predefined technique can be assigning the first validity metric corresponding to the validity metric of any sub-unit of the set of sub-units at random.
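The determination of the first validity status in step 908 — carrying over a single reference metric in the full-pixel case, and the majority-vote rule with a tie-break otherwise — can be sketched as below. The function name is an assumption, and the tie default of valid follows the first predefined-technique example above:

```python
VALID, INVALID = 1, 0

def first_validity_status(ref_metrics, tie_default=VALID):
    # Full-pixel case: a single reference metric is carried over
    # unchanged. Otherwise, majority vote over the set of reference
    # sub-units; a tie falls back to the predefined technique (here,
    # assigning the valid status by default).
    valid = sum(1 for m in ref_metrics if m == VALID)
    invalid = len(ref_metrics) - valid
    if valid > invalid:
        return VALID
    if invalid > valid:
        return INVALID
    return tie_default
```

With a single reference sub-unit the result is simply that sub-unit's metric, matching the full-pixel behavior described in paragraph [0062].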
[0064] For yet another embodiment, a sub-pixel motion compensation technique is used for motion compensation. In this case, the first validity metric is determined, based on the validity metric of each sub-unit of the set of sub-units.
Further, the first validity metric can also be based on a set of significance values associated with the set of sub-units. In an exemplary scenario, the significance values can be weights associated with each sub-unit of the set of sub-units. The weights can be based on, for example, the proximity of the sub-units to the predicted sub-unit or a predefined pattern.
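The weighted variant described above can be sketched as follows; the function name, the normalization, and the 0.5 cut-off are illustrative assumptions rather than claimed details:

```python
VALID, INVALID = 1, 0

def weighted_validity(ref_metrics, weights, threshold=0.5):
    # Combine the reference metrics using significance values
    # (weights), e.g. based on proximity to the predicted sub-unit
    # or a predefined pattern. The weighted score is compared against
    # an assumed cut-off to pick the first validity status.
    score = sum(w * m for w, m in zip(weights, ref_metrics)) / sum(weights)
    return VALID if score >= threshold else INVALID
```

For example, a valid sub-unit weighted 3 outvotes an invalid sub-unit weighted 1, since its weighted share is 0.75.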
[0065] At step 910, the sub-unit is assigned the validity metric corresponding to the first validity status. For one embodiment, the processor 504 can be configured to assign the sub-unit the validity metric corresponding to the first validity status. The validity metric corresponding to the first validity status may be assigned by storing the validity metric of the sub-unit on the memory module 506.
[0066] If it is determined at step 906 that the sub-unit is not a predicted sub-unit that has been derived by using motion compensation on the set of sub-units in one or more reference frames, step 912 is performed. At step 912, it is determined whether all sub-units of the plurality of sub-units have been processed. For one embodiment, the processor 504 can be configured to determine whether all sub-units of the plurality of sub-units have been processed. If it is determined at step 912 that all sub-units of the plurality of sub-units have not been processed, the method flow proceeds to step 902. Further, if it is determined at step 912 that all sub-units of the plurality of sub-units have been processed, step 1002 is performed.
[0067] At step 1002, it is determined whether a set of predefined criteria has been satisfied. For one embodiment, the processor 504 can be configured to determine whether the set of predefined criteria has been satisfied. The set of predefined criteria can include, for example, a percentage value of a number of sub-units with their validity metric corresponding to the valid status being more than a predetermined threshold value. For example, the criterion could decide whether the frame should be displayed, based on whether the percentage value of the number of sub-units with the validity metric corresponding to the valid status is more than 97 percent. Another exemplary criterion could decide whether the frame should be displayed, based on whether the frame is an intra-coded frame. Yet another criterion could decide whether the frame should be displayed, based on a pattern-matching technique applied to the validity metrics of the plurality of sub-units.
[0068] If it is determined at step 1002 that the set of predefined criteria has been satisfied, step 1004 is performed. At step 1004, it is decided to display the frame on the display device 508. For one embodiment, the processor 504 can be configured to decide to display the frame on the display device 508.
[0069] At step 1006, the display flag is set as active. The processor 504 can be configured to set the display flag as active. Thereafter, the frame is displayed on the display device 508 at step 1008. For one embodiment, the processor 504 can be configured to send the processed video to the display device 508 to be displayed.
[0070] If it is determined at step 1002 that the set of predefined criteria has not been satisfied, step 1010 is performed. At step 1010, it is decided not to display the frame on the display device 508. The processor 504 can be configured to decide not to display the frame on the display device 508. Thereafter, the display flag is set as inactive at step 1012. The processor 504 can be configured to set the display flag as inactive. [0071] At step 1014, the validity metrics of the plurality of sub-units are forwarded to a subsequent frame. For one embodiment, the processor 504 can be configured to forward the validity metrics of the plurality of sub-units to the subsequent frame. In an exemplary scenario, the validity metrics can be forwarded by storing them on the memory module 506. In this case, the validity metrics can be accessed by the subsequent frame through the memory module 506. In another exemplary scenario, the validity metrics can be forwarded by modifying the motion compensation such that the validity metrics are automatically carried forward by the modified motion compensation. Thereafter, the method flow proceeds to step 804. If it is determined at step 804 that a frame of the plurality of frames is not available from the video source, the method is terminated at step 806.
[0072] FIG. 11 is a flow diagram illustrating a method for storing a validity metric of a sub-unit, in accordance with an embodiment of the present invention. To describe the flow diagram, reference is made to FIG. 5, although it should be understood that the flow diagram can also be implemented in any other suitable environment or network. Moreover, the invention is not limited to the order in which the steps have been listed in the flow diagram.
[0073] The method for storing a validity metric of a sub-unit is initiated at step 1102. At step 1104, a characteristic value of the sub-unit is accessed. For one embodiment, the processor 504 can be configured to access the characteristic value of the sub-unit. Examples of the characteristic value can include, but are not limited to, a luminance value and a chrominance value. The characteristic value is accessed to store the validity metric of the sub-unit. [0074] Thereafter, at step 1106, the characteristic value is modified by altering one or more Least Significant Bits (LSBs) of the characteristic value. For one embodiment, the processor 504 can be configured to modify the characteristic value by altering the one or more LSBs of the characteristic value. A Least Significant Bit (LSB) is the right-most bit in a binary number. One or more LSBs are the one or more bits of the binary number that are the closest to, and including, the LSB. For example, one LSB of the luminance value can be altered to store the validity metric. In this case, the luminance value can be represented by an eight bit binary number. The LSB of the luminance value that is altered can be referred to as a validity bit. For one embodiment, the validity bit can assume a value of either 'one' or 'zero'. The validity bit with the value 'one' can represent the validity metric corresponding to the valid status, and the value 'zero' can represent the validity metric corresponding to the invalid status. The method is terminated at step 1108.
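Steps 1104 and 1106 generalize the validity bit to one or more LSBs. A minimal sketch, assuming the hypothetical helper name `store_metric_in_lsbs` and integer characteristic values:

```python
def store_metric_in_lsbs(value: int, metric: int, nbits: int = 1) -> int:
    # Step 1106: clear the nbits least significant bits of the
    # characteristic value (e.g. a luminance value), then write the
    # validity metric into them.
    mask = (1 << nbits) - 1
    return (value & ~mask) | (metric & mask)
```

With `nbits=1` this reduces to the single validity bit described in paragraph [0074]; larger `nbits` would allow a multi-level validity metric at the cost of more distortion in the characteristic value.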
[0075] For one embodiment, the validity metric may be stored by modifying the characteristic value in a different way from that described in FIG. 11. For example, the Most Significant Bits (MSBs) may be altered instead of the LSBs. However, it will be readily apparent to a person ordinarily skilled in the art that variations and extensions of the method for storing the validity metric can be used in various other embodiments.
[0076] FIG. 12 is a flow diagram illustrating a method for a full-pixel motion compensation technique for a predicted sub-unit, in accordance with an embodiment of the present invention. To describe the flow diagram, reference is made to FIG. 5, although it should be understood that the flow diagram can also be implemented in any other suitable environment or network. Moreover, the invention is not limited to the order in which the steps have been listed in the flow diagram. The full-pixel motion compensation technique described using FIG. 12 can be used when one or more Least Significant Bits (LSBs) are being altered to store validity metrics for sub-units.
[0077] The method for performing a full-pixel motion compensation technique for a predicted sub-unit is initiated at step 1202. The predicted sub-unit can be derived by using the full-pixel motion compensation technique on a set of sub-units in one or more reference frames. In this case, the set of sub-units includes a single sub-unit. The reference frames can include frames that come before the current frame (previous frames), as well as frames that come after the current frame (subsequent frames) in the video. At step 1204, a convergent-rounding technique is applied to a residue component of a characteristic value of the predicted sub-unit. For one embodiment, the processor 504 can be configured to apply the convergent-rounding technique to the residue component of the characteristic value of the predicted sub-unit. In case a Least Significant Bit (LSB) of the characteristic value is being modified to store the validity metric, the convergent-rounding technique rounds only the LSB. In this case, the convergent-rounding technique rounds an odd number to the nearest multiple of 4, so that nearly half the odd numbers are rounded up and the other half rounded down. This is done to ensure that no bias is introduced in the characteristic value due to rounding. Further, convergent-rounding is applied to preserve the validity metric of the predicted sub-unit when the validity metric is stored by altering one or more Least Significant Bits (LSBs) of the characteristic value. For example, if the validity metric is stored by altering the LSB of a luminance value, the convergent-rounding technique is applied to the LSB of the residue component of the luminance value. This makes the LSB of the residue component zero.
[0078] Thereafter, at step 1206, the rounded residue component is added to the characteristic value of the single sub-unit to obtain the predicted sub-unit. For one embodiment, the processor 504 can be configured to add the rounded residue component to the characteristic value of the single sub-unit, to obtain the predicted sub-unit. By applying the convergent-rounding technique, the validity metric of the single sub-unit can be carried forward to the predicted sub-unit. In light of the example mentioned above, when the rounded residue component of the luminance value is added to the luminance value of the single sub-unit, the LSB of the luminance value of the single sub-unit is preserved. This occurs because adding a zero to the LSB of the luminance value of the single sub-unit results in the LSB of the luminance value of the single sub-unit. Since this LSB represents the validity metric of the predicted sub-unit, by preserving the LSB, the validity metric is carried forward. The method is terminated at step 1208.
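Steps 1204 and 1206 can be sketched as below; the function names are assumptions, and the rounding rule follows the description above (even residues pass through, odd residues go to the nearest multiple of 4):

```python
def convergent_round_residue(residue: int) -> int:
    # Step 1204: an even residue already has a zero LSB. An odd
    # residue is rounded to the nearest multiple of 4, so that half
    # the odd values round up and half round down (no bias).
    if residue % 2 == 0:
        return residue
    return residue - 1 if (residue - 1) % 4 == 0 else residue + 1

def full_pixel_predict(ref_value: int, residue: int) -> int:
    # Step 1206: since the rounded residue has a zero LSB, adding it
    # preserves the reference sub-unit's LSB -- the validity bit --
    # so the validity metric is carried forward automatically.
    return ref_value + convergent_round_residue(residue)
```

For instance, a residue of 5 rounds to 4, so adding it to a reference luminance of 201 gives 205, whose LSB still equals the validity bit of the reference sub-unit.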
[0079] FIG. 13 is a flow diagram illustrating a method for a modified sub-pixel motion compensation technique for a predicted sub-unit, in accordance with an embodiment of the present invention. To describe the flow diagram, reference is made to FIG. 5, although it should be understood that the flow diagram can also be implemented in any other suitable environment or network. Moreover, the invention is not limited to the order in which the steps have been listed in the flow diagram. The modified sub-pixel motion compensation technique described using FIG. 13 can be used when one or more Least Significant Bits (LSBs) are being altered to store validity metrics for sub-units.
[0080] The method for performing a modified sub-pixel motion compensation technique for a predicted sub-unit is initiated at step 1302. The predicted sub-unit can be derived by using the modified sub-pixel motion compensation technique on a set of sub-units in one or more reference frames. The reference frames can include frames that come before the current frame (previous frames) as well as frames that come after the current frame (subsequent frames) in the video. At step 1304, a characteristic value of each sub-unit of the set of sub-units is altered by removing one or more Least Significant Bits (LSBs) from the characteristic value.
For one embodiment, the processor 504 can be configured to alter the characteristic value of each sub-unit of the set of sub-units by removing one or more Least Significant Bits (LSBs) from the characteristic value. Further, a validity metric of the predicted sub-unit can be stored by altering the one or more LSBs. These one or more LSBs can be removed by laterally shifting each bit of the characteristic value. For one embodiment, this lateral shifting can be performed by dividing the characteristic value by two as many times as required, to remove the one or more LSBs. Each division by two removes one LSB of the characteristic value. For one embodiment, the removed one or more LSBs of each sub-unit of the set of sub-units may be stored to retain the validity metrics of the set of sub-units.
[0081] At step 1306, the altered characteristic value of each sub-unit of the set of sub-units and a residue component of the characteristic value of the predicted sub-unit can be added to obtain a resultant value. In an exemplary scenario, the set of sub-units can include four sub-units and one LSB can be modified to store the validity metric for each sub-unit. For one embodiment, in the above mentioned scenario, the characteristic values of the four sub-units can be altered by dividing each characteristic value by four. This would result in the removal of two LSBs. Thereafter, the resultant value can be obtained by adding the altered characteristic values of the 4 sub-units and the residue component. For another embodiment, in the above mentioned scenario, the characteristic values of the four sub-units can be divided by two to remove the LSB. The altered characteristic values can be added to obtain a combined characteristic value. In an exemplary scenario, a round control mode bit may be added to the combined characteristic value. Thereafter, the resultant value can be obtained by dividing the combined characteristic value by two and adding the result to the residue component. These scenarios have been described as examples of various methods for performing steps 1304 and 1306. However, it will be readily apparent to a person ordinarily skilled in the art that variations and extensions of the method for altering the characteristic value and obtaining the resultant value can be used in various other embodiments.
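The first exemplary scenario for steps 1304 and 1306 (four reference sub-units, one validity bit per sub-unit) can be sketched as follows; the function name is an assumption, and the right shift by two implements the division of each characteristic value by four:

```python
def subpixel_combine(ref_values, residue):
    # Steps 1304-1306, four-reference / one-validity-bit scenario:
    # divide each characteristic value by four (a right shift by two,
    # removing two LSBs, one of which is the validity bit), sum the
    # altered values, and add the residue component.
    assert len(ref_values) == 4
    return sum(v >> 2 for v in ref_values) + residue
```

Dividing each of the four values by four before summing keeps the resultant value on the same scale as a single characteristic value, approximating the average of the four references.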
[0082] At step 1308, a predetermined rounding technique can be applied to the resultant value from step 1306. For one embodiment, the predetermined rounding technique can be based on a mapping of the characteristic values on to a set of rounded characteristic values. An example of the mapping, in case one LSB is being modified to store the validity metric, is provided in Table 1. Table 1 also includes the corresponding results that can be obtained on applying the convergent-rounding technique.
Original characteristic value | Convergent-rounding result | Predetermined-rounding result
3 | 4 | 4
5 | 4 | 6
135 | 136 | 134

Table 1 (excerpt): Mapping of characteristic values onto rounded characteristic values by using the predetermined rounding technique and the convergent-rounding technique
[0083] As seen in Table 1, in the case of the convergent-rounding technique, the odd original characteristic values are rounded to the nearest multiple of four. For example, the characteristic value of 3 is rounded to 4 and the characteristic value of 5 is also rounded to 4. However, on using the convergent-rounding technique, a slight drift has been observed in the characteristic value. To compensate for this drift, the convergent-rounding technique has been modified. This modified convergent-rounding technique is the predetermined rounding technique. In the predetermined rounding technique, apart from rounding every odd number to the nearest multiple of four, a slight bias has been introduced towards the central values of the characteristic value. In an exemplary scenario, the characteristic value can take values from 0 to 255 with the central value being 128. Therefore, to introduce a bias towards the central values, starting from the original characteristic value of 5, every eighth value is rounded up if the original characteristic value is less than 128. As seen in Table 1, the original characteristic value of 5 is rounded to 4 in the case of convergent-rounding. However, in the case of the predetermined rounding technique, the original characteristic value of 5 is rounded to 6. Similarly, starting from the original characteristic value of 135, every eighth value is rounded down if the original characteristic value is more than 128. For example, the original characteristic value of 135 is rounded to 136 in the case of convergent-rounding. However, in the case of the predetermined rounding technique, the original characteristic value of 135 is rounded to 134.
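The predetermined rounding technique described above can be sketched as below; the function name is an assumption, and the special-case rules (every eighth odd value starting at 5 rounded up below 128, every eighth starting at 135 rounded down above 128) follow paragraph [0083]:

```python
def predetermined_round(value: int) -> int:
    # Even values already have a zero LSB and pass through. Odd
    # values normally round to the nearest multiple of 4 (convergent
    # rounding), but every eighth odd value starting at 5 is rounded
    # up when below 128, and every eighth starting at 135 is rounded
    # down when above 128, biasing toward the central value of 128.
    if value % 2 == 0:
        return value
    if value < 128 and (value - 5) % 8 == 0:
        return value + 1          # e.g. 5 -> 6
    if value > 128 and (value - 135) % 8 == 0:
        return value - 1          # e.g. 135 -> 134
    return value - 1 if (value - 1) % 4 == 0 else value + 1
```

This reproduces the Table 1 examples: 3 rounds to 4 in both techniques, while 5 rounds to 6 and 135 rounds to 134 under the predetermined technique.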
[0084] At step 1310, the one or more LSBs of the rounded resultant value are replaced with bits corresponding to a first validity status. This provides the characteristic value of the predicted sub-unit with the validity metric of the predicted sub-unit. Thereafter, the method is terminated at step 1312.
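Step 1310 can be sketched as a bit-mask operation. This is an illustrative fragment only, assuming one reclaimed LSB per characteristic value; `embed_validity` and `read_validity` are hypothetical names, not taken from the specification.

```python
def embed_validity(rounded_value, status_bits, n_lsbs=1):
    # Step 1310: overwrite the reclaimed LSB(s) of the rounded
    # characteristic value with the bits encoding the validity status.
    mask = (1 << n_lsbs) - 1
    return (rounded_value & ~mask) | (status_bits & mask)

def read_validity(value, n_lsbs=1):
    # The validity metric travels inside the sample itself, so no
    # additional memory is needed to store it.
    return value & ((1 << n_lsbs) - 1)
```

For example, `embed_validity(134, 1)` yields 135, and `read_validity(135)` recovers the stored status bit.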
[0085] FIG. 14 is a flow diagram illustrating a method for accounting for a corrupt sub-unit while processing a video, in accordance with an embodiment of the present invention. To describe the flow diagram, reference is made to FIG. 5, although it should be understood that the flow diagram can also be implemented in any other suitable environment or network. Moreover, the invention is not limited to the order in which the steps have been listed in the flow diagram.
[0086] The method for accounting for a corrupt sub-unit while processing a video is initiated at step 1402. At step 1404, it is determined whether the sub-unit is a corrupt sub-unit. For one embodiment, the processor 504 can be configured to determine whether the sub-unit is a corrupt sub-unit. If it is determined at step 1404 that the sub-unit is a corrupt sub-unit, step 1406 is performed.
[0087] At step 1406, the sub-unit is assigned a validity metric corresponding to an invalid status. The processor 504 can be configured to assign the validity metric corresponding to the invalid status to the sub-unit. If it is determined at step 1404 that the sub-unit is not a corrupt sub-unit, the method is terminated at step 1408.
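The flow of FIG. 14 reduces to a single guarded assignment. A minimal sketch, with `VALID` and `INVALID` as assumed encodings of the two statuses:

```python
INVALID, VALID = 0, 1

def account_for_corruption(validity_metric, is_corrupt):
    # Steps 1404-1406: a corrupt sub-unit is forced to the invalid
    # status; an intact sub-unit keeps its existing validity metric.
    return INVALID if is_corrupt else validity_metric
```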
[0088] Various embodiments of the present invention, as described above, offer several advantages, some of which are discussed here. The present invention provides a method for processing a video so that the video can be optimally rendered in case the first I-frame is not available. Further, the present invention reduces the delay in the display of the video, to reduce the waiting time for a user.
The present invention also accounts for the underlying motion in the video and the scenario of premature display of the video. Furthermore, an embodiment of the present invention does not require any additional memory for storing the validity metrics.
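One of the predefined display criteria underlying the reduced start-up delay described above, namely showing the frame once the percentage of valid sub-units crosses a threshold, can be sketched as follows. The 80% threshold is purely illustrative and not a value from the specification.

```python
def should_display(validity_metrics, threshold_pct=80.0):
    # Display criterion: the percentage of sub-units whose validity
    # metric corresponds to the valid status (encoded as 1 here)
    # exceeds a predetermined threshold.
    valid = sum(1 for v in validity_metrics if v == 1)
    return 100.0 * valid / len(validity_metrics) > threshold_pct
```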
[0089] It will be appreciated that the method and system for processing videos, described herein, may comprise one or more conventional processors and unique stored program instructions that control the one or more processors, to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the system described herein. The non-processor circuits can include, but are not limited to, signal drivers, clock circuits, power-source circuits and user-input devices. As such, these functions may be interpreted as steps of a method for enabling the processing of a video. Alternatively, some or all the functions can be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function, or some combinations of certain of the functions, are implemented as custom logic. Of course, a combination of the two approaches can also be used. Thus, methods and means for these functions have been described herein.
[0090] It is expected that one with ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology and economic considerations, when guided by the concepts and principles disclosed herein, will be readily capable of generating such software instructions, programs and ICs with minimal experimentation.
[0091] In the foregoing specification, the invention and its benefits and advantages have been described with reference to specific embodiments. However, one of ordinary skill in the art will appreciate that various modifications and changes can be made without departing from the scope of the present invention, as set forth in the claims. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention. The benefits, advantages, solutions to problems and any element(s) that may cause any benefit, advantage or solution to occur or become more pronounced are not to be construed as critical, required or essential features or elements of any or all the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims, as issued.

Claims

WHAT IS CLAIMED IS:
1. A method for processing a video, the video comprising a plurality of frames, the method comprising:
   receiving a frame of the plurality of frames from a video source, wherein the frame comprises a plurality of sub-units;
   initializing each sub-unit of the plurality of sub-units with a validity metric corresponding to an invalid status when the frame is a first frame, of the plurality of frames, received from the video source;
   assigning a sub-unit the validity metric corresponding to a valid status when the sub-unit is an intra-coded sub-unit;
   assigning a sub-unit the validity metric corresponding to a first validity status when the sub-unit is a predicted sub-unit that is derived by using motion compensation on a set of sub-units in one or more reference frames; and
   deciding when to display the frame on a display device, based on a set of predefined criteria.
2. The method as recited in claim 1 further comprising determining the first validity status when a full-pixel motion compensation technique is used for the motion compensation, wherein the set of sub-units comprises a single sub-unit, and wherein the first validity status is:
   the valid status when the single sub-unit has a validity metric corresponding to the valid status; and
   the invalid status when the single sub-unit has a validity metric corresponding to the invalid status.
3. The method as recited in claim 1 further comprising determining the first validity status when a sub-pixel motion compensation technique is used for the motion compensation, wherein the first validity status is based on at least one of: a set of significance values associated with the set of sub-units; and the validity metric of each sub-unit of the set of sub-units.
4. The method as recited in claim 1 further comprising determining the first validity status when a sub-pixel motion compensation technique is used for the motion compensation, wherein the first validity status is:
   the valid status when a number of sub-units of the set of sub-units having the validity metric corresponding to the valid status is more than a number of sub-units of the set of sub-units having the validity metric corresponding to the invalid status;
   the invalid status when a number of sub-units of the set of sub-units having the validity metric corresponding to the invalid status is more than a number of sub-units of the set of sub-units having the validity metric corresponding to the valid status; and
   based on a predefined technique, when a number of sub-units of the set of sub-units having the validity metric corresponding to the valid status is equal to a number of sub-units of the set of sub-units having the validity metric corresponding to the invalid status.
5. The method as recited in claim 4, wherein the predefined technique comprises one of:
   assigning the first validity status corresponding to the valid status; and
   assigning the first validity status corresponding to the validity metric of a sub-unit of the set of sub-units.
6. The method as recited in claim 1, wherein the set of predefined criteria comprises at least one of:
   a percentage value of a number of sub-units having the validity metric corresponding to the valid status being more than a predetermined threshold value;
   the frame being an intra-coded frame; and
   a pattern-matching technique based on validity metrics of the plurality of sub-units.
7. The method as recited in claim 1 further comprising storing the validity metric on a memory module.
8. The method as recited in claim 7, wherein storing the validity metric comprises modifying a characteristic value of a sub-unit in the frame.
9. The method as recited in claim 8, wherein the characteristic value is selected from the group comprising a luminance value and a chrominance value.
10. The method as recited in claim 8 further comprising modifying the motion compensation to preserve a quality of the video when the characteristic value is modified.
11. The method as recited in claim 8, wherein modifying the characteristic value comprises altering one or more Least Significant Bits (LSBs) of the characteristic value.
12. The method as recited in claim 11 further comprising modifying a sub-pixel motion compensation technique, wherein the modified sub-pixel motion compensation technique comprises:
   altering the characteristic value of each sub-unit of the set of sub-units by removing the one or more LSBs from the characteristic value, wherein the one or more LSBs are removed by laterally shifting each bit of the characteristic value;
   adding the altered characteristic value of each sub-unit of the set of sub-units and a residue component of the characteristic value of the predicted sub-unit to obtain a resultant value;
   applying a predetermined rounding technique to the resultant value, wherein the predetermined rounding technique is based on a mapping of the characteristic value onto a set of rounded characteristic values; and
   replacing the one or more LSBs of the rounded resultant value with bits corresponding to the first validity status.
13. The method as recited in claim 8 further comprising applying a convergent-rounding technique to a residue component of the characteristic value of the predicted sub-unit, when a full-pixel motion compensation technique is used for the motion compensation.
14. The method as recited in claim 1 further comprising forwarding validity metrics of the plurality of sub-units to a subsequent frame.
15. The method as recited in claim 1, wherein each sub-unit is selected from the group comprising a pixel, a macro-block and a block of pixels.
16. The method as recited in claim 1 further comprising assigning the validity metric to the sub-unit based on a degree of validity of the sub-unit.
17. The method as recited in claim 1 further comprising setting status of a display flag as:
   active when it is decided to display the frame based on the set of predefined criteria; and
   inactive when it is decided not to display the frame based on the set of predefined criteria.
18. The method as recited in claim 17 further comprising displaying the frame directly when the status of the display flag is active.
19. The method as recited in claim 1 further comprising assigning the sub-unit the validity metric corresponding to the invalid status when the sub-unit is a corrupt sub-unit.
20. The method as recited in claim 1, wherein the video source is selected from the group comprising a streaming video source, a stored video source, a Base Transceiver Station (BTS), an Internet server, and a television station.
21. A system comprising:
   a receiver capable of receiving a frame of a plurality of frames of a video from a video source, wherein the frame comprises a plurality of sub-units; and
   a processor configured for:
      initializing each sub-unit of the plurality of sub-units with a validity metric corresponding to an invalid status when the frame is a first frame, of the plurality of frames, received from the video source;
      assigning a sub-unit the validity metric corresponding to a valid status when the sub-unit is an intra-coded sub-unit;
      assigning a sub-unit the validity metric corresponding to a first validity status when the sub-unit is a predicted sub-unit that is derived by using motion compensation on a set of sub-units in one or more reference frames; and
      deciding when to display the frame, based on a set of predefined criteria.
22. The system as recited in claim 21, wherein the set of sub-units comprises a single sub-unit, and wherein the first validity status is:
   the valid status when the single sub-unit has a validity metric corresponding to the valid status; and
   the invalid status when the single sub-unit has a validity metric corresponding to the invalid status;
   when a full-pixel motion compensation technique is used for the motion compensation.
23. The system as recited in claim 21, wherein the first validity status is:
   the valid status when a number of sub-units of the set of sub-units having the validity metric corresponding to the valid status is more than a number of sub-units of the set of sub-units having the validity metric corresponding to the invalid status;
   the invalid status when a number of sub-units of the set of sub-units having the validity metric corresponding to the invalid status is more than a number of sub-units of the set of sub-units having the validity metric corresponding to the valid status; and
   based on a predefined technique, when a number of sub-units of the set of sub-units having the validity metric corresponding to the valid status is equal to a number of sub-units of the set of sub-units having the validity metric corresponding to the invalid status;
   when a sub-pixel motion compensation technique is used for the motion compensation.
24. The system as recited in claim 23, wherein the predefined technique comprises one of:
   assigning the first validity status corresponding to the valid status; and
   assigning the first validity status corresponding to the validity metric of a sub-unit of the set of sub-units.
25. The system as recited in claim 21, wherein the set of predefined criteria comprises at least one of:
   a percentage value of a number of sub-units having the validity metric corresponding to the valid status being more than a predetermined threshold value;
   the frame being an intra-coded frame; and
   a pattern-matching technique based on validity metrics of the plurality of sub-units.
26. The system as recited in claim 21 further comprising a display device for displaying the frame.
27. The system as recited in claim 21 further comprising a memory module capable of storing the validity metric of the sub-unit.
28. The system as recited in claim 27, wherein the validity metric is stored by altering one or more Least Significant Bits (LSBs) of a characteristic value of the sub-unit.
29. The system as recited in claim 28, wherein the characteristic value is selected from the group comprising a luminance value and a chrominance value.
30. The system as recited in claim 21, wherein each sub-unit is selected from the group comprising a pixel, a macro-block and a block of pixels.
PCT/US2008/080420 2007-10-23 2008-10-20 Method and system for processing videos WO2009055318A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN2216DE2007 2007-10-23
IN2216/DEL/2007 2007-10-23

Publications (1)

Publication Number Publication Date
WO2009055318A1 true WO2009055318A1 (en) 2009-04-30

Family

ID=40579937

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/080420 WO2009055318A1 (en) 2007-10-23 2008-10-20 Method and system for processing videos

Country Status (1)

Country Link
WO (1) WO2009055318A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070008323A1 (en) * 2005-07-08 2007-01-11 Yaxiong Zhou Reference picture loading cache for motion prediction
US20070242085A1 (en) * 2001-12-31 2007-10-18 Weybrew Steven T Method and apparatus for image blending

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08842492; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 08842492; Country of ref document: EP; Kind code of ref document: A1)