CN101873489A - Signal processing method and system - Google Patents

Signal processing method and system

Info

Publication number
CN101873489A
CN101873489A (Application CN201010150205A)
Authority
CN
China
Prior art keywords
video
motion vector
frame
wirelesshd
coded message
Prior art date
Legal status
Pending
Application number
CN201010150205A
Other languages
Chinese (zh)
Inventor
Xuemin Chen (陈雪敏)
Marcus Kellermann (马库斯·凯勒曼)
Current Assignee
Broadcom Corp
Zyray Wireless Inc
Original Assignee
Zyray Wireless Inc
Priority date
Filing date
Publication date
Application filed by Zyray Wireless Inc
Publication of CN101873489A

Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, in particular:
    • H04N19/597 Predictive coding specially adapted for multi-view video sequence encoding
    • H04N19/132 Adaptive coding: sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/139 Adaptive coding: analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/142 Adaptive coding: detection of scene cut or scene change
    • H04N19/172 Adaptive coding where the coding unit is a picture, frame or field
    • H04N19/521 Processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors
    • H04N19/587 Predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H04N19/615 Transform coding in combination with predictive coding using motion compensated temporal filtering [MCTF]
    • H04N19/87 Pre-processing or post-processing involving scene cut or scene change detection in combination with video compression
    • H04N19/40 Video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/86 Pre-processing or post-processing involving reduction of coding artifacts, e.g. of blockiness

Abstract

The present invention relates to a signal processing method and system, and in particular to motion compensated frame rate up-conversion of compressed and uncompressed video streams. A video receiver receives a three-dimensional video stream from a video transmitter. The received three-dimensional video stream may comprise a plurality of video frames and corresponding coding information. The coding information, for example block motion vectors, block coding modes, quantization levels and/or quantized residual data, is extracted and used to perform frame rate up-conversion on the received video frames. The coding information is generated at the video transmitter by entropy decoding compressed three-dimensional video from a video source such as an IPTV network. When the received video is uncompressed three-dimensional video, the video receiver performs frame rate up-conversion on it using the extracted block motion vectors and associated confidence-consistency metrics. When compressed three-dimensional video is received, the video receiver first decompresses the received compressed three-dimensional video before performing frame rate up-conversion.

Description

Signal processing method and system
Technical field
The present invention relates to digital video processing. More specifically, certain embodiments of the present invention relate to a method and system for motion compensated frame rate up-conversion of compressed and uncompressed video streams.
Background art
Major advances in video display technology include flat panel displays based on liquid crystal display (LCD) or plasma display panel (PDP) technology, which have rapidly replaced the cathode ray tube (CRT) technology that served as the dominant display device for nearly half a century. A notable achievement of the new video display technology is that images can now be shown on flat panel screens in progressive-scan mode at higher picture rates. The new video display technology has also accelerated the rapid transition from standard definition (SD) television (TV) to high definition (HD) TV.
Formats with low picture rates may be used by legacy video compression systems for displaying legacy video on newer display screens. Channel capacity limitations may also affect the display of low picture rate images. For example, consider a 30 Hz video sequence broadcast over a mobile network to a terminal (for example a mobile phone that receives an encoded video sequence from a server). Because of bandwidth constraints, only a low bit rate video sequence can be transmitted. As a result, the encoder removes two out of every three pictures before transmission, producing, for example, a sequence with a picture rate of 10 Hz. The available channel capacity can differ between video services, and legacy systems also differ across regions of the world, for example NTSC, SECAM or PAL.
Further limitations and disadvantages of conventional approaches will become apparent to one of ordinary skill in the art through comparison with the present invention as described in the remainder of this application with reference to the accompanying drawings.
Summary of the invention
The present invention provides a method and/or system for motion compensated frame rate up-conversion of compressed and uncompressed video streams, substantially as shown in and described in connection with at least one of the accompanying figures, and set forth more completely in the claims.
According to one aspect, the present invention provides a signal processing method, comprising:
in a video receiver:
receiving a three-dimensional video stream comprising a plurality of video frames and corresponding coding information;
extracting the coding information from the received three-dimensional video stream; and
performing frame rate up-conversion on the received plurality of video frames using the extracted coding information.
Preferably, the extracted coding information comprises one or more of block motion vectors, block coding modes, quantization levels and/or quantized residual data.
Preferably, the coding information is generated at a video transmitter by entropy decoding compressed three-dimensional video from a video source, wherein the video source is one of a cable television network, an IPTV network, a satellite broadcast network, a mobile communications network, a video camera and/or a camera.
Preferably, the received plurality of video frames comprise a plurality of decoded video frames constructed at the video transmitter by decompressing the compressed three-dimensional video from the video source.
Preferably, the method further comprises:
generating, based on the extracted coding information, pixel motion vectors for each of the received plurality of decoded video frames; and
calculating motion vector confidence and/or motion vector consistency for the corresponding generated pixel motion vectors.
Preferably, the method further comprises generating a plurality of interpolated video frames from the received plurality of decoded video frames based on the generated pixel motion vectors and the calculated motion vector confidence and/or motion vector consistency.
Preferably, the received three-dimensional video stream comprises compressed three-dimensional video.
Preferably, the method further comprises decompressing the received compressed three-dimensional video into a plurality of decoded video frames.
Preferably, the method further comprises:
generating, based on the extracted coding information, pixel motion vectors for each of the plurality of decoded video frames; and
calculating motion vector confidence and/or motion vector consistency for the corresponding generated pixel motion vectors.
Preferably, the method further comprises generating a plurality of interpolated video frames from the plurality of decoded video frames based on the generated pixel motion vectors and the calculated motion vector confidence and/or motion vector consistency.
According to another aspect, the present invention provides a signal processing system, comprising:
one or more circuits for use in a video receiver, wherein the one or more circuits are operable to receive a three-dimensional video stream comprising a plurality of video frames and corresponding coding information;
the one or more circuits extract the coding information from the received three-dimensional video stream; and
the one or more circuits perform frame rate up-conversion on the received plurality of video frames using the extracted coding information.
Preferably, the extracted coding information comprises one or more of block motion vectors, block coding modes, quantization levels and/or quantized residual data.
Preferably, the coding information is generated at a video transmitter by entropy decoding compressed three-dimensional video from a video source, wherein the video source is one of a cable television network, an IPTV network, a satellite broadcast network, a mobile communications network, a video camera and/or a camera.
Preferably, the received plurality of video frames comprise a plurality of decoded video frames constructed at the video transmitter by decompressing the compressed three-dimensional video from the video source.
Preferably, the one or more circuits generate, based on the extracted coding information, pixel motion vectors for each of the received plurality of decoded video frames; and
the one or more circuits calculate motion vector confidence and/or motion vector consistency for the corresponding generated pixel motion vectors.
Preferably, the one or more circuits generate a plurality of interpolated video frames from the received plurality of decoded video frames based on the generated pixel motion vectors and the calculated motion vector confidence and/or motion vector consistency.
Preferably, the received three-dimensional video stream comprises compressed three-dimensional video.
Preferably, the one or more circuits decompress the received compressed three-dimensional video into a plurality of decoded video frames.
Preferably, the one or more circuits generate, based on the extracted coding information, pixel motion vectors for each of the plurality of decoded video frames; and
the one or more circuits calculate motion vector confidence and/or motion vector consistency for the corresponding generated pixel motion vectors.
Preferably, the one or more circuits generate a plurality of interpolated video frames from the plurality of decoded video frames based on the generated pixel motion vectors and the calculated motion vector confidence and/or motion vector consistency.
These and other advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
Brief description of the drawings
Fig. 1 is a block diagram of an exemplary WirelessHD system according to an embodiment of the invention, which transmits a video stream from a WirelessHD transmitter to a WirelessHD receiver over a WirelessHD transmission link;
Fig. 2 is a schematic diagram of an exemplary WirelessHD transmitter according to an embodiment of the invention, which transmits an uncompressed (decompressed) video stream over the WirelessHD transmission link;
Fig. 3 is a schematic diagram of an exemplary decompression engine according to an embodiment of the invention, which handles video decompression for the wireless transmitter;
Fig. 4 is a schematic diagram of an exemplary WirelessHD receiver according to an embodiment of the invention, which receives an uncompressed video stream over the WirelessHD transmission link;
Fig. 5 is a schematic diagram of an exemplary frame rate up-conversion engine according to an embodiment of the invention, which is used by the WirelessHD receiver for motion compensated interpolation;
Fig. 6 is a schematic diagram of inserting an exemplary interpolated video frame between two reference video frames according to an embodiment of the invention;
Fig. 7 is a schematic diagram of exemplary motion vectors for an interpolated video frame according to an embodiment of the invention;
Fig. 8 is a schematic diagram of an exemplary WirelessHD transmitter according to an embodiment of the invention, which transmits a compressed video stream over the WirelessHD transmission link;
Fig. 9 is a schematic diagram of an exemplary WirelessHD receiver according to an embodiment of the invention, which receives a compressed video stream over the WirelessHD transmission link;
Fig. 10 is a flow chart of exemplary steps for motion compensated frame rate up-conversion of compressed and uncompressed video streams using WirelessHD according to an embodiment of the invention;
Fig. 11 is a flow chart of exemplary steps for video decompression according to an embodiment of the invention;
Fig. 12 is a flow chart of exemplary steps of the motion compensated frame rate up-conversion performed by the WirelessHD receiver on compressed and uncompressed video streams according to an embodiment of the invention.
Detailed description
Certain embodiments of the present invention relate to a method and/or system for motion compensated frame rate up-conversion of compressed and uncompressed video streams. Various embodiments of the invention may comprise a video receiver, for example a WirelessHD receiver, operable to receive a video stream (for example a three-dimensional video stream) from a video transmitter (for example a WirelessHD transmitter). The received three-dimensional video stream may comprise coding information and a plurality of video frames intended for display. The coding information may be extracted and used to perform frame rate up-conversion on the received video frames for display. The coding information, for example block motion vectors, block coding modes, quantization levels and/or quantized residual data, may be generated by the WirelessHD transmitter by entropy decoding compressed three-dimensional video from a video source, for example an IPTV network or a satellite broadcast network. The received three-dimensional video stream may be either uncompressed or compressed. When a plurality of decoded video frames are received, the WirelessHD receiver may use the corresponding extracted coding information and associated measures, for example motion vector confidence and/or motion vector consistency, to generate a plurality of interpolated video frames for the received decoded video frames.
When compressed video is received, for example MPEG-2 or MPEG-4, the WirelessHD receiver decompresses the received compressed three-dimensional video into a plurality of decoded video frames. The decompression is performed before frame rate up-conversion. The WirelessHD receiver may then perform the frame rate up-conversion described above on the resulting decoded video frames.
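The receiver-side decision just described can be summarized by the following Python sketch. It is purely illustrative: the function names and the stream fields (is_compressed, payload, coding_info) are placeholders rather than elements of the embodiments, and the decompression and up-conversion routines are supplied by the caller.

```python
def process_received_stream(stream, decompress, frame_rate_up_convert):
    """Return display frames at the up-converted frame rate.

    `stream` is assumed to carry the received frames together with the
    coding information extracted at the transmitter; `decompress` and
    `frame_rate_up_convert` are supplied by the receiver implementation.
    """
    coding_info = stream.coding_info
    if stream.is_compressed:
        # Compressed case (e.g. MPEG-2/MPEG-4): decompress before up-conversion.
        reference_frames = decompress(stream.payload)
    else:
        # Uncompressed case: decoded frames were already constructed at the transmitter.
        reference_frames = stream.payload
    # Motion compensated frame rate up-conversion using the extracted coding
    # information (block motion vectors, coding modes, quantization levels,
    # quantized residual data) and the derived confidence/consistency measures.
    return frame_rate_up_convert(reference_frames, coding_info)
```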
Fig. 1 is a block diagram of an exemplary WirelessHD system according to an embodiment of the invention, which transmits a video stream from a WirelessHD transmitter to a WirelessHD receiver over a WirelessHD transmission link. Referring to Fig. 1, a WirelessHD system 100 is shown.
The WirelessHD system 100 comprises a video source 110, a WirelessHD transmitter 120, an antenna 122, a WirelessHD transmission link 130, a WirelessHD receiver 140, an antenna 142 and a display device 150. The video source 110 comprises a cable television network 111, an IPTV network 112, a satellite broadcast network 113, a mobile communications network 114, a camera 115 and/or a video camera 116. The WirelessHD system 100 may transmit high definition audio and video over a wireless link, for example the WirelessHD transmission link 130. The WirelessHD system 100 may support various industry standards, for example the WirelessHD standard and/or the Wireless Home Digital Interface (WHDI) standard. The WirelessHD system 100 may support various three-dimensional services, for example stereoscopic 3D programs and stereoscopic 3D graphics objects.
The video source 110 comprises suitable logic, circuitry and/or code operable to provide a compressed video stream with a low frame rate to the WirelessHD transmitter 120. The compressed video stream may comprise a compressed two-dimensional video stream and/or a compressed three-dimensional video stream. The compressed video stream may be generated using various video compression algorithms, for example MPEG-2, MPEG-4, MPEG-4/AVC, VC1, VP6 and/or other compression algorithms defined in video formats supporting forward, backward and/or bi-directional predictive coding. The compressed video stream may be provided by a direct video source, for example the camera 115 and/or the video camera 116, or by various indirect video sources, for example the cable television network 111, the IPTV network 112, the satellite broadcast network 113 and/or the mobile communications network 114.
The antenna 122 comprises suitable logic, circuitry and/or code operable to transmit signals in a radio frequency band. The transmitted signals may carry uncompressed video data and/or compressed video data destined for the WirelessHD receiver 140. Although a single antenna 122 is shown in Fig. 1, the invention is not limited in this regard. One or more antennas may be used to transmit from the WirelessHD transmitter 120 to the WirelessHD receiver 140 in a radio frequency band without departing from the spirit and scope of the present invention.
The WirelessHD transmitter 120 comprises suitable logic, circuitry and/or code operable to exchange various data, for example compressed video data and/or uncompressed video data, with the WirelessHD receiver 140 over the WirelessHD transmission link 130. The WirelessHD transmitter 120 may thus exchange compressed and/or uncompressed two-dimensional video streams and/or three-dimensional video streams with the WirelessHD receiver 140 over the WirelessHD transmission link 130. The WirelessHD transmitter 120 may receive a compressed video stream with a low frame rate from the video source 110 and transmit the received compressed video stream to the WirelessHD receiver 140 over the WirelessHD transmission link 130. In one embodiment, the WirelessHD transmitter 120 communicates with the WirelessHD receiver 140 to determine which video formats the WirelessHD receiver 140 supports, for example uncompressed, MPEG-2, MPEG-4, VC1 and/or VP6. The WirelessHD transmitter 120 then transmits the received video content, whether uncompressed or compressed, in the determined video format.
When the WirelessHD transmitter 120 transmits an uncompressed video stream to the WirelessHD receiver 140, the WirelessHD transmitter 120 first decompresses the compressed video stream received from the video source 110 and then transmits the decompressed video stream to the WirelessHD receiver 140 over the WirelessHD transmission link 130. In another embodiment, the WirelessHD transmitter 120 extracts coding information from the compressed video stream received from the video source 110 by entropy decoding. The extracted coding information may comprise block motion vectors, block coding modes, quantization levels and/or quantized residual data associated with the received compressed video stream. The extracted coding information may be formatted or reformatted using the determined video format and transmitted, together with the received compressed video stream or the uncompressed video stream, to the WirelessHD receiver 140 over the WirelessHD transmission link 130.
The WirelessHD transmission link 130 comprises suitable logic, circuitry and/or code operable to support the transmission of WirelessHD signals. The WirelessHD transmission link 130 may carry high definition signals according to the WirelessHD standard, which is specified over a continuous bandwidth of about 7 GHz around the 60 GHz radio frequency. WirelessHD may be used for uncompressed digital transmission of combined full HD video, audio and data signals, and is substantially a wireless equivalent of the High-Definition Multimedia Interface (HDMI), a compact audio/video interface for transmitting uncompressed digital data. The WirelessHD transmission link 130 may therefore carry both uncompressed video streams and compressed video streams between the WirelessHD transmitter 120 and the WirelessHD receiver 140. The WirelessHD transmission link 130 may handle transmission data rates of up to, for example, 25 Gbit/s, so that the video stream may be scaled to higher resolutions, color depths and/or ranges. The WirelessHD transmission link 130 may also carry 3D signals according to, for example, 3DTV technology, in order to support various 3D services, for example large-scale 3D programming.
The antenna 142 comprises suitable logic, circuitry and/or code operable to receive signals in a radio frequency band. The received video signals may comprise uncompressed and/or compressed video streams from the WirelessHD transmitter 120. Although a single antenna 142 is shown in Fig. 1, the invention is not limited in this regard. The WirelessHD receiver 140 may use one or more antennas to receive signals from the WirelessHD transmitter 120 in a radio frequency band without departing from the spirit and scope of the present invention.
The WirelessHD receiver 140 comprises suitable logic, circuitry and/or code operable to receive various data, for example compressed video streams and/or uncompressed video streams, from the WirelessHD transmitter 120 via the WirelessHD transmission link 130 through the antenna 142. The WirelessHD receiver 140 may thus receive compressed and/or uncompressed two-dimensional video streams and/or three-dimensional video streams from the WirelessHD transmitter 120. In one embodiment of the invention, the WirelessHD receiver 140 communicates with the WirelessHD transmitter 120 to establish the supported video format, for example uncompressed, MPEG-2, MPEG-4, VC1 and/or VP6. The WirelessHD receiver 140 may then receive an uncompressed video stream or a compressed video stream according to the video format determined by the WirelessHD transmitter 120. When an uncompressed video stream is received from the WirelessHD transmitter 120, the WirelessHD receiver 140 may extract coding information from the received uncompressed video stream. The extracted coding information may comprise block motion vectors, block coding modes, quantization levels and/or quantized residual data associated with the original compressed video stream from which the received uncompressed video stream was derived. The extracted coding information may be used at the WirelessHD receiver 140 to perform frame rate up-conversion, during which the WirelessHD receiver 140 inserts one or more intermediate (interpolated) video frames for each uncompressed video frame in the received uncompressed video stream.
The WirelessHD receiver 140 may transmit the interpolated video frames to the display device 150 through, for example, an HDMI interface and/or a DisplayPort (DP) interface, so that the interpolated video frames can be presented to the user. When a compressed video stream is received from the WirelessHD transmitter 120, the WirelessHD receiver 140 extracts the coding information by entropy decoding the received compressed video stream. The WirelessHD receiver 140 may decompress the received compressed video stream into a sequence of decoded video frames and use that sequence as reference frames, taking the extracted coding information, for example the block motion vectors, into account to perform frame rate up-conversion. During the frame rate up-conversion process, the WirelessHD receiver 140 may insert one or more intermediate (interpolated) video frames for each decoded video frame. The interpolated video frames may be transmitted to the display device 150 through, for example, HDMI and/or DP, so that they can be presented to the user.
The display device 150 comprises suitable logic, circuitry and/or code operable to present the video frames received from the WirelessHD receiver 140 to the user. The display device 150 may thus present three-dimensional images to the viewer, and may communicate with the WirelessHD receiver 140 using various interfaces, for example HDMI, Ethernet and/or DP.
Although the WirelessHD system 100 is shown in Fig. 1, the invention is not limited in this regard. The WirelessHD transmitter 120 and the WirelessHD receiver 140 may support 3DTV technology, and may support wireless or wired communication, without departing from the spirit and scope of the present invention. The supported wireless or wired communication may be high definition or standard definition, for 2D and/or 3D services.
In operation, the WirelessHD transmitter 120 may receive a compressed video stream from the video source 110 through the antenna 122 and extract coding information from the received video stream. The extracted coding information may comprise, for example, block motion vectors, block coding modes, quantization levels and/or quantized residual data associated with the received compressed video stream. The WirelessHD transmitter 120 may communicate with the target receiver, for example the WirelessHD receiver 140, over the WirelessHD transmission link 130 to determine the video format used for the video transmission, for example uncompressed, MPEG-2 and/or MPEG-4. The extracted coding information and the received video stream may be formatted or reformatted using the determined video format and transmitted together to the WirelessHD receiver 140. The WirelessHD receiver 140 may extract the coding information from the received video stream, whether uncompressed or compressed, and use it to perform frame rate up-conversion. If the received video stream is compressed, the WirelessHD receiver 140 performs video decompression to construct a sequence of decoded video frames before frame rate up-conversion. During the frame rate up-conversion process, the WirelessHD receiver 140 may insert one or more interpolated video frames for each uncompressed video frame or each constructed decoded video frame. The interpolated video frames may be transmitted to the display device 150 through, for example, HDMI, Ethernet and/or DP, to be presented to the user.
Fig. 2 is a schematic diagram of an exemplary WirelessHD transmitter according to an embodiment of the invention, which transmits an uncompressed video stream over the WirelessHD transmission link. The WirelessHD transmitter 200 comprises a decompression engine 210, a processor 220 and a memory 230.
The decompression engine 210 comprises suitable logic, circuitry and/or code operable to decompress the compressed video stream received from the video source 110 and to generate/construct decoded video frames. The decompression engine 210 may perform various video decompression operations, for example entropy decoding, inverse quantization, inverse transformation and motion compensated prediction, and may provide coding information such as block motion vectors, block coding modes, quantization levels and/or quantized residual data. The coding information provided by the decompression engine 210 may be used by the target receiver, for example the WirelessHD receiver 140, to perform frame rate up-conversion.
The processor 220 comprises suitable logic, circuitry and/or code operable to receive a compressed video stream from the video source 110. The received compressed video stream may comprise a compressed two-dimensional video stream and/or a compressed three-dimensional video stream. The processor 220 may pass the received compressed video stream to the decompression engine 210 for various video decoding and/or decompression operations, for example entropy decoding, inverse quantization, inverse transformation and motion compensated prediction. The decoded video frames provided by the decompression engine 210 are transmitted, together with the extracted coding information, to the target receiver, for example the WirelessHD receiver 140. The processor 220 may communicate with the memory 230 to provide the decompression engine 210 with various video decoding algorithms for the corresponding decoding operations, and may communicate with the WirelessHD receiver 140 to determine the video format supported for the video transmission. The determined video format is used by the processor 220 to format the decoded video frames and the extracted coding information to be transmitted to the WirelessHD receiver 140.
The memory 230 comprises suitable logic, circuitry and/or code operable to store information, for example the executable instructions and data used by the processor 220 and the decompression engine 210. The executable instructions may comprise the decoding algorithms used by the decompression engine 210 for various decoding operations. The data may comprise decoded video frames and extracted coding information, for example block motion vectors, block coding modes, quantization levels and quantized residual data. The memory 230 may comprise RAM, ROM, low-latency nonvolatile memory such as flash memory, and/or other suitable electronic data storage.
In operation, the processor 220 receives a compressed video stream with a low frame rate from a video source, for example the IPTV network 112, and passes the received compressed video stream to the decompression engine 210 for various video decoding or decompression operations, for example entropy decoding, inverse quantization, inverse transformation and motion compensated prediction. The decompression engine 210 may provide the processor 220 with decoded video frames and associated coding information (for example block motion vectors, block coding modes, quantization levels and quantized residual data), using the various video decoding algorithms stored in the memory 230 to perform the corresponding video processing operations. The processor 220 combines the decoded video frames and the extracted coding information in a determined format suited to the target receiver, for example the WirelessHD receiver 140.
Fig. 3 is a schematic diagram of an exemplary decompression engine according to an embodiment of the invention, which handles video decompression for the wireless transmitter. Referring to Fig. 3, a decompression engine 300 is shown. The decompression engine 300 comprises an entropy decoding unit 310, an inverse quantization unit 320, an inverse transform unit 330, a combiner 340 and a motion compensated prediction unit 350.
The entropy decoding unit 310 comprises suitable logic, circuitry and/or code operable to decode entropy-coded data. The entropy decoding unit 310 may convert the binary bits of the entropy-coded data into symbols (quantized residual data), which may be provided or passed on to subsequent decoder modules, for example the inverse quantization unit 320 and the inverse transform unit 330, to obtain decoded video frames. The conversion from binary bits to symbols may be performed in various ways; in MPEG, for example, entropy decoding may be implemented using variable length decoding (VLD) followed by run-length decoding (RLD). The entropy-coded data may be the compressed video stream received from the video source 110, comprising a compressed two-dimensional video stream and/or a compressed three-dimensional video stream. The entropy decoding unit 310 may thus extract coding information, for example block motion vectors, from the received compressed video stream. The extracted coding information may comprise, for example, block motion vectors, block coding modes, quantization levels and/or quantized residual data associated with the received compressed video stream.
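As a toy illustration of the run-length decoding step mentioned above (not the actual MPEG bitstream syntax), the following sketch expands (run, level) symbols produced by variable length decoding into a block of 64 quantized transform coefficients:

```python
def run_length_decode(symbols, block_size=64):
    """Expand (run_of_zeros, level) pairs into a flat coefficient block."""
    coeffs = [0] * block_size
    pos = 0
    for run, level in symbols:
        pos += run              # skip `run` zero coefficients
        coeffs[pos] = level     # place the nonzero level
        pos += 1
    return coeffs

# Example: three nonzero coefficients among 64, the rest are zero.
print(run_length_decode([(0, 12), (3, -4), (10, 2)])[:16])
```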
The inverse quantization unit 320 may comprise suitable logic, circuitry and/or code operable to scale and/or rescale the quantized residual data of the decoded video frames from the entropy decoding unit 310 in order to reconstruct video frames in which each color is associated with its closest representative in a finite set of colors. The inverse quantization unit 320 may be used, for example, to reduce perceptible distortion in the reconstructed image.
The inverse transform unit 330 may comprise suitable logic, circuitry, interfaces and/or code operable to combine standard basis patterns to form residual macroblocks for each inverse-quantized video frame from the inverse quantization unit 320.
The motion compensated prediction (MCP) unit 350 may comprise suitable logic, circuitry and/or code operable to provide predictions for macroblocks in the uncompressed video frames, for example to a two-dimensional video encoder used to compress a two-dimensional video stream, or to a three-dimensional video encoder used to compress a three-dimensional video stream. The pixel intensities of a macroblock in the current frame may be predicted, based on a motion model, from the pixel intensities of macroblocks in preceding and/or following reference frames. The difference between the predicted pixel intensities and the actual current pixel intensities may be regarded as the prediction error. This prediction error may be passed to the combiner 340 and used to reconstruct the corresponding uncompressed macroblocks in the current frame.
The combiner 340 may comprise suitable logic, circuitry and/or code operable to merge the residual macroblocks from the inverse transform unit 330 with the corresponding prediction error information from the motion compensated prediction unit 350 to obtain reconstructed uncompressed macroblocks.
In operation, the entropy decoding unit 310 may receive a compressed video stream from the video source 110 and convert the binary bits of the received compressed video stream into quantized video residual data. The quantized residual data may be provided to the inverse quantization unit 320, which rescales it to reconstruct video frames with a finite set of colors. The reconstructed video frames are passed to the inverse transform unit 330, which applies an inverse transform to obtain residual video frames comprising a plurality of residual macroblocks. The residual macroblocks in a residual image may be obtained by comparing the macroblocks in the reconstructed video frame against one or more standard basis patterns. The residual video frames are merged in the combiner 340 with the prediction error from the motion compensated prediction unit 350 to obtain reconstructed decoded/uncompressed video frames.
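A compact numpy sketch of this reconstruction path is shown below: the quantized residual coefficients are rescaled (inverse quantization), an inverse 2-D DCT is applied (inverse transform), and the motion compensated prediction block is added back to the residual in the combiner step. The uniform quantizer and the plain 8x8 DCT are simplifying assumptions made for illustration, not the codec-specific transforms of any particular standard.

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix for an 8x8 block.
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

def reconstruct_block(quantized_coeffs, quant_level, prediction):
    residual_coeffs = quantized_coeffs * quant_level   # inverse quantization (uniform scaling)
    residual = C.T @ residual_coeffs @ C               # inverse 2-D transform
    return prediction + residual                       # add motion compensated prediction

# Example: a DC-only residual added to a flat prediction block.
coeffs = np.zeros((N, N)); coeffs[0, 0] = 3.0
print(reconstruct_block(coeffs, quant_level=8, prediction=np.full((N, N), 100.0))[0, :3])
```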
Fig. 4 is a schematic diagram of an exemplary WirelessHD receiver according to an embodiment of the invention, which receives an uncompressed video stream over the WirelessHD transmission link. Referring to Fig. 4, a WirelessHD receiver 400 is shown. The WirelessHD receiver 400 comprises a frame rate up-conversion engine 410, a processor 420 and a memory 430.
The frame rate up-conversion engine 410 comprises suitable logic, circuitry, interfaces and/or code operable to up-convert the frame rate so as to preserve the high image quality obtained from high-quality video sources, for example digital camera images, camcorder video and/or telecine-converted material. The frame rate up-conversion engine 410 performs frame rate up-conversion using the coding information extracted from the uncompressed video stream, regardless of whether the WirelessHD receiver 400 receives an uncompressed two-dimensional video stream or an uncompressed three-dimensional video stream. The extracted coding information comprises, for example, block motion vectors, block coding modes, quantization levels and/or quantized residual data associated with the original compressed video stream from which the received uncompressed video stream was derived. The frame rate up-conversion engine 410 may use various frame rate up-conversion algorithms (for example frame repetition and linear interpolation through temporal filtering) to construct interpolated video frames at a higher frame rate, for example for display on newer display screens such as the display device 150.
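Two of the simple up-conversion strategies just mentioned, frame repetition and linear interpolation through temporal filtering, can be sketched as follows. Motion compensated interpolation, discussed with Figs. 6 and 7, replaces the blend with motion-guided pixel fetches; the sketch below only covers the non-motion-compensated baselines, and the parameter names are illustrative.

```python
import numpy as np

def frame_repeat(p1, p2, k, n):
    """Insert a frame k time units after p1 by repeating the nearer reference."""
    return p1.copy() if k <= n - k else p2.copy()

def temporal_blend(p1, p2, k, n):
    """Linear interpolation: weight each reference by its temporal proximity."""
    w = k / float(n)                 # 0 < k < n, n = spacing between p1 and p2
    return (1.0 - w) * p1 + w * p2

p1 = np.zeros((2, 2)); p2 = np.full((2, 2), 90.0)
print(frame_repeat(p1, p2, k=1, n=3)[0, 0])   # 0.0 (p1 is the nearer reference)
print(temporal_blend(p1, p2, k=1, n=3))       # one third of the way toward p2
```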
The processor 420 comprises suitable logic, circuitry, interfaces and/or code operable to process the decoded video frames received from the WirelessHD transmitter 120. The processor 420 may pass the received decoded or uncompressed video frames to the frame rate up-conversion engine 410 to up-convert their frame rate. The processor 420 performs the video frame interpolation through the frame rate up-conversion engine 410, and the resulting interpolated video frames may be presented on the display device 150.
The memory 430 comprises suitable logic, circuitry and/or code operable to store information, for example the executable instructions and data used by the processor 420 and the frame rate up-conversion engine 410. The executable instructions comprise the frame rate up-conversion algorithms used by the frame rate up-conversion engine 410. The data comprise decoded video frames and extracted coding information, for example block motion vectors, block coding modes, quantization levels and quantized residual data, as well as the interpolated video frames constructed by the frame rate up-conversion engine 410 for display on the display device 150. The memory 430 may comprise RAM, ROM, low-latency nonvolatile memory such as flash memory, and/or other suitable electronic data storage.
In operation, the processor 420 receives a decoded or uncompressed video stream with a low frame rate from the WirelessHD transmitter 120 over the WirelessHD transmission link 130. The received decoded video stream may comprise a decoded two-dimensional video stream and/or a decoded three-dimensional video stream. The processor 420 passes the received decoded video frames to the frame rate up-conversion engine 410 to up-convert their frame rate. The processor 420 and the frame rate up-conversion engine 410 use the memory 430 to perform the frame rate up-conversion, which yields interpolated video frames. The processor 420 communicates with the display device 150 to present the resulting interpolated video frames to the user.
Fig. 5 is a schematic diagram of an exemplary frame rate up-conversion engine according to an embodiment of the invention, which is used by the WirelessHD receiver for motion compensated interpolation. Referring to Fig. 5, a digital noise reduction filter 510, a pixel motion vector generator 520, a motion vector estimator 530, a frame rate up-conversion device 540 and a scene change detector 550 are shown.
The digital noise reduction filter 510 comprises suitable logic, circuitry and/or code operable to perform noise reduction on the decoded video frames received from the WirelessHD transmitter 120. Performing noise reduction before any further processing is necessary to obtain better picture quality. The digital noise reduction filter 510 may use various noise reduction techniques (for example deblocking, deringing or other noise reduction filtering) on the received decoded video frames (reference pictures) before frame rate up-conversion is performed.
The pixel motion vector generator 520 comprises suitable logic, circuitry and/or code operable to generate pixel motion vectors. The pixel motion vectors may be generated from the block motion vectors extracted from the decoded video frames received from the WirelessHD transmitter 120. The pixel motion vector generator 520 refines the extracted block motion vectors and decomposes the refined block motion vectors into pixel motion vectors. The pixel motion vectors may be further scaled or rescaled for use in constructing the interpolated (inserted) video frames. The pixel motion vector generator 520 may pass the pixel motion vectors to the motion vector estimator 530 and the frame rate up-conversion device 540.
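A minimal sketch of turning block motion vectors into a dense pixel motion vector field is given below. The nearest-neighbour expansion and the single scale factor are assumptions made for the example; the description above only states that the block vectors are refined, decomposed into pixel vectors and scaled or rescaled.

```python
import numpy as np

def block_to_pixel_mv(block_mv, block_size=8, scale=0.5):
    """block_mv: (H/bs, W/bs, 2) array of (dy, dx) block vectors."""
    pixel_mv = np.repeat(np.repeat(block_mv, block_size, axis=0), block_size, axis=1)
    return pixel_mv * scale            # e.g. scale = k / n for the interpolated frame

block_mv = np.array([[[4.0, 2.0], [0.0, -6.0]]])   # one row of two blocks
print(block_to_pixel_mv(block_mv, block_size=2, scale=0.5).shape)  # (2, 4, 2)
```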
The motion vector estimator 530 may comprise suitable logic, circuitry, interfaces and/or code operable to estimate the motion vector confidence and/or motion vector consistency associated with the pixel motion vectors generated by the pixel motion vector generator 520. The motion vector confidence of the generated pixel motion vectors may be calculated in various ways, including using the quantized residual data and the associated quantization levels of the decoded video frames received from the WirelessHD transmitter 120; the quantized residual data and the associated quantization levels may be extracted from the received decoded video frames. A small quantization level with little residual data leads to higher motion vector confidence, while a higher quantization level with more residual data yields lower motion vector confidence. The motion vector consistency may be obtained by comparing neighboring block motion vectors and motion compensated block boundary pixel differences. The motion vector confidence and/or motion vector consistency may be used to generate a confidence-consistency metric, which may be used, for example, for motion jitter filtering.
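The two measures can be illustrated with the following sketch, under simple modelling assumptions: confidence falls as the quantization level and the quantized residual energy grow, and consistency falls as a block motion vector departs from its neighbours. The exact formulas are not specified above, so these expressions are illustrative only.

```python
import numpy as np

def motion_vector_confidence(quant_level, quantized_residual):
    residual_energy = float(np.sum(np.asarray(quantized_residual) ** 2))
    return 1.0 / (1.0 + quant_level * residual_energy)   # small q, small residual -> high confidence

def motion_vector_consistency(mv, neighbour_mvs):
    diffs = [np.linalg.norm(np.asarray(mv) - np.asarray(n)) for n in neighbour_mvs]
    return 1.0 / (1.0 + np.mean(diffs))                   # similar neighbours -> high consistency

print(motion_vector_confidence(2, [1, 0, 0, -1]))           # 0.2
print(motion_vector_consistency((4, 2), [(4, 2), (3, 2)]))  # about 0.67
```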
The frame rate up-conversion device 540 comprises suitable logic, circuitry, interfaces and/or code operable to up-convert the frame rate of the decoded video frames received from the WirelessHD transmitter 120. The frame rate up-conversion device 540 may use the coding information provided by the WirelessHD transmitter 120 to perform motion compensated frame rate up-conversion. The pixel motion vectors of the received decoded video frames and/or the associated motion vector confidence-consistency values may be used for the motion compensated frame rate up-conversion. The frame rate up-conversion device 540 uses the pixel motion vectors and the associated confidence-consistency metric to interpolate the received decoded video frames. For example, when the motion vector confidence is low, the frame rate up-conversion device 540 may interpolate the reference frames as still pictures, for example by frame repetition, whereas higher motion vector confidence may lead to interpolation that relies fully on the motion vectors. The interpolated video frames may be passed to the scene change detector 550.
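A per-pixel sketch of this fallback behaviour is shown below: when the confidence measure is low the nearest reference pixel is repeated, and when it is high the two motion compensated samples are averaged. The threshold and the hard switch between the two modes are assumptions made for illustration; an actual implementation may blend the two modes smoothly.

```python
import numpy as np

def interpolate_pixel(p1, p2, y, x, mv1, mv2, confidence, threshold=0.5):
    if confidence < threshold:
        return p1[y, x]                           # static (frame-repeat) fallback
    y1, x1 = y + int(mv1[0]), x + int(mv1[1])     # sample along MV1 into P1
    y2, x2 = y + int(mv2[0]), x + int(mv2[1])     # sample along MV2 into P2
    return 0.5 * (p1[y1, x1] + p2[y2, x2])

p1 = np.arange(16.0).reshape(4, 4); p2 = p1 + 40.0
print(interpolate_pixel(p1, p2, 1, 1, mv1=(0, 1), mv2=(0, -1), confidence=0.9))  # 25.0
```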
The scene change detector 550 comprises suitable logic, circuitry and/or code operable to detect scene changes within the received interpolated video frames. The scene change detector 550 may process the received interpolated video frames, for example through nonlinear filtering, to reduce artifacts in the final interpolated video frames. The scene change detector 550 takes the confidence-consistency metric into account to determine whether and when motion compensated interpolation may have failed. The scene change detector 550 may identify problem areas in the interpolated video frames and hide the identified problem areas in various ways, for example by applying nonlinear filtering to the final interpolated video frames.
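The detection and concealment ideas can be sketched as follows: a likely scene change is flagged from a large mean absolute difference between the reference frames, and flagged (low-confidence) pixels of an interpolated frame are replaced by the output of a nonlinear (median) filter. The 3x3 window and the thresholds are illustrative choices, not values taken from the embodiments.

```python
import numpy as np

def scene_change(p1, p2, threshold=30.0):
    return float(np.mean(np.abs(p1 - p2))) > threshold

def conceal(frame, problem_mask):
    out = frame.copy()
    h, w = frame.shape
    for y, x in zip(*np.nonzero(problem_mask)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        out[y, x] = np.median(frame[y0:y1, x0:x1])   # replace flagged pixel by local median
    return out

frame = np.zeros((3, 3)); frame[1, 1] = 255.0        # an isolated artifact pixel
print(scene_change(frame, frame + 100.0))            # True: large frame difference
print(conceal(frame, frame > 200)[1, 1])             # 0.0 after median concealment
```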
In operation, the WirelessHD receiver 140 receives decoded video frames from the WirelessHD transmitter 120. The received decoded video frames may be passed to the digital noise reduction filter 510 before any further processing. The digital noise reduction filter 510 may use various noise reduction techniques (for example deblocking, deringing or other noise reduction filtering) on the received decoded video frames. The filtered decoded video frames are passed to the pixel motion vector generator 520, the motion vector estimator 530, the frame rate up-conversion device 540 and the scene change detector 550, respectively, for further processing. The pixel motion vector generator 520 generates pixel motion vectors from the coding information, for example the block motion vectors, extracted from the filtered decoded video frames, and provides the generated pixel motion vectors to the motion vector estimator 530 and the frame rate up-conversion device 540, respectively.
The motion vector estimator 530 estimates the motion vector confidence and/or motion vector consistency of the generated pixel motion vectors and provides the confidence-consistency metric to the scene change detector 550. The frame rate up-conversion device 540 uses the pixel motion vectors generated by the pixel motion vector generator 520 to perform frame rate up-conversion on the filtered decoded video frames. The interpolated video frames obtained from the frame rate up-conversion device 540 may be passed to the scene change detector 550, which detects scene changes in the received interpolated video frames and processes them to reduce artifacts in the final interpolated video frames. The measures related to motion vector confidence and/or motion vector consistency are taken into account to identify problem areas in the received interpolated video frames; the problem areas may be hidden in various ways, for example by nonlinear filtering. The final interpolated video frames are transmitted to the display device 150.
Fig. 6 is a schematic diagram of inserting an exemplary interpolated video frame between two reference video frames according to an embodiment of the invention. Referring to Fig. 6, a plurality of decoded video frames (reference video frames), for example P1 602 and P2 604, and the position of an interpolated video frame 606 are shown. For example, the interpolated video frame 606 may be interpolated at a position k time units after the decoded video frame P1 602.
Fig. 7 is a diagram illustrating exemplary motion vectors of an interpolated video frame, in accordance with an embodiment of the invention. Referring to Fig. 7, there are shown a plurality of decoded video frames, for example P1 702 and P2 704, and an interpolated video frame 706. For example, the interpolated video frame 706 may be interpolated at a position k time units after the decoded video frame P1 702.
The motion vector 708 points from a region in the previous video frame P1 702 to a region in the next video frame P2 704; in this way, the motion vector 708 captures the motion that occurs between the two original video frames P1 702 and P2 704. The motion vector 709 is a shifted version of the motion vector 708. The motion vector 709 may be shifted so that it is aligned with the interpolated video frame 706.
The motion vector 709 may be split into two motion vectors, for example motion vectors MV1 709a and MV2 709b. The motion vectors MV1 709a and MV2 709b may be scaled for motion-compensated interpolation. The directions of the two adjusted motion vectors may be opposite to each other. The length of each adjusted motion vector may be proportional to the time difference between the interpolated video frame 706 and the corresponding original video frame, for example the video frame P1 702.
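A minimal sketch of this split-and-scale step is shown below; the sign convention (MV1 pointing back toward P1, MV2 forward toward P2) is one common choice and is assumed here rather than fixed by the disclosure.

```python
def split_motion_vector(mv, k, frame_distance=1.0):
    """Split a P1->P2 motion vector into two vectors anchored at the interpolated frame.

    mv             : (dx, dy) motion from frame P1 to frame P2
    k              : temporal position of the interpolated frame after P1 (0 < k < frame_distance)
    frame_distance : temporal distance between P1 and P2

    Returns (mv1, mv2): mv1 points back toward P1, mv2 points forward toward P2; their
    directions are opposite and their lengths proportional to the respective temporal gaps.
    """
    dx, dy = mv
    a = k / frame_distance                     # fraction of the interval already elapsed
    mv1 = (-a * dx, -a * dy)                   # toward P1, scaled by k
    mv2 = ((1.0 - a) * dx, (1.0 - a) * dy)     # toward P2, scaled by the remaining gap
    return mv1, mv2
```

For instance, with mv = (4, -2) and k = 0.5 the sketch yields MV1 = (-2.0, 1.0) and MV2 = (2.0, -1.0): two opposite vectors of equal length, as expected for an interpolated frame midway between P1 and P2.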
Fig. 8 is a diagram of an exemplary WirelessHD transmitter operable to transmit a compressed video stream over a WirelessHD transmission link, in accordance with an embodiment of the invention. Referring to Fig. 8, there is shown a WirelessHD transmitter 800. The WirelessHD transmitter 800 comprises an entropy decoding unit 810, a processor 820 and a memory 830.
The entropy decoding unit 810 comprises suitable logic, circuitry, interfaces and/or code operable to decode entropy-coded data. The entropy decoding unit 810 may operate in the same manner as the entropy decoding unit 310 described in connection with Fig. 3. The entropy decoding unit 810 provides coded information extracted from the compressed video stream received from the video source 110. The received video stream may comprise a compressed two-dimensional video stream and/or a compressed three-dimensional video stream. The extracted coded information comprises, for example, block motion vectors, block coding modes, quantization levels and/or quantized residual data. The coding modes comprise information such as inter-block or intra-block coding and block size. The extracted coded information is passed to the processor 820.
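As a purely illustrative data structure, the per-block side information listed above could be carried in a container like the following; the field names and types are assumptions for the sketch, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BlockCodingInfo:
    """Per-block side information extracted by entropy decoding (names are illustrative)."""
    motion_vector: Tuple[int, int]        # block motion vector (dx, dy)
    coding_mode: str                      # e.g. 'inter' or 'intra'
    block_size: Tuple[int, int]           # e.g. (16, 16)
    quant_level: int                      # quantization level
    quant_residual: List[int] = field(default_factory=list)  # quantized residual coefficients
```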
The processor 820 comprises suitable logic, circuitry, interfaces and/or code operable to process the compressed video stream received from the video source 110. The processor 820 inserts the extracted coded information (for example block motion vectors, block coding modes, quantization levels and/or quantized residual data) into the received compressed video stream for transmission over the WirelessHD transmission link 130 to a target video receiver, for example the WirelessHD receiver 140. The processor 820 may communicate with the memory 830 to provide the various video decoding algorithms used by the entropy decoding unit 810. The processor 820 may communicate with the WirelessHD receiver 140 to determine the video formats supported for the corresponding video transmission. The determined video format is used by the processor 820 to transmit the compressed video stream together with the inserted coded information to the WirelessHD receiver 140.
The memory 830 comprises suitable logic, circuitry, interfaces and/or code operable to store information, such as executable instructions and data used by the processor 820 and the entropy decoding unit 810. The executable instructions may comprise the video decoding algorithms used by the entropy decoding unit 810 for the various decoding operations. The data may comprise the received compressed video stream and the extracted coded information. The memory 830 may comprise RAM, ROM, low-latency non-volatile memory such as flash memory, and/or other suitable electronic data storage.
In operation, the processor 820 receives a compressed video stream at a low frame rate from a video source, for example the Internet Protocol Television network 112. The processor 820 may pass the received compressed video stream to the entropy decoding unit 810 for entropy decoding. The entropy decoding unit 810 may provide the coded information (for example block motion vectors, block coding modes, quantization levels and quantized residual data) to the processor 820. The processor 820 inserts the extracted coded information into the received compressed video stream and transmits it to the WirelessHD receiver 140 in a supported format.
Fig. 9 is a diagram of an exemplary WirelessHD receiver operable to receive a compressed video stream over a WirelessHD transmission link, in accordance with an embodiment of the invention. Referring to Fig. 9, there is shown a WirelessHD receiver 900. The WirelessHD receiver 900 comprises a decompression engine 910, a frame rate up-conversion engine 920, a processor 930 and a memory 940.
The decompression engine 910 comprises suitable logic, circuitry, interfaces and/or code operable to decompress the compressed video stream received from the WirelessHD transmitter 120, whether it is a compressed two-dimensional video stream or a compressed three-dimensional video stream, and to generate decoded video frames. The decompression engine 910 may perform various video decoding/decompression operations, for example entropy decoding, inverse quantization, inverse transform and motion-compensated prediction. The decompression engine 910 may provide the decoded video frames to the frame rate up-conversion engine 920 for further video processing.
The frame rate up-conversion engine 920 comprises suitable logic, circuitry and/or code operable to up-convert the frame rate so as to deliver the high picture quality available from high-quality video sources, such as digital still camera images, camcorder video and/or telecine-converted material. To this end, the frame rate up-conversion engine 920 extracts coded information, for example block motion vectors, block coding modes, quantization levels and/or quantized residual data, from the compressed video stream received from the WirelessHD transmitter 120. The extracted coded information may be used to perform frame rate up-conversion on the received compressed video stream. The frame rate up-conversion engine 920 may use various frame rate up-conversion algorithms, for example frame repetition and linear interpolation by temporal filtering, to construct interpolated video frames at a higher frame rate for display on the display device 150.
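The two baseline algorithms named above (frame repetition and linear temporal interpolation) can be sketched as follows for a simple 2x up-conversion; this is an illustration under the assumption of equally spaced frames stored as NumPy arrays, not the engine's actual implementation.

```python
import numpy as np

def upconvert_2x(frames, method='linear'):
    """Double the frame rate by inserting one frame between each pair of decoded frames.

    frames : list of numpy arrays of identical shape (e.g. luma planes)
    method : 'repeat' for frame repetition, 'linear' for temporal averaging
    """
    out = []
    for cur, nxt in zip(frames[:-1], frames[1:]):
        out.append(cur)
        if method == 'repeat':
            out.append(cur.copy())                                  # frame repetition
        else:
            mid = 0.5 * (cur.astype(np.float32) + nxt.astype(np.float32))
            out.append(mid.astype(cur.dtype))                       # linear temporal interpolation
    out.append(frames[-1])
    return out
```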
The processor 930 comprises suitable logic, circuitry, interfaces and/or code operable to process the video stream received from the WirelessHD transmitter 120. The processor 930 may pass the received compressed video stream to the decompression engine 910 to obtain the decoded video frames corresponding to the received compressed video stream. These decoded video frames may serve as reference video frames in the frame rate up-conversion that produces the final interpolated video frames. The processor 930 passes the final interpolated video frames to the display device 150 for display.
The memory 940 comprises suitable logic, circuitry, interfaces and/or code operable to store information, such as executable instructions and data used by the processor 930, the frame rate up-conversion engine 920 and/or the decompression engine 910. The executable instructions comprise various video processing algorithms, for example the video decompression algorithms and frame rate up-conversion algorithms used by the decompression engine 910 and the frame rate up-conversion engine 920, respectively. The data comprise the compressed video stream received from the WirelessHD transmitter 120, the coded information extracted from the received compressed video stream, the decoded video frames and/or the interpolated video frames. The extracted coded information comprises, for example, block motion vectors, block coding modes, quantization levels and quantized residual data to be used by the frame rate up-conversion engine 920. The memory 940 may comprise RAM, ROM, low-latency non-volatile memory such as flash memory, and/or other suitable electronic data storage.
In operation, the processor 930 receives a compressed video stream from the WirelessHD transmitter 120. The processor 930 passes the received compressed video stream to the decompression engine 910 to obtain the corresponding decoded video frames. These decoded video frames are interpolated by the frame rate up-conversion engine 920 to produce interpolated video frames. To this end, the frame rate up-conversion engine 920 performs frame rate up-conversion using the coded information extracted from the received compressed video stream. The resulting interpolated video frames constructed by the frame rate up-conversion engine 920 are passed to the processor 930 for display on the display device 150.
Fig. 10 is a flow chart of a method for motion-compensated frame rate up-conversion of compressed and uncompressed video streams using WirelessHD, in accordance with an embodiment of the invention. The exemplary method starts at step 1002, in which the WirelessHD transmitter 120 receives a compressed video stream from the video source 110. The received compressed video stream may be a compressed two-dimensional video stream and/or a compressed three-dimensional video stream. In step 1004, the WirelessHD transmitter 120 extracts coded information from the received compressed video stream by performing entropy decoding. The coded information comprises, for example, block motion vectors, block coding modes, quantization levels and/or quantized residual data. In step 1006, the WirelessHD transmitter 120 obtains information such as the video formats supported by the target receiver, for example the WirelessHD receiver 140.
In step 1008, the WirelessHD receiver 140 provides video format information to the WirelessHD transmitter 120 for the video transmission. In step 1010, the WirelessHD transmitter 120 determines or selects a video format to be used for communication with the WirelessHD receiver 140. In step 1012, the WirelessHD transmitter 120 formats or re-formats the extracted coded information using the determined or selected video format. In step 1014, it is determined whether the WirelessHD transmitter 120 is to transmit an uncompressed video stream to the WirelessHD receiver 140. If the WirelessHD transmitter 120 is to transmit an uncompressed video stream to the WirelessHD receiver 140, then in step 1016 the WirelessHD transmitter 120 decodes/decompresses the received compressed video stream using the decompression engine 210 to generate the corresponding decoded video frames.
In step 1018, the WirelessHD transmitter 120 transmits the uncompressed video stream, comprising the decoded video frames and the formatted or re-formatted coded information, to the WirelessHD receiver 140. In step 1020, the WirelessHD receiver 140 receives the transmitted uncompressed video stream. The WirelessHD receiver 140 extracts the coded information from the received uncompressed video stream. In step 1022, the WirelessHD receiver 140 performs frame rate up-conversion on the received decoded video frames using the extracted coded information and constructs the final interpolated video frames. In step 1024, the WirelessHD receiver 140 passes the constructed final interpolated video frames to the display device 150 for display. The exemplary method then returns to step 1002.
If, in step 1014, the WirelessHD transmitter 120 is to transmit a compressed video stream to the WirelessHD receiver 140, then in step 1026 the WirelessHD transmitter 120 transmits the received compressed video stream and the formatted or re-formatted coded information to the WirelessHD receiver 140. In step 1028, the WirelessHD receiver 140 extracts the coded information from the compressed video stream received from the WirelessHD transmitter 120. In step 1030, the WirelessHD receiver 140 decompresses the compressed video stream received from the WirelessHD transmitter 120 to generate the corresponding decoded video frames. The exemplary method then proceeds to step 1022.
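Purely as a structural sketch of the Fig. 10 flow, the two branches can be summarized as follows; every method called on the `tx` and `rx` objects is a hypothetical stand-in for the corresponding flowchart step, not an API defined by this disclosure.

```python
def transmit_and_upconvert(tx, rx, compressed_stream, send_uncompressed):
    """Hypothetical end-to-end sketch of Fig. 10 (tx = transmitter, rx = receiver)."""
    side_info = tx.extract_coded_info(compressed_stream)   # step 1004: entropy-decode side info
    fmt = rx.supported_video_format()                       # steps 1006-1008: capability exchange
    side_info = tx.reformat_coded_info(side_info, fmt)      # steps 1010-1012
    if send_uncompressed:                                    # step 1014 decision
        frames = tx.decompress(compressed_stream)            # step 1016
        rx.receive(frames, side_info)                        # steps 1018-1020
    else:
        rx.receive(compressed_stream, side_info)             # steps 1026-1028
        frames = rx.decompress(compressed_stream)            # step 1030
    return rx.frame_rate_upconvert(frames, side_info)        # step 1022
```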
Fig. 11 is a flow chart of exemplary steps for video decompression, in accordance with an embodiment of the invention. Referring to Fig. 11, the exemplary method starts at step 1110, in which a decompression engine, for example the decompression engine 210 of the WirelessHD transmitter 200 and/or the decompression engine 910 of the WirelessHD receiver 900, receives a compressed video stream. The received compressed video stream may be a compressed two-dimensional video stream and/or a compressed three-dimensional video stream. The compressed video stream received by the decompression engine 210 may be received directly from the video source 110, whereas the compressed video stream received by the decompression engine 910 is transmitted by the WirelessHD transmitter 120 over the WirelessHD transmission link 130. In step 1120, the decompression engine 210 or 910 performs entropy decoding on the current compressed video frame in the received compressed video stream to generate the quantized residual data of the current compressed video frame.
In step 1122, it is determined whether the decompression engine is located on the WirelessHD transmitter 120. If the decompression engine, for example the decompression engine 210, is located on the WirelessHD transmitter 120, then in step 1130 the decompression engine 210 generates the coded information of the current compressed video frame by entropy decoding. In step 1140, the decompression engine 210 or 910 predicts the current uncompressed video frame by motion compensation, using the generated quantized residual data of the current compressed video frame and one or more previously decoded video frames in the received compressed video stream.
In step 1150, the decompression engine 210 or 910 inverse-quantizes the current compressed video frame. In step 1160, the decompression engine 210 or 910 generates the current decoded video frame by merging the inverse-quantized current compressed video frame with the predicted current uncompressed video frame. In step 1170, it is determined whether all compressed video frames in the received compressed video stream have been decoded. If not, the exemplary method continues the decompression processing with the next available compressed video frame in the received compressed video stream and returns to step 1120.
If, in step 1122, the decompression engine, for example the decompression engine 910, is located on the WirelessHD receiver 140, the method proceeds to step 1140. If, in step 1170, all compressed video frames in the received compressed video stream have been decoded, the method ends at step 1190.
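The reconstruction core of Fig. 11 can be sketched as below; this is an illustration only, assuming a uniform quantization step, a dictionary of block motion vectors, frame dimensions that are multiples of the block size, and omitting the inverse transform stage.

```python
import numpy as np

def reconstruct_frame(prev_frame, block_mvs, quant_residual, quant_step, block=16):
    """Illustrative per-frame reconstruction: motion-compensated prediction from the
    previous decoded frame, inverse quantization of the residual, and merging of the two.

    prev_frame     : previously decoded 2-D frame
    block_mvs      : dict mapping (block_row, block_col) -> (dx, dy)
    quant_residual : quantized residual, same shape as prev_frame
    """
    h, w = prev_frame.shape
    pred = np.empty_like(prev_frame, dtype=np.float32)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dx, dy = block_mvs[(by // block, bx // block)]   # block motion vector (horizontal, vertical)
            sy = int(np.clip(by + dy, 0, h - block))         # clamp the reference block to the frame
            sx = int(np.clip(bx + dx, 0, w - block))
            pred[by:by + block, bx:bx + block] = prev_frame[sy:sy + block, sx:sx + block]
    residual = quant_residual.astype(np.float32) * quant_step   # inverse quantization (uniform step assumed)
    return np.clip(pred + residual, 0, 255).astype(prev_frame.dtype)
```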
Fig. 12 is a flow chart of exemplary steps for motion-compensated frame rate up-conversion performed by a WirelessHD receiver on compressed and uncompressed video streams, in accordance with an embodiment of the invention. Referring to Fig. 12, the method starts at step 1210, in which a frame rate up-conversion engine at the WirelessHD receiver 140, for example the frame rate up-conversion engine 410 and/or 920, receives decoded video frames and the associated coded information, comprising, for example, block motion vectors, block coding modes, quantization levels and/or quantized residual data. The received decoded video frames may comprise decoded two-dimensional video frames and/or decoded three-dimensional video frames. In step 1220, the frame rate up-conversion engine, for example 410 and/or 920, performs digital noise reduction filtering on each received decoded video frame.
In step 1230, the frame rate up-conversion engine, for example 410 and/or 920, refines each block motion vector using the corresponding filtered decoded video frame and/or one or more associated preceding and/or following filtered decoded video frames. In step 1240, a motion vector confidence/consistency measure is determined for each refined block motion vector. In step 1250, pixel motion vectors are generated for each filtered decoded video frame by decomposing the corresponding refined block motion vectors. In step 1260, the frame rate up-conversion engine, for example 410 and/or 920, performs motion-compensated interpolation on each filtered decoded video frame using the correspondingly generated pixel motion vectors. In step 1270, the interpolated frame of each filtered decoded video frame is filtered and/or protected, taking into account the correspondingly determined motion vector confidence/consistency measures. The filtered interpolated video frames may be passed to the display device 150 for display. The exemplary method then returns to step 1210.
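For the block-to-pixel decomposition of step 1250, the simplest illustrative scheme assigns each pixel the motion vector of its enclosing block; a real refinement stage might interpolate between neighbouring block vectors instead. The block size and the dictionary layout are assumptions of the sketch.

```python
import numpy as np

def block_to_pixel_mvs(block_mvs, frame_shape, block=16):
    """Expand refined block motion vectors into a dense per-pixel motion field
    (nearest-block assignment).

    block_mvs   : dict mapping (block_row, block_col) -> (dx, dy)
    frame_shape : (height, width) of the frame
    """
    h, w = frame_shape
    pixel_mvs = np.zeros((h, w, 2), dtype=np.float32)
    for (brow, bcol), (dx, dy) in block_mvs.items():
        y0, x0 = brow * block, bcol * block
        pixel_mvs[y0:y0 + block, x0:x0 + block, 0] = dx   # horizontal component
        pixel_mvs[y0:y0 + block, x0:x0 + block, 1] = dy   # vertical component
    return pixel_mvs
```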
The invention provides a method and system for motion-compensated frame rate up-conversion of compressed and uncompressed video streams. In accordance with various embodiments of the invention, a video receiver, for example the WirelessHD receiver 140, receives a three-dimensional video stream from a video transmitter, for example the WirelessHD transmitter 120, over, for example, the WirelessHD transmission link 130. The received three-dimensional video stream may comprise coded information and a plurality of video frames for display on the display device 150. The WirelessHD receiver 140 extracts the coded information from the received three-dimensional video stream. The WirelessHD receiver 140 may perform frame rate up-conversion on the plurality of received video frames using the extracted coded information, via the frame rate up-conversion engine 410 and/or 920. The coded information may be generated by the WirelessHD transmitter 120 by entropy decoding the compressed video from the video source 110, where the video source may be, for example, the cable television network 111, the Internet Protocol Television network 112, the satellite broadcast network 113, the mobile communications network 114, the video camera 115 and/or the camera 116. The extracted coded information comprises one or more of block motion vectors, block coding modes, quantization levels and/or quantized residual data.
The received three-dimensional video stream may comprise uncompressed three-dimensional video or compressed three-dimensional video. Where the plurality of video frames for display in the received three-dimensional video stream comprise the plurality of decoded video frames described in connection with Fig. 2 and Fig. 8, the plurality of received decoded video frames may be generated by the WirelessHD transmitter 120, which decompresses the compressed three-dimensional video from the video source 110 using the decompression engine 210. The decompression engine 210 may perform various decoding operations, for example entropy decoding, inverse quantization, inverse transform and/or motion-compensated prediction. The digital noise reduction filter 510 in the WirelessHD receiver 140 performs digital noise reduction filtering on each received decoded video frame.
The extracted coded information, for example the block motion vectors, and the filtered decoded video frames are used by the pixel motion vector generator 520 to generate pixel motion vectors for each received decoded video frame. The associated motion vector confidence and/or motion vector consistency are calculated in the motion vector estimator 530 to provide measures for the generated pixel motion vectors. A plurality of interpolated video frames are generated by the frame rate up-conversion device 540 from the plurality of received video frames, based on the generated pixel motion vectors and the motion vector confidence and/or motion vector consistency calculated at the motion vector estimator 530. The plurality of generated interpolated video frames may be processed, for example, by the scene change detector 550. By applying noise reduction filtering and using the calculated motion vector confidence and/or motion vector consistency information, artifacts such as motion jitter may be concealed.
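One illustrative way to quantify the motion vector consistency mentioned above is to compare each block motion vector with its immediate neighbours; the scoring formula below is an assumption of this sketch and not the measure defined by the disclosure.

```python
import numpy as np

def mv_consistency(block_mvs):
    """Illustrative consistency measure: how similar each block motion vector is to its
    4-connected neighbours (1.0 = identical neighbours, approaching 0 for divergent fields).

    block_mvs : array of shape (rows, cols, 2) holding (dx, dy) per block
    """
    rows, cols, _ = block_mvs.shape
    score = np.ones((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            neigh = [block_mvs[rr, cc] for rr, cc in
                     ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                     if 0 <= rr < rows and 0 <= cc < cols]
            diffs = [np.linalg.norm(block_mvs[r, c] - n) for n in neigh]
            score[r, c] = 1.0 / (1.0 + np.mean(diffs))   # map mean deviation into (0, 1]
    return score
```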
Where the three-dimensional video stream received by the video receiver, for example the WirelessHD receiver 900, comprises compressed three-dimensional video, for example MPEG-2, MPEG-4, AVC, VC1 and/or VP1, the WirelessHD receiver 900 performs video decoding on the received compressed three-dimensional video using the decompression engine 910. The decompression engine 910 may perform various video decoding operations, for example entropy decoding, inverse quantization, inverse transform and motion-compensated prediction. The plurality of decoded video frames constructed by the decompression engine 910 may be passed to the digital noise reduction filter 510, which performs noise reduction processing on the resulting plurality of decoded video frames. The pixel motion vector generator 520 uses the extracted coded information, for example the block motion vectors, and the filtered plurality of decoded video frames to generate pixel motion vectors for each decoded video frame. The associated motion vector confidence and/or motion vector consistency are calculated in the motion vector estimator 530 to provide measures for the generated pixel motion vectors. A plurality of interpolated video frames are generated by the frame rate up-conversion device 540 based on the generated pixel motion vectors and the plurality of received video frames, using the motion vector confidence and/or motion vector consistency calculated at the motion vector estimator 530. The plurality of generated interpolated video frames may be processed, for example, by the scene change detector 550. By applying noise reduction filtering and using the calculated motion vector confidence and/or motion vector consistency information, artifacts such as motion jitter may be concealed.
An embodiment of the invention provides a machine-readable storage having stored thereon a computer program and/or computer code comprising at least one code segment executable by a machine and/or computer, thereby causing the machine and/or computer to perform the steps described herein for motion-compensated frame rate up-conversion of compressed and uncompressed video streams.
Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suitable. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when loaded and executed, controls the computer system such that it carries out the methods described herein. The methods may be realized in a computer system using a processor and a memory unit.
The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which, when loaded in a computer system, is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly, or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
While the present invention has been described with reference to certain specific embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the present invention. In addition, many modifications may be made to adapt a particular situation or condition to the teachings of the present invention without departing from its scope. Therefore, the present invention is not limited to the particular embodiments disclosed, but is intended to include all embodiments falling within the scope of the appended claims.

Claims (10)

1. A signal processing method, characterized by comprising:
in a video receiver:
receiving a three-dimensional video stream comprising a plurality of video frames and corresponding coded information;
extracting said coded information from the received three-dimensional video stream; and
performing frame rate up-conversion on said plurality of received video frames using the extracted coded information.
2. The method according to claim 1, characterized in that the extracted coded information comprises one or more of block motion vectors, block coding modes, quantization levels and/or quantized residual data.
3. The method according to claim 1, characterized in that said coded information is generated by a video transmitter by performing entropy decoding on compressed three-dimensional video from a video source, wherein said video source is one of a cable television network, an Internet Protocol Television network, a satellite broadcast network, a mobile communications network, a video camera and/or a camera.
4. The method according to claim 3, characterized in that said plurality of received video frames comprise a plurality of decoded video frames constructed at said video transmitter by decompressing said compressed three-dimensional video from the video source.
5. The method according to claim 4, characterized in that the method further comprises:
generating, based on the extracted coded information, a pixel motion vector for each of said plurality of received decoded video frames; and
calculating a motion vector confidence and/or motion vector consistency for the corresponding generated pixel motion vectors.
6. The method according to claim 5, characterized in that the method further comprises: generating a plurality of interpolated video frames from said plurality of received decoded video frames based on the generated pixel motion vectors and the calculated motion vector confidence and/or motion vector consistency.
7. A signal processing system, characterized by comprising:
one or more circuits for use in a video receiver, wherein said one or more circuits are operable to receive a three-dimensional video stream comprising a plurality of video frames and corresponding coded information;
said one or more circuits are operable to extract said coded information from the received three-dimensional video stream; and
said one or more circuits are operable to perform frame rate up-conversion on said plurality of received video frames using the extracted coded information.
8. The system according to claim 7, characterized in that the extracted coded information comprises one or more of block motion vectors, block coding modes, quantization levels and/or quantized residual data.
9. The system according to claim 7, characterized in that said coded information is generated by a video transmitter by performing entropy decoding on compressed three-dimensional video from a video source, wherein said video source is one of a cable television network, an Internet Protocol Television network, a satellite broadcast network, a mobile communications network, a video camera and/or a camera.
10. The system according to claim 9, characterized in that said plurality of received video frames comprise a plurality of decoded video frames constructed at said video transmitter by decompressing said compressed three-dimensional video from the video source.
CN201010150205A 2009-04-21 2010-04-19 Signal processing method and system Pending CN101873489A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/427,440 2009-04-21
US12/427,440 US9185426B2 (en) 2008-08-19 2009-04-21 Method and system for motion-compensated frame-rate up-conversion for both compressed and decompressed video bitstreams

Publications (1)

Publication Number Publication Date
CN101873489A true CN101873489A (en) 2010-10-27

Family

ID=42320901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010150205A Pending CN101873489A (en) 2009-04-21 2010-04-19 Signal processing method and system

Country Status (4)

Country Link
US (2) US9185426B2 (en)
EP (1) EP2244485A3 (en)
CN (1) CN101873489A (en)
TW (1) TWI535296B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151436A (en) * 2018-09-30 2019-01-04 Oppo广东移动通信有限公司 Data processing method and device, electronic equipment and storage medium

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100046623A1 (en) * 2008-08-19 2010-02-25 Chen Xuemin Sherman Method and system for motion-compensated frame-rate up-conversion for both compressed and decompressed video bitstreams
US20110141230A1 (en) * 2009-12-16 2011-06-16 Samsung Electronics Co., Ltd. 3d display device and method for correcting image thereof
US10448083B2 (en) * 2010-04-06 2019-10-15 Comcast Cable Communications, Llc Streaming and rendering of 3-dimensional video
US11711592B2 (en) 2010-04-06 2023-07-25 Comcast Cable Communications, Llc Distribution of multiple signals of video content independently over a network
FR2958824A1 (en) * 2010-04-09 2011-10-14 Thomson Licensing PROCESS FOR PROCESSING STEREOSCOPIC IMAGES AND CORRESPONDING DEVICE
JP5532232B2 (en) * 2010-05-18 2014-06-25 ソニー株式会社 Video signal processing device, video display device, and video display system
KR101685981B1 (en) 2010-07-29 2016-12-13 엘지전자 주식회사 A system, an apparatus and a method for displaying a 3-dimensional image
US9505962B2 (en) 2010-08-10 2016-11-29 Nissan Chemical Industries, Ltd. Adhesive composition containing resin having carbon-carbon multiple bond
CN102665027B (en) * 2012-04-20 2014-05-14 西北大学 Geometrical vector model compressing method
GB2518603B (en) 2013-09-18 2015-08-19 Imagination Tech Ltd Generating an output frame for inclusion in a video sequence
US10531127B2 (en) * 2015-06-19 2020-01-07 Serious Simulations, Llc Processes systems and methods for improving virtual and augmented reality applications
CN106205523B (en) * 2016-07-13 2018-09-04 深圳市华星光电技术有限公司 The ameliorative way and device of liquid crystal display device image retention
US10977809B2 (en) 2017-12-11 2021-04-13 Dolby Laboratories Licensing Corporation Detecting motion dragging artifacts for dynamic adjustment of frame rate conversion settings
EP4014497A4 (en) * 2019-09-14 2022-11-30 ByteDance Inc. Quantization parameter for chroma deblocking filtering
CN114651442A (en) 2019-10-09 2022-06-21 字节跳动有限公司 Cross-component adaptive loop filtering in video coding and decoding
JP2022552338A (en) 2019-10-14 2022-12-15 バイトダンス インコーポレイテッド Joint coding of chroma residuals and filtering in video processing
KR20220106116A (en) 2019-12-09 2022-07-28 바이트댄스 아이엔씨 Using quantization groups in video coding
WO2021138293A1 (en) 2019-12-31 2021-07-08 Bytedance Inc. Adaptive color transform in video coding

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050195324A1 (en) * 2004-02-16 2005-09-08 Lg Electronics Inc. Method of converting frame rate of video signal based on motion compensation
US20050265451A1 (en) * 2004-05-04 2005-12-01 Fang Shi Method and apparatus for motion compensated frame rate up conversion for block-based low bit rate video
US20070211800A1 (en) * 2004-07-20 2007-09-13 Qualcomm Incorporated Method and Apparatus for Frame Rate Up Conversion with Multiple Reference Frames and Variable Block Sizes
US20080310499A1 (en) * 2005-12-09 2008-12-18 Sung-Hoon Kim System and Method for Transmitting/Receiving Three Dimensional Video Based on Digital Broadcasting
US20090063935A1 (en) * 2007-08-29 2009-03-05 Samsung Electronics Co., Ltd. Method and system for wireless communication of uncompressed video information

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1157727C (en) * 1998-01-19 2004-07-14 索尼公司 Edit system, edit control device and edit control method
WO2001008365A1 (en) * 1999-07-08 2001-02-01 Koninklijke Philips Electronics N.V. Receiver with optimal quantization and soft viterbi decoding
JP2003116104A (en) * 2001-10-02 2003-04-18 Sony Corp Information processing apparatus and information processing method
KR100550567B1 (en) * 2004-03-22 2006-02-10 엘지전자 주식회사 Server system communicating through the wireless network and its operating method
GB2450121A (en) * 2007-06-13 2008-12-17 Sharp Kk Frame rate conversion using either interpolation or frame repetition
US20110037561A1 (en) * 2007-08-13 2011-02-17 Linx Technologies, Inc. Transcoder apparatus and methods
WO2009032255A2 (en) 2007-09-04 2009-03-12 The Regents Of The University Of California Hierarchical motion vector processing method, software and devices
US8121197B2 (en) * 2007-11-13 2012-02-21 Elemental Technologies, Inc. Video encoding and decoding using parallel processors

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050195324A1 (en) * 2004-02-16 2005-09-08 Lg Electronics Inc. Method of converting frame rate of video signal based on motion compensation
US20050265451A1 (en) * 2004-05-04 2005-12-01 Fang Shi Method and apparatus for motion compensated frame rate up conversion for block-based low bit rate video
US20070211800A1 (en) * 2004-07-20 2007-09-13 Qualcomm Incorporated Method and Apparatus for Frame Rate Up Conversion with Multiple Reference Frames and Variable Block Sizes
US20080310499A1 (en) * 2005-12-09 2008-12-18 Sung-Hoon Kim System and Method for Transmitting/Receiving Three Dimensional Video Based on Digital Broadcasting
US20090063935A1 (en) * 2007-08-29 2009-03-05 Samsung Electronics Co., Ltd. Method and system for wireless communication of uncompressed video information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151436A (en) * 2018-09-30 2019-01-04 Oppo广东移动通信有限公司 Data processing method and device, electronic equipment and storage medium
US11368718B2 (en) 2018-09-30 2022-06-21 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data processing method and non-transitory computer storage medium

Also Published As

Publication number Publication date
US9185426B2 (en) 2015-11-10
US20100046615A1 (en) 2010-02-25
EP2244485A2 (en) 2010-10-27
US20160065991A1 (en) 2016-03-03
TWI535296B (en) 2016-05-21
TW201132124A (en) 2011-09-16
EP2244485A3 (en) 2013-08-21
US9462296B2 (en) 2016-10-04

Similar Documents

Publication Publication Date Title
CN101873489A (en) Signal processing method and system
CN101656825B (en) Method and system for processing signals
JP2795420B2 (en) Method and apparatus and system for compressing digitized video signal
TWI577175B (en) Image processing apparatus and method, recording medium, and program
KR100878809B1 (en) Method of decoding for a video signal and apparatus thereof
WO2010113086A1 (en) System and format for encoding data and three-dimensional rendering
US20080008241A1 (en) Method and apparatus for encoding/decoding a first frame sequence layer based on a second frame sequence layer
Chen et al. HEVC-based video coding with lossless region of interest for telemedicine applications
KR101832407B1 (en) Method and system for communication of stereoscopic three dimensional video information
KR20060043118A (en) Method for encoding and decoding video signal
KR20120062551A (en) Device and method for transmitting digital video, device and method for receiving digital video, system transmitting and receiving digital video
US20070242747A1 (en) Method and apparatus for encoding/decoding a first frame sequence layer based on a second frame sequence layer
US9258517B2 (en) Methods and apparatuses for adaptively filtering video signals
KR20070011351A (en) Video quality enhancement and/or artifact reduction using coding information from a compressed bitstream
KR20150045951A (en) Receiving device, transmission device, and image transmission method
Misu et al. Real-time 8K/4K video coding system with super-resolution inter-layer prediction
CN108574842A (en) A kind of video information processing method and processing system
JP2001346214A (en) Image information transform device and method
WO2024049627A1 (en) Video compression for both machine and human consumption using a hybrid framework
Luengo et al. HEVC Mezzanine Compression for UHD Transport over SDI and IP Infrastructures
WO2011062010A1 (en) Video encoding and decoding device
KR20140092189A (en) Method of Broadcasting for Ultra HD

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1149661

Country of ref document: HK

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20101027

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1149661

Country of ref document: HK