WO2018087675A1 - Video frame rate conversion using streamed metadata - Google Patents
- Publication number
- WO2018087675A1 (PCT/IB2017/056989)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- metadata
- video
- interpolation
- frames
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/23418—Operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/2662—Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
- H04N21/440281—Reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
- H04N21/6125—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via Internet
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/521—Processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors
- H04N19/553—Motion estimation dealing with occlusions
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/587—Predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
- H04N19/87—Pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
Definitions
- A frame rate indicates the rate at which frames (also frequently referred to as images or fields) are captured by cameras or displayed by devices such as film projectors, televisions, digital displays, and the like.
- Conventional movie cameras capture frames at a rate of 24 frames per second (FPS), and conventional film projectors project frames at the same rate of 24 FPS.
- Some digital imaging devices can capture frames at higher frame rates such as 30 FPS, 48 FPS, 60 FPS, and higher.
- Digital displays, such as high-definition televisions (HDTVs), are able to display frames at higher frame rates such as 60 FPS and higher.
- Display devices use frame rate conversion to modify the frame rate of the captured frames to match the frame rate of the display device. For example, frames captured at a rate of 24 FPS can be displayed at 60 FPS by displaying two captured frames over a duration that corresponds to five displayed frames. This is referred to as 3:2 conversion because two successive captured frames A and B are repeated three and two times, respectively, to form a sequence of five displayed frames: AAABB.
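The 3:2 repetition pattern described above can be sketched as a short routine; the function name and list-based frame representation are illustrative, not part of the patent:

```python
def pulldown_3_2(frames):
    """Repeat captured frames in a 3:2 pattern, up-converting 24 FPS to 60 FPS.

    Successive frames are repeated three and two times in alternation, so
    every two captured frames yield five displayed frames (24 * 5/2 = 60).
    """
    displayed = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2
        displayed.extend([frame] * repeats)
    return displayed

# Two captured frames A and B become the five-frame sequence AAABB.
print("".join(pulldown_3_2(["A", "B"])))  # AAABB
```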
- Performing frame rate up-conversion by repeating captured frames has the advantage of relative simplicity, but is known to introduce unwanted visual effects such as judder and blur.
- FIG. 1 is a diagram of a video acquisition and display system according to some embodiments.
- FIG. 2 is a diagram that illustrates a video frame and an interpolated frame generated based on motion vectors according to some embodiments.
- FIG. 3 is a block diagram illustrating a first example of a video processing system that includes a video server and a video client according to some embodiments.
- FIG. 4 is a block diagram illustrating a second example of a video processing system that includes a video server and a video client according to some embodiments.
- FIG. 5 is a block diagram of a video processing system illustrating video frames, metadata, and interpolated frames according to some embodiments.
- FIG. 6 is a block diagram of a video processing system that includes a video server to generate metadata from video frames and a video client to generate interpolated frames based on the metadata and the video frames according to some embodiments.
- FIG. 7 is a diagram including a screen that displays an image that can be searched to determine motion vectors associated with objects in the image according to some embodiments.
- Video display devices that support high frame rates, such as 60 FPS, perform video rate up conversion on lower frame rate streams received from video servers by interpolating between the received frames, often on the basis of motion vectors of portions of the received frames.
- Frames that are captured at a frame rate of 24 FPS are subdivided into portions that include one or more pixels.
- Each portion in a first frame is compared to corresponding portions in a subsequent (second) frame that are offset from the location of the portion in the first frame by a distance indicated by a candidate motion vector. Similar comparisons are performed for a set of candidate motion vectors that represent possible motions of the portion of the first frame.
- The motion vector that produces the best match between the portion in the first frame and an offset portion in the second frame is selected as the motion vector that represents motion of the portion in the first frame.
- The motion vector calculation is then repeated for every portion of the first frame to determine a motion vector field for the first frame.
- The video display device uses the motion vector field to generate estimated frames that replace the repeated frames used in conventional frame rate conversion.
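The block-matching search described in the preceding paragraphs can be sketched as follows. This is a minimal, unoptimized sum-of-absolute-differences (SAD) search over candidate offsets; the function name, block size, and search range are illustrative assumptions:

```python
def best_motion_vector(first, second, y, x, block=8, search=4):
    """Find the motion vector for the block at (y, x) in `first` by testing
    every candidate offset within +/- `search` pixels against `second`.

    The candidate offset that minimizes the sum of absolute differences
    (SAD) between the block and the offset block in the subsequent frame
    is selected, matching the "best match" criterion described above.
    """
    h, w = len(second), len(second[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                continue  # candidate block falls outside the frame
            sad = sum(
                abs(first[y + r][x + c] - second[yy + r][xx + c])
                for r in range(block) for c in range(block)
            )
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

# A bright 8x8 block shifted by (2, 3) between two 32x32 frames is recovered.
a = [[0] * 32 for _ in range(32)]
b = [[0] * 32 for _ in range(32)]
for r in range(8):
    for c in range(8):
        a[8 + r][8 + c] = 255
        b[10 + r][11 + c] = 255
print(best_motion_vector(a, b, 8, 8))  # (2, 3)
```

Repeating this search for every block of the first frame yields the motion vector field.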
- Frame rate up conversion from 24 FPS to 60 FPS can be represented as the sequence A A' A" B B', where A' is a first estimated frame generated by interpolating from the frame A, A" is a second estimated frame generated by interpolating from the frame A, and B' is an estimated frame generated by interpolating from the frame B.
- Video frame rate up conversion is computationally intensive, which significantly increases power usage by the video display device and limits the availability of frame rate up conversion to video display devices with sufficient computing power to perform the brute-force calculations of the motion vector field.
- Power consumption by video display devices can be reduced while also allowing less computationally powerful video display devices to benefit from video frame rate up conversion by performing motion estimation on a frame in a stream at a video server and then providing the frame to the video display device with metadata that represents a motion vector field for the frame.
- The metadata also includes confidence measures for the motion vectors in the motion vector field or flags that indicate (0) that interpolation is not performed on the basis of the motion vector, (1) that interpolation is only performed forward in time, (2) that interpolation is only performed backwards in time, or (3) that interpolation is performed bi-directionally in time.
- The video server provides the frame in the stream at a first frame rate and multiplexes or otherwise incorporates the metadata into the stream.
- Some embodiments of the video server are also configured to perform scene change detection on the frame and provide additional metadata that indicates whether the scene change was detected in the frame.
- Motion vector processing is used to identify outlier motion vectors that are unexpectedly different from neighboring motion vectors, e.g., vectors that point in the opposite direction or have a magnitude that differs substantially from the average of the neighboring motion vectors.
- The outlier motion vectors can be ignored or modified based on values of neighboring motion vectors.
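One way to realize this outlier handling is to compare each vector's magnitude against its neighborhood and substitute a component-wise median when it deviates too far. The sketch below is an assumption about how such a filter might look, not the patent's specified method:

```python
import statistics

def smooth_outliers(field, factor=3.0):
    """Replace motion vectors whose magnitude greatly exceeds the average of
    their neighbors with the component-wise median of those neighbors.

    `field` is a 2-D grid of (dx, dy) tuples; a vector is treated as an
    outlier when its magnitude exceeds `factor` times the neighborhood mean.
    """
    def mag(v):
        return (v[0] ** 2 + v[1] ** 2) ** 0.5

    rows, cols = len(field), len(field[0])
    out = [row[:] for row in field]
    for r in range(rows):
        for c in range(cols):
            nbrs = [field[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))
                    if (rr, cc) != (r, c)]
            mean_mag = sum(mag(v) for v in nbrs) / len(nbrs)
            if mag(field[r][c]) > factor * mean_mag + 1e-9:
                out[r][c] = (statistics.median(v[0] for v in nbrs),
                             statistics.median(v[1] for v in nbrs))
    return out

# A single wild vector in a field of uniform (1, 0) motion is pulled back.
field = [[(1, 0)] * 3 for _ in range(3)]
field[1][1] = (40, -40)
print(smooth_outliers(field)[1][1])  # (1.0, 0.0)
```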
- Occlusion detection can be used to identify motion vectors for portions of the frame that are affected by occlusion so that interpolation is not performed, performed only forward in time, performed only backward in time, or performed bi-directionally in time based on the occluded motion vectors.
- The motion vector processing and occlusion detection are performed by the video server, which generates metadata representative of the outlier motion vectors or occluded motion vectors in the frame and provides the metadata with the frame.
- The video display device receives the frame in the stream along with the corresponding metadata and uses the metadata to generate estimated frames by interpolating from the frame on the basis of the metadata.
- The estimated frames are used for frame rate up conversion of the frames in the stream from the first frame rate to a second (higher) frame rate.
- FIG. 1 is a diagram of a video acquisition and display system 100 according to some embodiments.
- The system 100 includes a video acquisition device 105 such as a video camera.
- The video acquisition device 105 can be a standalone device or can be integrated into another computing device such as a desktop computer, a laptop computer, a tablet computer, a smart phone, and the like.
- The video acquisition device 105 acquires a sequence of images of a scene 110.
- The scene 110 includes a field 115, a person 120, and a ball 125.
- The scene 110 can be any scene that is capable of being monitored by a video acquisition device 105.
- The images captured by the video acquisition device 105 are represented as values of pixels in a frame.
- The video acquisition device 105 generates frames based on the captured images at a frame rate, such as 24 frames per second (FPS) or 30 FPS.
- Frames generated by the video acquisition device 105 are provided to a video server 130 that is configured to store the frames (at least temporarily) and provide the frames to one or more video clients 135, e.g., via an intervening network 140.
- The scene 110 includes a portion of a soccer or football match that a user is watching on a screen 145 of the video client 135.
- The video server 130 receives a stream of frames generated by the video acquisition device 105 and transmits the stream of frames to the video client 135 at the frame rate of the video acquisition device 105.
- The frame rate of the video acquisition device 105 does not necessarily match the frame rate that can be used to display the video represented by the stream of frames at the video client 135.
- The video acquisition device 105 can acquire images at a frame rate of 24 FPS, while the video client 135 can display frames at higher frame rates such as 30 FPS, 48 FPS, 60 FPS, and higher.
- The video client 135 can perform video frame rate up conversion to convert the frames received at a lower frame rate (such as 24 FPS) to a larger number of frames that can be displayed at a higher frame rate (such as 60 FPS).
- The video client 135 can generate additional frames by interpolating between the frames received from the video server 130.
- The video client 135 can perform the interpolation on the basis of interpolation parameters derived from the received frames, such as motion vectors of portions of the received frames that are generated using block-based comparisons of received frames with a reference frame, an optical flow analysis of the received frames, or correlations of the portions of the received frames, e.g., autocorrelations, convolutions, cross-correlations, or phase correlations.
- Generating the interpolation parameters is computationally intensive, so some embodiments of the video server 130 generate the interpolation parameters using the frames received from the video acquisition device 105.
- The video server 130 can generate one or more sets of interpolation parameters that can be used to perform video rate up conversion from the frame rate used by the video acquisition device 105 (e.g., 24 FPS) to the frame rate used to display frames at the video client 135 (e.g., 60 FPS).
- The interpolation parameters for a first frame in a stream of frames generated by the video acquisition device 105 are used to generate one or more interpolated frames representative of the scene 110 subsequent to the first frame and prior to a second frame in the stream generated by the video acquisition device 105.
- The video server 130 then generates metadata representative of the interpolation parameters and multiplexes the metadata into the stream of frames.
- The video client 135 receives the stream of frames including the multiplexed metadata from the video server 130.
- The video client 135 can receive a first frame representative of the scene 110 in a stream of frames including multiplexed metadata representative of interpolation parameters for portions of the first frame.
- The video client 135 can then generate one or more interpolated frames that represent the scene at time intervals subsequent to the first frame and prior to a second frame in the stream of frames.
- The video client 135 can use motion vectors for portions of the first frame (such as pixels or groups of pixels) to interpolate values of the pixels in the first frame to generate estimated values of the pixels in the interpolated frames.
- The number of interpolated frames is determined based on the ratio of the frame rate used by the video acquisition device 105 to the frame rate used by the video client 135.
- The video client 135 can iteratively generate two interpolated frames for a first frame and one interpolated frame for a second frame to perform 3:2 frame rate up conversion from 24 FPS to 60 FPS.
- The video client 135 displays the first frame, the two frames interpolated from the first frame, the second frame, the one frame interpolated from the second frame, and so on.
- Interpolation is selectively performed on the basis of confidence measures or flags in some embodiments, as discussed herein. For example, interpolation can be bypassed, performed forward in time, performed backward in time, or performed bi-directionally in time based on values of flags in the metadata.
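Acting on those flag values at the client might look like the following sketch; the flag encoding mirrors the (0)-(3) scheme described above, while the function and constant names are assumptions:

```python
# Flag values carried in the metadata: 0 = no interpolation, 1 = forward
# only, 2 = backward only, 3 = bi-directional.
NO_INTERP, FORWARD, BACKWARD, BIDIRECTIONAL = range(4)

def interpolate_value(flag, prev_val, next_val, t):
    """Estimate a pixel value at fraction `t` (0 < t < 1) of the interval
    between two received frames, honoring the interpolation flag."""
    if flag == NO_INTERP:
        return prev_val                # bypass: repeat the earlier frame
    if flag == FORWARD:
        return prev_val                # project the earlier frame forward
    if flag == BACKWARD:
        return next_val                # project the later frame backward
    return (1 - t) * prev_val + t * next_val   # blend both directions

print(interpolate_value(BIDIRECTIONAL, 100, 200, 0.4))  # 140.0
```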
- FIG. 2 is a diagram that illustrates a video frame 200 and an interpolated frame 205 generated based on motion vectors according to some embodiments.
- The video frame 200 represents the frames generated by some embodiments of the video acquisition device 105 shown in FIG. 1.
- The interpolated frame 205 represents interpolated frames generated by some embodiments of the video client 135 shown in FIG. 1.
- The video frame 200 is made up of an array of pixels that have values that represent a scene that is being monitored by a video acquisition device.
- The pixels 210, 211, 212 (collectively referred to herein as "the pixels 210-212") have values that represent corresponding portions of a person 215 in the video frame 200.
- The pixels 220, 221 have values that represent corresponding portions of a ball 225 in the video frame 200.
- The pixel 230 has a value that represents a corresponding portion of a field 235 in the video frame 200.
- The pixels are associated with corresponding motion vectors.
- The pixels 210-212 have corresponding motion vectors 240, 241, 242 (collectively referred to herein as "the motion vectors 240-242") that indicate amplitudes and directions of motion estimated for the pixels 210-212.
- The pixels 220, 221 have corresponding motion vectors 243, 244 that indicate amplitudes and directions of motion estimated for the pixels 220, 221.
- The pixel 230 has a value that represents a stationary portion of the field 235, and so there is no motion vector associated with the pixel 230.
- A motion vector having an amplitude of zero and no direction (or an arbitrary direction) can be associated with the pixel 230.
- The motion vectors 240-244 are determined for the pixels 210-212, 220, 221, 230 by a video server such as the video server 130 shown in FIG. 1. Although individual pixels 210-212, 220, 221, 230 are depicted in FIG. 2, the pixels are also representative of blocks of pixels, such as 16 x 16 blocks of pixels, in some embodiments.
- The video server multiplexes metadata representative of the motion vectors 240-244 (or other interpolation parameters) with the information representing the frames in the stream transmitted to the video client.
- The video client performs video rate up conversion using the received frames and the metadata, e.g., by generating interpolated frames on the basis of the received frames and the metadata.
- The interpolated frame 205 is generated by interpolating values of pixels in the video frame 200 to generate values of pixels in the interpolated frame 205 on the basis of the motion vectors 240-244.
- The values of the pixels 250, 251, 252 are generated by interpolating the values of the pixels 210-212 using the motion vectors 240-242.
- The values of the pixels 253, 254 are generated by interpolating the values of the pixels 220, 221 using the motion vectors 243, 244.
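The per-pixel interpolation of FIG. 2 can be sketched by projecting each pixel part-way along its motion vector; the sparse dictionary representation of a frame is a simplification for illustration:

```python
def interpolate_frame(pixels, vectors, t, width, height):
    """Project each pixel of a frame along its motion vector to build an
    interpolated frame at fraction `t` of the inter-frame interval.

    `pixels` maps (x, y) -> value and `vectors` maps (x, y) -> (dx, dy),
    mirroring the per-pixel motion vectors of FIG. 2.
    """
    frame = {}
    for (x, y), value in pixels.items():
        dx, dy = vectors.get((x, y), (0, 0))  # stationary pixels do not move
        nx, ny = round(x + t * dx), round(y + t * dy)
        if 0 <= nx < width and 0 <= ny < height:
            frame[(nx, ny)] = value
    return frame

# A pixel moving 10 pixels right per frame lands 4 pixels over at t = 0.4.
print(interpolate_frame({(5, 5): 255}, {(5, 5): (10, 0)}, 0.4, 64, 64))  # {(9, 5): 255}
```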
- FIG. 3 is a block diagram illustrating a first example of a video processing system 300 that includes a video server 305 and a video client 310 according to some embodiments.
- The video processing system 300 is used to implement some embodiments of the video acquisition and display system 100 shown in FIG. 1.
- The video server 305 is used to implement some embodiments of the video server 130 shown in FIG. 1.
- The video client 310 is used to implement some embodiments of the video client 135 shown in FIG. 1.
- The video server 305 receives a stream 315 including frames that are provided by a video acquisition device (such as the video acquisition device 105 shown in FIG. 1) at a first frame rate such as 24 FPS.
- The video server 305 includes a motion estimation module 320 that is used to estimate motion vectors for pixels or groups of pixels in the received frames.
- The motion estimation module 320 can compare values of pixels in a current frame to values of pixels in a reference frame, such as a previously received frame in the stream. The comparison is performed by shifting the pixels in the current frame by an offset determined by a candidate motion vector and then comparing the values of the offset pixels to values of pixels in the reference frame.
- The comparison can also be performed on the basis of correlation analyses, optical flow analysis, and the like.
- A measure of the similarity of the pixel values is then computed. This process is iterated for a set of candidate motion vectors, and the candidate motion vector with the highest similarity measure is selected as the motion vector for the pixel (or group of pixels).
- The motion estimation module 320 measures the gradient of the similarity measures between different candidate motion vectors, as well as comparing a "cost in distance" between the candidate motion vectors. The gradient and the cost are weighted and combined to select one of the candidate motion vectors as the motion vector for the pixel or group of pixels.
- The "cost in distance" can be determined using an L-1 norm, e.g., a taxi-cab distance on a grid between the candidate motion vectors; an L-2 norm that determines a Euclidean distance between the candidate motion vectors according to the Pythagorean theorem; or other measures that characterize the distance between the different candidate motion vectors.
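The distance measures above can be combined with the match score to pick a vector; the sketch below adds an L-1 cost to a hypothetical SAD score (the weighting, scores, and function names are assumptions, not values fixed by the patent):

```python
def l1_distance(v1, v2):
    """Taxi-cab (L-1) distance between two candidate motion vectors."""
    return abs(v1[0] - v2[0]) + abs(v1[1] - v2[1])

def l2_distance(v1, v2):
    """Euclidean (L-2) distance, per the Pythagorean theorem."""
    return ((v1[0] - v2[0]) ** 2 + (v1[1] - v2[1]) ** 2) ** 0.5

def select_vector(candidates, prediction, weight=0.5):
    """Pick the candidate minimizing a weighted sum of its SAD score and its
    distance from a predicted vector; `candidates` maps (dx, dy) -> SAD.

    Penalizing distance discourages spurious vectors that happen to match
    well but disagree with the surrounding motion.
    """
    return min(candidates,
               key=lambda v: candidates[v] + weight * l1_distance(v, prediction))

candidates = {(1, 0): 10.0, (8, -7): 8.0}   # hypothetical SAD scores
print(select_vector(candidates, (1, 1)))    # (1, 0): the distant match loses
```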
- A "runner-up" motion vector can also be selected in addition to the primary motion vector.
- Some embodiments of the motion estimation module 320 generate confidence measures for the selected motion vector and any "runner-up" motion vectors. The confidence measures indicate a likelihood that the selected motion vector accurately represents motion of the portion of the image represented in the corresponding pixel or group of pixels.
- The confidence measure for a vector can be represented by a number within a range 0..n, with smaller numbers representing lower levels of confidence and larger numbers representing higher levels of confidence.
- The numbers that represent the confidence measures can be floating point numbers, 3-bit numbers, or other representations.
- The motion estimation module 320 generates metadata that represents the motion vectors for the pixels (or groups of pixels) in the frames of the stream 315.
- The motion vectors for each of the pixels (or groups of pixels) can be represented as differential distances (dx, dy) in the X and Y directions in the plane of the screen.
- The motion vectors for each of the pixels can alternatively be represented by information indicating an amplitude of the motion vector and information indicating the direction of the motion vector in the frame.
- The metadata for each of the motion vectors also includes information identifying the corresponding pixels (or groups of pixels).
- Some embodiments of the motion estimation module 320 also include the confidence measures for each of the motion vectors in the metadata.
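A concrete byte layout for this metadata might pack the (dx, dy) displacement and a 3-bit confidence per identified block; this format is purely illustrative, since the patent does not fix one:

```python
import struct

def pack_vector_metadata(block_index, dx, dy, confidence):
    """Pack one motion-vector record: a 16-bit block identifier, signed
    8-bit (dx, dy) displacements, and a confidence clamped to 3 bits (0-7)."""
    confidence = max(0, min(7, confidence))
    return struct.pack("<Hbbb", block_index, dx, dy, confidence)

def unpack_vector_metadata(record):
    """Recover (block_index, dx, dy, confidence) from a packed record."""
    return struct.unpack("<Hbbb", record)

record = pack_vector_metadata(42, -3, 7, 5)   # 5 bytes per vector
print(unpack_vector_metadata(record))  # (42, -3, 7, 5)
```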
- Although the motion estimation module 320 shown in FIG. 3 computes motion vectors, some embodiments of the motion estimation module 320 generate other interpolation parameters such as optical flow results, correlation analysis outcomes, and the like. The motion estimation module 320 can therefore generate metadata representative of these other interpolation parameters.
- Some embodiments of the video server 305 include a scene change detection module 325.
- a scene change occurs when the scene represented by the current frame is different than the scene represented by the previous frame in the stream 315.
- the scene change detection module 325 is able to detect scene changes by comparing values of the pixels in the current frame to values of the pixels in the previous frame. For example, if a scene change occurs between the current frame and the previous frame, values of some or all of the pixels in the current frame and the previous frame change discontinuously.
- the scene change detection module 325 can therefore determine a measure of the difference between values of the pixels in the current frame and the previous frame. If the difference measure is greater than a threshold, the scene change detection module 325 detects a scene change.
- the scene change detection module 325 is able to generate metadata to indicate the scene change, such as a bit that is given a value of "0" if there is no scene change and a value of "1" if a scene change is detected.
- the value of the metadata is used to determine whether to attempt interpolation between frames in the stream 315.
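A minimal sketch of the threshold-based scene-change test described above, assuming grayscale frames represented as nested lists; the threshold value of 40 is an illustrative assumption, not a value from the source:

```python
def mean_abs_difference(curr, prev):
    # average per-pixel absolute difference between two frames
    total = count = 0
    for row_c, row_p in zip(curr, prev):
        for c, p in zip(row_c, row_p):
            total += abs(c - p)
            count += 1
    return total / count

def scene_change_bit(curr, prev, threshold=40.0):
    # returns 1 if a scene change is detected, 0 otherwise
    return 1 if mean_abs_difference(curr, prev) > threshold else 0

frame_a = [[10, 10], [10, 10]]
frame_b = [[200, 200], [200, 200]]
print(scene_change_bit(frame_b, frame_a))  # 1: pixel values change discontinuously
print(scene_change_bit(frame_a, frame_a))  # 0: no change
```

The resulting bit is the metadata the client uses to decide whether interpolation across the frame boundary should be attempted at all.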
- the frames in the stream 315, the metadata produced by the motion estimation module 320, the metadata produced by the scene change detection module 325, and metadata produced by any other video processing modules in the video server 305 are provided to a multiplexer 330.
- the multiplexer 330 multiplexes or otherwise incorporates the metadata into the stream 315.
- the multiplexer 330 can generate an output stream 335 that includes the frames in the stream 315 separated by metadata associated with each of the frames.
- the output stream 335 is transmitted to the video client 310.
- the frames and the metadata are stored in the video server 305.
- the multiplexed output stream 335 is then provided to the video client 310 in response to a request from the video client 310. Consequently, the metadata does not need to be generated in real time.
- Some embodiments of the video client 310 include an occlusion and motion vector processing module 340.
- Occlusion occurs when one object in a scene passes in front of or behind another object. For example, when a ball travels behind a tree, portions of the ball are occluded by the tree.
- Motion vectors of portions of the object in the previous frame that are occluded in the current frame should not be used for interpolation because that can result in values of pixels representative of portions of an occluding object being assigned values corresponding to portions of an occluded object. For example, interpolating a frame representing a scene including a ball traveling behind a tree on the basis of motion vectors in the frame can result in a portion of the ball appearing to travel in front of the tree in the interpolated frame.
- the occlusion and motion vector processing module 340 can detect occlusion in portions of a scene and generate corresponding metadata. Some embodiments of the occlusion and motion vector processing module 340 detect occlusion by comparing motion vectors determined forward in time (e.g., by determining motion vectors in a current frame relative to a previous frame) and motion vectors that are determined backwards in time (e.g., by determining motion vectors in the previous frame relative to the current frame). If the motion vectors are consistent, occlusion is unlikely. However, the forward and backward motion vectors will differ if occlusion is present. The occlusion and motion vector processing module 340 generates metadata indicating whether pixels (or groups of pixels) are experiencing occlusion.
- the motion vectors for occluded pixels can be given a confidence measure of 0 or other low value to indicate a low confidence in the motion vector.
- the occluded pixels can be associated with a bit that is given a value of "0" if there is no occlusion associated with a motion vector and a value of "1" if occlusion is detected for a motion vector.
- the value of the metadata is used to determine whether to use the motion vector for interpolation between frames in the stream 335.
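The forward/backward consistency check described above can be sketched as follows; the tolerance value is an illustrative assumption. If a block moves by (dx, dy) forward in time, the backward vector at its destination should be approximately (-dx, -dy), so a large mismatch yields the occlusion bit:

```python
def occlusion_bit(forward_mv, backward_mv, tolerance=1.0):
    # forward_mv: (dx, dy) from previous frame to current frame
    # backward_mv: (dx, dy) from current frame to previous frame
    fx, fy = forward_mv
    bx, by = backward_mv
    # for consistent vectors, forward + backward should be near (0, 0)
    mismatch = ((fx + bx) ** 2 + (fy + by) ** 2) ** 0.5
    return 1 if mismatch > tolerance else 0

print(occlusion_bit((4.0, 2.0), (-4.0, -2.0)))  # 0: consistent, occlusion unlikely
print(occlusion_bit((4.0, 2.0), (1.0, 3.0)))    # 1: inconsistent, occlusion likely
```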
- the occlusion and motion vector processing module 340 can also be used to detect outlier motion vectors that may be errors or artifacts. For example, if the occlusion and motion vector processing module 340 determines that a motion vector of a first pixel is statistically different than motion vectors of one or more neighboring pixels, the occlusion and motion vector processing module 340 identifies the motion vector of the first pixel as an outlier. Examples of statistical differences include motion vectors that have an amplitude that is more than a predetermined number of standard deviations away from a mean value of amplitudes of neighboring motion vectors, a direction that is more than a predetermined number of standard deviations away from an average direction of the neighboring motion vectors, and the like.
- Some embodiments of the occlusion and motion vector processing module 340 modify the outlier motion vector based on the values of the neighboring motion vectors, e.g., by replacing an amplitude or direction of the outlier motion vector with an average of the amplitudes or directions of the neighboring motion vectors.
- the motion vectors can also be filtered to remove the outliers, e.g., using spatial-temporal median filters that replace outliers with local averages or with a most-similar neighboring motion vector.
- Confidence measures associated with the outlier motion vectors (or replaced values of the outlier motion vectors) can be set to a low value to indicate low confidence in the accuracy of the motion vectors.
- the occlusion and motion vector processing module 340 can generate metadata such as the confidence measures that can indicate modifications to the outlier motion vector or indicate whether the outlier motion vector should be used for interpolation.
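A sketch of the outlier test and replacement described above, applied to motion-vector amplitudes; the choice of k = 2 standard deviations and the confidence values 0 and 7 are illustrative assumptions:

```python
import statistics

def filter_outlier(amplitude, neighbor_amplitudes, k=2.0):
    # flag an amplitude more than k standard deviations from the
    # neighborhood mean; replace it with the neighborhood average and
    # mark it with a low confidence measure
    mean = statistics.mean(neighbor_amplitudes)
    stdev = statistics.stdev(neighbor_amplitudes)
    if stdev > 0 and abs(amplitude - mean) > k * stdev:
        return mean, 0     # replaced value, low confidence
    return amplitude, 7    # original value kept, high confidence

neighbors = [5.0, 6.0, 4.0, 5.5, 4.5]
print(filter_outlier(25.0, neighbors))  # (5.0, 0): outlier replaced by local average
print(filter_outlier(5.0, neighbors))   # (5.0, 7): consistent vector kept
```

The same test could be applied to vector directions; the low confidence measure is the metadata that tells the interpolation stage to distrust the replaced vector.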
- An interpolation module 345 in the video client 310 receives the output stream 335 including the frames of the stream 315 and the metadata generated by the video server 305, as well as any metadata generated by the occlusion and motion vector processing module 340.
- the interpolation module 345 uses the received video frames and metadata to generate one or more interpolated frames, as discussed herein.
- the interpolation module 345 then provides an interpolated video stream 350 that includes the frames in the stream 315 and the interpolated frames produced based on the frames and the metadata.
- FIG. 4 is a block diagram illustrating a second example of a video processing system 400 that includes a video server 405 and a video client 410 according to some embodiments.
- the video processing system 400 is used to implement some embodiments of the video acquisition and display system 100 shown in FIG. 1.
- the video server 405 is used to implement some embodiments of the video server 130 and the video client 410 is used to implement some embodiments of the video client 135 shown in FIG. 1.
- the video server 405 receives a stream 415 including frames that are provided by a video acquisition device (such as the video acquisition device 105 shown in FIG. 1) at a first frame rate such as 24 FPS.
- the video server 405 includes a motion estimation module 420 that determines motion vectors and generates metadata that represents the motion vectors for the pixels (or groups of pixels) in the frames of the stream 415.
- the video server 405 also includes a scene change detection module 425 that detects scene changes in the frames of the stream 415 and generates metadata to indicate the scene change.
- the motion estimation module 420 and the scene change detection module 425 are configured to operate in the same manner as some embodiments of the motion estimation module 320 and the scene change detection module 325 shown in FIG. 3.
- the second example of the video processing system 400 depicted in FIG. 4 differs from the first example of the video processing system 300 shown in FIG. 3 because the video server 405 implements an occlusion and motion vector processing module 430. Moving the moderately computationally intensive operations of the occlusion and motion vector processing module 430 to the video server 405 reduces the computational burden on the video client 410.
- the occlusion and motion vector processing module 430 is configured to detect occlusion in the frames of the stream 415 and generate metadata indicating whether pixels (or groups of pixels) in the frames are experiencing occlusion.
- the occlusion and motion vector processing module 430 is also configured to detect outlier motion vectors in the frames of the stream 415. Some embodiments of the occlusion and motion vector processing module 430 modify the values of the outlier motion vectors, as discussed herein, and generate metadata that can indicate modifications to the outlier motion vector or indicate whether the outlier motion vector should be used for interpolation.
- the frames in the stream 415, the metadata produced by the motion estimation module 420, the metadata produced by the scene change detection module 425, the metadata produced by the occlusion and motion vector processing module 430, and metadata produced by any other video processing modules in the video server 405 are provided to a multiplexer 435.
- the multiplexer 435 multiplexes or otherwise incorporates the metadata into the stream 415.
- the multiplexer 435 can generate an output stream 440 that includes the frames in the stream 415 separated by metadata associated with each of the frames.
- the output stream 440 is transmitted to the video client 410.
- the frames and the metadata are stored in the video server 405.
- the multiplexed output stream 440 is then provided to the video client 410 in response to a request from the video client 410. Consequently, the metadata does not need to be generated in real time.
- An interpolation module 445 in the video client 410 receives the output stream 440 including the frames of the stream 415 and the metadata generated by the video server 405.
- the interpolation module 445 uses the received video frames and metadata to generate one or more interpolated frames, as discussed herein.
- the interpolation module 445 then provides an interpolated video stream 450 that includes the frames in the stream 415 and the interpolated frames produced based on the frames and the metadata.
- FIG. 5 is a block diagram of a video processing system 500 illustrating video frames, metadata, and interpolated frames according to some embodiments.
- the video processing system 500 includes a video server 505 and a video client 510, which are implemented using some embodiments of the video servers 130, 305, 405 and the video clients 135, 310, 410 shown in FIGs. 1, 3, and 4.
- the video server 505 receives (or generates) a stream including video frames 515, 520.
- the video server 505 also generates metadata 525, 530 for the corresponding video frames 515, 520.
- the metadata can be generated by a motion estimation module, a scene change detection module, an occlusion and motion vector processing module (if implemented in the video server 505), or other video processing modules implemented in the video server 505.
- the video frames 515, 520 and the metadata 525, 530 are provided to a multiplexer 535, which multiplexes or otherwise incorporates the video frames 515, 520 and the metadata 525, 530 into an output stream 540.
- Some embodiments of the video server 505 compress the video frames 515, 520 and the metadata 525, 530 to form the output stream 540. Including the compressed metadata in the output stream 540 can significantly improve the video quality with only a small increase in the bandwidth required to transmit the output stream 540. For example, services such as Netflix stream data at a rate of approximately 5 megabits per second (Mbps).
- a conservative estimate of the compression ratio for the metadata is 10:1 due to the typically large amounts of correlation between motion in the images.
- compressed metadata consumes very little bandwidth compared to compressed video frames.
- Bandwidth can therefore be conserved, e.g., the bandwidth needed to transport 60 Hz video can be reduced by almost 50% by transporting the 60 Hz video as 30 Hz video including metadata indicating how to recover or interpolate the frames that were not transported.
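The "almost 50%" figure above can be checked with back-of-envelope arithmetic using the stated numbers; the 1 Mbps uncompressed metadata rate is purely an illustrative assumption:

```python
video_60hz_mbps = 5.0                      # example full-rate stream bandwidth
video_30hz_mbps = video_60hz_mbps / 2.0    # only half of the frames transported
raw_metadata_mbps = 1.0                    # assumed uncompressed metadata rate
metadata_mbps = raw_metadata_mbps / 10.0   # conservative 10:1 metadata compression

total_mbps = video_30hz_mbps + metadata_mbps
savings = 1.0 - total_mbps / video_60hz_mbps
print(f"{savings:.0%}")  # 48%, i.e. "almost 50%"
```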
- Some embodiments of the video server 505 are also able to multiplex downscaled or thumbnail versions of frames into the output stream 540. This allows the video server 505 to drop some frames that are in a stream that has a higher frame rate and transmit the remaining frames at a lower frame rate. The video server 505 can then supplement the information in the output stream 540 with downscaled or thumbnail versions of the dropped frames so that the video client 510 can use the downscaled or thumbnail versions to reconstruct or interpolate frames for display with the received frames at the higher frame rate.
- the downscaled or thumbnail versions can also be used to identify shapes of occlusion areas or perform interpolation in occlusion areas or ambiguous areas of the image.
- the video client 510 receives the output stream 540 from the video server 505.
- the video client 510 uses the video frames 515, 520 and the metadata 525, 530 to generate interpolated frames 545, 550, 555, as discussed herein.
- the metadata 525 is used to interpolate pixel values in the video frame 515 to generate pixel values of the interpolated frames 545, 550.
- the metadata 530 is used to interpolate pixel values in the video frame 520 to generate pixel values of the interpolated frame 555.
- the video client 510 generates a display stream 560 that includes the video frames 515, 520 and the interpolated frames 545, 550, 555.
- the display stream 560 is used to display video on a screen of the video client 510.
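As a highly simplified sketch of how a client like the video client 510 might use motion-vector metadata to build a half-way interpolated frame: here frames are dicts from block coordinates to values and motion is in whole blocks, whereas real systems interpolate per pixel at sub-pixel precision; all names are illustrative:

```python
def interpolate_midframe(frame, motion_vectors, width, height):
    # place each block half-way along its motion vector to synthesize
    # a frame midway between two transported frames (frame-rate doubling)
    mid = {}
    for (x, y), value in frame.items():
        dx, dy = motion_vectors.get((x, y), (0, 0))  # static if no vector
        mx, my = x + dx // 2, y + dy // 2
        if 0 <= mx < width and 0 <= my < height:
            mid[(mx, my)] = value
    return mid

frame = {(0, 0): 255}            # a single bright block
mvs = {(0, 0): (2, 0)}           # metadata: block moves two positions right
print(interpolate_midframe(frame, mvs, 4, 1))  # {(1, 0): 255}
```

Metadata such as scene-change bits, occlusion bits, and confidence measures would gate this step, e.g. by falling back to frame repetition when interpolation is unreliable.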
- FIG. 6 is a block diagram of a video processing system 600 that includes a video server 605 to generate metadata from video frames and a video client 610 to generate interpolated frames based on the metadata and the video frames according to some embodiments.
- the video server 605 and the video client 610 are used to implement some embodiments of the video servers 130, 305, 405, 505 and the video clients 135, 310, 410, 510 shown in FIGs. 1 and 3-5.
- the video server 605 includes a network interface 615 for transmitting and receiving signals.
- the network interface 615 can receive signals representative of frames in a stream generated by a video acquisition device 620.
- the network interface 615 can also transmit signals representative of video frames and associated metadata, as discussed herein.
- the network interface 615 can be implemented as a single integrated circuit (e.g., using a single ASIC or FPGA) or as a system-on-a-chip (SOC) that includes different modules for implementing the functionality of the network interface 615.
- the video server 605 also includes a processor 625 and a memory 630.
- the processor 625 can be used to execute instructions stored in the memory 630 and to store information in the memory 630 such as the results of the executed instructions, which can include video frames or associated metadata.
- the video client 610 includes a network interface 635 for transmitting and receiving signals.
- the network interface 635 can receive signals representative of video frames and metadata generated by the video server 605.
- the network interface 635 can transmit video frames and interpolated frames generated on the basis of the received metadata to a screen 640 for display.
- the network interface 635 can be implemented as a single integrated circuit (e.g., using a single ASIC or FPGA) or as a system-on-a-chip (SOC) that includes different modules for implementing the functionality of the network interface 635.
- the video client 610 also includes a processor 645 and a memory 650.
- the processor 645 can be used to execute instructions stored in the memory 650 and to store information in the memory 650 such as the results of the executed instructions.
- the processor 645 can be used to generate interpolated frames based on video frames and metadata received from the video server 605.
- the interpolated frames are then provided to the network interface 635 to generate images on the screen 640.
- FIG. 7 is a diagram including a screen 700 that displays an image that can be searched to determine motion vectors associated with objects in the image according to some embodiments.
- the screen 700 is a 1920 x 1080 array of pixels, although other embodiments of the screen 700 include different numbers of pixels arranged in different numbers of rows or columns.
- the image shown in the current frame includes a person 705, a ball 710, and a field 715 that are represented by different values of the pixels in the array implemented by the screen 700.
- subsets of the values of the pixels that represent the image in the current frame are compared to reference subsets of values of the pixels that represent an image in a previous frame.
- the pixels of the screen 700 can be divided into 64 x 64 search windows such as the search window 720 and then 16 x 16 search blocks within the search window 720 are compared to reference blocks such as the 16 x 16 reference block 725 that includes values of pixels that represent a previous position of the ball 710.
- the results of each comparison can be represented by a sum-of-absolute-differences score, S = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} |c(i,j) − r(i,j)|, where n = 16 and c(i,j) and r(i,j) are the values of the pixels at position (i,j) in the candidate block and the reference block, respectively.
- each score requires 256 subtractions and 256 absolute value operations.
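The score S is a sum of absolute differences (SAD): one subtraction and one absolute-value operation per pixel of the 16 x 16 block, i.e. 256 of each, consistent with the count above. A minimal sketch:

```python
def sad_score(candidate, reference, n=16):
    # sum of absolute differences over an n x n block:
    # n*n subtractions and n*n absolute-value operations
    return sum(abs(candidate[i][j] - reference[i][j])
               for i in range(n) for j in range(n))

ref = [[100] * 16 for _ in range(16)]
cand = [[101] * 16 for _ in range(16)]
print(sad_score(cand, ref))  # 256: each of the 256 pixels differs by 1
```

The block position minimizing S within the search window gives the motion vector for the block.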
- the computational load of the motion vector search can be estimated by assuming that the scores are determined using a single-instruction-multiple-data (SIMD) graphics processing unit (GPU) that requires approximately 30 instructions per processing core to perform a search for each candidate area such as the search window 720.
- the total number of cycles needed to search each image is therefore 8100 × 122,880 ≈ 10^9 cycles.
- occlusion detection and other functions require performing a forward search (e.g., comparing the current frame relative to a previous frame) and a backward search (e.g., comparing the previous frame relative to the current frame), which doubles the number of cycles per image.
- a typical input frame rate is 24 FPS, which leads to a total processor requirement of 48 billion cycles per second.
- This amount of processing power is not available on all devices and, when it is available, consumes a significant amount of power.
- this estimate is a lower bound because additional calculations are typically required for post-processing, e.g., to find and handle outliers, occlusions, and the like.
- additional calculations can be performed on the image represented at different scales.
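The processing-load arithmetic above can be reproduced directly; these figures come from the estimate in the text (8100 16 x 16 blocks per 1920 x 1080 image, 122,880 cycles per block, forward and backward searches at 24 FPS):

```python
blocks_per_image = 8100          # 1920 * 1080 / (16 * 16) blocks per frame
cycles_per_block = 122_880       # per-block search cost from the estimate above
cycles_per_image = blocks_per_image * cycles_per_block  # ~10^9 cycles

# forward + backward searches double the cost; input frame rate is 24 FPS
total_cycles_per_second = cycles_per_image * 2 * 24
print(f"{total_cycles_per_second / 1e9:.1f} billion cycles per second")  # 47.8
```

Rounding gives the roughly 48 billion cycles per second quoted above, which is the load the metadata approach removes from the video client.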
- Some embodiments of the video acquisition and display systems described herein have a number of advantages over conventional practice. For example, performing motion estimation (and in some cases other video processing) at a video server, and providing the video frames with metadata representing interpolation parameters to video clients, reduces the minimum requirements for the video clients that support video frame rate up conversion, as well as reducing power consumption at the video clients. Shifting motion estimation (and in some cases other video processing) from the video client to the video server can also increase video quality at the video client, as well as reducing the rate of occurrence and severity of artifacts, by implementing more sophisticated motion estimation using the computational resources of the video server or using more sophisticated analysis to examine a larger range of possible choices and determine which choices would result in the best video quality. Furthermore, in some embodiments, motion estimation (and in some cases other video processing) is not required to be performed in real time at the video server. For example, metadata for a video stream can be generated before the video stream is requested by a video client and then provided upon request.
- the apparatus and techniques described above are implemented in a system comprising one or more integrated circuit (IC) devices (also referred to as integrated circuit packages or microchips), such as the video acquisition and display systems described above with reference to FIGs. 1 -6.
- Electronic design automation (EDA) and computer aided design (CAD) software tools may be used in the design and fabrication of these IC devices.
- the one or more software programs comprise code executable by a computer system to manipulate the computer system to operate on code representative of circuitry of one or more IC devices so as to perform at least a portion of a process to design or adapt a manufacturing system to fabricate the circuitry.
- This code can include instructions, data, or a combination of instructions and data.
- a computer readable storage medium includes any non-transitory storage medium, or combination of non-transitory storage media, accessible by a computer system during use to provide instructions and/or data to the computer system.
- Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media.
- the computer readable storage medium in some implementations is embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).
- certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software.
- the software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium.
- the software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above.
- the non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like.
- the executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Television Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020197014759A KR20190077428A (en) | 2016-11-08 | 2017-11-08 | Video frame rate conversion using streamed metadata |
EP17870427.6A EP3539292A4 (en) | 2016-11-08 | 2017-11-08 | Video frame rate conversion using streamed metadata |
JP2019545394A JP2019537913A (en) | 2016-11-08 | 2017-11-08 | Video frame rate conversion using streamed metadata |
CN201780066852.2A CN109891891A (en) | 2016-11-08 | 2017-11-08 | It is converted using the video frame rate of streaming metadata |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/346,392 | 2016-11-08 | ||
US15/346,392 US10412462B2 (en) | 2016-11-08 | 2016-11-08 | Video frame rate conversion using streamed metadata |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018087675A1 true WO2018087675A1 (en) | 2018-05-17 |
Family
ID=62064880
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2017/056989 WO2018087675A1 (en) | 2016-11-08 | 2017-11-08 | Video frame rate conversion using streamed metadata |
Country Status (6)
Country | Link |
---|---|
US (1) | US10412462B2 (en) |
EP (1) | EP3539292A4 (en) |
JP (1) | JP2019537913A (en) |
KR (1) | KR20190077428A (en) |
CN (1) | CN109891891A (en) |
WO (1) | WO2018087675A1 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10621446B2 (en) * | 2016-12-22 | 2020-04-14 | Texas Instruments Incorporated | Handling perspective magnification in optical flow processing |
US20190164296A1 (en) * | 2017-11-27 | 2019-05-30 | Qualcomm Incorporated | Systems and methods for determining a confidence measure for a motion vector |
US11535158B2 (en) | 2019-03-28 | 2022-12-27 | Magna Electronics Inc. | Vehicular camera with automatic lens defogging feature |
CN110933497B (en) * | 2019-12-10 | 2022-03-22 | Oppo广东移动通信有限公司 | Video image data frame insertion processing method and related equipment |
CN111010599B (en) * | 2019-12-18 | 2022-04-12 | 浙江大华技术股份有限公司 | Method and device for processing multi-scene video stream and computer equipment |
KR20210083840A (en) * | 2019-12-27 | 2021-07-07 | 삼성전자주식회사 | Electronic device for video editing with dynamic tone metadata and operating method thereof |
CN111726555B (en) * | 2020-06-04 | 2021-11-23 | 上海顺久电子科技有限公司 | Display device, motion estimation method and video processing method |
CN112203034B (en) * | 2020-09-30 | 2023-09-08 | Oppo广东移动通信有限公司 | Frame rate control method and device and electronic equipment |
KR20220085283A (en) * | 2020-12-15 | 2022-06-22 | 삼성전자주식회사 | Electronic apparatus and controlling method thereof |
CN112804526B (en) * | 2020-12-31 | 2022-11-11 | 紫光展锐(重庆)科技有限公司 | Image data storage method and equipment, storage medium, chip and module equipment |
CN114827663B (en) * | 2022-04-12 | 2023-11-21 | 咪咕文化科技有限公司 | Distributed live broadcast frame inserting system and method |
CN115460436B (en) * | 2022-08-03 | 2023-10-20 | 北京优酷科技有限公司 | Video processing method, storage medium and electronic device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2574556A1 (en) * | 2004-07-20 | 2006-02-02 | Qualcomm Incorporated | Method and apparatus for motion vector processing |
US8488059B2 (en) * | 2009-12-16 | 2013-07-16 | Broadcom Corporation | Adaptation of frame selection for frame rate conversion |
US20140028911A1 (en) * | 2012-07-24 | 2014-01-30 | Snell Limited | Interpolation of images |
US9137569B2 (en) * | 2010-05-26 | 2015-09-15 | Qualcomm Incorporated | Camera parameter-assisted video frame rate up conversion |
US9148622B2 (en) * | 2011-12-29 | 2015-09-29 | Hong Kong Applied Science and Technology Research Institute Company, Limited | Halo reduction in frame-rate-conversion using hybrid bi-directional motion vectors for occlusion/disocclusion detection |
US9432690B2 (en) * | 2013-01-30 | 2016-08-30 | Ati Technologies Ulc | Apparatus and method for video processing |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2311182A (en) * | 1996-03-13 | 1997-09-17 | Innovision Plc | Improved gradient based motion estimation |
CA2575211C (en) * | 2004-07-30 | 2012-12-11 | Euclid Discoveries, Llc | Apparatus and method for processing video data |
US9258519B2 (en) * | 2005-09-27 | 2016-02-09 | Qualcomm Incorporated | Encoder assisted frame rate up conversion using various motion models |
US9042682B2 (en) * | 2012-05-23 | 2015-05-26 | Dolby Laboratories Licensing Corporation | Content creation using interpolation between content versions |
US9257092B2 (en) * | 2013-02-12 | 2016-02-09 | Vmware, Inc. | Method and system for enhancing user experience for remoting technologies |
CN105657541A (en) * | 2015-12-29 | 2016-06-08 | 华为技术有限公司 | Frame processing method and device |
-
2016
- 2016-11-08 US US15/346,392 patent/US10412462B2/en active Active
-
2017
- 2017-11-08 EP EP17870427.6A patent/EP3539292A4/en not_active Withdrawn
- 2017-11-08 KR KR1020197014759A patent/KR20190077428A/en not_active Application Discontinuation
- 2017-11-08 JP JP2019545394A patent/JP2019537913A/en not_active Withdrawn
- 2017-11-08 CN CN201780066852.2A patent/CN109891891A/en active Pending
- 2017-11-08 WO PCT/IB2017/056989 patent/WO2018087675A1/en unknown
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2574556A1 (en) * | 2004-07-20 | 2006-02-02 | Qualcomm Incorporated | Method and apparatus for motion vector processing |
US8488059B2 (en) * | 2009-12-16 | 2013-07-16 | Broadcom Corporation | Adaptation of frame selection for frame rate conversion |
US9137569B2 (en) * | 2010-05-26 | 2015-09-15 | Qualcomm Incorporated | Camera parameter-assisted video frame rate up conversion |
US9148622B2 (en) * | 2011-12-29 | 2015-09-29 | Hong Kong Applied Science and Technology Research Institute Company, Limited | Halo reduction in frame-rate-conversion using hybrid bi-directional motion vectors for occlusion/disocclusion detection |
US20140028911A1 (en) * | 2012-07-24 | 2014-01-30 | Snell Limited | Interpolation of images |
US9432690B2 (en) * | 2013-01-30 | 2016-08-30 | Ati Technologies Ulc | Apparatus and method for video processing |
Non-Patent Citations (1)
Title |
---|
See also references of EP3539292A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20180132009A1 (en) | 2018-05-10 |
CN109891891A (en) | 2019-06-14 |
JP2019537913A (en) | 2019-12-26 |
EP3539292A1 (en) | 2019-09-18 |
KR20190077428A (en) | 2019-07-03 |
US10412462B2 (en) | 2019-09-10 |
EP3539292A4 (en) | 2020-04-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10412462B2 (en) | Video frame rate conversion using streamed metadata | |
US10425582B2 (en) | Video stabilization system for 360-degree video data | |
EP2378485B1 (en) | Moving object detection method and moving object detection apparatus | |
US8891009B2 (en) | System and method for retargeting video sequences | |
US9179071B2 (en) | Electronic device and image selection method thereof | |
EP2214137B1 (en) | A method and apparatus for frame interpolation | |
US9794588B2 (en) | Image processing system with optical flow recovery mechanism and method of operation thereof | |
US20080259169A1 (en) | Image Processing Device, Image Processing Method, and Image Processing Program | |
US11910001B2 (en) | Real-time image generation in moving scenes | |
CN104284059A (en) | Apparatus and method for stabilizing image | |
US20140126818A1 (en) | Method of occlusion-based background motion estimation | |
US20170116741A1 (en) | Apparatus and Methods for Video Foreground-Background Segmentation with Multi-View Spatial Temporal Graph Cuts | |
WO2016120132A1 (en) | Method and apparatus for generating an initial superpixel label map for an image | |
US20130069935A1 (en) | Depth generation method and apparatus using the same | |
EP2296095B1 (en) | Video descriptor generator | |
EP2698764A1 (en) | Method of sampling colors of images of a video sequence, and application to color clustering | |
JP2014110020A (en) | Image processor, image processing method and image processing program | |
KR101511315B1 (en) | Method and system for creating dynamic floating window for stereoscopic contents | |
Chae et al. | Siamevent: Event-based object tracking via edge-aware similarity learning with siamese networks | |
KR101740124B1 (en) | An appratus for frame rate conversion and a method thereof | |
Teknomo et al. | Background image generation using boolean operations | |
US20160093062A1 (en) | Method and apparatus for estimating absolute motion values in image sequences | |
KR101458099B1 (en) | Image Stabilization Method and Image Processing Apparatus usign the smae | |
Fehrman et al. | Handling occlusion with an inexpensive array of cameras | |
Lee | Novel video stabilization for real-time optical character recognition applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17870427 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2019545394 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20197014759 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2017870427 Country of ref document: EP Effective date: 20190611 |