WO2014063763A1 - Image processing device for an image data stream including identical frames and image processing method - Google Patents

Image processing device for an image data stream including identical frames and image processing method

Info

Publication number
WO2014063763A1
WO2014063763A1 (PCT/EP2013/002124)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
image processing
current frame
directly preceding
values
Prior art date
Application number
PCT/EP2013/002124
Other languages
French (fr)
Inventor
Yalcin Incesu
Oliver Erdler
Paul Springer
Original Assignee
Sony Corporation
Sony Deutschland Gmbh
Priority date
Filing date
Publication date
Application filed by Sony Corporation, Sony Deutschland Gmbh filed Critical Sony Corporation
Publication of WO2014063763A1 publication Critical patent/WO2014063763A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • Image Processing Device for an Image Data Stream Including Identical Frames and Image Processing Method
  • the present disclosure relates to an image processing device for an image data stream including sequences of identical frames.
  • the disclosure further relates to an image processing method.
  • Pixel-motion analysis allows for implementing a variety of temporal functions in video streams such as deinterlacing, frame rate conversion, image coding and multi-frame noise reduction.
  • Motion analysis calculates motion vectors which indicate, for single pixels or for groups of pixels, where the respective pixel or group of pixels has moved from or will move to from frame to frame.
  • An image processing device typically receives frames of an input data stream at a certain frame rate. Sources may supply video data streams at different frame rates. For example movies are typically shot at 24 Hz whereas video content is typically broadcast at frame rates of 50 Hz or 60 Hz. Before being broadcast, the frame rate of movies may be upconverted by repeating some frames two or three times resulting in sequences of identical frames. Typically, motion analysis is by-passed or disabled for two consecutive identical frames.
  • An embodiment refers to an image processing device.
  • An evaluation unit outputs a control signal indicating identity or non-identity between a current frame and a directly preceding frame in an image data stream.
  • a calculator unit calculates first values which are descriptive for a difference between two consecutive frames. Using the first values, the calculator unit calculates second values, which are descriptive for a difference between the current frame and the previous non-identical frame, when identity is determined between the current frame and the directly preceding frame.
  • Another embodiment refers to an image processing method.
  • identity and non-identity between a current frame and a directly preceding frame in an image data stream is determined.
  • first values are calculated which are descriptive for a difference between two consecutive frames, when non-identity is determined between the current frame and the directly preceding frame.
  • second values which are descriptive for a difference between the current frame and the previous non-identical frame are calculated, when identity is determined between the current frame and the directly preceding frame.
  • an image processing device includes an evaluation unit that outputs a control signal indicating identity and non-identity between a current frame and a directly preceding frame in an image data stream.
  • a motion vector calculator calculates first motion vectors that are descriptive for a displacement of image portions in two consecutive frames.
  • the motion vector calculator calculates, on the basis of the first motion vectors, second motion vectors descriptive for a displacement of image portions between the current frame and the last previous frame which is non-identical with the current frame.
  • Fig. 1A is a schematic block diagram of an image processing device including a calculator unit in accordance with an embodiment.
  • Fig. 1B is a schematic block diagram illustrating details of the calculator unit of Fig. 1A in accordance with an embodiment related to motion vector estimation.
  • Fig. 1C is a schematic block diagram of a processing system in accordance with an embodiment related to a computer program for carrying out an image processing method.
  • Fig. 2A is a schematic block diagram illustrating the mode of operation of the image processing device of Fig. 1B, with the last frame of a first sequence of identical frames stored in a first frame buffer, in accordance with an embodiment related to an image processing method.
  • Fig. 2B shows the image processing device of Fig. 2A after loading the last frame of the first sequence into a second frame buffer.
  • Fig. 2C shows the image processing device of Fig. 2B after loading a first frame of a second sequence of identical frames into the first frame buffer.
  • Fig. 2D shows the image processing device of Fig. 2C after calculating first motion vectors.
  • Fig. 2E shows the image processing device of Fig. 2D after shifting the first motion vectors to a vector buffer and loading the second frame of the second sequence into the first frame buffer.
  • Fig. 2F shows the image processing device of Fig. 2E after calculating second motion vectors.
  • Fig. 2G shows the image processing device of Fig. 2F after storing the second motion vectors in the vector buffer and loading the third frame of the second sequence into the first frame buffer.
  • Fig. 2H shows the image processing device of Fig. 2G after calculating further second motion vectors.
  • Fig. 2I shows the image processing device of Fig. 2H after storing the further second motion vectors in the vector buffer and loading the fourth frame of the second sequence into the first frame buffer.
  • Fig. 2J shows the image processing device of Fig. 2I after calculating yet further second motion vectors.
  • Fig. 2K shows the image processing device of Fig. 2J after storing the yet further second motion vectors in the vector buffer and loading a first frame of a third sequence of identical frames into the first frame buffer.
  • Fig. 2L shows the image processing device of Fig. 2K after calculating further first motion vectors.
  • Fig. 3 is a schematic diagram illustrating pull-down conversion modes.
  • Fig. 4A is a schematic diagram illustrating the generation of motion vectors according to a comparative example.
  • Fig. 4B is a schematic diagram illustrating maturing motion vectors in accordance with the embodiment related to motion estimations.
  • Fig. 5 is a schematic diagram illustrating the mode of operation of a motion vector calculator unit in accordance with an embodiment referring to a configurable motion vector calculator unit.
  • Fig. 6 is a simplified flow chart illustrating an image processing method according to a further embodiment.

DESCRIPTION OF THE EMBODIMENTS
  • Fig. 1A refers to an image processing device 100 which may be implemented in an electronic device receiving, generating, outputting or displaying image information.
  • the image processing device 100 is implemented as a stationary electronic device like a surveillance camera, a monitor or a television apparatus.
  • the image processing device 100 is a portable device, such as a handheld camera, a personal digital assistant, a tablet computer or a mobile phone.
  • the image processing device 100 generates or receives an image data stream SI including a sequence of frames, wherein each frame represents a temporal instance of image information.
  • the image data stream SI has a predefined frame rate used by the image processing device 100 for processing and/or transmitting and/or displaying the image data stream SI.
  • the frame rate of the image data stream SI may be 50 Hz or 60 Hz.
  • Some sources provide image data at lower frame rates.
  • movies are typically shot at a frame rate of 24 Hz.
  • a frame rate converter duplicates frames in image data streams of a low frame rate in order to make image data streams with low frame rates compatible for processing in an image processing device using higher frame rates.
  • an image data stream originally obtained at 30 Hz may be converted by a 2:2 pull-down process to a 60 Hz image data stream.
  • a common pull-down mechanism is simply duplicating each frame. Accordingly, the image processing device 100 may receive an image data stream SI containing sequences of two, three, four or more identical frames.
  • An evaluation unit 110 outputs a control signal Cnt indicating identity or non-identity between a current frame and a directly preceding frame in the image data stream SI.
  • the evaluation unit 110 may search the image data stream SI for identical frames by comparing the current frame and the directly preceding frame and may generate the control signal Cnt in response to a result of the comparison.
  • the evaluation unit 110 receives phase information PI about a pull-down status of the image data stream SI and generates the control signal Cnt on the basis of the received phase information PI.
  • the phase information PI may be obtained at an earlier stage of processing in the image processing device 100 and may be synchronized with the frames in the image data stream SI.
  • a calculator unit 140 receives the image data stream SI and the control signal Cnt.
  • the calculator unit 140 calculates, from the current frame and the directly preceding frame, first values, which are descriptive for a difference between the current frame and the directly preceding frame.
  • the calculator unit 140 calculates, based on image portions in the current frame and a previous non-identical frame, second values which are descriptive for a difference between the current frame and the last previous non-identical frame.
  • An image processing unit 170 receives the first and the second values and may also receive the image data stream SI.
  • the image processing unit 170 may perform any application whose processing may be distributed over several time instances, for example applications using iterative methods or other highly complex methods.
  • the content and type of the first and second values provided from the calculator unit 140 to the image processing unit 170 depend on the application performed by the image processing unit 170.
  • the image processing unit 170 provides object identification, image post processing, disparity estimation and/or iterative methods of object segmentation.
  • the calculator unit 140 outputs motion vectors and the image processing unit 170 is a video analyzing unit for determining and classifying moving objects in the image data stream SI, for example within the framework of surveillance tasks and monitoring systems, or an image coding device for image data compression.
  • the image processing unit 170 may be an interpolation unit that generates intermediate frames between the frames of the image data stream SI on the basis of the motion vectors provided by the calculator unit 140.
  • the image processing unit 170 may increase a local spatial resolution based on a motion estimation (super resolution) or may be a noise reduction unit.
  • the calculator unit 140 may calculate first motion vectors descriptive for a displacement of image portions in the current frame and the directly preceding frame and second motion vectors which are descriptive for a displacement of image portions between the current frame and the last previous non-identical frame. Thereby the calculation of the second motion vectors may use previously obtained first motion vectors and/or one or more previously obtained second motion vectors to improve the result of the motion estimation. As a result, the motion estimation maturates with each identical frame in the image data stream SI.
  • the embodiments use process cycles assigned to the processing of identical frames for maturing a complex image evaluation process like motion estimation.
  • the calculator unit 140 may output, for each identical frame, more and/or more mature values to the image processing unit 170, for example more accurate motion vectors.
  • the image processing unit 170 may either discard the additional information, may use some of the additional information or may use the complete additional information in order to improve its application. For example, in the case of frame rate conversion, the image processing unit 170 may estimate an intermediate frame more exactly.
  • the image processing device 100 converts the frame rate of the image data stream SI in real-time, serving a fixed higher frame rate with lower frame rate source information. Where conventional image processing methods idle or perform redundant work, the image processing device 100 uses the available time to improve the quality of the output image stream.
  • Fig. 1B refers to an embodiment with the evaluation unit 110 checking the image data stream SI for identical frames.
  • the calculator unit 140 includes a first frame buffer 141, which may hold a current frame of the image data stream SI.
  • a second frame buffer 142 may hold the frame directly preceding the current frame or the last frame which is not identical to the current frame.
  • a motion vector calculator 146 calculates motion vectors MV on the basis of the frames held in the first and second frame buffers 141, 142 and previously calculated motion vectors MV stored in a vector buffer 149 and outputs the calculated motion vectors MV to an output buffer 148.
  • the evaluation unit 110, the calculator unit 140, the image processing unit 170 and each of the subunits of the calculator unit 140 as illustrated in Figs. 1A and 1B may be implemented using ICs (integrated circuits), FPGAs (field programmable gate arrays) or at least one ASIC (application specific integrated circuit).
  • One or more of the evaluation unit 110, the calculator unit 140 and the image processing unit 170 may include or consist of a logic device for supporting or fully implementing the described functions.
  • a logic device may include, but is not limited to, an ASIC, an FPGA, a GAL (generic array logic) and their equivalents.
  • one or more of the evaluation unit 110, the calculator unit 140 and the image processing unit 170 may include or consist of a logic device for supporting or fully implementing the described functions or may be realized completely in software, for example in a program running on a processing system 250 of an electronic apparatus 200 as shown in Fig. 1C.
  • the processing system 250 can be implemented using a microprocessor or its equivalent, such as a CPU (central processing unit) 257 or at least one ASP (application specific processor) (not shown).
  • the CPU 257 utilizes a computer readable storage medium, such as a memory 252 (e.g., ROM, EPROM, EEPROM, flash memory, static memory, DRAM, SDRAM, and equivalents).
  • Programs stored in the memory 252 control the CPU 257 to perform an image processing method according to the embodiment.
  • results of the image processing method or the input of image data in accordance with this disclosure can be displayed by a display controller 251 on a monitor 210.
  • the display controller 251 may include at least one GPU (graphic processing unit) for improved computational efficiency.
  • An input/output (I/O) interface 258 may be provided for inputting data from a keyboard 221 or a pointing device 222 for controlling parameters for the various processes and algorithms of the disclosure.
  • the monitor 210 may be provided with a touch-sensitive interface to a command/instruction interface, and other peripherals can be incorporated including a scanner or a webcam 229.
  • the above-noted components can be coupled to a network 290 such as the Internet or a local intranet, via a network interface 256 for the transmission and/or reception of data, including controllable parameters.
  • the network 290 provides a communication path to the electronic apparatus 200, which can be provided by way of packets of data.
  • a central BUS 255 is provided to connect the above hardware components together and to provide at least one path for digital communication therebetween.
  • Figs. 2A to 2L refer to a mode of operation of the evaluation unit 110 and the calculator unit 140 of Fig. 1B.
  • the mode of operation is described for an image data stream SI resulting from a 4:4 pull-down conversion, applied, for example, to image data generated by a 15 Hz source to up-convert the image data by repetition to 60 Hz for broadcasting purposes.
  • the image data stream SI includes a plurality of sequences of four identical frames.
  • Fig. 2A refers to a point in time where the current frame, which is the last frame 0d of a first sequence of four identical frames, is held in the first frame buffer 141.
  • the next frame 4a is the first frame of a second sequence of four identical frames.
  • the evaluation unit 110 may detect non-identity between the first frame 4a of the second sequence and the last frame 0d of the first sequence and hence may save the last frame 0d of the first sequence in the second frame buffer 142.
  • Fig. 2B shows the last frame 0d of the first sequence, which is not identical to the current frame 4a, saved in the second frame buffer 142.
  • the current frame 4a, which is the first frame of the second sequence, is stored in the first frame buffer 141.
  • Fig. 2C shows two non-identical frames 4a, 0d stored in the first and second frame buffers 141, 142. From image portions of the frames stored in the first and second frame buffers 141, 142 and previously obtained motion vectors stored in the vector buffer 149, the motion vector calculator 146 computes first motion vectors MV0d-4a.
  • Fig. 2D shows the first motion vectors MV0d-4a stored in the output buffer 148.
  • the next frame in the image data stream SI is the second frame 4b of the second sequence, which is identical to the directly preceding frame 4a.
  • the evaluation unit 110 detects identity between the current frame 4b and the previous frame 4a and either discards the second frame 4b or replaces the previous frame 4a with the current frame 4b in the first frame buffer 141.
  • On the output side of the motion vector calculator 146, the previously obtained first motion vectors MV0d-4a are shifted to the vector buffer 149 as shown in Fig. 2E. Further, the evaluation unit 110 may control the motion vector calculator 146, via a configuration signal Cfg, to switch to another algorithm for obtaining the next motion vectors. As a result, the frame buffers 141, 142 still hold frames between which motion takes place. This gives the opportunity to re-estimate the same content again by using previously obtained information on the same pair of frames. Consequently the estimation is significantly improved. Since the additional information is obtained in no-motion (repetition) phases, no extra resources are required.
  • estimation of the first motion vectors can start very aggressively and adapt over time based on the previous estimation results to be more accurate, for example at object borders.
  • the motion vector calculator 146 may be reconfigured to perform an occlusion-aware estimation, wherein the first motion vector MV0d-4a is used to identify the occlusion information.
  • the motion vector calculator 146 outputs the second motion vectors MV0d-4b which consider the obtained occlusion information and may exclude concerned image portions from the motion estimation process for obtaining the second motion vectors MV0d-4b.
  • the current frame is the third frame 4c of the second sequence of identical frames.
  • the evaluation unit 110 detects identity between the current frame 4c and the previous frame 4b and either discards the current frame 4c or writes the current frame 4c in place of the previous frame 4b into the first frame buffer 141.
  • the second motion vectors MV0d-4b are shifted to the vector buffer 149.
  • the motion vector calculator 146 may again be reconfigured.
  • the second motion vectors MV0d-4b, which are not disturbed by occlusion aspects, are used to distinguish between foreground and background, and the motion vector calculator 146 performs a foreground/background-aware estimation to obtain further second motion vectors MV0d-4c as illustrated in Fig. 2H.
  • Fig. 2I refers to the next cycle with the current frame 4d, which is the fourth frame of the second sequence, being discarded or loaded in the first frame buffer 141.
  • the evaluation unit 110, which detects identity between the current frame 4d and the previous frame 4c, keeps the last non-identical frame 0d in the second frame buffer 142.
  • the further second motion vectors MV0d-4c are shifted to the vector buffer 149.
  • the motion vector calculator 146 may be reconfigured to identify segments for finer detailed estimations on the basis of an occlusion-free and foreground/background-aware estimation to obtain the yet further second motion vectors MV0d-4d as shown in Fig. 2J.
  • the next frame is the first frame 8a of a third sequence of four identical frames.
  • the evaluation unit 110 detects non-identity between the current frame 8a and the frame directly preceding the current frame, which may be the fourth frame 4d of the second sequence or another frame identical to the fourth frame 4d.
  • the evaluation unit 110 controls the control signal Cnt such that the content of the first frame buffer 141 is shifted to the second frame buffer 142 and the current frame 8a is stored in the first frame buffer 141.
  • This mode of operation of the frame buffers 141, 142 is an example only.
  • the last frame of the previous series may be maintained in the first frame buffer 141 and the two frame buffers 141, 142 may be controlled complementarily.
  • the yet further second motion vectors MV0d-4d are shifted to the vector buffer 149.
  • a further first motion vector MV4d-8a is obtained from the current frame 8a and the non-identical directly preceding frame 4d. Since the motion vector calculator 146 provides more accurate motion vectors MV0d-4d, the estimation of the further first motion vector MV4d-8a may be more precise than in conventional motion estimation processes. For example, the speed of motion between two successive, non-identical frames in a 4:4 pull-down image data stream SI is comparatively high, which conventionally results in poor estimates and significant motion blur.
  • the image processing unit 170 may benefit from the improved motion vector estimation by receiving more accurate motion vectors.
  • the image processing unit 170 may also use all motion vectors for further tasks, for example disparity estimation, super resolution, noise reduction, object identification and other post processing applications.
  • Fig. 3 shows examples of content in broadcast video streams containing previously up-converted frame rates.
  • the upper line shows a continuous motion phase as provided, for example, by a camera. Each single frame is shot at a different point in time.
  • the second line refers to a 2:2 pull-down image data stream.
  • the image data stream includes pairs of identical frames assigned to the same points in time.
  • in the 3:2 pull-down image data stream illustrated in the third line, sequences of three identical frames alternate with sequences of two identical frames.
  • in the 4:4 pull-down image data stream in the fourth line, four consecutive frames are identical in each case.
  • in general, sequences of X identical frames, sequences of Y identical frames and sequences of Z identical frames may alternate with one another consecutively.
  • Fig. 4A refers to a 4:4 pull-down image data stream with sequences of four consecutive identical frames.
  • the second and the third line refer to the contents of two internal frame buffers.
  • the first frame buffer holds the current frame and the second frame buffer holds the directly preceding frame.
  • the line at the bottom shows the motion vectors obtained by comparison of the two frames in the frame buffers.
  • the motion vector MV0d-4a includes the motion information between the first frame of the second sequence and the last frame of the first sequence.
  • the following three motion vectors MV4a-4b, MV4b-4c, MV4c-4d indicate no motion since they are based on the comparison of identical frames. An image processing unit relying on these motion vectors obtains no further information from the second to the fourth motion vector.
  • Fig. 4B shows motion vectors obtained according to embodiments related to motion estimation. Since the two buffers contain different frames for each motion estimation, no redundant motion vectors are output to the image processing unit, but motion vectors MV0d-4a, MV0d-4b, MV0d-4c, MV0d-4d. Since the estimation of motion vectors uses information from the previous estimation and since the algorithm applied by the motion vector calculator may be changed, the motion vectors MV0d-4a, MV0d-4b, MV0d-4c, MV0d-4d mature. The fourth motion vectors MV0d-4d are more precise than the first motion vectors MV0d-4a. Subsequent image processing can be based on more accurate motion vectors.
  • the further image processing may use the other motion vectors obtained in the time periods during which identical frames are applied to further improve image processing.
  • Fig. 5 shows the maturing process of the motion vector in more detail.
  • the algorithm applied by the motion estimation unit 140 may be changed by changing the configuration of the motion estimation unit 140.
  • the motion estimation unit 140 is based on re-configurable hardware.
  • the motion estimation unit 140 may perform different programs. For a first motion estimation, the motion estimation unit 140 is in a first configuration 140-1 and outputs a first quality measure.
  • for a second motion estimation, the motion estimation unit 140 is in a second configuration 140-2 and uses the first quality measure for evaluating the current frame and the previous non-identical frame. With the second configuration 140-2, the motion estimation unit 140 outputs a second quality measure.
  • the same pattern recurs for a third and a fourth configuration 140-3, 140-4.
  • the first configuration 140-1 may be a conventional configuration for rough motion estimation.
  • the result of the rough estimation may be used to detect occlusions.
  • for the second motion estimation, the occlusion information is considered. For example, image portions where occlusion occurs may be excluded from the second motion estimation process performed with the second configuration 140-2.
  • the occlusion-clean motion vector may be used to identify foreground and background information and a further motion estimation that is aware of the foreground and background may be applied using a third configuration 140-3.
  • vector fields may be used to identify segments and information identifying segments can be used for finer detailed estimation in a fourth configuration 140-4.
  • re-configuration may concern the change of a search range for an estimation process.
  • Other embodiments may provide the configuration to change treatment of disturbances in an estimation process, for example in a steepest gradient descent method.
  • the length of update vectors in a 3D recursive motion estimation may be adapted and reconfigured.
  • Another aspect of the configuration may concern the block size in a block matching process or the evaluation of vector candidates, for example penalty functions of vector candidates.
  • the search directions may be reconfigured. For example, the first motion estimation may proceed from the left-hand upper corner to the right-hand lower corner and a second estimation may proceed from the right-hand lower corner to the left-hand upper corner.
  • any kind of information contributing to the stability of the estimation for example covering and uncovering information, may be used to maturate the motion vectors.
  • an image processing method includes generating a control signal indicating identity and non-identity between a current frame and a directly preceding frame in an image data stream (602).
  • first values are calculated on the basis of image portions in the current frame and the directly preceding frame, wherein the first values are descriptive for a difference between the current frame and the directly preceding frame (604).
  • second values are calculated based on image portions in the current frame and the last previous non-identical frame, wherein the second values are descriptive for a difference between the current frame and the previous non-identical frame.
  • the first values may be first motion vectors descriptive for a displacement of image portions in the current frame and the directly preceding frame and the second values may be second motion vectors descriptive for a displacement of image portions between the current frame and the previous non-identical frame.
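The chain of estimator configurations described above (rough, occlusion-aware, foreground/background-aware, segment-based) can be sketched as a pipeline in which each pass reuses the result of the previous one. This is a minimal illustration, not the disclosed hardware; the `mature_vectors` name, the pass functions and the scalar error model are all hypothetical:

```python
def mature_vectors(cur, ref, passes):
    """Run a chain of estimator configurations (cf. Fig. 5).

    Each pass receives the current frame, the last non-identical
    reference frame and the result of the preceding pass, mirroring
    how the second to fourth configurations reuse the earlier
    quality measure.  The pass functions are placeholders.
    """
    vectors = None
    for estimate in passes:                    # e.g. rough, occlusion-aware,
        vectors = estimate(cur, ref, vectors)  # fg/bg-aware, segment-based
    return vectors

# Toy passes that each halve the residual error of the previous pass,
# illustrating how the estimate matures over the repetition phase:
passes = [lambda c, r, v: 8.0 if v is None else v / 2] * 4
print(mature_vectors("frame 4x", "frame 0d", passes))  # prints 1.0
```

The point of the chain is that each repetition cycle buys one additional refinement pass at no extra hardware cost, because the frame buffers are otherwise idle during the repetition phase.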

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Television Systems (AREA)

Abstract

An image processing device includes an evaluation unit that outputs a control signal indicating identity/non-identity between a current frame and a directly preceding frame in an image data stream. When the current frame and the directly preceding frame are non-identical, a calculator unit calculates first values descriptive for a difference between the current frame and the directly preceding frame. When the current frame and the directly preceding frame are identical, the calculator unit calculates second values descriptive for a difference between the current frame and a previous non-identical frame, e.g. using previous results.

Description

Image Processing Device for an Image Data Stream Including Identical Frames and Image
Processing Method
BACKGROUND
Field of the Disclosure
The present disclosure relates to an image processing device for an image data stream including sequences of identical frames. The disclosure further relates to an image processing method.
Description of Related Art
Pixel-motion analysis allows for implementing a variety of temporal functions in video streams such as deinterlacing, frame rate conversion, image coding and multi-frame noise reduction. Motion analysis calculates motion vectors which indicate, for single pixels or for groups of pixels, where the respective pixel or group of pixels has moved from or will move to from frame to frame. An image processing device typically receives frames of an input data stream at a certain frame rate. Sources may supply video data streams at different frame rates. For example, movies are typically shot at 24 Hz whereas video content is typically broadcast at frame rates of 50 Hz or 60 Hz. Before being broadcast, the frame rate of movies may be upconverted by repeating some frames two or three times, resulting in sequences of identical frames. Typically, motion analysis is by-passed or disabled for two consecutive identical frames.
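The frame repetition described above can be sketched as follows. The `pull_down` helper and the stand-in string frames are illustrative only; real converters operate on decoded video frames:

```python
def pull_down(frames, pattern):
    """Repeat each source frame according to a pull-down pattern.

    `pattern` gives the repetition counts applied cyclically to
    consecutive source frames, e.g. (2, 2) for 2:2 pull-down or
    (3, 2) for 3:2 pull-down (24 Hz -> 60 Hz).
    """
    out = []
    for i, frame in enumerate(frames):
        out.extend([frame] * pattern[i % len(pattern)])
    return out

# Four source frames under 3:2 pull-down yield alternating sequences
# of three and two identical frames:
print(pull_down(["A", "B", "C", "D"], (3, 2)))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

The identical frames in the output are exactly the repetition phases for which conventional motion analysis idles and which the embodiments exploit instead.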
It is an object of the embodiments to provide an image processing apparatus that operates more efficiently.
SUMMARY
An embodiment refers to an image processing device. An evaluation unit outputs a control signal indicating identity or non-identity between a current frame and a directly preceding frame in an image data stream. When non-identity is determined between the current frame and the directly preceding frame, a calculator unit calculates first values which are descriptive for a difference between two consecutive frames. Using the first values, the calculator unit calculates second values, which are descriptive for a difference between the current frame and the previous non-identical frame, when identity is determined between the current frame and the directly preceding frame.
Another embodiment refers to an image processing method. Using an evaluation unit, identity and non-identity between a current frame and a directly preceding frame in an image data stream are determined. Using a processor, first values are calculated which are descriptive for a difference between two consecutive frames, when non-identity is determined between the current frame and the directly preceding frame. Using the first values, second values, which are descriptive for a difference between the current frame and the previous non-identical frame, are calculated when identity is determined between the current frame and the directly preceding frame.
According to another embodiment, an image processing device includes an evaluation unit that outputs a control signal indicating identity and non-identity between a current frame and a directly preceding frame in an image data stream. When the control signal indicates non-identity between the current frame and the directly preceding frame, a motion vector calculator calculates first motion vectors that are descriptive for a displacement of image portions in two consecutive frames. When the control signal indicates identity between the current frame and the directly preceding frame, the motion vector calculator calculates, on the basis of the first motion vectors, second motion vectors descriptive for a displacement of image portions between the current frame and the last previous frame which is non-identical with the current frame.
The foregoing paragraphs have been provided by way of general introduction, and are not intended to limit the scope of the following claims. The described embodiments, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings. The elements of the drawings are not necessarily to scale relative to each other. In the following drawings, like reference numerals designate identical or corresponding parts throughout the several views. Features of the illustrated embodiments can be combined with each other to form yet further embodiments. Fig. 1A is a schematic block diagram of an image processing device including a calculator unit in accordance with an embodiment. Fig. 1B is a schematic block diagram illustrating details of the calculator unit of Fig. 1A in accordance with an embodiment related to motion vector estimation.
Fig. 1C is a schematic block diagram of a processing system in accordance with an embodiment related to a computer program for carrying out an image processing method.
Fig. 2A is a schematic block diagram for illustrating the mode of operation of the image processing device of Fig. 1B with the last frame of a first sequence of identical frames stored in a first frame buffer for illustrating an embodiment related to an image processing method. Fig. 2B shows the image processing device of Fig. 2A after loading the last frame of the first sequence into a second frame buffer.
Fig. 2C shows the image processing device of Fig. 2B after loading a first frame of a second sequence of identical frames into the first frame buffer.
Fig. 2D shows the image processing device of Fig. 2C after calculating first motion vectors.
Fig. 2E shows the image processing device of Fig. 2D after shifting the first motion vectors to a vector buffer and loading the second frame of the second sequence into the first frame buffer.
Fig. 2F shows the image processing device of Fig. 2E after calculating second motion vectors. Fig. 2G shows the image processing device of Fig. 2F after storing the second motion vectors in the vector buffer and loading the third frame of the second sequence into the first frame buffer.
Fig. 2H shows the image processing device of Fig. 2G after calculating further second motion vectors. Fig. 2I shows the image processing device of Fig. 2H after storing the further second motion vectors in the vector buffer and loading the fourth frame of the second sequence into the first frame buffer. Fig. 2J is the image processing device of Fig. 2I after calculating yet further second motion vectors.
Fig. 2K shows the image processing device of Fig. 2J after storing the yet further second motion vectors in the vector buffer and loading a first frame of a third sequence of identical frames into the first frame buffer.
Fig. 2L shows the image processing device of Fig. 2K after calculating further first motion vectors. Fig. 3 is a schematic diagram illustrating pull-down conversion modes.
Fig. 4A is a schematic diagram illustrating the generation of motion vectors according to a comparative example. Fig. 4B is a schematic diagram illustrating maturing motion vectors in accordance with the embodiment related to motion estimation.
Fig. 5 is a schematic diagram illustrating the mode of operation of a motion vector calculator unit in accordance with an embodiment referring to a configurable motion vector calculator unit.
Fig. 6 is a simplified flow chart illustrating an image processing method according to a further embodiment.
DESCRIPTION OF THE EMBODIMENTS
Fig. 1A refers to an image processing device 100 which may be implemented in an electronic device receiving, generating, outputting or displaying image information. According to an embodiment, the image processing device 100 is implemented as a stationary electronic device like a surveillance camera, a monitor or a television apparatus. According to other embodiments, the image processing device 100 is a portable device, such as a handheld camera, a personal digital assistant, a tablet computer or a mobile phone. The image processing device 100 generates or receives an image data stream SI including a sequence of frames, wherein each frame represents a temporal instance of image information.
The image data stream SI has a predefined frame rate used by the image processing device 100 for processing and/or transmitting and/or displaying the image data stream SI. For example, the frame rate of the image data stream SI may be 50 Hz or 60 Hz. Some sources provide image data at lower frame rates. For example, movies are typically shot at a frame rate of 24 Hz. Usually, a frame rate converter duplicates frames in image data streams of a low frame rate in order to make image data streams with low frame rates compatible for processing in an image processing device using higher frame rates. For example, an image data stream originally obtained at 30 Hz may be converted by a 2:2 pull-down process to a 60 Hz image data stream. A common pull-down mechanism is simply duplicating each frame. Accordingly, the image processing device 100 may receive an image data stream SI containing sequences of two, three, four or more identical frames.
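The duplication step described above can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the function name `pull_down` and the encoding of the pattern as a tuple of repetition counts are assumptions for illustration:

```python
def pull_down(frames, pattern):
    """Repeat each source frame according to a cyclic pull-down pattern.

    pattern gives the repetition count per source frame, e.g. (2, 2) for
    a 2:2 pull-down (30 Hz -> 60 Hz) or (3, 2) for a 3:2 pull-down.
    """
    out = []
    for i, frame in enumerate(frames):
        # repeat the i-th source frame according to the cyclic pattern
        out.extend([frame] * pattern[i % len(pattern)])
    return out

# 2:2 pull-down produces pairs of identical frames:
# pull_down(["A", "B", "C"], (2, 2)) -> ["A", "A", "B", "B", "C", "C"]
```

The resulting stream contains exactly the sequences of identical frames that the evaluation unit 110 later has to detect.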
An evaluation unit 110 outputs a control signal Cnt indicating identity or non-identity between a current frame and a directly preceding frame in the image data stream SI. According to an embodiment, the evaluation unit 110 may search the image data stream SI for identical frames by comparing the current frame and the directly preceding frame and may generate the control signal Cnt in response to a result of the comparison. According to another embodiment, the evaluation unit 110 receives phase information PI about a pull-down status of the image data stream SI and generates the control signal Cnt on the basis of the received phase information PI. The phase information PI may be obtained at an earlier stage of processing in the image processing device 100 and may be synchronized with the frames in the image data stream SI.
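The first of the two variants above, detecting identical frames by direct comparison, could look like the following sketch. The frames are assumed to be numpy arrays; the function name and the optional `threshold` parameter (added to tolerate compression noise) are illustrative assumptions and are not specified in the patent:

```python
import numpy as np

def control_signal_cnt(current, preceding, threshold=0.0):
    """Return True (identity) when the current frame matches the directly
    preceding frame.

    With threshold 0 the comparison is exact; a small positive threshold
    on the mean absolute pixel difference tolerates compression noise.
    """
    diff = np.mean(np.abs(current.astype(np.int32) - preceding.astype(np.int32)))
    return bool(diff <= threshold)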
A calculator unit 140 receives the image data stream SI and the control signal Cnt. When the control signal Cnt indicates non-identity between the current frame and the directly preceding frame, the calculator unit 140 calculates, from the current frame and the directly preceding frame, first values which are descriptive for a difference between the current frame and the directly preceding frame. When the control signal Cnt indicates identity between the current frame and the directly preceding frame, the calculator unit 140 calculates, based on image portions in the current frame and a previous non-identical frame, second values which are descriptive for a difference between the current frame and the last previous non-identical frame. An image processing unit 170 receives the first and the second values and may also receive the image data stream SI. The image processing unit 170 may perform any application whose processing may be distributed over several time instances, for example applications using iterative methods or other highly complex methods. The content and type of the first and second values provided from the calculator unit 140 to the image processing unit 170 depend on the application performed by the image processing unit 170. For example, the image processing unit 170 provides object identification, image post processing, disparity estimation and/or iterative methods of object segmentation. According to another embodiment, the calculator unit 140 outputs motion vectors and the image processing unit 170 is a video analyzing unit for determining and classifying moving objects in the image data stream SI, for example within the framework of surveillance tasks and monitoring systems, or an image coding device for image data compression. For example, the image processing unit 170 may be an interpolation unit that generates intermediate frames between the frames of the image data stream SI on the basis of the motion vectors provided by the calculator unit 140.
According to further embodiments, the image processing unit 170 may increase a local spatial resolution based on a motion estimation (super resolution) or may be a noise reduction unit. The calculator unit 140 may calculate first motion vectors descriptive for a displacement of image portions in the current frame and the directly preceding frame and second motion vectors which are descriptive for a displacement of image portions between the current frame and the last previous non-identical frame. Thereby the calculation of the second motion vectors may use previously obtained first motion vectors and/or one or more previously obtained second motion vectors to improve the result of the motion estimation. As a result, the motion estimation maturates with each identical frame in the image data stream SI.
Where conventional approaches usually disable or bypass a motion estimation process in the case of consecutive identical frames, the embodiments use process cycles assigned to the processing of identical frames for maturating a complex image evaluation process like motion estimation. The calculator unit 140 may output, for each identical frame, more and/or maturating values to the image processing unit 170, for example more accurate motion vectors. The image processing unit 170 may either discard the additional information, use some of the additional information or use the complete additional information in order to improve its application. For example, in the case of frame rate conversion, the image processing unit 170 may estimate an intermediate frame more exactly. According to an embodiment, the image processing device 100 converts the frame rate of the image data stream SI in real-time, thereby scheduling a fixed higher frame rate input with lower frame rate information. Where conventional image processing methods idle or do redundant jobs, the image processing device 100 uses the available time for improving the quality of the output image stream.
Fig. 1B refers to an embodiment with the evaluation unit 110 checking the image data stream SI for identical frames. The calculator unit 140 includes a first frame buffer 141, which may hold a current frame of the image data stream SI. A second frame buffer 142 may hold the frame directly preceding the current frame or the last frame which is not identical to the current frame. A motion vector calculator 146 calculates motion vectors MV on the basis of the frames held in the first and second frame buffers 141, 142 and previously calculated motion vectors MV stored in a vector buffer 149 and outputs the calculated motion vectors MV to an output buffer 148.
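A minimal software sketch of this buffer arrangement is given below. Class and method names are invented for illustration, and the `estimate` callback stands in for the motion vector calculator 146; the actual hardware units of Fig. 1B are of course not limited to this structure:

```python
class CalculatorUnit:
    """Sketch of Fig. 1B: first frame buffer 141, second frame buffer 142,
    vector buffer 149, and a pluggable motion vector calculator."""

    def __init__(self, estimate):
        self.estimate = estimate   # estimate(current, reference, prev_vectors)
        self.frame_buf = None      # first frame buffer 141: current frame
        self.ref_buf = None        # second frame buffer 142: last non-identical frame
        self.vector_buf = None     # vector buffer 149: previously calculated vectors

    def step(self, frame, identical):
        if not identical:
            # new content: the previous current frame becomes the reference
            self.ref_buf, self.frame_buf = self.frame_buf, frame
        else:
            # identical frame: keep the old reference, re-estimate with history
            self.frame_buf = frame
        if self.ref_buf is None:
            return None            # no reference yet, nothing to estimate
        mv = self.estimate(self.frame_buf, self.ref_buf, self.vector_buf)
        self.vector_buf = mv       # shift the result into the vector buffer
        return mv
```

Called with the frame sequence of Figs. 2A to 2F (0d, 4a, 4b, ...), the unit keeps estimating against frame 0d while the identical frames 4a to 4d arrive, each time feeding the previous result back into the estimation.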
The evaluation unit 1 10, the calculator unit 140, the image processing unit 170 and each of the subunits of the calculator unit 140 as illustrated in Figs. 1A and 1 B may be implemented using ICs (integrated circuits), FPGAs (field programmable gate arrays) or at least one ASIC (application specific integrated circuit). One or more of the evaluation unit 110, the calculator unit 140 and the image processing unit 170 may include or consist of a logic device for supporting or fully implementing the described functions. Such a logic device may include, but is not limited to, an ASIC, an FPGA, a GAL (generic-array of logic) and their equivalents.
According to other embodiments, one or more of the evaluation unit 110, the calculator unit 140 and the image processing unit 170 may include or consist of a logic device for supporting or fully implementing the described functions or may be realized completely in software, for example in a program running on a processing system 250 of an electronic apparatus 200 as shown in Fig. 1C. The processing system 250 can be implemented using a microprocessor or its equivalent, such as a CPU (central processing unit) 257 or at least one ASP (application specific processor) (not shown). The CPU 257 utilizes a computer readable storage medium, such as a memory 252 (e.g., ROM, EPROM, EEPROM, flash memory, static memory, DRAM, SDRAM, and equivalents). Programs stored in the memory 252 control the CPU 257 to perform an image processing method according to the embodiment. In another aspect, results of the image processing method or the input of image data in accordance with this disclosure can be displayed by a display controller 251 on a monitor 210. The display controller 251 may include at least one GPU (graphics processing unit) for improved computational efficiency. An input/output (I/O) interface 258 may be provided for inputting data from a keyboard 221 or a pointing device 222 for controlling parameters of the various processes and algorithms of the disclosure. The monitor 210 may be provided with a touch-sensitive interface to a command/instruction interface, and other peripherals can be incorporated, including a scanner or a webcam 229.
The above-noted components can be coupled to a network 290, such as the Internet or a local intranet, via a network interface 256 for the transmission and/or reception of data, including controllable parameters. The network 290 provides a communication path to the electronic apparatus 200, which can be provided by way of packets of data. Additionally, a central BUS 255 is provided to connect the above hardware components together and to provide at least one path for digital communication therebetween. Insofar as embodiments of the invention have been described as being implemented, at least in part, by the software-controlled electronic apparatus 200, any non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, represents an embodiment of the present invention. Figs. 2A to 2L refer to a mode of operation of the evaluation unit 110 and the calculator unit 140 of Fig. 1B. The mode of operation is described for an image data stream SI resulting from a 4:4 pull-down conversion, applied, for example, to image data generated by a 15 Hz source to up-convert the image data by repetition to 60 Hz for broadcasting purposes. As a result, the image data stream SI includes a plurality of sequences of four identical frames.
Fig. 2A refers to a point in time where the current frame, which is the last frame 0d of a first sequence of four identical frames, is held in the first frame buffer 141. The next frame 4a is the first frame of a second sequence of four identical frames. The evaluation unit 110 may detect non-identity between the first frame 4a of the second sequence and the last frame 0d of the first sequence and hence may save the last frame 0d of the first sequence in the second frame buffer 142.
Fig. 2B shows the last frame 0d of the first sequence, which is not identical to the current frame 4a, saved in the second frame buffer 142. The current frame 4a, which is the first frame of the second sequence, is stored in the first frame buffer 141. Fig. 2C shows two non-identical frames 4a, 0d stored in the first and second frame buffers 141, 142. From image portions of the frames stored in the first and second frame buffers 141, 142 and previously obtained motion vectors stored in the vector buffer 149, the motion vector calculator 146 computes first motion vectors MV0d-4a.
Fig. 2D shows the first motion vectors MV0d-4a stored in the output buffer 148. The next frame in the image data stream SI is the second frame 4b of the second sequence, which is identical to the directly preceding frame 4a. The evaluation unit 110 detects identity between the current frame 4b and the previous frame 4a and either discards the second frame 4b or replaces the previous frame 4a with the current frame 4b in the first frame buffer 141.
On the output side of the motion vector calculator 146, the previously obtained first motion vectors MV0d-4a are shifted to the vector buffer 149 as shown in Fig. 2E. Further, the evaluation unit 110 may control the motion vector calculator 146, via a configuration signal Cfg, to switch to another algorithm for obtaining the next motion vectors. As a result, the frame buffers 141, 142 still hold frames between which motion takes place. This gives the opportunity to re-estimate the same content again by using previously obtained information on the same pair of frames. Consequently, the estimation is significantly improved. Since the additional information is obtained in no-motion (repetition) phases, no extra resources are required. For example, estimation of the first motion vectors can start very aggressively and adapt over time based on the previous estimation results to become more accurate, for example at object borders. Accordingly, for calculating the second motion vectors MV0d-4b obtained in Fig. 2F, the motion vector calculator 146 may be reconfigured to perform an occlusion-aware estimation, wherein the first motion vectors MV0d-4a are used to identify the occlusion information. The motion vector calculator 146 outputs the second motion vectors MV0d-4b which consider the obtained occlusion information and may exclude concerned image portions from the motion estimation process for obtaining the second motion vectors MV0d-4b. For the next cycle, the current frame is the third frame 4c of the second sequence of identical frames.
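The patent does not specify how the occlusion information is derived from the first motion vectors. One common approach, shown here purely as a hedged sketch, is a forward/backward consistency check on the vector fields; the (H, W, 2) array layout and the tolerance `tol` are assumptions for illustration:

```python
import numpy as np

def occlusion_mask(forward_mv, backward_mv, tol=1.0):
    """Mark pixels whose forward and backward vectors disagree as occluded.

    Where motion is consistent, backward_mv is approximately -forward_mv,
    so the sum of both fields is near zero; larger residues hint at
    covered or uncovered regions to exclude from the next estimation.
    """
    mismatch = np.linalg.norm(forward_mv + backward_mv, axis=-1)
    return mismatch > tol
```

The returned boolean mask could then gate which image portions the reconfigured motion vector calculator evaluates when producing the second motion vectors.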
According to Fig. 2G, the evaluation unit 110 detects identity between the current frame 4c and the previous frame 4b and either discards the current frame 4c or writes the current frame 4c in place of the previous frame 4b into the first frame buffer 141. The second motion vectors MV0d-4b are shifted to the vector buffer 149. In addition, the motion vector calculator 146 may again be reconfigured. For example, the second motion vectors MV0d-4b, which are not disturbed by occlusion aspects, are used to distinguish between foreground and background, and the motion vector calculator 146 performs a foreground/background-aware estimation to obtain further second motion vectors MV0d-4c as illustrated in Fig. 2H on the basis of the current frame 4c and the last frame 0d which is not identical to the current frame 4c. Fig. 2I refers to the next cycle with the current frame 4d, which is the fourth frame of the second sequence, being discarded or loaded into the first frame buffer 141. The evaluation unit 110, which detects identity between the current frame 4d and the previous frame 4c, keeps the last non-identical frame 0d in the second frame buffer 142. The further second motion vectors MV0d-4c are shifted to the vector buffer 149. The motion vector calculator 146 may be reconfigured to identify segments for finer detailed estimations on the basis of an occlusion-free and foreground/background-aware estimation to obtain the yet further second motion vectors MV0d-4d as shown in Fig. 2J. The next frame is the first frame 8a of a third sequence of four identical frames. As shown in Fig. 2K, the evaluation unit 110 detects non-identity between the current frame 8a and the frame directly preceding the current frame, which may be the fourth frame 4d of the second sequence or another frame identical to the fourth frame 4d.
Hence the evaluation unit 110 controls the control signal Cnt such that the content of the first frame buffer 141 is shifted to the second frame buffer 142 and the current frame 8a is stored in the first frame buffer 141. This mode of operation of the frame buffers 141, 142 is an example only. According to other embodiments, the last frame of the previous series may be maintained in the first frame buffer 141 and the two frame buffers 141, 142 may be controlled complementarily. The yet further second motion vectors MV0d-4d are shifted to the vector buffer 149.
As shown in Fig. 2L, further first motion vectors MV4d-8a are obtained from the current frame 8a and the non-identical directly preceding frame 4d. Since the motion vector calculator 146 provides more accurate motion vectors MV0d-4d, the estimation of the further first motion vectors MV4d-8a may be more precise than in conventional motion estimation processes. For example, the speed of motion between two successive, non-identical frames in a 4:4 pull-down image data stream SI is comparatively high, conventionally resulting in only poor results and significant motion blur.
Referring back to Fig. 1A, the image processing unit 170 may benefit from the improved motion vector estimation by receiving more accurate motion vectors. According to another embodiment, the image processing unit 170 may also use all motion vectors for further tasks, for example disparity estimation, super resolution, noise reduction, object identification and other post processing applications.
Fig. 3 shows examples of content in broadcast video streams containing previously up-converted frame rates. The upper line shows a continuous motion phase as provided, for example, by a camera. Each single frame is shot at a different point in time. The second line refers to a 2:2 pull-down image data stream. The image data stream includes pairs of identical frames assigned to the same points in time. In the 3:2 pull-down image data stream illustrated in the third line, sequences with three identical frames alternate with sequences of two identical frames. In the 4:4 pull-down image data stream in the fourth line, four consecutive frames are identical, respectively. In the general form, as indicated in the fifth line for an X:Y:Z pull-down image data stream, sequences of X identical frames, sequences of Y identical frames and sequences of Z identical frames alternate with each other consecutively. Fig. 4A refers to a 4:4 pull-down image data stream with sequences of four consecutive identical frames. The second and the third line refer to the contents of two internal frame buffers. The first frame buffer holds the current frame and the second frame buffer holds the directly preceding frame. The line at the bottom shows the motion vectors obtained by comparison of the two frames in the frame buffers. The motion vectors MV0d-4a include the motion information between the first frame of the second sequence and the last frame of the first sequence. The following three motion vectors MV4a-4b, MV4b-4c, MV4c-4d indicate no motion since they are based on the comparison of identical frames. An image processing unit relying on these motion vectors obtains no further information from the second to the fourth motion vector.
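The redundancy of Fig. 4A can be made explicit with a small sketch that flags which frames of a pulled-down stream repeat their direct predecessor (illustrative code, not from the patent):

```python
def repeat_flags(stream):
    """Flag every frame that is identical to its direct predecessor."""
    return [i > 0 and stream[i] == stream[i - 1] for i in range(len(stream))]

# In a 4:4 pull-down stream, three of every four consecutive-frame
# comparisons carry no motion information:
# repeat_flags(["0d", "4a", "4a", "4a", "4a", "8a"])
# -> [False, False, True, True, True, False]
```

A conventional estimator that always compares consecutive frames wastes the flagged cycles on null vectors, which is exactly what the maturing scheme of Fig. 4B avoids.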
Fig. 4B shows motion vectors obtained according to embodiments related to motion estimation. Since the two buffers contain different frames for each motion estimation, no redundant motion vectors are output to the image processing unit but motion vectors MV0d-4a, MV0d-4b, MV0d-4c, MV0d-4d. Since the estimation of motion vectors uses information from the previous estimation and since the algorithm applied by the motion vector calculation may be changed, the motion vectors MV0d-4a, MV0d-4b, MV0d-4c, MV0d-4d maturate. The fourth motion vectors MV0d-4d are more precise than the first motion vectors MV0d-4a. A following image processing can be based on more accurate motion vectors. In addition, the further image processing may use the other motion vectors obtained in the time periods during which identical frames are applied to further improve image processing. Fig. 5 shows the maturing process of the motion vectors in more detail. The algorithm applied by the motion estimation unit 140 may be changed by changing the configuration of the motion estimation unit 140. According to an embodiment, the motion estimation unit 140 is based on re-configurable hardware. According to another embodiment, the motion estimation unit 140 may perform different programs. For a first motion estimation, the motion estimation unit 140 is in a first configuration 140-1 and outputs a first quality measure. For a second motion estimation, the motion estimation unit 140 is in a second configuration 140-2 and uses the first quality measure for evaluating the current frame and the previous non-identical frame. With the second configuration 140-2, the motion estimation unit 140 outputs a second quality measure. The same pattern recurs for a third and a fourth configuration 140-3, 140-4.
The first configuration 140-1 may be a conventional configuration for rough motion estimation, which is more or less identical to a configuration for conventional motion estimation. The result of the rough estimation may be used to detect occlusions. In the next estimation step the occlusion information is considered. For example, image portions where occlusion occurs may be excluded from the second motion estimation process performed with a second configuration 140-2. The occlusion-clean motion vector may be used to identify foreground and background information and a further motion estimation that is aware of the foreground and background may be applied using a third configuration 140-3. As a next step vector fields may be used to identify segments and information identifying segments can be used for finer detailed estimation in a fourth configuration 140-4.
According to other embodiments, re-configuration may concern the change of a search range for an estimation process. Other embodiments may provide the configuration to change the treatment of disturbances in an estimation process, for example in a steepest gradient descent method. The length of update vectors in a 3D recursive motion estimation may be adapted and reconfigured. Another aspect of the configuration may concern the block size in a block matching process or the evaluation of vector candidates, for example penalty functions of vector candidates. The search directions may be reconfigured. For example, the first motion estimation may proceed from the left-hand upper corner to the right-hand lower corner and a second estimation may proceed from the right-hand lower corner to the left-hand upper corner. In general, any kind of information contributing to the stability of the estimation, for example covering and uncovering information, may be used to maturate the motion vectors.
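One way to organize such re-configuration in software is a fixed schedule indexed by the number of identical frames seen so far, loosely following the four stages of Fig. 5. All names and parameter values below are hypothetical illustrations, not values from the patent:

```python
# Hypothetical estimator configurations for the four stages of Fig. 5.
CONFIG_SCHEDULE = (
    {"name": "rough", "search_range": 64, "block_size": 16},
    {"name": "occlusion_aware", "search_range": 32, "block_size": 16},
    {"name": "fg_bg_aware", "search_range": 16, "block_size": 8},
    {"name": "fine_segments", "search_range": 8, "block_size": 4},
)

def config_for(repeat_count):
    """Select the configuration for the n-th repetition of the current frame;
    the last stage is reused if a sequence contains more identical frames."""
    return CONFIG_SCHEDULE[min(repeat_count, len(CONFIG_SCHEDULE) - 1)]
```

The schedule narrows the search range and block size as the vectors maturate, mirroring the rough-to-fine progression of configurations 140-1 to 140-4.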
According to Fig. 6, an image processing method includes generating a control signal indicating identity and non-identity between a current frame and a directly preceding frame in an image data stream (602). When the control signal indicates non-identity between the current frame and a directly preceding frame, first values are calculated on the basis of image portions in the current frame and the directly preceding frame, wherein the first values are descriptive for a difference between the current frame and the directly preceding frame (604). When the control signal indicates identity between the current frame and the directly preceding frame, second values are calculated based on image portions in the current frame and the last previous non-identical frame, wherein the second values are descriptive for a difference between the current frame and the previous non-identical frame. The first values may be first motion vectors descriptive for a displacement of image portions in the current frame and the directly preceding frame and the second values may be second motion vectors descriptive for a displacement of image portions between the current frame and the previous non-identical frame.
Obviously, numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.

Claims

CLAIMS:
1. An image processing device comprising
an evaluation unit configured to output a control signal indicating identity and non-identity between a current frame and a directly preceding frame in an image data stream; a calculator unit configured to calculate first values descriptive for a difference between the current frame and the directly preceding frame based on image portions in the current frame and the directly preceding frame when the control signal indicates non-identity between the current frame and the directly preceding frame and to calculate, using the first values, second values descriptive for a difference between the current frame and a previous non-identical frame based on image portions in the current frame and the previous non-identical frame when the control signal indicates identity between the current frame and the directly preceding frame.
2. The image processing device according to claim 1, wherein
the first values are first motion vectors descriptive for a displacement of image portions in the current frame and the directly preceding frame and
the second values are second motion vectors descriptive for a displacement of image portions between the current frame and the previous non-identical frame.
3. The image processing device according to claims 1 to 2, wherein
calculation of the second values uses previously obtained first values.
4. The image processing device according to claims 1 to 3, wherein
calculation of the second values uses one or more previously obtained second values.
5. The image processing device according to claims 1 to 4, comprising
a storage unit configured to store the directly preceding frame when the control signal indicates non-identity between the current frame and the directly preceding frame.
6. The image processing device according to claims 1 to 5, wherein
the storage unit is configured to store the last previous frame not being identical to the current frame when the control signal indicates identity between the current frame and the directly preceding frame.
7. The image processing device according to claims 1 to 6, wherein the evaluation unit is configured to receive phase information about a pull-down status of the image data stream and to generate the control signal on the basis of the received phase information, the received phase information being synchronized with the image data stream.
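Claim 7's phase information can be read as follows: for film transported with 3:2 pull-down, a phase counter synchronized with the stream already tells which frames repeat their predecessor, so no pixel comparison is needed. A minimal sketch; the 0-to-4 phase convention is an assumption.

```python
def identity_from_pulldown_phase(phase):
    """Control signal derived from 3:2 pull-down phase information.

    One cadence group holds five video frames: phases 0-2 show film
    frame A, phases 3-4 show film frame B. A frame repeats its
    predecessor at every phase except 0 and 3, where a new film
    frame enters the stream.
    """
    return phase % 5 not in (0, 3)
```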
8. The image processing device according to claims 1 to 7, wherein
the evaluation unit is configured to compare the current frame and the directly preceding frame and to generate the control signal in response to a result of the comparison.
9. The image processing device according to claims 1 to 8, comprising
an image processing unit using the first and second values for further processing the image data stream.
10. The image processing device according to claims 1 to 9, wherein
the image processing unit is a frame rate converter.
11. The image processing device according to claims 1 to 9, wherein
the image processing unit is an image coding unit.
12. The image processing device according to claims 1 to 11, wherein
the calculator unit is reconfigurable to apply different algorithms for obtaining the second values.
13. An integrated circuit comprising
the image processing device according to one of the preceding claims.
14. A television apparatus comprising
the integrated circuit according to claim 13 or the image processing device according to any of claims 1 to 12.
15. An image processing method comprising
generating a control signal indicating identity and non-identity between a current frame and a directly preceding frame in an image data stream;
calculating first values descriptive for a difference between the current frame and the directly preceding frame based on image portions in the current frame and the directly preceding frame when the control signal indicates non-identity between the current frame and the directly preceding frame and calculating, using the first values, second values descriptive for a difference between the current frame and a previous non-identical frame based on image portions in the current frame and the previous non-identical frame when the control signal indicates identity between the current frame and the directly preceding frame.
16. The image processing method according to claim 15, wherein
the first values are first motion vectors descriptive for a displacement of image portions in the current frame and the directly preceding frame and
the second values are second motion vectors descriptive for a displacement of image portions between the current frame and the previous non-identical frame.
17. The image processing method according to claims 15 to 16, comprising
storing, in a storage unit, the directly preceding frame when non-identity is determined between the current frame and the directly preceding frame.
18. The image processing method according to claims 15 to 17, comprising
storing, in a storage unit, the last previous frame not being identical to the current frame when identity is determined between the current frame and the directly preceding frame.
19. The image processing method according to claims 15 to 18, wherein
the second values are calculated on the basis of previously obtained first values.
20. The image processing method according to claims 15 to 19, wherein
the second values are calculated on the basis of one or more previously obtained second values.
21. An image processing device comprising
an evaluation unit configured to output a control signal indicating identity and non-identity between a current frame and a directly preceding frame in an image data stream; a motion vector calculator unit configured to calculate first motion vectors descriptive for a displacement of image portions in two consecutive frames when the control signal indicates non-identity between the current frame and the directly preceding frame and to calculate second motion vectors descriptive for a displacement of image portions between the current frame and a previous non-identical frame when the control signal indicates identity between the current frame and the directly preceding frame, wherein calculation of the second motion vectors is based on previously obtained first motion vectors.
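Claim 21 replaces the generic "values" with motion vectors. A toy full-search block matcher illustrates the first vectors; on an identical (repeated) frame, no further displacement accrues, so the second vectors relative to the last non-identical frame can be taken directly from the stored first vectors. Block size, search range, and all names are illustrative assumptions, not the patent's algorithm.

```python
import numpy as np

def block_match(cur, ref, block=4, search=2):
    # toy full-search matcher: one (dy, dx) vector per block of `cur`,
    # pointing to the best-matching position in `ref` (minimum SAD)
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = cur[by:by + block, bx:bx + block].astype(int)
            best_sad, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(patch - ref[y:y + block, x:x + block].astype(int)).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best_v = sad, (dy, dx)
            vectors[(by, bx)] = best_v
    return vectors

def motion_vectors(cur, prev, stored_first):
    if np.array_equal(cur, prev):
        # identity: second vectors derive from previously obtained first vectors
        return dict(stored_first)
    return block_match(cur, prev)  # non-identity: first vectors
```

Shifting an image one column to the right yields vectors of (0, -1) for interior blocks (the content at a block's position came from one column to the left in the reference); feeding the same frame twice returns the stored field unchanged.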
PCT/EP2013/002124 2012-10-26 2013-07-17 Image processing device for an image data stream including identical frames and image processing method WO2014063763A1 (en)

Applications Claiming Priority (2)

EP12007382, priority date 2012-10-26
EP12007382.0, priority date 2012-10-26

Publications (1)

WO2014063763A1 (en), published 2014-05-01

Family

ID=47115185

Family Applications (1)

PCT/EP2013/002124 (WO2014063763A1, en), priority date 2012-10-26, filed 2013-07-17: Image processing device for an image data stream including identical frames and image processing method

Country Status (1)

WO: WO2014063763A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party

US20040001544A1 * (Microsoft Corporation), priority date 2002-06-28, published 2004-01-01: Motion estimation/compensation for screen capture video
EP2175316A2 * (Nikon Corporation), priority date 2008-10-08, published 2010-04-14: Imaging device


Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 13737546; Country of ref document: EP; Kind code of ref document: A1

NENP: Non-entry into the national phase
    Ref country code: DE

122 (EP): PCT application non-entry in European phase
    Ref document number: 13737546; Country of ref document: EP; Kind code of ref document: A1