CN1672402A - Image processing apparatus and method, and image pickup apparatus


Info

Publication number
CN1672402A
Authority
CN
China
Prior art keywords
processing
object component
foreground
equal portion
area
Prior art date
Legal status
Granted
Application number
CNA028018222A
Other languages
Chinese (zh)
Other versions
CN100512390C (en)
Inventor
近藤哲二郎 (Tetsujiro Kondo)
泽尾贵志 (Takashi Sawao)
石桥淳一 (Junichi Ishibashi)
永野隆浩 (Takahiro Nagano)
藤原直树 (Naoki Fujiwara)
三宅彻 (Toru Miyake)
和田成司 (Seiji Wada)
Current Assignee
Sony Corp
Original Assignee
Sony Corp
Priority date
Filing date
Publication date
Application filed by Sony Corp
Publication of CN1672402A
Application granted
Publication of CN100512390C
Anticipated expiration
Current legal status: Expired - Fee Related



Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/73 Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2622Signal amplitude transition in the zone between image portions, e.g. soft edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction

Abstract

The present invention particularly relates to an image processing apparatus in which motion blur contained in a blurred image can be eliminated. An area specifying unit 103 specifies a non-mixed area formed of a foreground area consisting of foreground object components which form a foreground object and a background area consisting of background object components which form a background object, or a mixed area in which the foreground object components and the background object components are mixed. A separating/blur-eliminating unit 1503 simultaneously performs processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components based on a result obtained by specifying the area. The present invention is applicable to an image processing apparatus in which a difference between a signal detected by an image-capturing device and the real world is considered.

Description

Image processing apparatus and method, and image capture apparatus
Technical field
The present invention relates to image processing apparatuses, and more particularly to an image processing apparatus that takes into account the difference between a signal detected by a sensor and the real world.
Background art
Techniques for detecting events in the real world with a sensor and processing the sampled data output from an image sensor are widely used.
For example, when an object moves at a comparatively high speed, motion blur occurs in an image obtained by using a video camera to capture the object moving in front of a predetermined stationary background.
Hitherto, in order to suppress such motion blur, the speed of the electronic shutter is increased, for example, so as to shorten the exposure time.
With this method of increasing the shutter speed, however, the shutter speed of the video camera must be adjusted before the image is captured. Accordingly, a blurred image that has already been obtained cannot be corrected so as to obtain a sharp image.
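This blur arises from the time integration performed by the sensor: while the shutter is open, the moving foreground sweeps across the background, so each pixel records the average of several time slices, mixing foreground and background components. The following sketch simulates this model for one scan line; it is an illustration only, and the function name, the parameter `v` (the amount of movement in pixels), and the sample values are ours, not the patent's.
```python
# A minimal sketch, assuming a 1-D scan line and an opaque foreground
# moving left-to-right by v pixels during the shutter time.
import numpy as np

def capture_1d(foreground: np.ndarray, background: np.ndarray, v: int) -> np.ndarray:
    """Divide the shutter time into v equal slices; in slice t the
    foreground has shifted by t pixels; the sensor outputs the average
    of the slices (time integration)."""
    width = background.size
    acc = np.zeros(width)
    for t in range(v):                       # v virtual time slices
        frame = background.copy()
        for j, f in enumerate(foreground):   # paste the shifted foreground
            x = j + t
            if 0 <= x < width:
                frame[x] = f
        acc += frame
    return acc / v                           # averaging = motion blur + mixing

bg = np.full(12, 100.0)                      # stationary background level
fg = np.array([250.0, 240.0, 230.0, 220.0])  # sharp 4-pixel foreground object
print(capture_1d(fg, bg, v=4))               # mixed, motion-blurred scan line
```
The printed line shows pure background at both ends, pure (but blurred) foreground in the middle, and mixed pixels in between, which is exactly the foreground area / mixed area / background area structure the invention operates on.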
Summary of the invention
The present invention has been made in view of the above circumstances. Accordingly, it is an object of the present invention to make it possible to eliminate motion blur contained in a blurred image.
A first image processing apparatus of the present invention comprises: area specifying means for specifying a non-mixed area formed of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components forming a foreground object of the image data and the background area consisting of the background object components forming a background object of the image data; and processing execution means for simultaneously performing, based on the result obtained by the area specifying means, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
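As a rough illustration of the first half of this arrangement, the sketch below labels the pixels of a scan line as background, foreground, or mixed from simple differences between three consecutive frames. It is a hypothetical stand-in for the area specifying means, not the patent's area specifying unit; the separation/blur-elimination half is sketched after the model and equation paragraphs below.
```python
# A crude stand-in for area specification, assuming three aligned frames.
import numpy as np

BACKGROUND, FOREGROUND, MIXED = 0, 1, 2

def specify_areas(prev_f, cur_f, next_f, thr=8.0):
    # Pixels that differ from both temporal neighbours are taken as moving
    # foreground, pixels that differ on one side only as mixed (covered or
    # uncovered background), and the rest as stationary background.
    d_prev = np.abs(cur_f - prev_f) > thr
    d_next = np.abs(next_f - cur_f) > thr
    labels = np.full(cur_f.shape, BACKGROUND)
    labels[d_prev & d_next] = FOREGROUND
    labels[d_prev ^ d_next] = MIXED
    return labels

prev_f = np.array([100.0, 100.0, 100.0, 100.0, 100.0, 100.0])
cur_f  = np.array([100.0, 137.5, 235.0, 235.0, 167.5, 100.0])
next_f = np.array([100.0, 100.0, 137.5, 235.0, 235.0, 167.5])
print(specify_areas(prev_f, cur_f, next_f))  # -> [0 1 1 2 1 2]
```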
The image processing apparatus may further comprise equal-portion detecting means for detecting an equal portion formed of adjacent pixel data of the foreground area whose values are substantially equal to each other. Based on the detected equal portion and the result obtained by the area specifying means, the processing execution means may simultaneously perform at least the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components.
The image processing apparatus may further comprise unit-of-processing determining means for determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. The processing execution means may simultaneously perform, for each unit of processing, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components.
The unit-of-processing determining means may determine the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The equal-portion detecting means may detect the equal portion by comparing differences between the pixel data with a threshold value.
The equal-portion detecting means may detect an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
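A minimal sketch of this detection rule under the two conditions just stated (adjacent differences below a threshold, and a run length of at least the amount of movement v); the function name and threshold value are illustrative.
```python
# Hypothetical equal-portion detector for one line of foreground pixel data.
import numpy as np

def detect_equal_portions(line: np.ndarray, v: int, thr: float = 2.0):
    """Return (start, end) index pairs of equal portions, end exclusive:
    maximal runs of adjacent pixels whose pairwise differences stay within
    thr, kept only if the run is at least v pixels long."""
    portions, start = [], 0
    for i in range(1, line.size + 1):
        if i == line.size or abs(line[i] - line[i - 1]) > thr:
            if i - start >= v:          # at least v adjacent equal pixels
                portions.append((start, i))
            start = i
    return portions

line = np.array([50, 51, 50, 50, 90, 91, 91, 90, 90, 20], dtype=float)
print(detect_equal_portions(line, v=4))   # -> [(0, 4), (4, 9)]
```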
By performing a calculation corresponding to a motion vector, the processing execution means may simultaneously perform the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components.
The processing execution means may comprise: model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector; equation generating means for generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and calculating means for calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
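Concretely, the model yields one linear equation per observed pixel, and solving the system separates and deblurs in a single calculation. The sketch below builds such a system for the one-dimensional model illustrated earlier and solves it by least squares; it assumes the background values are known from the specified background area, and the names are ours rather than the patent's notation.
```python
# A hedged sketch of the model -> equations -> calculation chain.
import numpy as np

def build_model(n_fg: int, width: int, v: int) -> np.ndarray:
    """A[i, j] = fraction of the v time slices of observed pixel i that is
    covered by foreground component j (foreground moving left-to-right)."""
    A = np.zeros((width, n_fg))
    for t in range(v):
        for j in range(n_fg):
            x = j + t
            if x < width:
                A[x, j] += 1.0
    return A / v

def separate_and_deblur(observed, background, n_fg, v):
    A = build_model(n_fg, observed.size, v)
    bg_weight = 1.0 - A.sum(axis=1)          # slice fraction showing background
    rhs = observed - bg_weight * background  # strip the mixed-in background
    F, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return F                                 # motion-blur-free foreground

# Round trip against the capture model sketched earlier:
bg = np.full(12, 100.0)
fg = np.array([250.0, 240.0, 230.0, 220.0])
v = 4
A = build_model(fg.size, bg.size, v)
observed = A @ fg + (1.0 - A.sum(axis=1)) * bg
print(np.round(separate_and_deblur(observed, bg, fg.size, v), 1))
# -> [250. 240. 230. 220.], i.e. separated and deblurred in one solve
```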
A first image processing method of the present invention comprises: an area specifying step of specifying a non-mixed area formed of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components forming a foreground object of the image data and the background area consisting of the background object components forming a background object of the image data; and a processing execution step of simultaneously performing, based on the result obtained in the area specifying step, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
The image processing method further comprises an equal-portion detecting step of detecting an equal portion formed of adjacent pixel data of the foreground area whose values are substantially equal to each other. In the processing execution step, based on the detected equal portion and the result obtained in the area specifying step, at least the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The image processing method may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. In the processing execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In the processing execution step, by performing a calculation corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The processing execution step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
A program of a first storage medium of the present invention comprises: an area specifying step of specifying a non-mixed area formed of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components forming a foreground object of the image data and the background area consisting of the background object components forming a background object of the image data; and a processing execution step of simultaneously performing, based on the result obtained in the area specifying step, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
The program of the storage medium further comprises an equal-portion detecting step of detecting an equal portion formed of adjacent pixel data of the foreground area whose values are substantially equal to each other. In the processing execution step, based on the detected equal portion and the result obtained in the area specifying step, at least the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The program of the storage medium may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. In the processing execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In the processing execution step, by performing a calculation corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The processing execution step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
A first program of the present invention causes a computer to execute: an area specifying step of specifying a non-mixed area formed of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components forming a foreground object of the image data and the background area consisting of the background object components forming a background object of the image data; and a processing execution step of simultaneously performing, based on the result obtained in the area specifying step, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
The program may further comprise an equal-portion detecting step of detecting an equal portion formed of adjacent pixel data of the foreground area whose values are substantially equal to each other. In the processing execution step, based on the detected equal portion and the result obtained in the area specifying step, at least the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The program may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. In the processing execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In the processing execution step, by performing a calculation corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The processing execution step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
A second image processing apparatus of the present invention comprises: input means for inputting image data having an object area formed of object components forming an object; and motion-blur eliminating means for eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the image data input by the input means are substantially equal.
The input means may input image data having a foreground area, a background area, and a mixed area, the foreground area consisting of foreground object components forming a foreground object, the background area consisting of background object components forming a background object, and the mixed area being an area in which the foreground object components and the background object components are mixed. By assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal, the motion-blur eliminating means may eliminate the motion blur occurring in the foreground area.
The image processing apparatus may further comprise equal-portion detecting means for detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal. The motion-blur eliminating means may eliminate the motion blur occurring in the foreground area based on the equal portion detected by the equal-portion detecting means.
The image processing apparatus may further comprise unit-of-processing determining means for determining a unit of processing formed of a plurality of foreground object components based on the position of the equal portion. The motion-blur eliminating means may eliminate the motion blur in the foreground area for each unit of processing.
The unit-of-processing determining means may determine the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The image processing apparatus may further comprise area specifying means for specifying the foreground area, the background area, or the mixed area.
The equal-portion detecting means may detect the equal portion by comparing differences between the pixel data with a threshold value.
The equal-portion detecting means may detect an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
By performing a calculation corresponding to a motion vector, the motion-blur eliminating means may eliminate the motion blur occurring in the foreground area.
The motion-blur eliminating means may comprise: model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector; equation generating means for generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and calculating means for calculating the foreground object components contained in the unit of processing based on the generated equation.
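Within an equal portion, averaging equal values changes nothing, so the blurred pixel values there directly give the foreground components, and the neighboring components then follow by successive subtraction. The sketch below illustrates this idea for a foreground-only line; the function name and sample values are ours.
```python
# Illustrative deblurring under the equal-value assumption (foreground only).
import numpy as np

def deblur_from_equal_portion(C, v, eq_start, eq_end):
    """C: blurred line with C[x] = mean(F[x-v+1 .. x]); F is assumed
    constant on the equal portion [eq_start, eq_end), which must be at
    least v - 1 pixels wide so that the peeling below can start."""
    F = np.full(C.size, np.nan)
    F[eq_start:eq_end] = C[eq_start:eq_end]      # flat part: blur is a no-op
    for x in range(eq_end, C.size):              # peel rightwards
        F[x] = v * C[x] - F[x - v + 1:x].sum()   # v*C = sum of v components
    return F                                     # leftward peeling is symmetric

C = np.array([10.0, 10.0, 10.0, 10.0, 14.0, 62.0 / 3.0])
print(np.round(deblur_from_equal_portion(C, v=3, eq_start=0, eq_end=4), 1))
# -> [10. 10. 10. 10. 22. 30.]: the sharp components behind the blur
```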
Based on area information indicating the non-mixed area formed of the foreground area and the background area, or indicating the mixed area, and based on the equal portion, the motion-blur eliminating means may simultaneously perform processing for separating the pixel data of the mixed area into foreground object components and background object components and processing for eliminating motion blur from the separated foreground object components.
The image processing apparatus may further comprise unit-of-processing determining means for determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. In this case, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously for each unit of processing.
The unit-of-processing determining means may determine the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The image processing apparatus may further comprise area specifying means for specifying the foreground area, the background area, or the mixed area.
The equal-portion detecting means may detect the equal portion by comparing differences between the pixel data with a threshold value.
The equal-portion detecting means may detect an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
By performing a calculation corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The motion-blur eliminating means may comprise: model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector; equation generating means for generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and calculating means for calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
A second image processing method of the present invention comprises: an input step of inputting image data having an object area formed of object components forming an object; and a motion-blur eliminating step of eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the input image data are substantially equal.
In the input step, image data having a foreground area, a background area, and a mixed area may be input, the foreground area consisting of foreground object components forming a foreground object, the background area consisting of background object components forming a background object, and the mixed area being an area in which the foreground object components and the background object components are mixed. In the motion-blur eliminating step, by assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal, the motion blur occurring in the foreground area may be eliminated.
The image processing method further comprises an equal-portion detecting step of detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal. In the motion-blur eliminating step, the motion blur occurring in the foreground area may be eliminated based on the equal portion detected in the equal-portion detecting step.
The image processing method may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components based on the position of the equal portion. In the motion-blur eliminating step, the motion blur in the foreground area may be eliminated for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The image processing method may further comprise an area specifying step of specifying the foreground area, the background area, or the mixed area.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In the motion-blur eliminating step, by performing a calculation corresponding to a motion vector, the motion blur occurring in the foreground area may be eliminated.
The motion-blur eliminating step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and a calculating step of calculating the foreground object components contained in the unit of processing based on the generated equation.
In the motion-blur eliminating step, based on area information indicating the non-mixed area formed of the foreground area and the background area, or indicating the mixed area, and based on the equal portion, processing for separating the pixel data of the mixed area into foreground object components and background object components and processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The image processing method may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. In this case, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The image processing method may further comprise an area specifying step of specifying the foreground area, the background area, or the mixed area.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In this case, by performing a calculation corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The motion-blur eliminating step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
A program of a second storage medium of the present invention comprises: an input step of inputting image data having an object area formed of object components forming an object; and a motion-blur eliminating step of eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the input image data are substantially equal.
In the input step, image data having a foreground area, a background area, and a mixed area may be input, the foreground area consisting of foreground object components forming a foreground object, the background area consisting of background object components forming a background object, and the mixed area being an area in which the foreground object components and the background object components are mixed. In the motion-blur eliminating step, by assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal, the motion blur occurring in the foreground area may be eliminated.
The program of the storage medium further comprises an equal-portion detecting step of detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal. In the motion-blur eliminating step, the motion blur occurring in the foreground area may be eliminated based on the equal portion detected in the equal-portion detecting step.
The program of the storage medium may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components based on the position of the equal portion. In the motion-blur eliminating step, the motion blur in the foreground area may be eliminated for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The program of the storage medium may further comprise an area specifying step of specifying the foreground area, the background area, or the mixed area.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In the motion-blur eliminating step, by performing a calculation corresponding to a motion vector, the motion blur occurring in the foreground area may be eliminated.
The motion-blur eliminating step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and a calculating step of calculating the foreground object components contained in the unit of processing based on the generated equation.
In the motion-blur eliminating step, based on area information indicating the non-mixed area formed of the foreground area and the background area, or indicating the mixed area, and based on the equal portion, processing for separating the pixel data of the mixed area into foreground object components and background object components and processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The program of the storage medium may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. In this case, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The program of the storage medium may further comprise an area specifying step of specifying the foreground area, the background area, or the mixed area.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In this case, by performing a calculation corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The motion-blur eliminating step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
A second program of the present invention causes a computer to execute: an input step of inputting image data having an object area formed of object components forming an object; and a motion-blur eliminating step of eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the input image data are substantially equal.
In the input step, image data having a foreground area, a background area, and a mixed area may be input, the foreground area consisting of foreground object components forming a foreground object, the background area consisting of background object components forming a background object, and the mixed area being an area in which the foreground object components and the background object components are mixed. In the motion-blur eliminating step, by assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal, the motion blur occurring in the foreground area may be eliminated.
The program further comprises an equal-portion detecting step of detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal. In the motion-blur eliminating step, the motion blur occurring in the foreground area may be eliminated based on the equal portion detected in the equal-portion detecting step.
The program may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components based on the position of the equal portion. In the motion-blur eliminating step, the motion blur in the foreground area may be eliminated for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The program may further comprise an area specifying step of specifying the foreground area, the background area, or the mixed area.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In the motion-blur eliminating step, by performing a calculation corresponding to a motion vector, the motion blur occurring in the foreground area may be eliminated.
The motion-blur eliminating step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and a calculating step of calculating the foreground object components contained in the unit of processing based on the generated equation.
In the motion-blur eliminating step, based on area information indicating the non-mixed area formed of the foreground area and the background area, or indicating the mixed area, and based on the equal portion, processing for separating the pixel data of the mixed area into foreground object components and background object components and processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The program may further comprise a unit-of-processing determining step of determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. In this case, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously for each unit of processing.
In the unit-of-processing determining step, the unit of processing may be determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The program may further comprise an area specifying step of specifying the foreground area, the background area, or the mixed area.
In the equal-portion detecting step, the equal portion may be detected by comparing differences between the pixel data with a threshold value.
In the equal-portion detecting step, an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object may be detected.
In this case, by performing a calculation corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The motion-blur eliminating step may comprise: a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector; an equation generating step of generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
A first image capturing apparatus of the present invention comprises: image capturing means for outputting, as image data formed of a predetermined amount of pixel data, an object image captured by an image capturing device which includes a predetermined number of pixels each having a time integration function; area specifying means for specifying a non-mixed area formed of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components forming a foreground object of the image data and the background area consisting of the background object components forming a background object of the image data; and processing execution means for simultaneously performing, based on the result obtained by the area specifying means, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
The image capturing apparatus further comprises equal-portion detecting means for detecting an equal portion formed of adjacent pixel data of the foreground area whose values are substantially equal to each other. Based on the detected equal portion and the result obtained by the area specifying means, the processing execution means may simultaneously perform at least the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components.
The image capturing apparatus may further comprise unit-of-processing determining means for determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. The processing execution means may simultaneously perform, for each unit of processing, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components.
The unit-of-processing determining means may determine the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The equal-portion detecting means may detect the equal portion by comparing differences between the pixel data with a threshold value.
The equal-portion detecting means may detect an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
By performing a calculation corresponding to a motion vector, the processing execution means may simultaneously perform the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components.
The processing execution means may comprise: model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector; equation generating means for generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and calculating means for calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
A second image capturing apparatus of the present invention comprises: image capturing means for outputting, as image data which is formed of a predetermined amount of pixel data and which has an object area formed of object components forming an object, an object image captured by an image capturing device which includes a predetermined number of pixels each having a time integration function; and motion-blur eliminating means for eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the image data are substantially equal.
The image capturing means may output image data having a foreground area, a background area, and a mixed area, the foreground area consisting of foreground object components forming a foreground object, the background area consisting of background object components forming a background object, and the mixed area being an area in which the foreground object components and the background object components are mixed. By assuming that the values of part of the pixel data in the foreground area of the output image data are substantially equal, the motion-blur eliminating means may eliminate the motion blur occurring in the foreground area.
The image capturing apparatus further comprises equal-portion detecting means for detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal. The motion-blur eliminating means may eliminate the motion blur occurring in the foreground area based on the equal portion detected by the equal-portion detecting means.
The image capturing apparatus may further comprise unit-of-processing determining means for determining a unit of processing formed of a plurality of foreground object components based on the position of the equal portion. The motion-blur eliminating means may eliminate the motion blur in the foreground area for each unit of processing.
The unit-of-processing determining means may determine the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The image capturing apparatus may further comprise area specifying means for specifying the foreground area, the background area, or the mixed area.
The equal-portion detecting means may detect the equal portion by comparing differences between the pixel data with a threshold value.
The equal-portion detecting means may detect an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
By performing a calculation corresponding to a motion vector, the motion-blur eliminating means may eliminate the motion blur occurring in the foreground area.
The motion-blur eliminating means may comprise: model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector; equation generating means for generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and calculating means for calculating the foreground object components contained in the unit of processing based on the generated equation.
Based on area information indicating the non-mixed area formed of the foreground area and the background area, or indicating the mixed area, and based on the equal portion, the motion-blur eliminating means may simultaneously perform processing for separating the pixel data of the mixed area into foreground object components and background object components and processing for eliminating motion blur from the separated foreground object components.
The image capturing apparatus may further comprise unit-of-processing determining means for determining a unit of processing formed of a plurality of foreground object components and background object components based on the position of the equal portion. In this case, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously for each unit of processing.
The unit-of-processing determining means may determine the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
The image capturing apparatus may further comprise area specifying means for specifying the foreground area, the background area, or the mixed area.
The equal-portion detecting means may detect the equal portion by comparing differences between the pixel data with a threshold value.
The equal-portion detecting means may detect an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
In this case, by performing a calculation corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components may be performed simultaneously.
The motion-blur eliminating means may comprise: model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector; equation generating means for generating, based on the acquired model, an equation expressing the relationship between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and calculating means for calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
Description of drawings
Fig. 1 is a block diagram illustrating the configuration of an embodiment of a signal processing apparatus according to the present invention;
Fig. 2 is a block diagram illustrating the signal processing apparatus;
Fig. 3 illustrates image capture performed by a sensor;
Fig. 4 illustrates an arrangement of pixels;
Fig. 5 illustrates the operation of a detecting device;
Fig. 6A illustrates an image obtained by capturing an object corresponding to a moving foreground and an object corresponding to a stationary background;
Fig. 6B illustrates a model obtained by expanding, in the time direction, the pixel values on a straight line of the image;
Fig. 7 illustrates a background area, a foreground area, a mixed area, a covered background area, and an uncovered background area;
Fig. 8 illustrates a model obtained by expanding, in the time direction, the pixel values of pixels aligned side by side in an image obtained by capturing an object corresponding to a stationary foreground and an object corresponding to a stationary background;
Fig. 9 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 10 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 11 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 12 illustrates an example in which pixels in a foreground area, a background area, and a mixed area are extracted;
Fig. 13 illustrates the relationship between pixels and a model obtained by expanding the pixel values in the time direction;
Fig. 14 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 15 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 16 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 17 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 18 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 19 is a flowchart illustrating the processing for adjusting the amount of motion blur;
Fig. 20 is a block diagram illustrating an example of the configuration of an area specifying unit 103;
Fig. 21 illustrates an image when an object corresponding to a foreground is moving;
Fig. 22 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 23 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 24 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 25 illustrates conditions for determining the area;
Fig. 26A illustrates an example of a result obtained by specifying areas by the area specifying unit 103;
Fig. 26B illustrates an example of a result obtained by specifying areas by the area specifying unit 103;
Fig. 26C illustrates an example of a result obtained by specifying areas by the area specifying unit 103;
Fig. 26D illustrates an example of a result obtained by specifying areas by the area specifying unit 103;
Fig. 27 illustrates an example of a result obtained by specifying areas by the area specifying unit 103;
Fig. 28 is a flowchart illustrating the area specifying processing;
Fig. 29 is a block diagram illustrating another configuration of the area specifying unit 103;
Fig. 30 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 31 illustrates an example of a background image;
Fig. 32 is a block diagram illustrating the configuration of a binary-object-image extracting portion 302;
Fig. 33A illustrates the calculation of a correlation value;
Fig. 33B illustrates the calculation of a correlation value;
Fig. 34A illustrates the calculation of a correlation value;
Fig. 34B illustrates the calculation of a correlation value;
Fig. 35 illustrates an example of a binary object image;
Fig. 36 is a block diagram illustrating the configuration of a time change detector 303;
Fig. 37 illustrates determinations made by an area determining portion 342;
Fig. 38 illustrates an example of determinations made by the time change detector 303;
Fig. 39 is a flowchart illustrating the area specifying processing performed by the area specifying unit 103;
Fig. 40 is a flowchart illustrating details of the area specifying processing;
Fig. 41 is a block diagram illustrating still another configuration of the area specifying unit 103;
Fig. 42 is a block diagram illustrating the configuration of a robust-processing portion 361;
Fig. 43 illustrates the motion compensation performed by a motion compensator 381;
Fig. 44 illustrates the motion compensation performed by a motion compensator 381;
Fig. 45 is a flowchart illustrating the area specifying processing;
Fig. 46 is a flowchart illustrating details of the robust processing;
Fig. 47 is a block diagram illustrating an example of the configuration of a mixture-ratio calculator 104;
Fig. 48 illustrates an example of a theoretical mixture ratio α;
Fig. 49 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 50 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 51 illustrates an approximation using the correlation of foreground components;
Fig. 52 illustrates the relationship among C, N, and P;
Fig. 53 is a block diagram illustrating the configuration of an estimated-mixture-ratio processor 401;
Fig. 54 illustrates an example of an estimated mixture ratio;
Fig. 55 is a block diagram illustrating another configuration of the mixture-ratio calculator 104;
Fig. 56 is a flowchart illustrating the processing for calculating the mixture ratio;
Fig. 57 is a flowchart illustrating the processing for calculating the estimated mixture ratio;
Fig. 58 illustrates a straight line approximating the mixture ratio α;
Fig. 59 illustrates a plane approximating the mixture ratio α;
Fig. 60 illustrates the relationship of pixels in a plurality of frames when the mixture ratio α is calculated;
Fig. 61 is a block diagram illustrating another configuration of the estimated-mixture-ratio processor 401;
Fig. 62 illustrates an example of an estimated mixture ratio;
Fig. 63 is a flowchart illustrating the processing for calculating the mixture ratio;
Fig. 64 is a flowchart illustrating the processing for estimating the mixture ratio by using a model corresponding to a covered background area;
Fig. 65 is a block diagram illustrating an example of the configuration of a foreground/background separator 105;
Fig. 66A illustrates an input image, a foreground component image, and a background component image;
Fig. 66B illustrates a model obtained by expanding, in the time direction, pixels on a straight line;
Fig. 67 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 68 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 69 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 70 is a block diagram illustrating an example of the configuration of a separating portion 601;
Fig. 71A illustrates an example of a separated foreground component image;
Fig. 71B illustrates an example of a separated background component image;
Fig. 72 is a flowchart illustrating the processing for separating a foreground and a background;
Fig. 73 is a block diagram illustrating an example of the configuration of a motion-blur adjusting unit 106;
Fig. 74 is a block diagram illustrating an example of the configuration of a motion-blur eliminating unit 108;
Fig. 75 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 76 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 77 illustrates a model obtained by expanding pixel values in the time direction and dividing the period corresponding to the shutter time;
Fig. 78 illustrates an example of pixels from which motion blur is eliminated;
Fig. 79 illustrates an example of pixels from which motion blur is eliminated;
Fig. 80 illustrates an example of pixels to which motion blur is added;
Fig. 81 illustrates the processing for adjusting background components;
Fig. 82 illustrates an image obtained by capturing a stationary black quadrilateral object;
Fig. 83 illustrates an image obtained by capturing a moving black quadrilateral object;
Fig. 84 illustrates an example of a result obtained by the processing performed by the motion-blur adjusting unit;
Fig. 85 is a flowchart illustrating the processing for adjusting the amount of motion blur;
Fig. 86 is a flowchart illustrating the processing for eliminating motion blur in a foreground component image;
Fig. 87 is a block diagram illustrating another configuration of the functions of the signal processing apparatus;
Fig. 88 illustrates the configuration of a synthesizer 1001;
Fig. 89 is a block diagram illustrating still another configuration of the functions of the signal processing apparatus;
Fig. 90 is a block diagram illustrating the configuration of a mixture-ratio calculator 1101;
Fig. 91 is a block diagram illustrating the configuration of a foreground/background separator 1102;
Fig. 92 is a block diagram illustrating still another configuration of the functions of the signal processing apparatus;
Fig. 93 illustrates the configuration of a synthesizer 1201;
Fig. 94 is a block diagram illustrating still another configuration of the signal processing apparatus;
Fig. 95 is a block diagram illustrating the configuration of a separating unit 1503;
Fig. 96 illustrates a processing unit and a model corresponding to the processing unit;
Fig. 97 illustrates a processing unit and a model corresponding to the processing unit;
Fig. 98 illustrates a processing unit and a model corresponding to the processing unit;
Fig. 99 illustrates a processing unit and a model corresponding to the processing unit;
Fig. 100 illustrates the calculation of pixel values;
Fig. 101 illustrates an example of an input image;
Fig. 102 illustrates an example of a processing result;
Fig. 103 is a flowchart illustrating the processing for eliminating motion blur;
Fig. 104 is a flowchart illustrating the processing for simultaneously separating a foreground and a background and eliminating motion blur;
Fig. 105 is a block diagram illustrating still another configuration of the signal processing apparatus;
Fig. 106 is a block diagram illustrating the configuration of a motion-blur eliminating unit 106;
Fig. 107 illustrates a model supplied to an equation generator 1622;
Fig. 108 illustrates a model supplied to an equation generator 1622;
Fig. 109 illustrates the calculation of pixel values;
Fig. 110 is a flowchart illustrating the processing for eliminating motion blur;
Fig. 111 is a flowchart illustrating the processing for eliminating motion blur in a foreground component image.
Embodiment
Fig. 1 is a block diagram illustrating the configuration of an embodiment of the signal processing apparatus according to the present invention. A CPU (central processing unit) 21 executes various types of processing according to programs stored in a ROM (read-only memory) 22 or in a storage unit 28. Programs executed by the CPU 21 and data are stored in a RAM (random access memory) 23 as required. The CPU 21, the ROM 22, and the RAM 23 are connected to one another by a bus 24.
An input/output interface 25 is also connected to the CPU 21 via the bus 24. An input unit 26 formed of a keyboard, a mouse, a microphone, and so on, and an output unit 27 formed of a display, a speaker, and so on are connected to the input/output interface 25. The CPU 21 executes various types of processing in response to commands input from the input unit 26. The CPU 21 then outputs an image, sound, or the like obtained as a result of the processing to the output unit 27.
The storage unit 28 connected to the input/output interface 25 is formed of, for example, a hard disk, and stores the programs executed by the CPU 21 and various types of data. A communication unit 29 communicates with external devices via the Internet or other networks. In this example, the communication unit 29 also serves as an obtaining unit for obtaining the output of a sensor.
Alternatively, programs may be obtained via the communication unit 29 and stored in the storage unit 28.
When a magnetic disk 51, an optical disc 52, a magneto-optical disk 53, a semiconductor memory 54, or the like is loaded into a drive 30 connected to the input/output interface 25, the drive 30 drives the storage medium and obtains the programs and data stored therein. The obtained programs and data are transferred to the storage unit 28 and stored therein if necessary.
A more specific example of the signal processing apparatus is described below: an apparatus that specifies an area in which significant information is embedded, or that extracts the embedded significant information, from data obtained from a sensor. In the following example, a CCD line sensor or a CCD area sensor corresponds to the sensor, the area information or the mixture ratio corresponds to the significant information, and the mixture of a foreground and a background in a mixed area corresponds to motion blur or distortion.
Fig. 2 is a block diagram illustrating the signal processing apparatus.
It does not matter whether the individual functions of the signal processing apparatus are implemented by hardware or by software. That is, the block diagrams of this specification may be regarded as hardware block diagrams or as software functional block diagrams.
Motion blur is a distortion contained in an image corresponding to a moving object, and is caused by the movement of the object being captured in the real world and by the image-capturing characteristics of the sensor.
In this specification, an image corresponding to an object to be captured in the real world is referred to as an image object.
An input image supplied to the signal processing apparatus is supplied to an object extracting unit 101, an area specifying unit 103, a mixture-ratio calculator 104, and a foreground/background separator 105.
The object extracting unit 101 extracts a rough image object corresponding to a foreground object contained in the input image, and supplies the extracted image object to a motion detector 102. The object extracting unit 101 detects, for example, the outline of the foreground image object contained in the input image, thereby extracting the rough image object corresponding to the foreground object.
The object extracting unit 101 also extracts a rough image object corresponding to a background object contained in the input image, and supplies the extracted image object to the motion detector 102. The object extracting unit 101 extracts the rough image object corresponding to the background object from, for example, the difference between the input image and the extracted image object corresponding to the foreground object.
Alternatively, the object extracting unit 101 may extract the rough image object corresponding to the foreground object and the rough image object corresponding to the background object from, for example, the difference between a background image stored in a built-in background memory and the input image.
The motion detector 102 calculates the motion vector of the roughly extracted image object corresponding to the foreground object by a technique such as block matching, the gradient method, the phase-correlation method, or the pixel-recursive method, and supplies the calculated motion vector and motion-vector positional information (information specifying the positions of the pixels corresponding to the motion vector) to the area specifying unit 103 and a motion-blur adjusting unit 106.
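As an illustrative sketch only (none of this code is part of the patent; the block and search sizes are arbitrary assumptions), the block-matching technique named above amounts to finding the displacement that minimizes the sum of absolute differences between a block of one frame and candidate blocks of the next:

    import numpy as np

    def block_matching_motion_vector(prev, curr, top, left, block=8, search=8):
        # Exhaustive block matching: return the (dy, dx) minimizing the sum
        # of absolute differences (SAD) between a block of `prev` and the
        # displaced block of `curr`.
        template = prev[top:top + block, left:left + block].astype(np.int64)
        best_sad, best_vec = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + block > curr.shape[0] or x + block > curr.shape[1]:
                    continue
                sad = np.abs(curr[y:y + block, x:x + block].astype(np.int64) - template).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_vec = sad, (dy, dx)
        return best_vec  # e.g. (0, 4) for a foreground moving four pixels to the right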
The motion vector output from the motion detector 102 contains information corresponding to the amount of movement v.
The motion detector 102 may output the motion vector of each image object, together with pixel positional information specifying the pixels of the image object, to the motion-blur adjusting unit 106.
The amount of movement v is a value indicating, in units of the pixel pitch, the positional change of an image corresponding to a moving object. For example, if the object image corresponding to the foreground moves in such a manner that it is displayed, in the subsequent frame, at a position four pixels away from its position in the reference frame, the amount of movement v of the object image corresponding to the foreground is 4.
The object extracting unit 101 and the motion detector 102 are needed when the amount of motion blur corresponding to a moving object is adjusted.
The area specifying unit 103 determines to which of the foreground area, the background area, or the mixed area each pixel of the input image belongs, and supplies information indicating to which area each pixel belongs (hereinafter referred to as "area information") to the mixture-ratio calculator 104, the foreground/background separator 105, and the motion-blur adjusting unit 106.
The mixture-ratio calculator 104 calculates the mixture ratio (hereinafter referred to as the "mixture ratio α") of the pixels contained in a mixed area 63 from the input image, from the motion vector and its positional information supplied from the motion detector 102, and from the area information supplied from the area specifying unit 103, and supplies the mixture ratio α to the foreground/background separator 105.
The mixture ratio α is a value indicating the ratio of the image components corresponding to the background object (hereinafter also referred to as "background components") to the pixel value, as expressed by equation (3) shown below.
The foreground/background separator 105 separates the input image into a foreground component image formed only of the image components corresponding to the foreground object (hereinafter also referred to as "foreground components") and a background component image formed only of the background components, from the area information supplied from the area specifying unit 103 and the mixture ratio α supplied from the mixture-ratio calculator 104, and supplies the foreground component image to the motion-blur adjusting unit 106 and a selector 107. The separated foreground component image may also be set as the final output. Compared with known methods in which only a foreground and a background are specified without consideration of the mixed area, the present invention can obtain a more accurate foreground and background.
The motion-blur adjusting unit 106 determines a processing unit indicating at least one pixel contained in the foreground component image from the amount of movement v obtained from the motion vector and from the area information. The processing unit is data that specifies a group of pixels to be subjected to the motion-blur adjustment.
From the amount by which the motion blur is to be adjusted, which is input into the signal processing apparatus, the foreground component image supplied from the foreground/background separator 105, the motion vector and its positional information supplied from the motion detector 102, and the processing unit, the motion-blur adjusting unit 106 adjusts the amount of motion blur contained in the foreground component image by eliminating, reducing, or increasing the motion blur contained in the foreground component image. The motion-blur adjusting unit 106 then outputs the foreground component image in which the amount of motion blur has been adjusted to the selector 107. The use of the motion vector and its positional information is, however, not essential.
The selector 107 selects, for example according to a selection signal reflecting a user's choice, either the foreground component image supplied from the foreground/background separator 105 or the foreground component image in which the amount of motion blur has been adjusted, supplied from the motion-blur adjusting unit 106, and outputs the selected foreground component image.
An input image supplied to the signal processing apparatus is discussed below with reference to Figs. 3 through 18.
Fig. 3 illustrates image capture performed by a sensor. The sensor is formed of, for example, a CCD video camera equipped with a CCD (charge-coupled device) area sensor, which is a solid-state image-capturing device. An object 111 corresponding to the foreground in the real world moves, for example, horizontally from left to right between an object 112 corresponding to the background and the sensor.
The sensor captures the image of the object 111 corresponding to the foreground together with the image of the object 112 corresponding to the background. The sensor outputs the captured image in units of frames. For example, the sensor outputs an image having 30 frames per second. The exposure time of the sensor can be set to 1/30 second. The exposure time is the period from when the sensor starts converting the input light into charge until the conversion of the input light into charge is finished. The exposure time is also referred to below as the "shutter time".
Fig. 4 illustrates an arrangement of pixels. In Fig. 4, A through I indicate individual pixels. The pixels are arranged on a plane corresponding to the image. One detecting device corresponding to each pixel is disposed on the sensor. When the sensor captures an image, each detecting device outputs the pixel value forming the corresponding pixel of the image. For example, the position of a detecting device in the X direction corresponds to the horizontal position in the image, and the position of a detecting device in the Y direction corresponds to the vertical position in the image.
As shown in Fig. 5, a detecting device, which is, for example, a CCD, converts the input light into charge during the period corresponding to the shutter time, and stores the converted charge. The amount of charge is approximately proportional to the intensity of the input light and to the period during which the light is input. During the period corresponding to the shutter time, the detecting device sequentially adds the charge converted from the input light to the charge already stored. That is, the detecting device integrates the input light during the period corresponding to the shutter time, and stores the charge corresponding to the amount of the integrated light. The detecting device can thus be considered to have an integrating function with respect to time.
The stored charge is converted into a voltage value by a circuit (not shown) in the detecting device, and the voltage value is further converted into a pixel value, such as digital data, and is then output. Each pixel value output from the sensor is therefore a value projected onto a linear space, namely, the result of integrating, with respect to the shutter time, a certain three-dimensional portion of the object corresponding to the foreground or the background.
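A minimal numerical sketch of this integrating behavior (the scene, values, and function below are illustrative assumptions, not part of the patent) shows how motion blur and the mixed area arise from the time integration itself:

    import numpy as np

    def capture_row(foreground, background, fg_start, v):
        # Simulate one line of a time-integrating sensor over one shutter
        # time: the foreground strip moves v pixels during the shutter, the
        # shutter time is virtually divided into v equal portions, and each
        # pixel averages the light it receives over the portions.
        width = len(background)
        acc = np.zeros(width)
        for t in range(v):                  # t-th portion of shutter time/v
            frame = background.copy()
            for i, f in enumerate(foreground):
                x = fg_start + t + i        # foreground shifted by t pixels
                if 0 <= x < width:
                    frame[x] = f
            acc += frame / v                # integrate (average) over time
        return acc

    bg = np.full(12, 100.0)                 # stationary background
    fg = np.array([200.0] * 4)              # foreground object
    print(capture_row(fg, bg, fg_start=0, v=4))
    # [125. 150. 175. 200. 175. 150. 125. 100. 100. 100. 100. 100.]
    # The fully covered pixel keeps 200; pixels at the trailing and leading
    # ends mix foreground and background, forming the mixed areas.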
The signal processing apparatus extracts significant information embedded in the output signal by this storage operation of the sensor, for example, the mixture ratio α. The signal processing apparatus adjusts the amount of distortion caused by the mixing of the foreground image object itself, for example, the amount of motion blur. The signal processing apparatus also adjusts the amount of distortion caused by the mixing of the foreground image object and the background image object.
Fig. 6 A is explanation by catching corresponding to motion object 111 of prospect and the image that obtains corresponding to the stationary objects 112 of background.In the example shown in Fig. 6 A, flatly move to the screen right-hand member from the screen left end corresponding to the object of prospect.
Fig. 6 B illustrates the model that obtains corresponding to the pixel value of delegation's image shown in Fig. 6 A by in time orientation expansion.Horizontal direction shown in Fig. 6 B is corresponding to the direction in space X among Fig. 6 A.
The value of pixel only is made of background component in the background area, just is made of the picture content corresponding to background object.The value of pixel only is made of the prospect component in the foreground area, just is made of the picture content corresponding to foreground object.
The value of the pixel of Mixed Zone is made of background component and prospect component.Because the value of pixel is made of background component and prospect component in the Mixed Zone, so it is referred to as " distortion zone ".The Mixed Zone also further is divided into the background area and the unlapped background area of covering.
The background area that covers is corresponding to the foreground object locational Mixed Zone of the front end on travel direction just, and wherein background component passs in time and covered gradually by prospect.
On the contrary, unlapped background area is that wherein background component passs in time and engenders corresponding to the foreground object Mixed Zone of the tail end on the direction of motion just.
According to above discussion, comprise foreground area, background area or the background area that covers or the image of unlapped background area and be input to regional designating unit 103, mixture ratio calculation block 104 and foreground/background separation device 105 as input picture.
Fig. 7 illustrates the background area and the unlapped background area of background area discussed above, foreground area, Mixed Zone, covering.In zone corresponding to the image shown in Fig. 6 A, the background area is a stationary part, foreground area is a motion parts, and the background area of the covering of Mixed Zone is the part that changes from the background area to the foreground area, and the unlapped background area of Mixed Zone is the part from the prospect to the change of background.
Fig. 8 illustrates a model obtained by expanding, in the time direction, the pixel values of pixels aligned side by side in an image obtained by capturing an object corresponding to a stationary foreground and an object corresponding to a stationary background. For example, pixels aligned in one line of the screen can be selected as the pixels aligned side by side.
The pixel values indicated by F01 through F04 in Fig. 8 are the pixel values of the pixels corresponding to the stationary foreground object. The pixel values indicated by B01 through B04 in Fig. 8 are the pixel values of the pixels corresponding to the stationary background object.
Time passes from the top to the bottom in the vertical direction of Fig. 8. The position at the top of a rectangle in Fig. 8 corresponds to the time at which the sensor starts converting the input light into charge, and the position at the bottom of the rectangle corresponds to the time at which the sensor finishes the conversion of the input light into charge. That is, the distance from the top to the bottom of a rectangle in Fig. 8 corresponds to the shutter time.
The pixels shown in Fig. 8 are described below on the assumption that, for example, the shutter time is equal to the frame length.
The horizontal direction in Fig. 8 corresponds to the spatial direction X in Fig. 6. More specifically, in the example shown in Fig. 8, the distance from the left end of the rectangle indicated by "F01" to the right end of the rectangle indicated by "B04" is eight times the pixel pitch, that is, the width of eight contiguous pixels.
When the foreground object and the background object are both stationary, the light input into the sensor does not change during the period corresponding to the shutter time.
The period corresponding to the shutter time is divided into two or more portions of equal duration. For example, if the number of virtually divided portions is 4, the model shown in Fig. 8 can be represented by the model shown in Fig. 9. The number of virtually divided portions is set in accordance with the amount of movement v, within the shutter time, of the object corresponding to the foreground. For example, when the amount of movement v is 4, the number of virtually divided portions is set to 4, and the period corresponding to the shutter time is divided into four portions.
The uppermost row in Fig. 9 corresponds to the first divided period after the shutter opens. The second row in Fig. 9 corresponds to the second divided period after the shutter opens. The third row in Fig. 9 corresponds to the third divided period after the shutter opens. The fourth row in Fig. 9 corresponds to the fourth divided period after the shutter opens.
The shutter time divided in accordance with the amount of movement v is also referred to below as "shutter time/v".
When the object corresponding to the foreground is stationary, the light input into the sensor does not change, and thus the foreground component F01/v is equal to the value obtained by dividing the pixel value F01 by the number of virtually divided portions. Similarly, when the object corresponding to the foreground is stationary, the foreground component F02/v is equal to the value obtained by dividing the pixel value F02 by the number of virtually divided portions, the foreground component F03/v is equal to the value obtained by dividing the pixel value F03 by the number of virtually divided portions, and the foreground component F04/v is equal to the value obtained by dividing the pixel value F04 by the number of virtually divided portions.
When the object corresponding to the background is stationary, the light input into the sensor does not change, and thus the background component B01/v is equal to the value obtained by dividing the pixel value B01 by the number of virtually divided portions. Similarly, when the object corresponding to the background is stationary, the background component B02/v is equal to the value obtained by dividing the pixel value B02 by the number of virtually divided portions, the background component B03/v is equal to the value obtained by dividing the pixel value B03 by the number of virtually divided portions, and the background component B04/v is equal to the value obtained by dividing the pixel value B04 by the number of virtually divided portions.
More specifically, when the object corresponding to the foreground is completely stationary, the light corresponding to the foreground object input into the sensor does not change during the period corresponding to the shutter time. Accordingly, the foreground component F01/v of the first portion of shutter time/v after the shutter opens, the foreground component F01/v of the second portion of shutter time/v after the shutter opens, the foreground component F01/v of the third portion of shutter time/v after the shutter opens, and the foreground component F01/v of the fourth portion of shutter time/v after the shutter opens all have the same value. The same as F01/v applies to F02/v through F04/v.
When the object corresponding to the background is completely stationary, the light corresponding to the background object input into the sensor does not change during the period corresponding to the shutter time. Accordingly, the background component B01/v of the first portion of shutter time/v after the shutter opens, the background component B01/v of the second portion of shutter time/v after the shutter opens, the background component B01/v of the third portion of shutter time/v after the shutter opens, and the background component B01/v of the fourth portion of shutter time/v after the shutter opens all have the same value. The same applies to B02/v through B04/v.
A description is now given of the case in which the object corresponding to the foreground moves while the object corresponding to the background is stationary.
Fig. 10 illustrates a model obtained by expanding, in the time direction, the pixel values of the pixels in one line including a covered background area when the object corresponding to the foreground moves toward the right in Fig. 10. In Fig. 10, the amount of movement v is 4. Since one frame is a short period, it can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed. In Fig. 10, the object image corresponding to the foreground moves in such a manner that it is displayed four pixels to the right in the subsequent frame with respect to the reference frame.
In Fig. 10, the pixels from the leftmost pixel to the fourth pixel belong to the foreground area. In Fig. 10, the fifth through seventh pixels from the left belong to the mixed area, that is, to the covered background area. In Fig. 10, the rightmost pixel belongs to the background area.
The object corresponding to the foreground moves in such a manner that it gradually covers, as time passes, the object corresponding to the background. Accordingly, the components contained in the pixel values of the pixels belonging to the covered background area change from the background components to the foreground components at a certain time during the period corresponding to the shutter time.
For example, the pixel value M surrounded by the thick frame in Fig. 10 is expressed by equation (1) below.
M = B02/v + B02/v + F07/v + F06/v    (1)
For example, the fifth pixel from the left contains a background component corresponding to one portion of shutter time/v and foreground components corresponding to three portions of shutter time/v, and thus the mixture ratio α of the fifth pixel from the left is 1/4. The sixth pixel from the left contains background components corresponding to two portions of shutter time/v and foreground components corresponding to two portions of shutter time/v, and thus the mixture ratio α of the sixth pixel from the left is 1/2. The seventh pixel from the left contains background components corresponding to three portions of shutter time/v and a foreground component corresponding to one portion of shutter time/v, and thus the mixture ratio α of the seventh pixel from the left is 3/4.
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed such that its image is displayed four pixels to the right in the subsequent frame. Accordingly, for example, the foreground component F07/v of the fourth pixel from the left in Fig. 10 in the first portion of shutter time/v after the shutter opens is equal to the foreground component of the fifth pixel from the left in Fig. 10 corresponding to the second portion of shutter time/v after the shutter opens. Similarly, the foreground component F07/v is equal to the foreground component of the sixth pixel from the left in Fig. 10 corresponding to the third portion of shutter time/v after the shutter opens, and to the foreground component of the seventh pixel from the left in Fig. 10 corresponding to the fourth portion of shutter time/v after the shutter opens.
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed such that its image is displayed four pixels to the right in the subsequent frame. Accordingly, for example, the foreground component F06/v of the third pixel from the left in Fig. 10 in the first portion of shutter time/v after the shutter opens is equal to the foreground component of the fourth pixel from the left in Fig. 10 corresponding to the second portion of shutter time/v after the shutter opens. Similarly, the foreground component F06/v is equal to the foreground component of the fifth pixel from the left in Fig. 10 corresponding to the third portion of shutter time/v after the shutter opens, and to the foreground component of the sixth pixel from the left in Fig. 10 corresponding to the fourth portion of shutter time/v after the shutter opens.
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed such that its image is displayed four pixels to the right in the subsequent frame. Accordingly, for example, the foreground component F05/v of the second pixel from the left in Fig. 10 in the first portion of shutter time/v after the shutter opens is equal to the foreground component of the third pixel from the left in Fig. 10 corresponding to the second portion of shutter time/v after the shutter opens. Similarly, the foreground component F05/v is equal to the foreground component of the fourth pixel from the left in Fig. 10 corresponding to the third portion of shutter time/v after the shutter opens, and to the foreground component of the fifth pixel from the left in Fig. 10 corresponding to the fourth portion of shutter time/v after the shutter opens.
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed such that its image is displayed four pixels to the right in the subsequent frame. Accordingly, for example, the foreground component F04/v of the leftmost pixel in Fig. 10 in the first portion of shutter time/v after the shutter opens is equal to the foreground component of the second pixel from the left in Fig. 10 corresponding to the second portion of shutter time/v after the shutter opens. Similarly, the foreground component F04/v is equal to the foreground component of the third pixel from the left in Fig. 10 corresponding to the third portion of shutter time/v after the shutter opens, and to the foreground component of the fourth pixel from the left in Fig. 10 corresponding to the fourth portion of shutter time/v after the shutter opens.
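These rigid-body relationships fully determine the model of Fig. 10, which the following sketch reproduces (the concrete component values are invented for illustration; only the indexing structure follows the model):

    v = 4
    F = {k: 10.0 * k for k in range(1, 8)}                 # F01..F07 (assumed values)
    B = [60.0, 61.0, 62.0, 63.0, 64.0, 65.0, 66.0, 67.0]   # per-pixel background (assumed)

    def covered_model_pixel(x):
        # Pixel value of pixel x (0-based) in the covered-background row of
        # Fig. 10: in the t-th portion of shutter time/v the pixel shows the
        # foreground component F[x + 5 - t] if that component exists,
        # otherwise its own background component.
        total = 0.0
        for t in range(1, v + 1):
            k = x + 5 - t
            total += (F[k] if 1 <= k <= 7 else B[x]) / v
        return total

    row = [covered_model_pixel(x) for x in range(8)]
    # Pixels 0-3: sums of four foreground components (foreground area);
    # pixels 4-6: mixtures with mixture ratios 1/4, 1/2, 3/4 (covered
    # background area); pixel 7: pure background (background area).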
Since the foreground area corresponding to the moving object contains motion blur as discussed above, it can also be referred to as a "distortion area".
Fig. 11 illustrates a model obtained by expanding, in the time direction, the pixel values of the pixels in one line including an uncovered background area when the object corresponding to the foreground moves toward the right in Fig. 11. In Fig. 11, the amount of movement v is 4. Since one frame is a short period, it can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed. In Fig. 11, the object image corresponding to the foreground moves to the right in such a manner that it is displayed four pixels to the right in the subsequent frame with respect to the reference frame.
In Fig. 11, the pixels from the leftmost pixel to the fourth pixel belong to the background area. In Fig. 11, the fifth through seventh pixels from the left belong to the mixed area, that is, to the uncovered background area. In Fig. 11, the rightmost pixel belongs to the foreground area.
The object corresponding to the foreground, which has been covering the object corresponding to the background, moves in such a manner that it is gradually removed, as time passes, from in front of the object corresponding to the background. Accordingly, the components contained in the pixel values of the pixels belonging to the uncovered background area change from the foreground components to the background components at a certain time during the period corresponding to the shutter time.
For example, the pixel value M′ surrounded by the thick frame in Fig. 11 is expressed by equation (2) below.
M′ = F02/v + F01/v + B26/v + B26/v    (2)
For example, the fifth pixel from the left contains background components corresponding to three portions of shutter time/v and a foreground component corresponding to one portion of shutter time/v, and thus the mixture ratio α of the fifth pixel from the left is 3/4. The sixth pixel from the left contains background components corresponding to two portions of shutter time/v and foreground components corresponding to two portions of shutter time/v, and thus the mixture ratio α of the sixth pixel from the left is 1/2. The seventh pixel from the left contains a background component corresponding to one portion of shutter time/v and foreground components corresponding to three portions of shutter time/v, and thus the mixture ratio α of the seventh pixel from the left is 1/4.
When equations (1) and (2) are generalized, the pixel value M can be expressed by equation (3):
M = α·B + Σi Fi/v    (3)
where α is the mixture ratio, B indicates the pixel value of the background, and Fi/v designates a foreground component.
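A one-line numerical check of equation (3), with assumed values (the fifth pixel from the left of Fig. 10 contains one background portion, so α = 1/4, and three foreground components):

    def mixed_pixel_value(alpha, B, foreground_components, v):
        # Equation (3): M = alpha * B + sum(Fi / v)
        return alpha * B + sum(f / v for f in foreground_components)

    # alpha = 1/4, background B = 64.0, components F05, F06, F07 (assumed):
    print(mixed_pixel_value(0.25, 64.0, [50.0, 60.0, 70.0], v=4))  # 61.0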
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed, and the amount of movement v is 4. Accordingly, for example, the foreground component F01/v of the fifth pixel from the left in Fig. 11 in the first portion of shutter time/v after the shutter opens is equal to the foreground component of the sixth pixel from the left in Fig. 11 corresponding to the second portion of shutter time/v after the shutter opens. Similarly, F01/v is equal to the foreground component of the seventh pixel from the left in Fig. 11 corresponding to the third portion of shutter time/v after the shutter opens, and to the foreground component of the eighth pixel from the left in Fig. 11 corresponding to the fourth portion of shutter time/v after the shutter opens.
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed, and the amount of movement v is 4. Accordingly, for example, the foreground component F02/v of the sixth pixel from the left in Fig. 11 in the first portion of shutter time/v after the shutter opens is equal to the foreground component of the seventh pixel from the left in Fig. 11 corresponding to the second portion of shutter time/v after the shutter opens. Similarly, the foreground component F02/v is equal to the foreground component of the eighth pixel from the left in Fig. 11 corresponding to the third portion of shutter time/v after the shutter opens.
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed, and the amount of movement v is 4. Accordingly, for example, the foreground component F03/v of the seventh pixel from the left in Fig. 11 in the first portion of shutter time/v after the shutter opens is equal to the foreground component of the eighth pixel from the left in Fig. 11 corresponding to the second portion of shutter time/v after the shutter opens.
In the description above, given with reference to Figs. 9 through 11, the number of virtually divided portions is 4. The number of virtually divided portions corresponds to the amount of movement v. Generally, the amount of movement v corresponds to the moving speed of the object corresponding to the foreground. For example, if the object corresponding to the foreground moves in such a manner that it is displayed, in the subsequent frame, four pixels to the right of its position in a certain frame, the amount of movement v is set to 4, and the number of virtually divided portions is accordingly set to 4. Similarly, when the object corresponding to the foreground moves in such a manner that it is displayed, in the subsequent frame, six pixels to the right of its position in a certain frame, the amount of movement v is set to 6, and the number of virtually divided portions is set to 6.
Figs. 12 and 13 illustrate the relationship between the foreground components and the background components corresponding to the divided periods of the shutter time and the foreground area, the background area, and the mixed area, the mixed area consisting of the covered background area or the uncovered background area discussed above.
Fig. 12 illustrates an example of pixels extracted from the foreground area, the background area, and the mixed area of an image containing a foreground corresponding to an object moving in front of a stationary background. In the example of Fig. 12, the object corresponding to the foreground moves horizontally with respect to the screen.
Frame #n+1 is the frame subsequent to frame #n, and frame #n+2 is the frame subsequent to frame #n+1.
The pixels in the foreground area, the background area, and the mixed area are extracted from one of the frames #n through #n+2, and the amount of movement v is set to 4. Fig. 13 shows a model obtained by expanding, in the time direction, the pixel values of the extracted pixels.
Since the object corresponding to the foreground moves, the pixel values in the foreground area are formed of four different foreground components corresponding to portions of shutter time/v. For example, the leftmost pixel of the pixels in the foreground area shown in Fig. 13 consists of F01/v, F02/v, F03/v, and F04/v. That is, the pixels in the foreground area contain motion blur.
Since the object corresponding to the background is stationary, the light corresponding to the background input into the sensor does not change during the shutter time. In this case, the pixel values in the background area do not contain motion blur.
The pixel values in the mixed area, which consists of a covered background area or an uncovered background area, are formed of the foreground components and the background components.
A description is given below of a model obtained by expanding, in the time direction, the pixel values of pixels which are aligned side by side in a plurality of frames and which are located at the same positions when the frames are overlapped, when the image corresponding to the object is moving. For example, when the image corresponding to the object moves horizontally with respect to the screen, pixels aligned in one line on the screen can be selected as the pixels aligned side by side.
Fig. 14 illustrates a model obtained by expanding, in the time direction, pixels which are aligned side by side in three frames of an image obtained by capturing an object corresponding to a stationary background, and which are located at the same positions when the frames are overlapped. Frame #n is the frame subsequent to frame #n-1, and frame #n+1 is the frame subsequent to frame #n. The same applies to the other frames.
The pixel values B01 through B12 shown in Fig. 14 are the pixel values of the pixels corresponding to the stationary background object. Since the object corresponding to the background is stationary, the pixel values of the corresponding pixels do not change in frames #n-1 through #n+1. For example, the pixel in frame #n and the pixel in frame #n+1 located at the position corresponding to the pixel having the pixel value B05 in frame #n-1 also have the pixel value B05.
Fig. 15 illustrates a model obtained by expanding, in the time direction, pixels which are aligned side by side in three frames of an image obtained by capturing an object corresponding to a foreground moving to the right in Fig. 15 together with an object corresponding to a stationary background, and which are located at the same positions when the frames are overlapped. The model shown in Fig. 15 contains a covered background area.
In Fig. 15, it can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed, and that it moves in such a manner that it is displayed four pixels to the right in the subsequent frame. Accordingly, the amount of movement v is 4, and the number of virtually divided portions is 4.
For example, the foreground component of the leftmost pixel of frame #n-1 in Fig. 15 in the first portion of shutter time/v after the shutter opens is F12/v, and the foreground component of the second pixel from the left in Fig. 15 corresponding to the second portion of shutter time/v after the shutter opens is also F12/v. The foreground component of the third pixel from the left in Fig. 15 corresponding to the third portion of shutter time/v after the shutter opens and the foreground component of the fourth pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens are also F12/v.
The foreground component of the leftmost pixel of frame #n-1 in Fig. 15 in the second portion of shutter time/v after the shutter opens is F11/v. The foreground component of the second pixel from the left in Fig. 15 corresponding to the third portion of shutter time/v after the shutter opens is also F11/v. The foreground component of the third pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is F11/v.
The foreground component of the leftmost pixel of frame #n-1 in Fig. 15 in the third portion of shutter time/v after the shutter opens is F10/v. The foreground component of the second pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is also F10/v. The foreground component of the leftmost pixel of frame #n-1 in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is F09/v.
Since the object corresponding to the background is stationary, the background component of the second pixel from the left of frame #n-1 in Fig. 15 in the first portion of shutter time/v after the shutter opens is B01/v. The background components of the third pixel from the left of frame #n-1 in Fig. 15 corresponding to the first and second portions of shutter time/v after the shutter opens are B02/v. The background components of the fourth pixel from the left of frame #n-1 in Fig. 15 corresponding to the first through third portions of shutter time/v after the shutter opens are B03/v.
In frame #n-1 in Fig. 15, the leftmost pixel belongs to the foreground area, and the second through fourth pixels from the left belong to the mixed area, which is a covered background area.
The fifth through twelfth pixels from the left of frame #n-1 in Fig. 15 belong to the background area, and their pixel values are B04 through B11, respectively.
The first through fifth pixels from the left of frame #n in Fig. 15 belong to the foreground area. The foreground component of shutter time/v in the foreground area of frame #n is any one of F05/v through F12/v.
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed, and that it moves in such a manner that the foreground image is displayed four pixels to the right in the subsequent frame. Accordingly, the foreground component of the fifth pixel from the left of frame #n in Fig. 15 in the first portion of shutter time/v after the shutter opens is F12/v, and the foreground component of the sixth pixel from the left in Fig. 15 corresponding to the second portion of shutter time/v after the shutter opens is also F12/v. The foreground component of the seventh pixel from the left in Fig. 15 corresponding to the third portion of shutter time/v after the shutter opens and the foreground component of the eighth pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens are also F12/v.
The foreground component of the fifth pixel from the left of frame #n in Fig. 15 in the second portion of shutter time/v after the shutter opens is F11/v. The foreground component of the sixth pixel from the left in Fig. 15 corresponding to the third portion of shutter time/v after the shutter opens is also F11/v. The foreground component of the seventh pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is F11/v.
The foreground component of the fifth pixel from the left of frame #n in Fig. 15 in the third portion of shutter time/v after the shutter opens is F10/v. The foreground component of the sixth pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is also F10/v. The foreground component of the fifth pixel from the left of frame #n in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is F09/v.
Since the object corresponding to the background is stationary, the background component of the sixth pixel from the left of frame #n in Fig. 15 in the first portion of shutter time/v after the shutter opens is B05/v. The background components of the seventh pixel from the left of frame #n in Fig. 15 corresponding to the first and second portions of shutter time/v after the shutter opens are B06/v. The background components of the eighth pixel from the left of frame #n in Fig. 15 corresponding to the first through third portions of shutter time/v after the shutter opens are B07/v.
In frame #n in Fig. 15, the sixth through eighth pixels from the left belong to the mixed area, which is a covered background area.
The ninth through twelfth pixels from the left of frame #n in Fig. 15 belong to the background area, and their pixel values are B08 through B11, respectively.
The first through ninth pixels from the left of frame #n+1 in Fig. 15 belong to the foreground area. The foreground component of shutter time/v in the foreground area of frame #n+1 is any one of F01/v through F12/v.
It can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed, and that it moves in such a manner that the foreground image is displayed four pixels to the right in the subsequent frame. Accordingly, the foreground component of the ninth pixel from the left of frame #n+1 in Fig. 15 in the first portion of shutter time/v after the shutter opens is F12/v. The foreground component of the tenth pixel from the left in Fig. 15 corresponding to the second portion of shutter time/v after the shutter opens is also F12/v. The foreground component of the eleventh pixel from the left in Fig. 15 corresponding to the third portion of shutter time/v after the shutter opens and the foreground component of the twelfth pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens are F12/v.
The foreground component of the ninth pixel from the left of frame #n+1 in Fig. 15 in the second portion of shutter time/v after the shutter opens is F11/v. The foreground component of the tenth pixel from the left in Fig. 15 corresponding to the third portion of shutter time/v after the shutter opens is also F11/v. The foreground component of the eleventh pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is F11/v.
The foreground component of the ninth pixel from the left of frame #n+1 in Fig. 15 in the third portion of shutter time/v after the shutter opens is F10/v. The foreground component of the tenth pixel from the left in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is also F10/v. The foreground component of the ninth pixel from the left of frame #n+1 in Fig. 15 corresponding to the fourth portion of shutter time/v after the shutter opens is F09/v.
Since the object corresponding to the background is stationary, the background component of the tenth pixel from the left of frame #n+1 in Fig. 15 in the first portion of shutter time/v after the shutter opens is B09/v. The background components of the eleventh pixel from the left of frame #n+1 in Fig. 15 corresponding to the first and second portions of shutter time/v after the shutter opens are B10/v. The background components of the twelfth pixel from the left of frame #n+1 in Fig. 15 corresponding to the first through third portions of shutter time/v after the shutter opens are B11/v.
In frame #n+1 in Fig. 15, the tenth through twelfth pixels from the left belong to the mixed area, which is a covered background area.
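The frame-to-frame regularity described above can be condensed into a single indexing rule; the sketch below (its alignment constant is chosen to match the labeling of Fig. 15, and everything else is an assumption for illustration) shows that each foreground component reappears four pixels to the right in each subsequent frame:

    v = 4  # the foreground moves v pixels to the right per frame

    def component_position(k, t, m):
        # 0-based pixel at which foreground component Fk appears during the
        # t-th portion (1..v) of shutter time/v in frame m (m = 0 for frame
        # #n-1 of Fig. 15); the constant -13 aligns with Fig. 15's labels.
        return k + t - 13 + v * m

    for m, name in enumerate(["#n-1", "#n", "#n+1"]):
        print(name, [component_position(12, t, m) for t in range(1, v + 1)])
    # #n-1 [0, 1, 2, 3]   F12/v in the leftmost through fourth pixels
    # #n   [4, 5, 6, 7]   F12/v shifted four pixels to the right
    # #n+1 [8, 9, 10, 11] and four pixels further in the next frame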
Fig. 16 illustrates a model of pixels obtained by extracting the foreground components from the pixels shown in Fig. 15.
Fig. 17 illustrates a model obtained by expanding, in the time direction, pixels which are aligned side by side in three frames of an image obtained by capturing an object corresponding to a foreground moving to the right in Fig. 17 together with an object corresponding to a stationary background, and which are located at the same positions when the frames are overlapped. The model shown in Fig. 17 contains an uncovered background area.
In Fig. 17, it can be assumed that the object corresponding to the foreground is a rigid body moving at a constant speed, and that it moves in such a manner that it is displayed four pixels to the right in the subsequent frame. Accordingly, the amount of movement v is 4.
For example, prospect component corresponding to frame #n-1 Far Left pixel among Figure 17 of the first of aperture time/v when shutter is opened is F13/v, and the prospect component corresponding to several second pixels in Figure 17 left side of the second portion of aperture time/v also is F13/v when shutter is opened.When shutter is opened corresponding to the prospect component of several the 3rd pixels in Figure 17 left side of the third part of aperture time/v and when shutter is opened the prospect component corresponding to left several the 4th pixels of tetrameric Figure 17 of aperture time/v be F13/v.
When shutter is opened, are F14/v corresponding to the several second pixel prospect components in a frame #n-1 left side among Figure 17 of the first of aperture time/v.Prospect component corresponding to several the 3rd pixels in Figure 17 left side of the second portion of aperture time/v when shutter is opened also is F14/v.Prospect component corresponding to several the 3rd pixels in Figure 17 left side of the third part of aperture time/v when shutter is opened is F15/v.
Since corresponding to background to as if static, therefore when shutter is opened corresponding to aperture time/v second to tetrameric Figure 17 in the background component of Far Left pixel of frame #n-1 be B25/v.Background component corresponding to several second pixels in a frame #n-1 left side among Figure 17 of third and fourth part of aperture time/v when shutter is opened is B26/v.Background component corresponding to several the 3rd pixels in a #n-1 left side among tetrameric Figure 17 of aperture time/v when shutter is opened is B27/v.
In the frame #n-1 of Figure 17, Far Left pixel to the three pixels belong to the Mixed Zone, are unlapped background areas.
Several the 4th to the 12 pixels in a frame #n-1 left side belong to foreground area among Figure 17.The prospect component of this frame is any one of F13/v to F24/v.
The leftmost pixel through the fourth pixel from the left of frame #n in Figure 17 belong to the background area, and the pixel values thereof are B25 through B28, respectively.
It can be assumed that the object corresponding to the foreground is a rigid body moving with constant velocity, and that it moves such that the foreground image is displayed four pixels to the right in the subsequent frame. Accordingly, the foreground component of the fifth pixel from the left of frame #n in Figure 17 for the first portion of the shutter time/v from when the shutter has opened is F13/v, and the foreground component of the sixth pixel from the left in Figure 17 for the second portion of the shutter time/v is also F13/v. The foreground component of the seventh pixel from the left in Figure 17 for the third portion of the shutter time/v, and the foreground component of the eighth pixel from the left in Figure 17 for the fourth portion of the shutter time/v, are F13/v.
The foreground component of the sixth pixel from the left of frame #n in Figure 17 for the first portion of the shutter time/v from when the shutter has opened is F14/v. The foreground component of the seventh pixel from the left in Figure 17 for the second portion of the shutter time/v is also F14/v. The foreground component of the seventh pixel from the left in Figure 17 for the first portion of the shutter time/v is F15/v.
Since the object corresponding to the background is stationary, the background components of the fifth pixel from the left of frame #n in Figure 17 for the second through fourth portions of the shutter time/v from when the shutter has opened are B29/v. The background components of the sixth pixel from the left of frame #n in Figure 17 for the third and fourth portions of the shutter time/v are B30/v. The background component of the seventh pixel from the left of frame #n in Figure 17 for the fourth portion of the shutter time/v is B31/v.
In frame #n in Figure 17, the fifth through seventh pixels from the left belong to the mixed area, which is the uncovered background area.
The eighth through twelfth pixels from the left of frame #n in Figure 17 belong to the foreground area. The foreground components of the shutter time/v in the foreground area of frame #n are any of F13/v through F20/v.
The leftmost pixel through the eighth pixel from the left of frame #n+1 in Figure 17 belong to the background area, and the pixel values thereof are B25 through B32, respectively.
It can be assumed that the object corresponding to the foreground is a rigid body moving with constant velocity, and that it moves such that the foreground image is displayed four pixels to the right in the subsequent frame. Accordingly, the foreground component of the ninth pixel from the left of frame #n+1 in Figure 17 for the first portion of the shutter time/v from when the shutter has opened is F13/v. The foreground component of the tenth pixel from the left in Figure 17 for the second portion of the shutter time/v is also F13/v. The foreground component of the eleventh pixel from the left in Figure 17 for the third portion of the shutter time/v, and the foreground component of the twelfth pixel from the left in Figure 17 for the fourth portion of the shutter time/v, are F13/v.
The foreground component of the tenth pixel from the left of frame #n+1 in Figure 17 for the first portion of the shutter time/v from when the shutter has opened is F14/v. The foreground component of the eleventh pixel from the left in Figure 17 for the second portion of the shutter time/v is also F14/v. The foreground component of the eleventh pixel from the left in Figure 17 for the first portion of the shutter time/v is F15/v.
Since the object corresponding to the background is stationary, the background components of the ninth pixel from the left of frame #n+1 in Figure 17 for the second through fourth portions of the shutter time/v from when the shutter has opened are B33/v. The background components of the tenth pixel from the left of frame #n+1 in Figure 17 for the third and fourth portions of the shutter time/v are B34/v. The background component of the eleventh pixel from the left of frame #n+1 in Figure 17 for the fourth portion of the shutter time/v is B35/v.
In frame #n+1 in Figure 17, the ninth through eleventh pixels from the left belong to the mixed area, which is the uncovered background area.
The twelfth pixel from the left of frame #n+1 in Figure 17 belongs to the foreground area. The foreground components of the shutter time/v in the foreground area of frame #n+1 are any of F13/v through F16/v.
Figure 18 illustrates a model of an image obtained by extracting the foreground components from the pixel values shown in Figure 17.
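The mixture model underlying Figures 15 through 18 can be summarized numerically. The following Python sketch is illustrative only: the function and variable names and the stand-in numeric values are hypothetical and not part of the disclosure; only the rule that each pixel accumulates v components of F/v or B/v over the shutter time follows the models above.

    # Shutter-time mixture model of Figures 15 to 18: a pixel value is the sum,
    # over the v equal portions of the shutter time, of either a foreground
    # component F/v or a background component B/v.
    def synthesize_frame(F, B, v, offset):
        # F: foreground components, left to right (e.g. F01 .. F12)
        # B: background value of each pixel position (e.g. B00 .. B11)
        # v: amount of movement = number of virtual divided portions
        # offset: pixel position of F[0] at the moment the shutter opens
        #         (may be negative if the foreground extends past the left edge)
        pixels = []
        for i in range(len(B)):
            value = 0.0
            for t in range(v):              # t-th portion of the shutter time/v
                j = i - offset - t          # foreground component covering pixel i
                if 0 <= j < len(F):
                    value += F[j] / v       # foreground component, e.g. F12/v
                else:
                    value += B[i] / v       # background component, e.g. B09/v
            pixels.append(value)
        return pixels

    # Frame #n+1 of Figure 15: v = 4, twelve pixels, leading component F12 at the
    # ninth pixel when the shutter opens (stand-in numeric values).
    F = [float(k) for k in range(1, 13)]        # stands in for F01 .. F12
    B = [float(k) for k in range(100, 112)]     # stands in for B00 .. B11
    frame = synthesize_frame(F, B, v=4, offset=-3)
    # pixels 1 to 9 come out as pure foreground; pixels 10 to 12 come out as the
    # covered background (mixed) area, matching the description above.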
Referring back to Figure 2, the area specifying unit 103 specifies, by using the pixel values of a plurality of frames, flags indicating to which of the foreground area, the background area, the covered background area, or the uncovered background area each pixel of the input frame belongs, and supplies the flags to the mixing ratio calculator 104 and the motion-blur adjusting unit 106 as the area information.
The mixing ratio calculator 104 calculates the mixing ratio α for each pixel contained in the mixed area according to the pixel values of the plurality of frames and the area information, and supplies the calculated mixing ratio α to the foreground/background separator 105.
The foreground/background separator 105 extracts the foreground component image, which consists of only the foreground components, according to the pixel values of the plurality of frames, the area information, and the mixing ratio α, and supplies the foreground component image to the motion-blur adjusting unit 106.
The motion-blur adjusting unit 106 adjusts the amount of motion blur contained in the foreground component image according to the foreground component image supplied from the foreground/background separator 105, the motion vector supplied from the motion detector 102, and the area information supplied from the area specifying unit 103, and then outputs the foreground component image in which the amount of motion blur has been adjusted.
The processing for adjusting the amount of motion blur performed by the signal processing apparatus is described below with reference to the flowchart of Figure 19. In step S11, the area specifying unit 103 performs the area specifying processing according to the input image so as to generate area information indicating to which of the foreground area, the background area, the covered background area, or the uncovered background area each pixel of the input image belongs. Details of the area specifying processing are given below. The area specifying unit 103 supplies the generated area information to the mixing ratio calculator 104.
In step S11, the area specifying unit 103 may instead generate, according to the input image, area information indicating to which of the foreground area, the background area, or the mixed area each pixel of the input image belongs (without distinguishing whether each pixel belongs to the covered background area or to the uncovered background area). In this case, the foreground/background separator 105 and the motion-blur adjusting unit 106 determine whether the mixed area is the covered background area or the uncovered background area according to the direction of the motion vector. For example, if the foreground area, the mixed area, and the background area occur in this order along the direction of the motion vector, the mixed area is determined to be the uncovered background area, as sketched below.
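For illustration only, this direction-based determination can be sketched in Python as a one-dimensional simplification (the names are hypothetical; the area labels neighbouring a mixed run are examined along the direction of the motion vector):

    # If, scanning along the motion vector, the areas occur in the order
    # foreground -> mixed -> background, the mixed area is the uncovered
    # background area; in the order background -> mixed -> foreground,
    # it is the covered background area.
    def classify_mixed_area(label_before, label_after):
        # label_before / label_after: area labels adjacent to the mixed run,
        # taken in the direction of the motion vector
        if label_before == "foreground" and label_after == "background":
            return "uncovered background area"
        if label_before == "background" and label_after == "foreground":
            return "covered background area"
        return "undetermined"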
In step S12, the mixing ratio calculator 104 calculates the mixing ratio α for each pixel contained in the mixed area according to the input image and the area information. Details of the mixing ratio calculating processing are given below. The mixing ratio calculator 104 supplies the calculated mixing ratio α to the foreground/background separator 105.
In step S13, the foreground/background separator 105 extracts the foreground components from the input image according to the area information and the mixing ratio α, and supplies them to the motion-blur adjusting unit 106 as the foreground component image.
In step S14, the motion-blur adjusting unit 106 generates, according to the motion vector and the area information, a processing unit that indicates the positions of consecutive pixels arranged in the moving direction and belonging to any of the uncovered background area, the foreground area, or the covered background area, and adjusts the amount of motion blur contained in the foreground components corresponding to the processing unit. Details of the processing for adjusting the amount of motion blur are given below.
In step S15, the signal processing apparatus determines whether the processing has been completed for the whole screen. If it is determined that the processing has not been completed for the whole screen, the processing proceeds to step S14, and the processing for adjusting the amount of motion blur of the foreground components corresponding to the next processing unit is repeated.
If it is determined in step S15 that the processing for the whole screen has been completed, the processing is completed.
In this manner, the signal processing apparatus is able to adjust the amount of motion blur contained in the foreground by separating the foreground and the background. That is, the signal processing apparatus is able to adjust the amount of motion blur contained in the sampled data that indicate the pixel values of the foreground pixels.
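The flow of Figure 19 can be gathered into a single driver. The sketch below is a Python illustration only; the four stages are passed in as functions because their names and interfaces are hypothetical stand-ins for the units 103 through 106 of Figure 2:

    # Driver corresponding to steps S11 through S15 of Figure 19.
    def adjust_motion_blur(frames, motion_vector, specify_areas,
                           calc_mixing_ratio, separate_foreground,
                           build_processing_units, adjust_unit_blur):
        area_info = specify_areas(frames)                         # step S11
        alpha = calc_mixing_ratio(frames, area_info)              # step S12
        fg_image = separate_foreground(frames, area_info, alpha)  # step S13
        # steps S14 and S15: adjust the blur processing unit by processing
        # unit until the whole screen has been processed
        for unit in build_processing_units(motion_vector, area_info):
            fg_image = adjust_unit_blur(fg_image, unit)
        return fg_image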
The configurations of the area specifying unit 103, the mixing ratio calculator 104, the foreground/background separator 105, and the motion-blur adjusting unit 106 are described below.
Figure 20 is a block diagram illustrating an example of the configuration of the area specifying unit 103. The area specifying unit 103 shown in Figure 20 does not use a motion vector. A frame memory 201 stores the input images in units of frames. When the image to be processed is frame #n, the frame memory 201 stores frame #n-2 (the frame two frames before frame #n), frame #n-1 (the frame one frame before frame #n), frame #n, frame #n+1 (the frame one frame after frame #n), and frame #n+2 (the frame two frames after frame #n).
A stationary/moving determining portion 202-1 reads from the frame memory 201 the pixel value of the pixel of frame #n+2 located at the same position as a designated pixel of frame #n, whose area is to be determined, and the pixel value of the pixel of frame #n+1 located at the same position as the designated pixel of frame #n, and calculates the absolute value of the difference between the read pixel values. The stationary/moving determining portion 202-1 determines whether the absolute value of the difference between the pixel value of frame #n+2 and the pixel value of frame #n+1 is greater than a preset threshold value Th. If it is determined that the difference is greater than the threshold value Th, a stationary/moving determination indicating "moving" is supplied to an area determining portion 203-1. If it is determined that the absolute value of the difference is less than or equal to the threshold value Th, the stationary/moving determining portion 202-1 supplies a stationary/moving determination indicating "stationary" to the area determining portion 203-1.
A stationary/moving determining portion 202-2 reads from the frame memory 201 the pixel value of the designated pixel of frame #n, whose area is to be determined, and the pixel value of the pixel of frame #n+1 located at the same position as the designated pixel of frame #n, and calculates the absolute value of the difference between the pixel values. The stationary/moving determining portion 202-2 determines whether the absolute value of the difference between the pixel value of frame #n+1 and the pixel value of frame #n is greater than the threshold value Th. If it is determined that the absolute value of the difference is greater than the threshold value Th, a stationary/moving determination indicating "moving" is supplied to the area determining portion 203-1 and an area determining portion 203-2. If it is determined that the absolute value of the difference is less than or equal to the threshold value Th, the stationary/moving determining portion 202-2 supplies a stationary/moving determination indicating "stationary" to the area determining portion 203-1 and the area determining portion 203-2.
A stationary/moving determining portion 202-3 reads from the frame memory 201 the pixel value of the designated pixel of frame #n, whose area is to be determined, and the pixel value of the pixel of frame #n-1 located at the same position as the designated pixel of frame #n, and calculates the absolute value of the difference between the pixel values. The stationary/moving determining portion 202-3 determines whether the absolute value of the difference between the pixel value of frame #n and the pixel value of frame #n-1 is greater than the threshold value Th. If it is determined that the absolute value of the difference is greater than the threshold value Th, a stationary/moving determination indicating "moving" is supplied to the area determining portion 203-2 and an area determining portion 203-3. If it is determined that the absolute value of the difference is less than or equal to the threshold value Th, the stationary/moving determining portion 202-3 supplies a stationary/moving determination indicating "stationary" to the area determining portion 203-2 and the area determining portion 203-3.
A stationary/moving determining portion 202-4 reads from the frame memory 201 the pixel value of the pixel of frame #n-1 located at the same position as the designated pixel of frame #n, whose area is to be determined, and the pixel value of the pixel of frame #n-2 located at the same position as the designated pixel of frame #n, and calculates the absolute value of the difference between the pixel values. The stationary/moving determining portion 202-4 determines whether the absolute value of the difference between the pixel value of frame #n-1 and the pixel value of frame #n-2 is greater than the preset threshold value Th. If it is determined that the absolute value of the difference is greater than the threshold value Th, a stationary/moving determination indicating "moving" is supplied to the area determining portion 203-3. If it is determined that the absolute value of the difference is less than or equal to the threshold value Th, the stationary/moving determining portion 202-4 supplies a stationary/moving determination indicating "stationary" to the area determining portion 203-3.
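All four stationary/moving determining portions perform the same test; only the pair of frames differs. A minimal Python sketch for illustration (the names are hypothetical, and frames are assumed to be addressable by pixel position):

    # Stationary/moving determination shared by portions 202-1 through 202-4:
    # compare the absolute difference of two co-located pixel values with Th.
    def stationary_or_moving(value_a, value_b, Th):
        return "moving" if abs(value_a - value_b) > Th else "stationary"

    # For a designated pixel position p of frame #n:
    # d1 = stationary_or_moving(frame[n+2][p], frame[n+1][p], Th)   # portion 202-1
    # d2 = stationary_or_moving(frame[n+1][p], frame[n][p],   Th)   # portion 202-2
    # d3 = stationary_or_moving(frame[n][p],   frame[n-1][p], Th)   # portion 202-3
    # d4 = stationary_or_moving(frame[n-1][p], frame[n-2][p], Th)   # portion 202-4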
When the stationary/moving determination supplied from the stationary/moving determining portion 202-1 indicates "stationary" and the stationary/moving determination supplied from the stationary/moving determining portion 202-2 indicates "moving", the area determining portion 203-1 determines that the designated pixel of frame #n belongs to the uncovered background area, and sets "1", which indicates that the designated pixel belongs to the uncovered background area, in the uncovered-background-area determining flag associated with the designated pixel.
When the stationary/moving determination supplied from the stationary/moving determining portion 202-1 indicates "moving" or the stationary/moving determination supplied from the stationary/moving determining portion 202-2 indicates "stationary", the area determining portion 203-1 determines that the designated pixel of frame #n does not belong to the uncovered background area, and sets "0", which indicates that the designated pixel does not belong to the uncovered background area, in the uncovered-background-area determining flag associated with the designated pixel.
The area determining portion 203-1 supplies the uncovered-background-area determining flag, in which "1" or "0" has been set as discussed above, to a determining-flag-storing frame memory 204.
When the stationary/moving determination supplied from the stationary/moving determining portion 202-2 indicates "stationary" and the stationary/moving determination supplied from the stationary/moving determining portion 202-3 indicates "stationary", the area determining portion 203-2 determines that the designated pixel of frame #n belongs to the stationary area, and sets "1", which indicates that the pixel belongs to the stationary area, in the stationary-area determining flag associated with the designated pixel.
When the stationary/moving determination supplied from the stationary/moving determining portion 202-2 indicates "moving" or the stationary/moving determination supplied from the stationary/moving determining portion 202-3 indicates "moving", the area determining portion 203-2 determines that the designated pixel of frame #n does not belong to the stationary area, and sets "0", which indicates that the pixel does not belong to the stationary area, in the stationary-area determining flag associated with the designated pixel.
The area determining portion 203-2 supplies the stationary-area determining flag, in which "1" or "0" has been set as discussed above, to the determining-flag-storing frame memory 204.
When the stationary/moving determination supplied from the stationary/moving determining portion 202-2 indicates "moving" and the stationary/moving determination supplied from the stationary/moving determining portion 202-3 indicates "moving", the area determining portion 203-2 determines that the designated pixel of frame #n belongs to the moving area, and sets "1", which indicates that the designated pixel belongs to the moving area, in the moving-area determining flag associated with the designated pixel.
When the stationary/moving determination supplied from the stationary/moving determining portion 202-2 indicates "stationary" or the stationary/moving determination supplied from the stationary/moving determining portion 202-3 indicates "stationary", the area determining portion 203-2 determines that the designated pixel of frame #n does not belong to the moving area, and sets "0", which indicates that the designated pixel does not belong to the moving area, in the moving-area determining flag associated with the designated pixel.
The area determining portion 203-2 supplies the moving-area determining flag, in which "1" or "0" has been set as discussed above, to the determining-flag-storing frame memory 204.
When the stationary/moving determination supplied from the stationary/moving determining portion 202-3 indicates "moving" and the stationary/moving determination supplied from the stationary/moving determining portion 202-4 indicates "stationary", the area determining portion 203-3 determines that the designated pixel of frame #n belongs to the covered background area, and sets "1", which indicates that the designated pixel belongs to the covered background area, in the covered-background-area determining flag associated with the designated pixel.
When the stationary/moving determination supplied from the stationary/moving determining portion 202-3 indicates "stationary" or the stationary/moving determination supplied from the stationary/moving determining portion 202-4 indicates "moving", the area determining portion 203-3 determines that the designated pixel of frame #n does not belong to the covered background area, and sets "0", which indicates that the designated pixel does not belong to the covered background area, in the covered-background-area determining flag associated with the designated pixel.
The area determining portion 203-3 supplies the covered-background-area determining flag, in which "1" or "0" has been set as discussed above, to the determining-flag-storing frame memory 204.
The determining-flag-storing frame memory 204 stores the uncovered-background-area determining flag supplied from the area determining portion 203-1, the stationary-area determining flag and the moving-area determining flag supplied from the area determining portion 203-2, and the covered-background-area determining flag supplied from the area determining portion 203-3.
The determining-flag-storing frame memory 204 supplies the stored uncovered-background-area determining flag, stationary-area determining flag, moving-area determining flag, and covered-background-area determining flag to a synthesizer 205. According to these flags, the synthesizer 205 generates area information indicating to which of the uncovered background area, the stationary area, the moving area, or the covered background area each pixel belongs, and supplies the area information to a determining-flag-storing frame memory 206.
The determining-flag-storing frame memory 206 stores the area information supplied from the synthesizer 205, and outputs the stored area information.
An example of the processing performed by the area specifying unit 103 is described below with reference to Figures 21 through 25.
When the object corresponding to the foreground is moving, the position of the image corresponding to the object on the screen changes in every frame. As shown in Figure 21, the image corresponding to the object located at the position indicated by Yn(x, y) in frame #n is located at Yn+1(x, y) in frame #n+1, which is subsequent to frame #n.
Figure 22 illustrates a model obtained by expanding in the time direction the pixel values of the pixels aligned side-by-side in the moving direction of the image corresponding to the foreground object. For example, if the moving direction of the image corresponding to the foreground object is horizontal with respect to the screen, the model shown in Figure 22 is a model obtained by expanding in the time direction the pixel values of the pixels aligned side-by-side in a single line.
In Figure 22, the line in frame #n is the same as the line in frame #n+1.
The foreground components corresponding to the object that are contained in the second through thirteenth pixels from the left in frame #n are contained in the sixth through seventeenth pixels from the left in frame #n+1.
In frame #n, the pixels belonging to the covered background area are the eleventh through thirteenth pixels from the left, and the pixels belonging to the uncovered background area are the second through fourth pixels from the left. In frame #n+1, the pixels belonging to the covered background area are the fifteenth through seventeenth pixels from the left, and the pixels belonging to the uncovered background area are the sixth through eighth pixels from the left.
In the example shown in Figure 22, since the foreground components contained in frame #n have moved by four pixels in frame #n+1, the amount of movement v is 4. The number of virtual divided portions is 4, corresponding to the amount of movement v.
A description is now given of the changes in the pixel values of the pixels belonging to the mixed area in the frames before and after a designated frame.
In frame #n of Figure 23, in which the background is stationary and the amount of movement v in the foreground is 4, the pixels belonging to the covered background area are the fifteenth through seventeenth pixels from the left. Since the amount of movement v is 4, the fifteenth through seventeenth pixels from the left in the preceding frame #n-1 contain only background components and belong to the background area. The fifteenth through seventeenth pixels from the left in frame #n-2, which is the frame before frame #n-1, also contain only background components and belong to the background area.
Since the object corresponding to the background is stationary, the pixel value of the fifteenth pixel from the left in frame #n-1 does not change from the pixel value of the fifteenth pixel from the left in frame #n-2. Similarly, the pixel value of the sixteenth pixel from the left in frame #n-1 does not change from the pixel value of the sixteenth pixel from the left in frame #n-2, and the pixel value of the seventeenth pixel from the left in frame #n-1 does not change from the pixel value of the seventeenth pixel from the left in frame #n-2.
That is, the pixels of frame #n-1 and frame #n-2 corresponding to the pixels belonging to the covered background area in frame #n consist of only background components, and their pixel values do not change. Accordingly, the absolute value of the difference between those pixel values is almost 0. Therefore, the stationary/moving determination made by the stationary/moving determining portion 202-4 for the pixels of frame #n-1 and frame #n-2 corresponding to the pixels belonging to the mixed area in frame #n is "stationary".
Since the pixels belonging to the covered background area in frame #n contain foreground components, their pixel values are different from those of the corresponding pixels of frame #n-1, which consist of only background components. Accordingly, the stationary/moving determination made by the stationary/moving determining portion 202-3 for the pixels belonging to the mixed area in frame #n and the corresponding pixels of frame #n-1 is "moving".
When the stationary/moving determination result indicating "moving" is supplied from the stationary/moving determining portion 202-3, and when the stationary/moving determination result indicating "stationary" is supplied from the stationary/moving determining portion 202-4, as discussed above, the area determining portion 203-3 determines that the corresponding pixels belong to the covered background area.
In frame #n of Figure 24, in which the background is stationary and the amount of movement v in the foreground is 4, the pixels contained in the uncovered background area are the second through fourth pixels from the left. Since the amount of movement v is 4, the second through fourth pixels from the left in the subsequent frame #n+1 contain only background components and belong to the background area. In frame #n+2, which is the frame after frame #n+1, the second through fourth pixels from the left also contain only background components and belong to the background area.
Since the object corresponding to the background is stationary, the pixel value of the second pixel from the left in frame #n+2 does not change from the pixel value of the second pixel from the left in frame #n+1. Similarly, the pixel value of the third pixel from the left in frame #n+2 does not change from the pixel value of the third pixel from the left in frame #n+1, and the pixel value of the fourth pixel from the left in frame #n+2 does not change from the pixel value of the fourth pixel from the left in frame #n+1.
That is, the pixels of frame #n+1 and frame #n+2 corresponding to the pixels belonging to the uncovered background area in frame #n consist of only background components, and their pixel values do not change. Accordingly, the absolute value of the difference between those pixel values is almost 0. Therefore, the stationary/moving determination made by the stationary/moving determining portion 202-1 for the pixels of frame #n+1 and frame #n+2 corresponding to the pixels belonging to the mixed area in frame #n is "stationary".
Since the pixels belonging to the uncovered background area in frame #n contain foreground components, their pixel values are different from those of the corresponding pixels of frame #n+1, which consist of only background components. Accordingly, the stationary/moving determination made by the stationary/moving determining portion 202-2 for the pixels belonging to the mixed area in frame #n and the corresponding pixels of frame #n+1 is "moving".
When the stationary/moving determination result indicating "moving" is supplied from the stationary/moving determining portion 202-2, and when the stationary/moving determination result indicating "stationary" is supplied from the stationary/moving determining portion 202-1, as discussed above, the area determining portion 203-1 determines that the corresponding pixels belong to the uncovered background area.
Figure 25 illustrates the conditions under which the area specifying unit 103 makes the determinations for frame #n. When the determination result for the pixel of frame #n-2 located at the same image position as a designated pixel of frame #n and the pixel of frame #n-1 located at the same image position is stationary, and when the determination result for the designated pixel of frame #n and the pixel of frame #n-1 located at the same image position is moving, the area specifying unit 103 determines that the designated pixel of frame #n belongs to the covered background area.
When the determination result for the designated pixel of frame #n and the pixel of frame #n-1 located at the same image position is stationary, and when the determination result for the designated pixel of frame #n and the pixel of frame #n+1 located at the same image position is stationary, the area specifying unit 103 determines that the designated pixel of frame #n belongs to the stationary area.
When the determination result for the designated pixel of frame #n and the pixel of frame #n-1 located at the same image position is moving, and when the determination result for the designated pixel of frame #n and the pixel of frame #n+1 located at the same image position is moving, the area specifying unit 103 determines that the designated pixel of frame #n belongs to the moving area.
When the determination result for the designated pixel of frame #n and the pixel of frame #n+1 located at the same image position is moving, and when the determination result for the pixel of frame #n+1 located at the same image position as the designated pixel of frame #n and the pixel of frame #n+2 located at the same image position is stationary, the area specifying unit 103 determines that the designated pixel of frame #n belongs to the uncovered background area.
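These conditions form the decision table of Figure 25. For illustration only, they can be combined in Python with the determinations d1 through d4 from the earlier stationary/moving sketch (in the actual unit the four conditions are evaluated independently as flags and combined by the synthesizer 205; this single-return sketch simply checks the mixed-area labels first):

    # Area determination for the designated pixel of frame #n (Figure 25).
    # d1: frames #n+1/#n+2, d2: frames #n/#n+1,
    # d3: frames #n-1/#n,   d4: frames #n-2/#n-1.
    def determine_area(d1, d2, d3, d4):
        if d4 == "stationary" and d3 == "moving":
            return "covered background area"
        if d2 == "moving" and d1 == "stationary":
            return "uncovered background area"
        if d3 == "stationary" and d2 == "stationary":
            return "stationary area"
        if d3 == "moving" and d2 == "moving":
            return "moving area"
        return "undetermined"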
Figures 26A through 26D illustrate examples of the determination results obtained by the area specifying unit 103. In Figure 26A, the pixels determined to belong to the covered background area are indicated in white. In Figure 26B, the pixels determined to belong to the uncovered background area are indicated in white.
In Figure 26C, the pixels determined to belong to the moving area are indicated in white. In Figure 26D, the pixels determined to belong to the stationary area are indicated in white.
Figure 27 illustrates, in the form of an image, the area information indicating the mixed area selected from the area information output from the determining-flag-storing frame memory 206. In Figure 27, the pixels determined to belong to the covered background area or the uncovered background area, that is, the pixels determined to belong to the mixed area, are indicated in white. The area information indicating the mixed area output from the determining-flag-storing frame memory 206 designates the mixed area and the portions having a texture that are surrounded by the portions without a texture in the foreground area.
The area specifying processing performed by the area specifying unit 103 is described below with reference to the flowchart of Figure 28. In step S201, the frame memory 201 obtains the images of frames #n-2 through #n+2, including frame #n.
In step S202, the stationary/moving determining portion 202-3 determines whether the determination result for the pixel of frame #n-1 and the pixel of frame #n located at the same position is stationary. If it is determined to be stationary, the processing proceeds to step S203, in which the stationary/moving determining portion 202-2 determines whether the determination result for the pixel of frame #n and the pixel of frame #n+1 located at the same position is stationary.
If it is determined in step S203 that the determination result for the pixel of frame #n and the pixel of frame #n+1 located at the same position is stationary, the processing proceeds to step S204. In step S204, the area determining portion 203-2 sets "1", which indicates that the pixel to be processed belongs to the stationary area, in the stationary-area determining flag associated with the pixel to be processed. The area determining portion 203-2 supplies the stationary-area determining flag to the determining-flag-storing frame memory 204, and the processing proceeds to step S205.
If it is determined in step S202 that the determination result for the pixel of frame #n-1 and the pixel of frame #n is moving, or if it is determined in step S203 that the determination result for the pixel of frame #n and the pixel of frame #n+1 is moving, the pixel to be processed does not belong to the stationary area. Accordingly, the processing of step S204 is skipped, and the processing proceeds to step S205.
In step S205, the stationary/moving determining portion 202-3 determines whether the determination result for the pixel of frame #n-1 and the pixel of frame #n located at the same position is moving. If it is determined to be moving, the processing proceeds to step S206, in which the stationary/moving determining portion 202-2 determines whether the determination result for the pixel of frame #n and the pixel of frame #n+1 located at the same position is moving.
If it is determined in step S206 that the determination result for the pixel of frame #n and the pixel of frame #n+1 located at the same position is moving, the processing proceeds to step S207. In step S207, the area determining portion 203-2 sets "1", which indicates that the pixel to be processed belongs to the moving area, in the moving-area determining flag associated with the pixel to be processed. The area determining portion 203-2 supplies the moving-area determining flag to the determining-flag-storing frame memory 204, and the processing proceeds to step S208.
If it is determined in step S205 that the determination result for the pixel of frame #n-1 and the pixel of frame #n is stationary, or if it is determined in step S206 that the determination result for the pixel of frame #n and the pixel of frame #n+1 is stationary, the pixel of frame #n does not belong to the moving area. Accordingly, the processing of step S207 is skipped, and the processing proceeds to step S208.
In step S208, the stationary/moving determining portion 202-4 determines whether the determination result for the pixel of frame #n-2 and the pixel of frame #n-1 located at the same position is stationary. If it is determined to be stationary, the processing proceeds to step S209, in which the stationary/moving determining portion 202-3 determines whether the determination result for the pixel of frame #n-1 and the pixel of frame #n located at the same position is moving.
If it is determined in step S209 that the determination result for the pixel of frame #n-1 and the pixel of frame #n located at the same position is moving, the processing proceeds to step S210. In step S210, the area determining portion 203-3 sets "1", which indicates that the pixel to be processed belongs to the covered background area, in the covered-background-area determining flag associated with the pixel to be processed. The area determining portion 203-3 supplies the covered-background-area determining flag to the determining-flag-storing frame memory 204, and the processing proceeds to step S211.
If it is determined in step S208 that the determination result for the pixel of frame #n-2 and the pixel of frame #n-1 is moving, or if it is determined in step S209 that the determination result for the pixel of frame #n-1 and the pixel of frame #n is stationary, the pixel of frame #n does not belong to the covered background area. Accordingly, the processing of step S210 is skipped, and the processing proceeds to step S211.
In step S211, the stationary/moving determining portion 202-2 determines whether the determination result for the pixel of frame #n and the pixel of frame #n+1 located at the same position is moving. If it is determined to be moving in step S211, the processing proceeds to step S212, in which the stationary/moving determining portion 202-1 determines whether the determination result for the pixel of frame #n+1 and the pixel of frame #n+2 located at the same position is stationary.
If it is determined in step S212 that the determination result for the pixel of frame #n+1 and the pixel of frame #n+2 located at the same position is stationary, the processing proceeds to step S213. In step S213, the area determining portion 203-1 sets "1", which indicates that the pixel to be processed belongs to the uncovered background area, in the uncovered-background-area determining flag associated with the pixel to be processed. The area determining portion 203-1 supplies the uncovered-background-area determining flag to the determining-flag-storing frame memory 204, and the processing proceeds to step S214.
If it is determined in step S211 that the determination result for the pixel of frame #n and the pixel of frame #n+1 is stationary, or if it is determined in step S212 that the determination result for the pixel of frame #n+1 and the pixel of frame #n+2 is moving, the pixel of frame #n does not belong to the uncovered background area. Accordingly, the processing of step S213 is skipped, and the processing proceeds to step S214.
In step S214, the area specifying unit 103 determines whether the areas have been specified for all the pixels of frame #n. If it is determined that the areas have not yet been specified for all the pixels of frame #n, the processing returns to step S202, and the area specifying processing is repeated for the remaining pixels.
If it is determined in step S214 that the areas have been specified for all the pixels of frame #n, the processing proceeds to step S215. In step S215, the synthesizer 205 generates area information indicating the mixed area according to the uncovered-background-area determining flag and the covered-background-area determining flag stored in the determining-flag-storing frame memory 204, also generates area information indicating to which of the uncovered background area, the stationary area, the moving area, or the covered background area each pixel belongs, and sets the generated area information in the determining-flag-storing frame memory 206. The processing is then completed.
As discussed above, the area specifying unit 103 is able to generate area information indicating to which of the moving area, the stationary area, the uncovered background area, or the covered background area each of the pixels contained in a frame belongs.
The area specifying unit 103 may apply a logical OR to the area information corresponding to the uncovered background area and the area information corresponding to the covered background area so as to generate area information corresponding to the mixed area, and may then generate area information consisting of flags indicating to which of the moving area, the stationary area, or the mixed area each of the pixels contained in the frame belongs, as sketched below.
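A minimal sketch of this simplification (the names are hypothetical):

    # Collapse the covered/uncovered labels into a single mixed-area label,
    # equivalent to a logical OR of the two determining flags.
    def to_three_way_label(label):
        if label in ("covered background area", "uncovered background area"):
            return "mixed area"
        return label   # "moving area" or "stationary area"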
When the object corresponding to the foreground has a texture, the area specifying unit 103 is able to specify the moving area more accurately.
The area specifying unit 103 is able to output the area information indicating the moving area as the area information indicating the foreground area, and the area information indicating the stationary area as the area information indicating the background area.
The embodiment has been described above on the assumption that the object corresponding to the background is stationary. However, the above-described area specifying processing can be applied even if the image corresponding to the background area contains motion. For example, if the image corresponding to the background area is moving uniformly, the area specifying unit 103 shifts the entire image in accordance with this motion, and performs the processing in a manner similar to the case in which the object corresponding to the background is stationary. If the image corresponding to the background area contains locally different motions, the area specifying unit 103 selects the pixels corresponding to the motions, and performs the above-described processing.
Figure 29 illustrates another example of the configuration of the area specifying unit 103. The area specifying unit 103 shown in Figure 29 does not use a motion vector. A background image generator 301 generates a background image corresponding to the input image, and supplies the generated background image to a binary-object-image extracting portion 302. The background image generator 301 extracts, for example, an image object corresponding to a background object contained in the input image, and generates the background image.
Figure 30 illustrates an example of a model obtained by expanding in the time direction the pixel values of the pixels aligned side-by-side in the moving direction of the image corresponding to the foreground object. For example, if the moving direction of the image corresponding to the foreground object is horizontal with respect to the screen, the model shown in Figure 30 is a model obtained by expanding in the time direction the pixel values of the pixels aligned side-by-side in a single line.
In Figure 30, the line in frame #n is the same as the lines in frame #n-1 and frame #n+1.
The foreground components corresponding to the object that are contained in the sixth through seventeenth pixels from the left in frame #n are contained in the second through thirteenth pixels from the left in frame #n-1, and in the tenth through twenty-first pixels from the left in frame #n+1.
In frame #n-1, the pixels belonging to the covered background area are the eleventh through thirteenth pixels from the left, and the pixels belonging to the uncovered background area are the second through fourth pixels from the left. In frame #n, the pixels belonging to the covered background area are the fifteenth through seventeenth pixels from the left, and the pixels belonging to the uncovered background area are the sixth through eighth pixels from the left. In frame #n+1, the pixels belonging to the covered background area are the nineteenth through twenty-first pixels from the left, and the pixels belonging to the uncovered background area are the tenth through twelfth pixels from the left.
In frame #n-1, the pixels belonging to the background area are the first pixel from the left and the fourteenth through twenty-first pixels from the left. In frame #n, the pixels belonging to the background area are the first through fifth pixels from the left and the eighteenth through twenty-first pixels from the left. In frame #n+1, the pixels belonging to the background area are the first through ninth pixels from the left.
Figure 31 illustrates an example of a background image generated by the background image generator 301, corresponding to the example shown in Figure 30. The background image consists of the pixels corresponding to the background object, and does not contain image components corresponding to the foreground object.
The binary-object-image extracting portion 302 generates a binary object image according to the correlation between the background image and the input image, and supplies the generated binary object image to a time change detector 303.
Figure 32 is a block diagram illustrating the configuration of the binary-object-image extracting portion 302. A correlation-value calculator 321 computes the correlation between the background image supplied from the background image generator 301 and the input image so as to generate correlation values, and supplies the correlation values to a thresholding portion 322.
The correlation-value calculator 321 applies equation (4), the normalized correlation shown below, to, for example, a 3×3 block in the background image centered on X_4, as shown in Figure 33A, and to the corresponding 3×3 block in the input image centered on Y_4, as shown in Figure 33B, thereby calculating a correlation value corresponding to Y_4:

\text{correlation} = \frac{\sum_{i=0}^{8} (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=0}^{8} (X_i - \bar{X})^2 \cdot \sum_{i=0}^{8} (Y_i - \bar{Y})^2}}    (4)

\bar{X} = \frac{\sum_{i=0}^{8} X_i}{9}    (5)

\bar{Y} = \frac{\sum_{i=0}^{8} Y_i}{9}    (6)
The correlation-value calculator 321 supplies the correlation value calculated for each pixel, as discussed above, to the thresholding portion 322.
Alternatively, the correlation-value calculator 321 may apply equation (7) to, for example, a 3×3 block in the background image centered on X_4, as shown in Figure 34A, and to the corresponding 3×3 block in the input image centered on Y_4, as shown in Figure 34B, thereby calculating the sum of the absolute values of the differences corresponding to Y_4:

\sum_{i=0}^{8} |X_i - Y_i|    (7)

The correlation-value calculator 321 supplies the sum of the absolute values of the differences calculated as discussed above to the thresholding portion 322 as the correlation value.
The thresholding portion 322 compares the pixel value of the correlation image with a threshold value th0. If the correlation value is less than or equal to the threshold value th0, 1 is set in the pixel value of the binary object image. If the correlation value is greater than the threshold value th0, 0 is set in the pixel value of the binary object image. The thresholding portion 322 then outputs the binary object image in which 0 or 1 is set in the pixel values. The thresholding portion 322 may store the threshold value th0 in advance, or may use a threshold value th0 input from an external source.
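For illustration only, the following NumPy-based Python sketch combines the block correlation of equation (4) with this thresholding (all names are hypothetical, the border pixels are simply left at 0, and flat blocks with zero variance are treated as perfectly correlated, i.e. as background):

    import numpy as np

    def block_correlation(X, Y):
        # Normalized correlation of equation (4) between two 3x3 blocks.
        Xc = X - X.mean()                        # equation (5)
        Yc = Y - Y.mean()                        # equation (6)
        denom = np.sqrt((Xc ** 2).sum() * (Yc ** 2).sum())
        return (Xc * Yc).sum() / denom if denom > 0 else 1.0

    def binary_object_image(background, frame, th0):
        # 1 where the input differs from the background, 0 where it matches.
        h, w = frame.shape
        binary = np.zeros((h, w), dtype=np.uint8)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                X = background[y - 1:y + 2, x - 1:x + 2]
                Y = frame[y - 1:y + 2, x - 1:x + 2]
                binary[y, x] = 1 if block_correlation(X, Y) <= th0 else 0
        return binary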
Figure 35 illustrates the binary object image corresponding to the model of the input image shown in Figure 30. In the binary object image, 0 is set in the pixel values of the pixels having a higher correlation with the background image.
Figure 36 is a block diagram illustrating the configuration of the time change detector 303. When the areas of the pixels of frame #n are determined, a frame memory 341 stores the binary object images of frame #n-1, frame #n, and frame #n+1 supplied from the binary-object-image extracting portion 302.
An area determining portion 342 determines the area of each pixel of frame #n according to the binary object images of frame #n-1, frame #n, and frame #n+1 so as to generate area information, and outputs the generated area information.
Figure 37 illustrates the determinations made by the area determining portion 342. When the designated pixel of the binary object image in frame #n is 0, the area determining portion 342 determines that the designated pixel of frame #n belongs to the background area.
When the designated pixel of the binary object image in frame #n is 1, the corresponding pixel of the binary object image in frame #n-1 is 1, and the corresponding pixel of the binary object image in frame #n+1 is 1, the area determining portion 342 determines that the designated pixel of frame #n belongs to the foreground area.
When the designated pixel of the binary object image in frame #n is 1 and the corresponding pixel of the binary object image in frame #n-1 is 0, the area determining portion 342 determines that the designated pixel of frame #n belongs to the covered background area.
When the designated pixel of the binary object image in frame #n is 1 and the corresponding pixel of the binary object image in frame #n+1 is 0, the area determining portion 342 determines that the designated pixel of frame #n belongs to the uncovered background area.
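The table of Figure 37 thus reduces to a few comparisons per pixel. A minimal Python sketch (hypothetical names; b_prev, b_curr, and b_next are the values of the binary object images of frames #n-1, #n, and #n+1 at the designated pixel):

    # Area determination of Figure 37 from three binary object images.
    def determine_area_from_binary(b_prev, b_curr, b_next):
        if b_curr == 0:
            return "background area"
        if b_prev == 0:
            return "covered background area"     # the object has just arrived
        if b_next == 0:
            return "uncovered background area"   # the object is about to leave
        return "foreground area"                 # 1 in all three frames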
Figure 38 illustrates an example of the determinations made by the time change detector 303 for the binary object image corresponding to the model of the input image shown in Figure 30. The time change detector 303 determines that the first through fifth pixels from the left of frame #n belong to the background area because the corresponding pixels of the binary object image of frame #n are 0.
The time change detector 303 determines that the sixth through ninth pixels from the left belong to the uncovered background area because the pixels of the binary object image of frame #n are 1 and the corresponding pixels of frame #n+1 are 0.
The time change detector 303 determines that the tenth through thirteenth pixels from the left belong to the foreground area because the pixels of the binary object image of frame #n are 1, the corresponding pixels of frame #n-1 are 1, and the corresponding pixels of frame #n+1 are 1.
The time change detector 303 determines that the fourteenth through seventeenth pixels from the left belong to the covered background area because the pixels of the binary object image of frame #n are 1 and the corresponding pixels of frame #n-1 are 0.
The time change detector 303 determines that the eighteenth through twenty-first pixels from the left belong to the background area because the corresponding pixels of the binary object image of frame #n are 0.
The area specifying processing performed by the area specifying unit 103 shown in Figure 29 is described below with reference to the flowchart of Figure 39. In step S301, the background image generator 301 of the area specifying unit 103 extracts, for example, the image object corresponding to the background object contained in the input image according to the input image so as to generate a background image, and supplies the generated background image to the binary-object-image extracting portion 302.
In step S302, the binary-object-image extracting portion 302 calculates the correlation value between the input image and the background image supplied from the background image generator 301 by using, for example, the calculation discussed with reference to Figure 33. In step S303, the binary-object-image extracting portion 302 computes the binary object image from the correlation value and the threshold value th0 by, for example, comparing the correlation value with the threshold value th0.
In step S304, the time change detector 303 performs the area determining processing, and the processing is completed.
The details of the area determining processing in step S304 are described below with reference to the flowchart of Figure 40. In step S321, the area determining portion 342 of the time change detector 303 determines whether the designated pixel of frame #n stored in the frame memory 341 is 0. If it is determined that the designated pixel of frame #n is 0, the processing proceeds to step S322. In step S322, it is determined that the designated pixel of frame #n belongs to the background area, and the processing is completed.
If it is determined in step S321 that the designated pixel of frame #n is 1, the processing proceeds to step S323. In step S323, the area determining portion 342 of the time change detector 303 determines whether the designated pixel of frame #n stored in the frame memory 341 is 1 and whether the corresponding pixel of frame #n-1 is 0. If it is determined that the designated pixel of frame #n is 1 and the corresponding pixel of frame #n-1 is 0, the processing proceeds to step S324. In step S324, it is determined that the designated pixel of frame #n belongs to the covered background area, and the processing is completed.
If it is determined in step S323 that the designated pixel of frame #n is 0 or that the corresponding pixel of frame #n-1 is 1, the processing proceeds to step S325. In step S325, the area determining portion 342 of the time change detector 303 determines whether the designated pixel of frame #n stored in the frame memory 341 is 1 and whether the corresponding pixel of frame #n+1 is 0. If it is determined that the designated pixel of frame #n is 1 and the corresponding pixel of frame #n+1 is 0, the processing proceeds to step S326. In step S326, it is determined that the designated pixel of frame #n belongs to the uncovered background area, and the processing is completed.
If it is determined in step S325 that the designated pixel of frame #n is 0 or that the corresponding pixel of frame #n+1 is 1, the processing proceeds to step S327. In step S327, the area determining portion 342 of the time change detector 303 determines that the designated pixel of frame #n belongs to the foreground area, and the processing is completed.
As discussed above, the area specifying unit 103 is able to specify, according to the correlation value between the input image and the corresponding background image, to which of the foreground area, the background area, the covered background area, or the uncovered background area each pixel of the input image belongs, and to generate area information corresponding to the specified result.
Figure 41 is a block diagram illustrating another configuration of the area specifying unit 103. The area specifying unit 103 shown in Figure 41 uses the motion vector and the positional information thereof supplied from the motion detector 102. The same components in Figure 41 as those shown in Figure 29 are designated by like reference numerals, and an explanation thereof is omitted.
A robust-processing portion 361 generates a robust binary object image according to the binary object images of N frames supplied from the binary-object-image extracting portion 302, and outputs the robust binary object image to the time change detector 303.
Figure 42 is a block diagram illustrating the configuration of the robust-processing portion 361. A motion compensator 381 compensates for the motion of the binary object images of the N frames according to the motion vector and the positional information thereof supplied from the motion detector 102, and outputs the motion-compensated binary object images to a switch 382.
The motion compensation performed by the motion compensator 381 is discussed below with reference to the examples shown in Figures 43 and 44. For example, it is assumed that the areas in frame #n are to be processed. When the binary object images of frame #n-1, frame #n, and frame #n+1 shown in Figure 43 are input, the motion compensator 381 compensates for the motion of the binary object image of frame #n-1 and the binary object image of frame #n+1 according to the motion vector supplied from the motion detector 102, as shown in the example of Figure 44, and supplies the motion-compensated binary object images to the switch 382.
The switch 382 outputs the motion-compensated binary object image of the first frame to a frame memory 383-1, and outputs the motion-compensated binary object image of the second frame to a frame memory 383-2. Similarly, the switch 382 outputs the motion-compensated binary object images of the third through (N-1)-th frames to frame memories 383-3 through 383-(N-1), and outputs the motion-compensated binary object image of the N-th frame to a frame memory 383-N.
The frame memory 383-1 stores the motion-compensated binary object image of the first frame, and outputs the stored binary object image to a weighting portion 384-1. The frame memory 383-2 stores the motion-compensated binary object image of the second frame, and outputs the stored binary object image to a weighting portion 384-2.
Similarly, the frame memories 383-3 through 383-(N-1) store the motion-compensated binary object images of the third through (N-1)-th frames, and output the stored binary object images to weighting portions 384-3 through 384-(N-1). The frame memory 383-N stores the motion-compensated binary object image of the N-th frame, and outputs the stored binary object image to a weighting portion 384-N.
The weighting portion 384-1 multiplies the pixel values of the motion-compensated binary object image of the first frame supplied from the frame memory 383-1 by a predetermined weight w1, and supplies the weighted binary object image to an accumulator 385. The weighting portion 384-2 multiplies the pixel values of the motion-compensated binary object image of the second frame supplied from the frame memory 383-2 by a predetermined weight w2, and supplies the weighted binary object image to the accumulator 385.
Similarly, the weighting portions 384-3 through 384-(N-1) multiply the pixel values of the motion-compensated binary object images of the third through (N-1)-th frames supplied from the frame memories 383-3 through 383-(N-1) by predetermined weights w3 through w(N-1), and supply the weighted binary object images to the accumulator 385. The weighting portion 384-N multiplies the pixel values of the motion-compensated binary object image of the N-th frame supplied from the frame memory 383-N by a predetermined weight wN, and supplies the weighted binary object image to the accumulator 385.
The accumulator 385 accumulates the pixel values of the motion-compensated binary object images of the first through N-th frames multiplied by the weights w1 through wN, and compares the accumulated pixel values with a predetermined threshold th0, thereby generating the binary object image.
As described above, the robust processing portion 361 generates a robust binary object image from the N binary object images and supplies it to the time change detector 303. Accordingly, the area specifying unit 103 configured as shown in Figure 41 can specify areas more accurately than the configuration of Figure 29, even when the input image contains noise.
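A compact sketch of the robust processing of Figure 42 is given below, under the simplifying assumption that the motion compensation performed by the motion compensator 381 can be approximated by integer-pixel shifts; the function name, the weights, and the threshold value are illustrative only.

```python
import numpy as np

def robust_binary_object_image(binary_images, shifts, weights, th0=0.5):
    """binary_images: N binary object images (2-D 0/1 arrays) to be aligned
    to the frame under consideration; shifts: per-frame (dy, dx) offsets
    standing in for the motion compensation of the motion compensator 381;
    weights: the weights w1 through wN; th0: the comparison threshold."""
    acc = np.zeros_like(binary_images[0], dtype=float)
    for image, (dy, dx), w in zip(binary_images, shifts, weights):
        compensated = np.roll(image, shift=(dy, dx), axis=(0, 1))  # integer-pixel shift
        acc += w * compensated          # weighting portions 384-1..384-N, accumulator 385
    return (acc > th0).astype(np.uint8)  # comparison with the threshold
```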
The area specifying processing performed by the area specifying unit 103 configured as shown in Figure 41 is described below with reference to the flowchart of Figure 45. The processing of steps S341 through S343 is similar to that of steps S301 through S303 discussed with reference to the flowchart of Figure 39, and an explanation thereof is thus omitted.
In step S344, the robust processing portion 361 performs the robust processing.
In step S345, the time change detector 303 performs the area determining processing, and the processing is completed. The details of the processing of step S345 are similar to those discussed with reference to the flowchart of Figure 40, and an explanation thereof is thus omitted.
Details of the robust processing corresponding to step S344 in Figure 45 are given below with reference to the flowchart of Figure 46. In step S361, the motion compensator 381 performs motion compensation on the input binary object image according to the motion vector and its positional information supplied from the motion detector 102. In step S362, one of the frame memories 383-1 through 383-N stores the corresponding motion-compensated binary object image supplied via the switch 382.
In step S363, the robust processing portion 361 determines whether N binary object images have been stored. If it is determined that N binary object images have not been stored, the processing returns to step S361, and the motion compensation of a binary object image and the storage of the binary object image are repeated.
If it is determined in step S363 that N binary object images have been stored, the processing proceeds to step S364, in which weighting is performed. In step S364, the weighting portions 384-1 through 384-N multiply the N corresponding binary object images by the weights w1 through wN.
In step S365, the accumulator 385 accumulates the N weighted binary object images.
In step S366, the accumulator 385 generates the binary object image from the accumulated image, for example by comparing the accumulated value with a predetermined threshold th1, and the processing is then completed.
As described above, the area specifying unit 103 configured as shown in Figure 41 can generate area information based on the robust binary object image.
As is seen from the foregoing description, the area specifying unit 103 can generate area information indicating which of the moving area, the stationary area, the uncovered background area, and the covered background area each pixel contained in a frame belongs to.
Figure 47 is a block diagram illustrating an example of the configuration of the mixture ratio calculator 104. An estimated mixture ratio processor 401 calculates an estimated mixture ratio for each pixel by computing a model corresponding to the covered background area based on the input image, and supplies the calculated estimated mixture ratio to a mixture ratio determining portion 403.
An estimated mixture ratio processor 402 calculates an estimated mixture ratio for each pixel by computing a model corresponding to the uncovered background area based on the input image, and supplies the calculated estimated mixture ratio to the mixture ratio determining portion 403.
Since it can be assumed that the object corresponding to the foreground moves with constant velocity within the shutter time, the mixture ratio α of a pixel belonging to a mixed area exhibits the following characteristic: the mixture ratio α changes linearly according to the positional change of the pixel. If the positional change of the pixel is one-dimensional, the change of the mixture ratio α can be represented by a straight line; if the positional change of the pixel is two-dimensional, the change of the mixture ratio α can be represented by a plane.
Since the period of one frame is short, it can also be assumed that the object corresponding to the foreground is a rigid body moving with constant velocity.
The gradient of the mixture ratio α is inversely proportional to the amount of movement v of the foreground within the shutter time.
Figure 48 shows an example of the ideal mixture ratio α. The gradient l of the ideal mixture ratio α in the mixed area can be represented by the reciprocal of the amount of movement v.
As shown in Figure 48, the ideal mixture ratio α has the value 1 in the background area, the value 0 in the foreground area, and a value greater than 0 and less than 1 in the mixed area.
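The shape of the ideal mixture ratio α in Figure 48 can be illustrated by a short sketch; the one-dimensional layout and the parameter names are assumptions of this illustration.

```python
import numpy as np

def ideal_alpha_profile(width, start, v):
    """Ideal mixture ratio alpha along one row: 1 in the background area,
    linear change with gradient 1/v across the mixed area of v pixels,
    0 in the foreground area; `start` is the first mixed-area pixel."""
    alpha = np.ones(width)
    for i in range(v):
        alpha[start + i] = 1.0 - (i + 1) / v
    alpha[start + v:] = 0.0
    return alpha
```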
In the example shown in Figure 49, the pixel value C06 of the seventh pixel from the left in frame #n can be represented by equation (8), using the pixel value P06 of the seventh pixel from the left in frame #n-1:
C06 = B06/v + B06/v + F01/v + F02/v
    = P06/v + P06/v + F01/v + F02/v
    = 2/v·P06 + Σ_{i=1}^{2} Fi/v    (8)
In equation (8), the pixel value C06 is represented by the pixel value M of a pixel in the mixed area, and the pixel value P06 is represented by the pixel value B of a pixel in the background area. That is, the pixel value M of the pixel in the mixed area and the pixel value B of the pixel in the background area can be represented by equations (9) and (10), respectively:
M = C06    (9)
B = P06    (10)
In equation (8), 2/v corresponds to the mixture ratio α. Since the amount of movement v is 4, the mixture ratio α of the seventh pixel from the left in frame #n is 0.5.
As discussed above, the pixel value C of the designated pixel in frame #n is considered to be the pixel value in the mixed area, while the pixel value P of frame #n-1, which precedes frame #n, is considered to be the pixel value in the background area. Accordingly, equation (3) indicating the mixture ratio α can be represented by equation (11):
C = α·P + f    (11)
where f in equation (11) indicates the sum Σ_i Fi/v of the foreground components contained in the designated pixel. The variables contained in equation (11) are two: the mixture ratio α and the sum f of the foreground components.
Similarly, Figure 50 shows a model obtained by expanding the pixel values in the time direction, in which the amount of movement v is 4 and the number of virtual divided portions is therefore 4 in the uncovered background area.
As in the representation of the covered background area, in the uncovered background area the pixel value C of the designated pixel in frame #n is considered to be the pixel value in the mixed area, while the pixel value N of frame #n+1, which follows frame #n, is considered to be the pixel value in the background area. Accordingly, equation (3) indicating the mixture ratio α can be represented by equation (12):
C = α·N + f    (12)
The embodiments described above assume that the background object is stationary. However, equations (8) through (12) can also be applied to the case in which the background object is moving, by using the pixel values of the pixels positioned according to the amount of movement v of the background. It is now assumed, for example, that the amount of movement v of the object corresponding to the background is 2 and that the number of virtual divided portions is 2 in Figure 49. In this case, when the object corresponding to the background is moving toward the right in Figure 49, the pixel value B of the pixel in the background area in equation (10) is represented by the pixel value P04.
Since equations (11) and (12) each contain two variables, the mixture ratio α cannot be determined in this form. In general, however, images have strong spatial correlation, so that pixels located in close proximity to each other have almost the same pixel values.
Since the foreground components are strongly correlated spatially, the equations are modified so that the sum f of the foreground components can be derived from the preceding or the subsequent frame, thereby making it possible to determine the mixture ratio α.
The pixel value Mc of the seventh pixel from the left in frame #n in Figure 51 can be expressed by equation (13):
Mc = 2/v·B06 + Σ_{i=11}^{12} Fi/v    (13)
The first term 2/v on the right side of equation (13) corresponds to the mixture ratio α. The second term on the right side of equation (13) can be expressed by equation (14) by utilizing the pixel values of the subsequent frame #n+1:
Σ_{i=11}^{12} Fi/v = β·Σ_{i=7}^{10} Fi/v    (14)
It is now assumed that equation (15) holds by utilizing the spatial correlation of the foreground components:
F=F05=F06=F07=F08=F09=F10=F11=F12 (15)
By using equation (15), equation (14) can be modified into equation (16):
Σ_{i=11}^{12} Fi/v = 2/v·F
                   = β·4/v·F    (16)
As a result, β can be expressed by equation (17):
β=2/4 (17)
If it is assumed that the foreground components in the mixed area are equal, as indicated by equation (15), equation (18) holds for all the pixels in the mixed area because of the internal ratio:
β=1-α (18)
If equation (18) holds, equation (11) can be developed into equation (19):
C = α·P + f
  = α·P + (1-α)·Σ_{i=γ}^{γ+v-1} Fi/v
  = α·P + (1-α)·N    (19)
Similarly, if equation (18) holds, equation (12) can be developed into equation (20):
C = α·N + f
  = α·N + (1-α)·Σ_{i=γ}^{γ+v-1} Fi/v
  = α·N + (1-α)·P    (20)
In equations (19) and (20), since C, N, and P are known pixel values, the only variable contained in equations (19) and (20) is the mixture ratio α. Figure 52 shows the relationship among C, N, and P in equations (19) and (20): C is the pixel value of the designated pixel in frame #n for which the mixture ratio α is calculated; N is the pixel value of the pixel in frame #n+1 located at the spatial position corresponding to that of the designated pixel; and P is the pixel value of the pixel in frame #n-1 located at the spatial position corresponding to that of the designated pixel.
Since each of equations (19) and (20) contains one variable, the mixture ratio α can be calculated by utilizing the pixel values of the three frames. The condition for obtaining the correct mixture ratio α by solving equations (19) and (20) is that the foreground components in the mixed area are equal, i.e., that in the foreground image object captured while the foreground object is stationary, the pixel values of the consecutive pixels positioned at the boundary of the image object in the direction of motion of the foreground object, the number of which is twice the amount of movement v, are constant.
As described above, the mixture ratio α of a pixel belonging to the covered background area is calculated by equation (21), and the mixture ratio α of a pixel belonging to the uncovered background area is calculated by equation (22):
α=(C-N)/(P-N) (21)
α=(C-P)/(N-P) (22)
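Equations (21) and (22) reduce to simple per-pixel arithmetic, sketched below under the assumption of a stationary background so that C, P, and N are taken at the same position in frames #n, #n-1, and #n+1; the epsilon guard against a zero denominator is an addition of this sketch.

```python
def estimate_alpha_covered(C, P, N, eps=1e-8):
    """Equation (21): estimated mixture ratio for a pixel of the covered
    background area; C, P, N are the pixel values at the same position in
    frames #n, #n-1 and #n+1."""
    return (C - N) / (P - N + eps)

def estimate_alpha_uncovered(C, P, N, eps=1e-8):
    """Equation (22): estimated mixture ratio for a pixel of the uncovered
    background area."""
    return (C - P) / (N - P + eps)
```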
Figure 53 is a block diagram illustrating the configuration of the estimated mixture ratio processor 401. A frame memory 421 stores the input images in units of frames and, with a delay of one frame, supplies the frame immediately preceding the currently input frame to a frame memory 422 and a mixture ratio calculator 423.
The frame memory 422 stores the input images in units of frames and, with a further delay of one frame, supplies the frame immediately preceding the frame supplied from the frame memory 421 to the mixture ratio calculator 423.
Accordingly, when frame #n+1 is being input to the mixture ratio calculator 423 as the input image, the frame memory 421 supplies frame #n to the mixture ratio calculator 423, and the frame memory 422 supplies frame #n-1 to the mixture ratio calculator 423.
The mixture ratio calculator 423 calculates the estimated mixture ratio of the designated pixel by solving equation (21) based on the pixel value C of the designated pixel in frame #n, the pixel value N of the pixel in frame #n+1 located at the position corresponding to that of the designated pixel, and the pixel value P of the pixel in frame #n-1 located at the position corresponding to that of the designated pixel, and outputs the calculated estimated mixture ratio. More specifically, when the background is stationary, the mixture ratio calculator 423 calculates the estimated mixture ratio of the designated pixel based on the pixel value C of the designated pixel in frame #n, the pixel value N of the pixel in frame #n+1 located at the same position as the designated pixel, and the pixel value P of the pixel in frame #n-1 located at the same position as the designated pixel, and outputs the calculated estimated mixture ratio.
In this manner, the estimated mixture ratio processor 401 calculates the estimated mixture ratio based on the input image, and supplies it to the mixture ratio determining portion 403.
The operation of the estimated mixture ratio processor 402 is similar to that of the estimated mixture ratio processor 401, except that the estimated mixture ratio processor 402 calculates the estimated mixture ratio of the designated pixel by solving equation (22). An explanation of the estimated mixture ratio processor 402 is thus omitted.
Figure 54 illustrates an example of the estimated mixture ratio calculated by the estimated mixture ratio processor 401. The estimated mixture ratio shown in Figure 54 is the result, represented by one line, obtained when the amount of movement v of the foreground object moving with constant velocity is 11.
It can be seen that, as in Figure 48, the estimated mixture ratio changes almost linearly in the mixed area.
Referring back to Figure 47, the mixture ratio determining portion 403 sets the mixture ratio α based on the area information supplied from the area specifying unit 103, which indicates to which of the foreground area, the background area, the covered background area, and the uncovered background area the pixel for which the mixture ratio α is to be calculated belongs. The mixture ratio determining portion 403 sets the mixture ratio α to 0 when the corresponding pixel belongs to the foreground area, and sets it to 1 when the corresponding pixel belongs to the background area. When the corresponding pixel belongs to the covered background area, the mixture ratio determining portion 403 sets the mixture ratio α to the estimated mixture ratio supplied from the estimated mixture ratio processor 401; when the corresponding pixel belongs to the uncovered background area, it sets the mixture ratio α to the estimated mixture ratio supplied from the estimated mixture ratio processor 402. The mixture ratio determining portion 403 outputs the mixture ratio α set based on the area information.
Figure 55 is a block diagram illustrating another configuration of the mixture ratio calculator 104. Based on the area information supplied from the area specifying unit 103, a selector 441 supplies the pixels belonging to the covered background area, together with the corresponding pixels in the preceding and subsequent frames, to an estimated mixture ratio processor 442. Based on the area information supplied from the area specifying unit 103, the selector 441 supplies the pixels belonging to the uncovered background area, together with the corresponding pixels in the preceding and subsequent frames, to an estimated mixture ratio processor 443.
The estimated mixture ratio processor 442 calculates the estimated mixture ratio of the designated pixel belonging to the covered background area by the computation expressed by equation (21) based on the pixel values input from the selector 441, and supplies the calculated estimated mixture ratio to a selector 444.
The estimated mixture ratio processor 443 calculates the estimated mixture ratio of the designated pixel belonging to the uncovered background area by the computation expressed by equation (22) based on the pixel values input from the selector 441, and supplies the calculated estimated mixture ratio to the selector 444.
Based on the area information supplied from the area specifying unit 103, the selector 444 selects the estimated mixture ratio 0 and sets it as the mixture ratio α when the designated pixel belongs to the foreground area, and selects the estimated mixture ratio 1 and sets it as the mixture ratio α when the designated pixel belongs to the background area. When the designated pixel belongs to the covered background area, the selector 444 selects the estimated mixture ratio supplied from the estimated mixture ratio processor 442 and sets it as the mixture ratio α; when the designated pixel belongs to the uncovered background area, it selects the estimated mixture ratio supplied from the estimated mixture ratio processor 443 and sets it as the mixture ratio α. The selector 444 then outputs the mixture ratio α selected and set based on the area information.
As described above, the mixture ratio calculator 104 configured as shown in Figure 55 can calculate the mixture ratio α for each pixel contained in the image, and output the calculated mixture ratio α.
The mixture ratio α calculating processing performed by the mixture ratio calculator 104 shown in Figure 47 is discussed below with reference to the flowchart of Figure 56. In step S401, the mixture ratio calculator 104 obtains the area information supplied from the area specifying unit 103. In step S402, the estimated mixture ratio processor 401 performs the mixture ratio estimating processing by using the model corresponding to the covered background area, and supplies the estimated mixture ratio to the mixture ratio determining portion 403. Details of the mixture ratio estimating processing are discussed below with reference to Figure 57.
In step S403, the estimated mixture ratio processor 402 performs the mixture ratio estimating processing by using the model corresponding to the uncovered background area, and supplies the estimated mixture ratio to the mixture ratio determining portion 403.
In step S404, the mixture ratio calculator 104 determines whether the mixture ratios have been estimated for the whole frame. If it is determined that the mixture ratios have not yet been estimated for the whole frame, the processing returns to step S402, and the mixture ratio estimating processing is performed for the subsequent pixel.
If it is determined in step S404 that the mixture ratios have been estimated for the whole frame, the processing proceeds to step S405. In step S405, the mixture ratio determining portion 403 sets the mixture ratio α based on the area information supplied from the area specifying unit 103, which indicates to which of the foreground area, the background area, the covered background area, and the uncovered background area the pixel for which the mixture ratio α is to be calculated belongs. The mixture ratio determining portion 403 sets the mixture ratio α to 0 when the corresponding pixel belongs to the foreground area, and sets it to 1 when the corresponding pixel belongs to the background area. When the corresponding pixel belongs to the covered background area, the mixture ratio determining portion 403 sets the estimated mixture ratio supplied from the estimated mixture ratio processor 401 as the mixture ratio α; when the corresponding pixel belongs to the uncovered background area, it sets the estimated mixture ratio supplied from the estimated mixture ratio processor 402 as the mixture ratio α. The processing is then completed.
As described above, the mixture ratio calculator 104 can calculate the mixture ratio α, which is a feature quantity corresponding to each pixel, based on the input image and the area information supplied from the area specifying unit 103.
The processing for calculating the mixture ratio α performed by the mixture ratio calculator 104 shown in Figure 55 is similar to that discussed with reference to Figure 56, and an explanation thereof is thus omitted.
The mixture ratio estimating processing performed in step S402 of Figure 56 by using the model corresponding to the covered background area is described below with reference to the flowchart of Figure 57.
In step S421, the mixture ratio calculator 423 obtains the pixel value C of the designated pixel in frame #n from the frame memory 421.
In step S422, the mixture ratio calculator 423 obtains from the frame memory 422 the pixel value P of the pixel in frame #n-1 corresponding to the designated pixel contained in the input image.
In step S423, the mixture ratio calculator 423 obtains the pixel value N of the pixel in frame #n+1 corresponding to the designated pixel contained in the input image.
In step S424, the mixture ratio calculator 423 calculates the estimated mixture ratio based on the pixel value C of the designated pixel in frame #n, the pixel value P of the pixel in frame #n-1, and the pixel value N of the pixel in frame #n+1.
In step S425, the mixture ratio calculator 423 determines whether the processing for calculating the estimated mixture ratio has been completed for the whole frame. If it is determined that the processing has not been completed for the whole frame, the processing returns to step S421, and the processing for calculating the estimated mixture ratio is repeated for the subsequent pixel.
If it is determined in step S425 that the processing for calculating the estimated mixture ratio has been completed for the whole frame, the processing is completed.
As described above, the estimated mixture ratio processor 401 can calculate the estimated mixture ratio based on the input image.
The mixture ratio estimating processing performed in step S403 of Figure 56 by using the model corresponding to the uncovered background area is similar to the processing indicated by the flowchart of Figure 57, except that the model corresponding to the uncovered background area is used, and an explanation thereof is thus omitted.
The estimated mixture ratio processor 442 and the estimated mixture ratio processor 443 shown in Figure 55 calculate the estimated mixture ratios by performing processing similar to that of the flowchart of Figure 57, and an explanation thereof is thus omitted.
The embodiments described above assume that the object corresponding to the background is stationary. However, the above-described processing for determining the mixture ratio α can be applied even if the image corresponding to the background area contains motion. For example, if the image corresponding to the background area is moving uniformly, the estimated mixture ratio processor 401 shifts the whole image according to the motion of the background, and performs processing in a manner similar to the case in which the object corresponding to the background is stationary. If the image corresponding to the background area contains locally different motions, the estimated mixture ratio processor 401 selects the pixels corresponding to the motions as the pixels belonging to the mixed area, and performs the above-described processing.
Alternatively, the mixture ratio calculator 104 may perform the mixture ratio estimating processing for all the pixels only by using the model corresponding to the covered background area, and output the calculated estimated mixture ratio as the mixture ratio α. In this case, the mixture ratio α indicates the ratio of the background components for the pixels belonging to the covered background area, and the ratio of the foreground components for the pixels belonging to the uncovered background area. For the pixels belonging to the uncovered background area, the absolute value of the difference between the mixture ratio α and 1 is calculated, and the calculated absolute value is set as the mixture ratio α. Then, the signal processing apparatus can determine the mixture ratio α indicating the ratio of the background components for the pixels belonging to the uncovered background area.
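A minimal sketch of this single-model variant, assuming that an estimated mixture ratio has already been computed for every pixel with the covered background area model and that the area information provides a mask of the uncovered background pixels:

```python
import numpy as np

def alpha_from_single_model(alpha_est, uncovered_mask):
    """alpha_est: estimated mixture ratios computed for every pixel with the
    covered background area model (equation (21)); uncovered_mask: boolean
    array marking the pixels of the uncovered background area."""
    alpha = alpha_est.copy()
    alpha[uncovered_mask] = np.abs(1.0 - alpha[uncovered_mask])  # |alpha - 1|
    return alpha
```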
Similarly, the mixture ratio calculator 104 may perform the mixture ratio estimating processing for all the pixels only by using the model corresponding to the uncovered background area, and output the calculated estimated mixture ratio as the mixture ratio α.
Another type of processing performed by the mixture ratio calculator 104 is discussed below.
The mixture ratio α changes linearly according to the positional change of the pixels, because the object corresponding to the foreground moves with constant velocity within the shutter time. By utilizing this characteristic, an equation that approximates the mixture ratio α and the sum f of the foreground components in the spatial direction can be established. By utilizing a plurality of sets consisting of the pixel values of pixels belonging to the mixed area and the pixel values of pixels belonging to the background area, the equation approximating the mixture ratio α and the sum f of the foreground components can be solved, thereby calculating the mixture ratio α.
When the change of the mixture ratio α is approximated by a straight line, the mixture ratio α can be represented by equation (23):
α=il+p (23)
In equation (23), i is the spatial index when the position of the designated pixel is set to 0, l is the gradient of the straight line of the mixture ratio α, and p is the intercept of the straight line, which is also the mixture ratio α of the designated pixel. In equation (23), the index i is known, while the gradient l and the intercept p are unknown.
Figure 58 shows the relationship among the index i, the gradient l, and the intercept p.
By approximating the mixture ratio α according to equation (23), a plurality of different mixture ratios α for a plurality of pixels can be expressed by two variables. In the example shown in Figure 58, the five mixture ratios of five pixels are expressed by the two variables, i.e., the gradient l and the intercept p.
When the mixture ratio α is approximated in a plane as shown in Figure 59, equation (23) is expanded into the plane by considering the amount of movement v in two directions, i.e., the horizontal direction and the vertical direction of the image, and the mixture ratio α can be expressed by equation (24):
α=jm+kq+p (24)
In equation (24), j is the index in the horizontal direction and k is the index in the vertical direction when the position of the designated pixel is set to 0; m is the horizontal gradient of the mixture ratio α in the plane, q is the vertical gradient of the mixture ratio α in the plane, and p is the intercept of the mixture ratio α in the plane.
For example, in frame #n shown in Figure 49, equations (25) through (27) hold for C05 through C07, respectively:
C05=α05·B05/v+F05 (25)
C06=α06·B06/v+F06 (26)
C07=α07·B07/v+F07 (27)
It is now assumed that the foreground components positioned in close proximity to each other are equal to each other, i.e., that F01 through F03 are equal. Then, equation (28) holds by replacing F01 through F03 with Fc:
f(x)=(1-α(x))·Fc (28)
In equation (28), x indicates the position in the spatial direction.
When α(x) is replaced by equation (24), equation (28) can be expressed by equation (29):
f(x) = (1 - (jm + kq + p))·Fc
     = j·(-m·Fc) + k·(-q·Fc) + ((1-p)·Fc)
     = js + kt + u    (29)
In equation (29), (-m·Fc), (-q·Fc), and (1-p)·Fc are replaced by s, t, and u, respectively, as expressed by equations (30) through (32):
s=-m·Fc (30)
t=-q·Fc (31)
u=(1-p)·Fc (32)
In equation (29), j is the index in the horizontal direction and k is the index in the vertical direction when the position of the designated pixel is set to 0.
As described above, since it can be assumed that the object corresponding to the foreground moves with constant velocity within the shutter time and that the foreground components positioned in close proximity to each other are uniform, the sum of the foreground components can be approximated by equation (29).
When the mixture ratio α is approximated by a straight line, the sum of the foreground components can be expressed by equation (33):
f(x)=is+u (33)
By replacing the mixture ratio α and the sum of the foreground components in equation (13) with equations (24) and (29), the pixel value M can be expressed by equation (34):
M=(jm+kq+p)·B+js+kt+u
=jB·m+kB·q+B·p+j·s+k·t+u (34)
In equation (34), the unknown variables are six: the horizontal gradient m of the mixture ratio α in the plane, the vertical gradient q of the mixture ratio α in the plane, the intercept p of the mixture ratio α in the plane, and s, t, and u.
The pixel value M and the pixel value B are set in equation (34) for a plurality of pixels in close proximity to the designated pixel, and the plurality of equations in which the pixel values M and B are set are solved by the least squares method, thereby calculating the mixture ratio α.
It is now assumed, for example, that the horizontal index j of the designated pixel is set to 0 and the vertical index k of the designated pixel is set to 0. In this case, when the pixel values M and B are set in the normal equation expressed by equation (34) for the 3×3 pixels positioned in close proximity to the designated pixel, equations (35) through (43) are obtained:
M_{-1,-1} = (-1)·B_{-1,-1}·m + (-1)·B_{-1,-1}·q + B_{-1,-1}·p + (-1)·s + (-1)·t + u    (35)
M_{0,-1} = (0)·B_{0,-1}·m + (-1)·B_{0,-1}·q + B_{0,-1}·p + (0)·s + (-1)·t + u    (36)
M_{+1,-1} = (+1)·B_{+1,-1}·m + (-1)·B_{+1,-1}·q + B_{+1,-1}·p + (+1)·s + (-1)·t + u    (37)
M_{-1,0} = (-1)·B_{-1,0}·m + (0)·B_{-1,0}·q + B_{-1,0}·p + (-1)·s + (0)·t + u    (38)
M_{0,0} = (0)·B_{0,0}·m + (0)·B_{0,0}·q + B_{0,0}·p + (0)·s + (0)·t + u    (39)
M_{+1,0} = (+1)·B_{+1,0}·m + (0)·B_{+1,0}·q + B_{+1,0}·p + (+1)·s + (0)·t + u    (40)
M_{-1,+1} = (-1)·B_{-1,+1}·m + (+1)·B_{-1,+1}·q + B_{-1,+1}·p + (-1)·s + (+1)·t + u    (41)
M_{0,+1} = (0)·B_{0,+1}·m + (+1)·B_{0,+1}·q + B_{0,+1}·p + (0)·s + (+1)·t + u    (42)
M_{+1,+1} = (+1)·B_{+1,+1}·m + (+1)·B_{+1,+1}·q + B_{+1,+1}·p + (+1)·s + (+1)·t + u    (43)
Since the horizontal index j of the designated pixel is 0 and the vertical index k of the designated pixel is 0, the mixture ratio α of the designated pixel is equal to the value of equation (24) when j is 0 and k is 0, i.e., the mixture ratio α is equal to the intercept p in equation (24).
Accordingly, the horizontal gradient m, the vertical gradient q, and the intercepts p, s, t, and u are calculated by the least squares method based on the nine equations (35) through (43), and the intercept p is output as the mixture ratio α.
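The least squares solution of equations (35) through (43) can be sketched as follows; a general-purpose least squares solver stands in here for the sweep-out method developed below, and the array names are assumptions of this illustration.

```python
import numpy as np

def estimate_alpha_plane(M3, B3):
    """M3, B3: 3x3 arrays of mixed-area pixel values (frame #n) and the
    corresponding background pixel values (frame #n-1 for the covered
    background area case). Fits M = jB*m + kB*q + B*p + j*s + k*t + u and
    returns the intercept p as the mixture ratio of the centre pixel."""
    rows, targets = [], []
    for k in (-1, 0, 1):          # vertical index
        for j in (-1, 0, 1):      # horizontal index
            B = B3[k + 1, j + 1]
            rows.append([j * B, k * B, B, j, k, 1.0])
            targets.append(M3[k + 1, j + 1])
    # w = (m, q, p, s, t, u); lstsq stands in for the sweep-out method
    w, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return w[2]                   # intercept p = mixture ratio alpha
```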
A specific process for calculating the mixture ratio α by applying the least squares method is described below.
When the index j and the index k are expressed by a single index x, the relationship among the index j, the index k, and the index x can be expressed by equation (44):
x=(j+1)·3+(k+1) (44)
It is now assumed that the horizontal gradient m, the vertical gradient q, and the intercepts p, s, t, and u are expressed by variables w0, w1, w2, w3, w4, and w5, respectively, and that jB, kB, B, j, k, and 1 are expressed by a0, a1, a2, a3, a4, and a5, respectively. In consideration of the error ex, equations (35) through (43) can be modified into equation (45):
Mx = Σ_{y=0}^{5} ay·wy + ex    (45)
In equation (45), x is one of the integers from 0 to 8.
Equation (46) can be obtained from equation (45):
ex = Mx - Σ_{y=0}^{5} ay·wy    (46)
Since the least squares method is applied, the square sum E of the errors is defined by equation (47):
E = Σ_{x=0}^{8} ex²    (47)
To minimize the errors, the partial differential value of the square sum E of the errors with respect to the variable wv should be 0, where v is one of the integers from 0 to 5. Thus, wy is determined so as to satisfy equation (48):
∂E/∂wv = 2·Σ_{x=0}^{8} ex·(∂ex/∂wv)
       = 2·Σ_{x=0}^{8} ex·av = 0    (48)
By substituting equation (46) into equation (48), equation (49) is obtained:
Σ_{x=0}^{8} (av·Σ_{y=0}^{5} ay·wy) = Σ_{x=0}^{8} av·Mx    (49)
For example, the sweep-out method (Gauss-Jordan elimination) is applied to the normal equations consisting of six equations, which are obtained by substituting the integers 0 through 5 for v in equation (49), thereby determining wy. As stated above, w0 is the horizontal gradient m, w1 is the vertical gradient q, w2 is the intercept p, w3 is s, w4 is t, and w5 is u.
As described above, by applying the least squares method to the equations in which the pixel values M and the pixel values B are set, the horizontal gradient m, the vertical gradient q, and the intercepts p, s, t, and u can be determined.
The intercept p is output as the mixture ratio α, since it is the mixture ratio α at the point where the indices j and k are 0, i.e., at the center position.
In the explanation given with reference to equations (35) through (43), the pixel value of the pixel contained in the mixed area is M and the pixel value of the pixel contained in the background area is B. In this case, it is necessary to set normal equations for each of the case in which the designated pixel is contained in the covered background area and the case in which it is contained in the uncovered background area.
For example, when the mixture ratio α of a pixel contained in the covered background area of frame #n shown in Figure 49 is determined, the pixel values C04 through C08 of the pixels of frame #n and the pixel values P04 through P08 of the pixels of frame #n-1 are set in the normal equations.
When the mixture ratio α of a pixel contained in the uncovered background area of frame #n shown in Figure 50 is determined, the pixel values C28 through C32 of the pixels of frame #n and the pixel values N28 through N32 of the pixels of frame #n+1 are set in the normal equations.
Moreover, if the mixture ratio α of a pixel contained in the covered background area shown in Figure 60 is calculated, equations (50) through (58) below are set. The pixel value of the pixel for which the mixture ratio α is calculated is Mc5:
Mc1=(-1)·Bc1·m+(-1)·Bc1·q+Bc1·p+(-1)·s+(-1)·t+u (50)
Mc2=(0)·Bc2·m+(-1)·Bc2·q+Bc2·p+(0)·s+(-1)·t+u (51)
Mc3=(+1)·Bc3·m+(-1)·Bc3·q+Bc3·p+(+1)·s+(-1)·t+u (52)
Mc4=(-1)·Bc4·m+(0)·Bc4·q+Bc4·p+(-1)·s+(0)·t+u (53)
Mc5=(0)·Bc5·m+(0)·Bc5·q+Bc5·p+(0)·s+(0)·t+u (54)
Mc6=(+1)·Bc6·m+(0)·Bc6·q+Bc6·p+(+1)·s+(0)·t+u (55)
Mc7=(-1)·Bc7·m+(+1)·Bc7·q+Bc7·p+(-1)·s+(+1)·t+u (56)
Mc8=(0)·Bc8·m+(+1)·Bc8·q+Bc8·p+(0)·s+(+1)·t+u (57)
Mc9=(+1)·Bc9·m+(+1)·Bc9·q+Bc9·p+(+1)·s+(+1)·t+u (58)
When the mixture ratio α of a pixel contained in the covered background area of frame #n is calculated, the pixel values Bc1 through Bc9 of the pixels in the background area of frame #n-1 corresponding to the pixels of frame #n are used in equations (50) through (58).
If, for example, the mixture ratio α of a pixel contained in the uncovered background area shown in Figure 60 is calculated, equations (59) through (67) below are set. The pixel value of the pixel for which the mixture ratio α is calculated is Mu5:
Mu1=(-1)·Bu1·m+(-1)·Bu1·q+Bu1·p+(-1)·s+(-1)·t+u (59)
Mu2=(0)·Bu2·m+(-1)·Bu2·q+Bu2·p+(0)·s+(-1)·t+u (60)
Mu3=(+1)·Bu3·m+(-1)·Bu3·q+Bu3·p+(+1)·s+(-1)·t+u (61)
Mu4=(-1)·Bu4·m+(0)·Bu4·q+Bu4·p+(-1)·s+(0)·t+u (62)
Mu5=(0)·Bu5·m+(0)·Bu5·q+Bu5·p+(0)·s+(0)·t+u (63)
Mu6=(+1)·Bu6·m+(0)·Bu6·q+Bu6·p+(+1)·s+(0)·t+u (64)
Mu7=(-1)·Bu7·m+(+1)·Bu7·q+Bu7·p+(-1)·s+(+1)·t+u (65)
Mu8=(0)·Bu8·m+(+1)·Bu8·q+Bu8·p+(0)·s+(+1)·t+u (66)
Mu9=(+1)·Bu9·m+(+1)·Bu9·q+Bu9·p+(+1)·s+(+1)·t+u (67)
When the mixture ratio α of a pixel contained in the uncovered background area of frame #n is calculated, the pixel values Bu1 through Bu9 of the pixels in the background area of frame #n+1 corresponding to the pixels of frame #n are used in equations (59) through (67).
Figure 61 is a block diagram illustrating the configuration of the estimated mixture ratio processor 401. The image input to the estimated mixture ratio processor 401 is supplied to a delay circuit 501 and an adder 502.
The delay circuit 501 delays the input image by one frame, and supplies the delayed image to the adder 502. When frame #n is supplied to the adder 502 as the input image, the delay circuit 501 supplies frame #n-1 to the adder 502.
The adder 502 sets, in the normal equations, the pixel values of the pixels adjacent to the pixel for which the mixture ratio α is calculated and the pixel values of frame #n-1. For example, the adder 502 sets the pixel values Mc1 through Mc9 and the pixel values Bc1 through Bc9 in the normal equations according to equations (50) through (58). The adder 502 supplies the normal equations in which the pixel values are set to a calculator 503.
The calculator 503 determines the estimated mixture ratio by solving the normal equations supplied from the adder 502 by, for example, the sweep-out method, and outputs the determined estimated mixture ratio.
In this manner, the estimated mixture ratio processor 401 can calculate the estimated mixture ratio based on the input image, and supply it to the mixture ratio determining portion 403.
The configuration of the estimated mixture ratio processor 402 is similar to that of the estimated mixture ratio processor 401, and an explanation thereof is thus omitted.
Figure 62 illustrates an example of the estimated mixture ratio calculated by the estimated mixture ratio processor 401. The estimated mixture ratio shown in Figure 62 is the result, represented by one line, obtained when the amount of movement v of the foreground object moving with constant velocity is 11 and the equations are generated in units of 7×7-pixel blocks.
As in Figure 48, the estimated mixture ratio changes almost linearly in the mixed area.
The mixture ratio determining portion 403 sets the mixture ratio based on the area information supplied from the area specifying unit 103, which indicates to which of the foreground area, the background area, the covered background area, and the uncovered background area the pixel for which the mixture ratio α is to be calculated belongs. The mixture ratio determining portion 403 sets the mixture ratio to 0 when the corresponding pixel belongs to the foreground area, and sets it to 1 when the corresponding pixel belongs to the background area. When the corresponding pixel belongs to the covered background area, the mixture ratio determining portion 403 sets the mixture ratio to the estimated mixture ratio supplied from the estimated mixture ratio processor 401; when the corresponding pixel belongs to the uncovered background area, it sets the mixture ratio to the estimated mixture ratio supplied from the estimated mixture ratio processor 402. The mixture ratio determining portion 403 outputs the mixture ratio set based on the area information.
The mixture ratio calculating processing performed by the mixture ratio calculator 104 when the estimated mixture ratio processor is configured as shown in Figure 61 is discussed below with reference to the flowchart of Figure 63. In step S501, the mixture ratio calculator 104 obtains the area information supplied from the area specifying unit 103. In step S502, the estimated mixture ratio processor 401 performs the mixture ratio estimating processing by using the model corresponding to the covered background area, and supplies the estimated mixture ratio to the mixture ratio determining portion 403. Details of the mixture ratio estimating processing are discussed below with reference to the flowchart of Figure 64.
In step S503, the estimated mixture ratio processor 402 performs the mixture ratio estimating processing by using the model corresponding to the uncovered background area, and supplies the estimated mixture ratio to the mixture ratio determining portion 403.
In step S504, the mixture ratio calculator 104 determines whether the mixture ratios have been estimated for the whole frame. If it is determined that the mixture ratios have not yet been estimated for the whole frame, the processing returns to step S502, and the mixture ratio estimating processing is performed for the subsequent pixel.
If it is determined in step S504 that the mixture ratios have been estimated for the whole frame, the processing proceeds to step S505. In step S505, the mixture ratio determining portion 403 sets the mixture ratio based on the area information supplied from the area specifying unit 103, which indicates to which of the foreground area, the background area, the covered background area, and the uncovered background area the pixel for which the mixture ratio α is to be calculated belongs. The mixture ratio determining portion 403 sets the mixture ratio α to 0 when the corresponding pixel belongs to the foreground area, and sets it to 1 when the corresponding pixel belongs to the background area. When the corresponding pixel belongs to the covered background area, the mixture ratio determining portion 403 sets the estimated mixture ratio supplied from the estimated mixture ratio processor 401 as the mixture ratio α; when the corresponding pixel belongs to the uncovered background area, it sets the estimated mixture ratio supplied from the estimated mixture ratio processor 402 as the mixture ratio α. The processing is then completed.
As described above, the mixture ratio calculator 104 can calculate the mixture ratio α, which is a feature quantity corresponding to each pixel, based on the input image and the area information supplied from the area specifying unit 103.
By utilizing the mixture ratio α, the foreground components and the background components contained in the pixel values can be separated while the information about motion blur contained in the image corresponding to the moving object is maintained.
By combining images based on the mixture ratio α, it is also possible to create an image that contains natural motion blur matching the speed of a moving object, as if it faithfully captured the real world.
The mixture ratio estimating processing performed in step S502 of Figure 63 by using the model corresponding to the covered background area is described below with reference to the flowchart of Figure 64.
In step S521, the adder 502 sets, in the normal equations corresponding to the model of the covered background area, the pixel values contained in the input image and the pixel values contained in the image supplied from the delay circuit 501.
In step S522, the estimated mixture ratio processor 401 determines whether the setting of the target pixels has been completed. If it is determined that the setting of the target pixels has not been completed, the processing returns to step S521, and the processing for setting the pixel values in the normal equations is repeated.
If it is determined in step S522 that the setting of the target pixels has been completed, the processing proceeds to step S523. In step S523, the calculator 503 calculates the estimated mixture ratio based on the normal equations in which the pixel values are set, and outputs the calculated estimated mixture ratio.
As described above, the estimated mixture ratio processor 401 can calculate the estimated mixture ratio based on the input image.
The mixture ratio estimating processing performed in step S503 of Figure 63 by using the model corresponding to the uncovered background area is similar to the processing indicated by the flowchart of Figure 64 using the normal equations corresponding to the model of the uncovered background area, and an explanation thereof is thus omitted.
The embodiment described above assumes that the object corresponding to the background is stationary. However, the above-described mixture ratio calculating processing can be applied even if the image corresponding to the background area contains motion. For example, if the image corresponding to the background area is moving uniformly, the estimated mixture ratio processor 401 shifts the whole image according to this motion, and performs processing in a manner similar to the case in which the object corresponding to the background is stationary. If the image corresponding to the background area contains locally different motions, the estimated mixture ratio processor 401 selects the pixels corresponding to the motions as the pixels belonging to the mixed area, and performs the above-described processing.
The foreground/background separator 105 is discussed below. Figure 65 is a block diagram illustrating an example of the configuration of the foreground/background separator 105. The input image supplied to the foreground/background separator 105 is supplied to a separating portion 601, a switch 602, and a switch 604. The area information supplied from the area specifying unit 103 indicating the covered background area and the uncovered background area is supplied to the separating portion 601. The area information indicating the foreground area is supplied to the switch 602. The area information indicating the background area is supplied to the switch 604.
The mixture ratio α supplied from the mixture ratio calculator 104 is supplied to the separating portion 601.
The separating portion 601 separates the foreground components from the input image based on the area information indicating the covered background area, the area information indicating the uncovered background area, and the mixture ratio α, and supplies the separated foreground components to a synthesizer 603. The separating portion 601 also separates the background components from the input image, and supplies the separated background components to a synthesizer 605.
The switch 602 is closed when a pixel corresponding to the foreground is input according to the area information indicating the foreground area, and supplies only the pixels corresponding to the foreground contained in the input image to the synthesizer 603.
The switch 604 is closed when a pixel corresponding to the background is input according to the area information indicating the background area, and supplies only the pixels corresponding to the background contained in the input image to the synthesizer 605.
The synthesizer 603 synthesizes a foreground component image based on the foreground components supplied from the separating portion 601 and the pixels corresponding to the foreground supplied from the switch 602, and outputs the synthesized foreground component image. Since the foreground area and the mixed area do not overlap, the synthesizer 603 applies, for example, a logical OR to the foreground components and the foreground pixels, thereby synthesizing the foreground component image.
In the initializing processing performed at the start of the foreground component image synthesizing processing, the synthesizer 603 stores an image whose pixel values are all 0 in a built-in frame memory. Then, in the foreground component image synthesizing processing, the synthesizer 603 stores the foreground component image, overwriting the previously stored image. Accordingly, 0 is stored in the pixels corresponding to the background area in the foreground component image output by the synthesizer 603.
The synthesizer 605 synthesizes a background component image based on the background components supplied from the separating portion 601 and the pixels corresponding to the background supplied from the switch 604, and outputs the synthesized background component image. Since the background area and the mixed area do not overlap, the synthesizer 605 applies, for example, a logical OR to the background components and the background pixels, thereby synthesizing the background component image.
In the initializing processing performed at the start of the background component image synthesizing processing, the synthesizer 605 stores an image whose pixel values are all 0 in a built-in frame memory. Then, in the background component image synthesizing processing, the synthesizer 605 stores the background component image, overwriting the previously stored image. Accordingly, 0 is stored in the pixels corresponding to the foreground area in the background component image output by the synthesizer 605.
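The behavior of the synthesizers 603 and 605 can be sketched as follows, assuming the separated components and the pixels passed by the switches are given as position/value pairs; this data layout is an assumption of the illustration.

```python
import numpy as np

def synthesize_component_image(shape, region_pixels, mixed_components):
    """region_pixels: ((y, x), value) pairs passed by switch 602 or 604;
    mixed_components: ((y, x), value) pairs separated by the separating
    portion 601. Pixels outside both sets keep the initial value 0."""
    out = np.zeros(shape, dtype=float)       # initialization to an all-0 image
    for (y, x), value in list(region_pixels) + list(mixed_components):
        out[y, x] = value                    # overwrite the stored image
    return out
```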
Figures 66A and 66B illustrate the input image input to the foreground/background separator 105, and the foreground component image and the background component image output from the foreground/background separator 105.
Figure 66A is a schematic diagram of the displayed image, and Figure 66B is a model obtained by expanding in the time direction the pixels of one line corresponding to Figure 66A, containing pixels belonging to the foreground area, pixels belonging to the background area, and pixels belonging to the mixed area.
As shown in Figures 66A and 66B, the background component image output from the foreground/background separator 105 consists of the pixels belonging to the background area and the background components contained in the pixels of the mixed area.
As shown in Figures 66A and 66B, the foreground component image output from the foreground/background separator 105 consists of the pixels belonging to the foreground area and the foreground components contained in the pixels of the mixed area.
The foreground/background separator 105 separates the pixel values of the pixels in the mixed area into background components and foreground components. The separated background components, together with the pixels belonging to the background area, form the background component image. The separated foreground components, together with the pixels belonging to the foreground area, form the foreground component image.
As described above, in the foreground component image, the pixel values of the pixels corresponding to the background area are set to 0, while significant pixel values are set in the pixels corresponding to the foreground area and in the pixels corresponding to the mixed area. Similarly, in the background component image, the pixel values of the pixels corresponding to the foreground area are set to 0, while significant pixel values are set in the pixels corresponding to the background area and in the pixels corresponding to the mixed area.
The processing performed by the separating portion 601 for separating the foreground components and the background components from the pixels belonging to the mixed area is described below.
Figure 67 illustrates a model of an image indicating foreground components and background components in two frames, including a foreground object moving from left to right in Figure 67. In the model of the image shown in Figure 67, the amount of movement v is 4, and the number of virtual divided portions is 4.
In frame #n, the leftmost pixel and the fourteenth through eighteenth pixels from the left consist of only background components, and belong to the background area. In frame #n, the second through fourth pixels from the left contain background components and foreground components, and belong to the uncovered background area. In frame #n, the eleventh through thirteenth pixels from the left contain background components and foreground components, and belong to the covered background area. In frame #n, the fifth through tenth pixels from the left consist of only foreground components, and belong to the foreground area.
In frame #n+1, the first through fifth pixels from the left consist of only background components, and belong to the background area. In frame #n+1, the sixth through eighth pixels from the left contain background components and foreground components, and belong to the uncovered background area. In frame #n+1, the fifteenth through seventeenth pixels from the left contain background components and foreground components, and belong to the covered background area. In frame #n+1, the ninth through fourteenth pixels from the left consist of only foreground components, and belong to the foreground area.
Figure 68 illustrates the processing for separating the foreground components from the pixels belonging to the covered background area. In Figure 68, α1 through α18 indicate the mixture ratios of the individual pixels of frame #n. In Figure 68, the fifteenth through seventeenth pixels from the left belong to the covered background area.
The pixel value C15 of the fifteenth pixel from the left in frame #n can be expressed by equation (68):
C15 = B15/v + F09/v + F08/v + F07/v
    = α15·B15 + F09/v + F08/v + F07/v
    = α15·P15 + F09/v + F08/v + F07/v    (68)
where α15 is the mixture ratio of the fifteenth pixel from the left in frame #n, and P15 is the pixel value of the fifteenth pixel from the left in frame #n-1.
Based on equation (68), the sum f15 of the foreground components of the fifteenth pixel from the left in frame #n can be expressed by equation (69):
f15 = F09/v + F08/v + F07/v
    = C15 - α15·P15    (69)
Similarly, the sum f16 of the foreground components of the sixteenth pixel from the left in frame #n can be expressed by equation (70), and the sum f17 of the foreground components of the seventeenth pixel from the left in frame #n can be expressed by equation (71):
f16=C16-α16·P16 (70)
f17=C17-α17·P17 (71)
In this manner, the foreground components fc contained in the pixel value C of a pixel belonging to the covered background area can be expressed by equation (72):
fc=C-α·P (72)
where P is the pixel value of the corresponding pixel in the preceding frame.
Figure 69 illustrates the processing for separating the foreground components from the pixels belonging to the uncovered background area. In Figure 69, α1 through α18 indicate the mixture ratios of the individual pixels of frame #n. In Figure 69, the second through fourth pixels from the left belong to the uncovered background area.
The pixel value C02 of the second pixel from the left in frame #n can be expressed by equation (73):
C02 = B02/v + B02/v + B02/v + F01/v
    = α2·B02 + F01/v
    = α2·N02 + F01/v    (73)
where α2 is the mixture ratio of the second pixel from the left in frame #n, and N02 is the pixel value of the second pixel from the left in frame #n+1.
Based on equation (73), the sum f02 of the foreground components of the second pixel from the left in frame #n can be expressed by equation (74):
f02 = F01/v
    = C02 - α2·N02    (74)
Similarly, the sum f03 of the foreground components of the third pixel from the left in frame #n can be expressed by equation (75), and the sum f04 of the foreground components of the fourth pixel from the left in frame #n can be expressed by equation (76):
f03 = C03 - α3·N03    (75)
f04 = C04 - α4·N04    (76)
In this manner, the foreground components fu contained in the pixel value C of a pixel belonging to the uncovered background area can be expressed by equation (77):
fu=C-α·N (77)
where N is the pixel value of the corresponding pixel in the subsequent frame.
As described above, the separating portion 601 can separate the foreground components and the background components from the pixels belonging to the mixed area based on the information indicating the covered background area and the information indicating the uncovered background area contained in the area information, and on the mixture ratio α of each pixel.
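Equations (72) and (77) can be sketched as the following per-pixel operations, mirroring the multiplier, switch, and calculator structure of the separating portion 601 described below; the function names are illustrative.

```python
def split_covered(C, P, alpha):
    """Covered background area: background = alpha*P (multiplier 651 and
    switch 652), foreground fc = C - alpha*P (calculator 653), per (72)."""
    background = alpha * P
    return C - background, background

def split_uncovered(C, N, alpha):
    """Uncovered background area: background = alpha*N (multiplier 641 and
    switch 642), foreground fu = C - alpha*N (calculator 643), per (77)."""
    background = alpha * N
    return C - background, background
```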
Figure 70 is a block diagram illustrating an example of the configuration of the separating portion 601 that performs the above-described processing. The image input to the separating portion 601 is supplied to a frame memory 621. The area information indicating the covered background area and the uncovered background area supplied from the area specifying unit 103, and the mixture ratio α supplied from the mixture ratio calculator 104, are supplied to a separating processing block 622.
The frame memory 621 stores the input images in units of frames. When the frame to be processed is frame #n, the frame memory 621 stores frame #n-1 (the frame preceding frame #n), frame #n, and frame #n+1 (the frame following frame #n).
The frame memory 621 supplies the corresponding pixels of frame #n-1, frame #n, and frame #n+1 to the separating processing block 622.
Separation processing unit 622 is according to the background area of indication covering and the area information and the mixing ratio α of unlapped background area, the computing application of discussing with reference to Figure 68 and Figure 69 in from frame #n-1, the frame #n of frame memory 621 supplies and the pixel value of the respective pixel the frame #n+1, separate prospect component and background component in the pixel of Mixed Zone among the frame #n so that be subordinated to, and they are supplied to frame memory 623.
Separation processing unit 622 is made of area controller 632, synthesizer 633 and the synthesizer 634 of unlapped area controller 631, covering.
The multiplier 641 of the uncovered-area processor 631 multiplies the pixel value of a pixel in frame #n+1 supplied from the frame memory 621 by the mixing ratio α, and outputs the resulting pixel value to the switch 642. When the pixel of frame #n supplied from the frame memory 621 (corresponding to the pixel in frame #n+1) belongs to the uncovered background area, the switch 642 is closed, and the pixel value multiplied by the mixing ratio α supplied from the multiplier 641 is supplied to the calculator 643 and the synthesizer 634. The value output from the switch 642, obtained by multiplying the pixel value of the pixel in frame #n+1 by the mixing ratio α, is equivalent to the background components of the pixel value of the corresponding pixel in frame #n.
The calculator 643 subtracts the background components supplied from the switch 642 from the pixel value of the pixel of frame #n supplied from the frame memory 621, so as to obtain the foreground components. The calculator 643 supplies the foreground components of the pixels of frame #n belonging to the uncovered background area to the synthesizer 633.
The multiplier 651 of the covered-area processor 632 multiplies the pixel value of a pixel in frame #n-1 supplied from the frame memory 621 by the mixing ratio α, and outputs the resulting pixel value to the switch 652. When the pixel of frame #n supplied from the frame memory 621 (corresponding to the pixel in frame #n-1) belongs to the covered background area, the switch 652 is closed, and the pixel value multiplied by the mixing ratio α supplied from the multiplier 651 is supplied to the calculator 653 and the synthesizer 634. The value output from the switch 652, obtained by multiplying the pixel value of the pixel in frame #n-1 by the mixing ratio α, is equivalent to the background components of the pixel value of the corresponding pixel in frame #n.
The calculator 653 subtracts the background components supplied from the switch 652 from the pixel value of the pixel of frame #n supplied from the frame memory 621, so as to obtain the foreground components. The calculator 653 supplies the foreground components of the pixels of frame #n belonging to the covered background area to the synthesizer 633.
The synthesizer 633 combines the foreground components of the pixels belonging to the uncovered background area supplied from the calculator 643 with the foreground components of the pixels belonging to the covered background area supplied from the calculator 653, and supplies the combined foreground components to the frame memory 623.
The synthesizer 634 combines the background components of the pixels belonging to the uncovered background area supplied from the switch 642 with the background components of the pixels belonging to the covered background area supplied from the switch 652, and supplies the combined background components to the frame memory 623.
The frame memory 623 stores the foreground components and the background components of the pixels in the mixed area of frame #n supplied from the separation processing unit 622, and outputs the stored foreground components and background components of the pixels in the mixed area of frame #n.
By utilizing the mixing ratio α, which is a feature quantity, the foreground components and the background components contained in the pixel values can be completely separated.
The synthesizer 603 combines the foreground components of the pixels in the mixed area of frame #n output from the separating portion 601 with the pixels belonging to the foreground area, so as to generate a foreground component image. The synthesizer 605 combines the background components of the pixels in the mixed area of frame #n output from the separating portion 601 with the pixels belonging to the background area, so as to generate a background component image.
Figure 71A is a schematic diagram illustrating an example of the foreground component image corresponding to frame #n in Figure 67. Before the foreground and the background were separated, the leftmost pixel and the fourteenth pixel from the left consisted only of background components, and so their pixel values are set to 0.
Before the foreground and the background were separated, the second through fourth pixels from the left belonged to the uncovered background area; accordingly, their background components are set to 0 and their foreground components are maintained. Before the foreground and the background were separated, the eleventh through thirteenth pixels from the left belonged to the covered background area; accordingly, their background components are set to 0 and their foreground components are maintained. The fifth through tenth pixels from the left consist only of foreground components, which are maintained as they are.
Figure 71B is a schematic diagram illustrating an example of the background component image corresponding to frame #n in Figure 67. Before the foreground and the background were separated, the leftmost pixel and the fourteenth pixel from the left consisted only of background components, and so their background components are maintained.
Before the foreground and the background were separated, the second through fourth pixels from the left belonged to the uncovered background area; accordingly, their foreground components are set to 0 and their background components are maintained. Before the foreground and the background were separated, the eleventh through thirteenth pixels from the left belonged to the covered background area; accordingly, their foreground components are set to 0 and their background components are maintained. The fifth through tenth pixels from the left consist only of foreground components, and so their pixel values are set to 0.
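Constructing the two images of Figures 71A and 71B amounts to a per-region selection. A minimal sketch, with assumed region labels and array names (the mixed-area components are those produced by the separation sketched earlier):

```python
import numpy as np

BACKGROUND, COVERED, UNCOVERED, FOREGROUND = 0, 1, 2, 3  # assumed labels

def build_component_images(pixels, region, fg_mixed, bg_mixed):
    """Assemble the foreground and background component images
    (Figures 71A and 71B) from frame #n and its separated mixed area."""
    fg_image = np.zeros_like(pixels, dtype=float)
    bg_image = np.zeros_like(pixels, dtype=float)
    mixed = (region == COVERED) | (region == UNCOVERED)

    # Foreground component image: pure foreground pixels, plus the
    # foreground components separated from the mixed area; all else is 0.
    fg_image[region == FOREGROUND] = pixels[region == FOREGROUND]
    fg_image[mixed] = fg_mixed[mixed]

    # Background component image: pure background pixels, plus the
    # background components separated from the mixed area; all else is 0.
    bg_image[region == BACKGROUND] = pixels[region == BACKGROUND]
    bg_image[mixed] = bg_mixed[mixed]
    return fg_image, bg_image
```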
The processing for separating the foreground and the background performed by the foreground/background separator 105 is described below with reference to the flowchart of Figure 72. In step S601, the frame memory 621 of the separating portion 601 obtains the input image and stores frame #n, for which the foreground and the background are to be separated, together with the preceding frame #n-1 and the subsequent frame #n+1.
In step S602, the separation processing unit 622 of the separating portion 601 obtains the area information supplied from the mixing ratio calculator 104. In step S603, the separation processing unit 622 of the separating portion 601 obtains the mixing ratio α supplied from the mixing ratio calculator 104.
In step S604, the uncovered-area processor 631 extracts the background components from the pixel values of the pixels belonging to the uncovered background area supplied from the frame memory 621, based on the area information and the mixing ratio α.
In step S605, the uncovered-area processor 631 extracts the foreground components from the pixel values of the pixels belonging to the uncovered background area supplied from the frame memory 621, based on the area information and the mixing ratio α.
In step S606, the covered-area processor 632 extracts the background components from the pixel values of the pixels belonging to the covered background area supplied from the frame memory 621, based on the area information and the mixing ratio α.
In step S607, the covered-area processor 632 extracts the foreground components from the pixel values of the pixels belonging to the covered background area supplied from the frame memory 621, based on the area information and the mixing ratio α.
In step S608, the synthesizer 633 combines the foreground components of the pixels belonging to the uncovered background area extracted in step S605 with the foreground components of the pixels belonging to the covered background area extracted in step S607. The combined foreground components are supplied to the synthesizer 603. The synthesizer 603 further combines the pixels belonging to the foreground area supplied via the switch 602 with the foreground components supplied from the separating portion 601, so as to generate a foreground component image.
In step S609, the synthesizer 634 combines the background components of the pixels belonging to the uncovered background area extracted in step S604 with the background components of the pixels belonging to the covered background area extracted in step S606. The combined background components are supplied to the synthesizer 605. The synthesizer 605 further combines the pixels belonging to the background area supplied via the switch 604 with the background components supplied from the separating portion 601, so as to generate a background component image.
In step S610, the synthesizer 603 outputs the foreground component image. In step S611, the synthesizer 605 outputs the background component image. The processing is then completed.
As described above, the foreground/background separator 105 can separate the foreground components and the background components from the input image based on the area information and the mixing ratio α, and output a foreground component image consisting only of foreground components and a background component image consisting only of background components.
Figure 73 is a block diagram illustrating the configuration of the motion-blur adjusting unit 106.
The flat-portion extracting unit 801 extracts a flat portion from the foreground component image supplied from the foreground/background separator 105, based on the area information supplied from the area specifying unit 103. In a flat portion, the change in the pixel values of adjacent pixels is very small. The flat portion extracted by the flat-portion extracting unit 801 consists of pixels having equal pixel values. A flat portion is also referred to as an "equal portion."
For example, the flat-portion extracting unit 801 extracts, from the foreground component image supplied from the foreground/background separator 105 and based on the area information supplied from the area specifying unit 103, a flat portion in which the difference between the pixel values of adjacent pixels is smaller than a prestored threshold Thf.
The flat-portion extracting unit 801 may also extract, for example, a flat portion in which the change in the pixel values of adjacent pixels of the foreground component image is within 1%. The ratio of the change in pixel values, which serves as a criterion for extracting flat portions, can be set to any desired value.
The flat-portion extracting unit 801 may also extract a flat portion in which the standard deviation of the pixel values of adjacent pixels of the foreground component image is smaller than the prestored threshold Thf.
In addition, the flat-portion extracting unit 801 may extract a flat portion based on, for example, a regression line fitted to the pixel values of adjacent pixels of the foreground component image, such that the sum of the errors between the regression line and the individual pixel values is smaller than the prestored threshold Thf.
The reference value for extracting flat portions, such as the threshold or the ratio of the change in pixel values, can be set to any desired value; the present invention is not limited by the reference value for extracting flat portions. The reference value for extracting flat portions can also be changed adaptively.
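As one concrete instance of these criteria, the sketch below flags runs of adjacent pixels whose absolute differences stay below the threshold Thf. The run-length requirement anticipates the condition, noted later, that a flat portion must contain more pixels than the amount of movement v. All names are illustrative only:

```python
import numpy as np

def extract_flat_flags(values, thf, min_run):
    """Flag runs of adjacent pixels whose pixel-value changes stay below thf.

    values:  1-D pixel values of the foreground component image,
             taken along the direction of motion
    thf:     prestored threshold on adjacent-pixel differences
    min_run: minimum run length (must exceed the amount of movement v)
    """
    flags = np.zeros(len(values), dtype=bool)
    start = 0
    for i in range(1, len(values) + 1):
        # Close the current run at the end or when the change reaches thf.
        if i == len(values) or abs(values[i] - values[i - 1]) >= thf:
            if i - start >= min_run:
                flags[start:i] = True  # this run is a flat portion
            start = i
    return flags
```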
For the pixels belonging to the extracted flat portion, the flat-portion extracting unit 801 sets a flat-portion flag indicating that the pixel belongs to a flat portion, and supplies the foreground component image and the flat-portion flags to the processing-unit determining unit 802. The flat-portion extracting unit 801 also generates a flat-portion image consisting only of the pixels belonging to flat portions, and supplies it to the motion-blur eliminating unit 803.
The processing-unit determining unit 802 generates a unit of processing based on the foreground component image, the flat-portion flags supplied from the flat-portion extracting unit 801, and the area information supplied from the area specifying unit 103, and supplies the generated unit of processing and the flat-portion flags to the motion-blur eliminating unit 803. The unit of processing is data designating the pixels of the foreground component image excluding the flat portions.
The motion-blur eliminating unit 803 calculates the foreground components contained in the pixels belonging to flat portions, based on the foreground component image supplied from the foreground/background separator 105 and the flat-portion flags supplied from the processing-unit determining unit 802.
Based on the area information supplied from the area specifying unit 103 and the unit of processing supplied from the processing-unit determining unit 802, the motion-blur eliminating unit 803 eliminates the foreground components corresponding to the flat portions from the foreground component image supplied from the foreground/background separator 105, and then calculates the remaining foreground components contained in the pixels designated by the unit of processing.
The motion-blur eliminating unit 803 combines the pixels generated from the foreground components contained in the pixels designated by the unit of processing with the pixels of the flat-portion image supplied from the flat-portion extracting unit 801, thereby generating a foreground component image from which motion blur has been eliminated.
The motion-blur eliminating unit 803 supplies the foreground component image without motion blur to the motion-blur adder 804 and the selector 805.
Figure 74 is a block diagram illustrating the configuration of the motion-blur eliminating unit 803. The foreground component image supplied from the foreground/background separator 105, the unit of processing supplied from the processing-unit determining unit 802, and the motion vector and its positional information supplied from the motion detector 102 are supplied to the model forming portion 821.
The model forming portion 821 performs modeling based on the amount of movement v and the unit of processing. Specifically, the model forming portion 821 determines, from the amount of movement v and the unit of processing, the number of divided portions of the pixel value of each pixel in the time direction and the number of foreground components of each pixel, and generates a model specifying the correspondence between the pixel values and the foreground components. The model forming portion 821 may instead select a model matching the amount of movement v and the unit of processing from among a plurality of prestored models. The model forming portion 821 supplies the generated model, together with the foreground component image, to the equation generator 822.
The equation generator 822 generates equations based on the model supplied from the model forming portion 821, and supplies the generated equations, together with the foreground component image, to the adder 823.
The adder 823 sets the equations supplied from the equation generator 822 into normal equations for the method of least squares, and supplies the resulting normal equations to the calculator 824. The pixels designated by the unit of processing do not include the pixels corresponding to flat portions.
The calculator 824 solves the equations in which the pixel values have been set by the adder 823, so as to calculate the foreground components. From the calculated foreground components, the calculator 824 generates the pixels, corresponding to the unit of processing, from which motion blur has been eliminated, and outputs the generated pixels to the synthesizer 825.
The synthesizer 825 generates a foreground component image from which motion blur has been eliminated, based on the pixels corresponding to the unit of processing supplied from the calculator 824 and the pixels of the flat-portion image supplied from the flat-portion extracting unit 801, and outputs the generated foreground component image.
The operation of the motion-blur adjusting unit 106 is described below with reference to Figures 75 through 80.
Figure 75 illustrates a model obtained by expanding, in the time direction, the pixel values of the pixels on a straight line corresponding to the motion vector in the foreground component image that is output from the foreground/background separator 105 and input into the flat-portion extracting unit 801. C01′ through C23′ denote the pixel values of the pixels of the foreground component image, which consists only of foreground components.
From among the pixels contained in the foreground component image supplied from the foreground/background separator 105, the flat-portion extracting unit 801 extracts contiguous pixels whose change in pixel value is smaller than the threshold Thf. The threshold Thf is a sufficiently small value, and the number of contiguous pixels extracted by the flat-portion extracting unit 801 must be greater than the amount of movement v of the foreground object within the frame. For example, if the amount of movement of the foreground object within a frame is 5, the flat-portion extracting unit 801 extracts five or more contiguous pixels in which the pixel values hardly change, that is, a flat portion.
For example, in the example shown in Figure 76, when equation (78) holds, it follows from the relationships of equations (79) through (83) that the foreground components F06/v through F14/v are equal.
C10′=C11′=C12′=C13′=C14′ (78)
C10′=F06/v+F07/v+F08/v+F09/v+F10/v (79)
C11′=F07/v+F08/v+F09/v+F10/v+F11/v (80)
C12′=F08/v+F09/v+F10/v+F11/v+F12/v (81)
C13′=F09/v+F10/v+F11/v+F12/v+F13/v (82)
C14′=F10/v+F11/v+F12/v+F13/v+F14/v (83)
That is, the relationship among the foreground components F06/v through F14/v indicated by equation (84) holds.
F06/v=F07/v=F08/v=F09/v=F10/v=F11/v=F12/v=F13/v=F14/v (84)
Accordingly, in the subsequent processing for calculating the foreground components, as shown in Figure 77, only the foreground components F01/v through F05/v and F15/v through F19/v need to be calculated; the foreground components F06/v through F14/v need not be calculated.
The flat-portion extracting unit 801 extracts flat portions having the equal pixel values required for this processing, generates, from the extracted flat portions, flat-portion flags indicating whether each pixel belongs to a flat portion, and supplies the flags to the processing-unit determining unit 802. The flat-portion extracting unit 801 also supplies the flat-portion image, consisting only of the pixels belonging to flat portions, to the motion-blur eliminating unit 803.
The processing-unit determining unit 802 generates a unit of processing, which is data designating the pixels obtained by excluding the flat portions from the pixels on the straight line contained in the foreground component image, and supplies the unit of processing together with the flat-portion flags to the motion-blur eliminating unit 803.
The motion-blur eliminating unit 803 calculates the foreground components contained in the pixels belonging to the flat portions based on the flat-portion flags supplied from the processing-unit determining unit 802, and eliminates those foreground components from the foreground component image according to the flat-portion flags.
Based on the unit of processing supplied from the processing-unit determining unit 802, the motion-blur eliminating unit 803 then generates equations for calculating the remaining foreground components from the pixel values of the pixels that remain after the flat portions have been excluded from the pixels on the straight line contained in the foreground component image.
For example, as shown in Figure 76, when the tenth through fourteenth pixels from the left among the 23 pixels on the straight line in the foreground component image belong to a flat portion, the foreground components belonging to the tenth through fourteenth pixels from the left can be eliminated from the foreground component image. Accordingly, equations (85) through (102) are generated for the remaining foreground components, namely, the foreground components F01/v through F05/v and F15/v through F19/v.
C01″=F01/v (85)
C02″=F01/v+F02/v (86)
C03″=F01/v+F02/v+F03/v (87)
C04″=F01/v+F02/v+F03/v+F04/v (88)
C05″=F01/v+F02/v+F03/v+F04/v+F05/v (89)
C06″=F02/v+F03/v+F04/v+F05/v (90)
C07″=F03/v+F04/v+F05/v (91)
C08″=F04/v+F05/v (92)
C09″=F05/v (93)
C15″=F15/v (94)
C16″=F15/v+F16/v (95)
C17″=F15/v+F16/v+F17/v (96)
C18″=F15/v+F16/v+F17/v+F18/v (97)
C19″=F15/v+F16/v+F17/v+F18/v+F19/v (98)
C20″=F16/v+F17/v+F18/v+F19/v (99)
C21″=F17/v+F18/v+F19/v (100)
C22″=F18/v+F19/v (101)
C23″=F19/v (102)
By applying the method of least squares described above to equations (85) through (102), equations (103) and (104) can be obtained.
$$\begin{pmatrix} 5&4&3&2&1\\ 4&5&4&3&2\\ 3&4&5&4&3\\ 2&3&4&5&4\\ 1&2&3&4&5 \end{pmatrix} \begin{pmatrix} F_{01}\\F_{02}\\F_{03}\\F_{04}\\F_{05} \end{pmatrix} = v\cdot \begin{pmatrix} C_{01}''+C_{02}''+C_{03}''+C_{04}''+C_{05}''\\ C_{02}''+C_{03}''+C_{04}''+C_{05}''+C_{06}''\\ C_{03}''+C_{04}''+C_{05}''+C_{06}''+C_{07}''\\ C_{04}''+C_{05}''+C_{06}''+C_{07}''+C_{08}''\\ C_{05}''+C_{06}''+C_{07}''+C_{08}''+C_{09}'' \end{pmatrix} \tag{103}$$

$$\begin{pmatrix} 5&4&3&2&1\\ 4&5&4&3&2\\ 3&4&5&4&3\\ 2&3&4&5&4\\ 1&2&3&4&5 \end{pmatrix} \begin{pmatrix} F_{15}\\F_{16}\\F_{17}\\F_{18}\\F_{19} \end{pmatrix} = v\cdot \begin{pmatrix} C_{15}''+C_{16}''+C_{17}''+C_{18}''+C_{19}''\\ C_{16}''+C_{17}''+C_{18}''+C_{19}''+C_{20}''\\ C_{17}''+C_{18}''+C_{19}''+C_{20}''+C_{21}''\\ C_{18}''+C_{19}''+C_{20}''+C_{21}''+C_{22}''\\ C_{19}''+C_{20}''+C_{21}''+C_{22}''+C_{23}'' \end{pmatrix} \tag{104}$$
The equation generator 822 of the motion-blur eliminating unit 803 generates, for example, the equations corresponding to the unit of processing shown in equations (103) and (104). The adder 823 of the motion-blur eliminating unit 803 sets, into the equations generated by the equation generator 822, the pixel values of the foreground component image from which the foreground components contained in the pixels belonging to the flat portions have been eliminated. The calculator 824 of the motion-blur eliminating unit 803 calculates the foreground components contained in the foreground component image, excluding the foreground components contained in the pixels belonging to the flat portions, by applying a solution method such as the Cholesky decomposition to the equations in which the pixel values have been set.
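The following sketch illustrates this least-squares step for the example of Figure 76 (amount of movement v = 5 and the nine pixel values C01″ through C09″ of equations (85) through (93)). The function and its structure are assumptions, and NumPy's dense solver stands in for the Cholesky decomposition:

```python
import numpy as np

def solve_foreground_components(c, v):
    """Least-squares estimate of the foreground components F01..F05 from
    the pixel values C01''..C09'' of equations (85)-(93)."""
    n_eq, n_f = len(c), len(c) - v + 1       # 9 equations, 5 unknowns
    a = np.zeros((n_eq, n_f))
    for k in range(n_eq):
        for i in range(n_f):
            # Pixel k accumulates the component Fi/v while Fi passes over it.
            if i <= k < i + v:
                a[k, i] = 1.0 / v
    # Normal equations (A^T A) F = A^T c; this is equation (103) up to a
    # factor of v^2, and A^T A is symmetric positive definite (Cholesky-able).
    return np.linalg.solve(a.T @ a, a.T @ c)

# A constant object of brightness 50 moving with v = 5 yields
# C01''..C09'' = 10, 20, 30, 40, 50, 40, 30, 20, 10 and recovers Fi = 50.
print(solve_foreground_components(np.array([10., 20, 30, 40, 50, 40, 30, 20, 10]), 5))
```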
The calculator 824 generates a foreground component image without motion blur, consisting of the pixel values Fi from which motion blur has been eliminated, as shown in the example of Figure 78.
In the foreground component image without motion blur shown in Figure 78, F01 through F05 are set in C04″ and C05″, and F15 through F19 in C18″ and C19″, so that the position of the foreground component image relative to the screen remains unchanged. The foreground component image can be set at any desired position.
Based on the flat-portion image supplied from the flat-portion extracting unit 801, the calculator 824 generates the pixels corresponding to the flat portions excluded from the unit of processing, and combines the generated pixels with the foreground components without motion blur shown in Figure 78, thereby generating the foreground component image shown in Figure 79.
The motion-blur eliminating unit 803 can also generate the pixels corresponding to the flat portions from the foreground components F06/v through F14/v calculated by equation (84).
The motion-blur adder 804 can adjust the amount of motion blur by applying an amount v′ by which motion blur is adjusted. The amount v′ differs from the amount of movement v; for example, the amount v′ may be half the value of the amount of movement v, or may be unrelated to the amount of movement v. For example, as shown in Figure 80, the motion-blur adder 804 divides each foreground pixel value Fi without motion blur by the amount v′, so as to obtain the foreground components Fi/v′. The motion-blur adder 804 then calculates the sums of the foreground components Fi/v′, thereby generating pixel values in which the amount of motion blur has been adjusted. For example, when the amount v′ is 3, the pixel value C02 is set to (F01)/v′, the pixel value C03 to (F01+F02)/v′, the pixel value C04 to (F01+F02+F03)/v′, and the pixel value C05 to (F02+F03+F04)/v′.
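A short sketch of this re-blurring step (the function and variable names are assumptions):

```python
import numpy as np

def add_motion_blur(f, v_prime):
    """Re-blur the blur-free foreground pixel values Fi with the adjusted
    amount v' (Figure 80): each Fi is spread as Fi/v' over v' pixels,
    and each output pixel sums the components that overlap it."""
    out = np.zeros(len(f) + v_prime - 1)
    for i, fi in enumerate(f):
        out[i:i + v_prime] += fi / v_prime
    return out

# With v' = 3, the leading outputs are F01/v', (F01+F02)/v',
# (F01+F02+F03)/v', (F02+F03+F04)/v', matching the example above.
print(add_motion_blur(np.array([10., 20., 30., 40.]), 3))
```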
The motion-blur adder 804 supplies the foreground component image in which the amount of motion blur has been adjusted to the selector 805.
Based on a selection signal reflecting, for example, the user's choice, the selector 805 selects either the foreground component image without motion blur supplied from the motion-blur eliminating unit 803 or the foreground component image with the adjusted amount of motion blur supplied from the motion-blur adder 804, and outputs the selected foreground component image.
As described above, the motion-blur adjusting unit 106 can adjust the amount of motion blur according to the selection signal and the amount v′ by which motion blur is adjusted.
The motion-blur adjusting unit 106 may also obtain the background component image from the foreground/background separator 105, so as to correct the background components corresponding to the pixels belonging to the mixed area.
Figure 81 illustrates the processing for correcting the background components performed by the motion-blur adjusting unit 106. Among the pixels contained in the background component image, the foreground components of the pixels that belonged to the mixed area before the separation have been eliminated by the foreground/background separator 105.
The motion-blur adjusting unit 106 then performs the correction based on the area information and the amount of movement v: among the pixels contained in the background component image, the missing background components are added to the pixels belonging to the mixed area.
For example, when the pixel value C02 contains four background components B02/v, the motion-blur adjusting unit 106 adds one background component (B02/v)′ (having the same value as the background component B02/v) to the pixel value C02. When the pixel value C03 contains three background components B03/v, the motion-blur adjusting unit 106 adds two background components (B03/v)′ (having the same value as the background component B03/v) to the pixel value C03.
When the pixel value C23 contains three background components B23/v, the motion-blur adjusting unit 106 adds two background components (B23/v)′ (having the same value as the background component B23/v) to the pixel value C23. When the pixel value C24 contains four background components B24/v, the motion-blur adjusting unit 106 adds one background component (B24/v)′ (having the same value as the background component B24/v) to the pixel value C24.
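In other words, the correction tops each mixed-area pixel of the background component image up to a full set of v background components. A minimal sketch, assuming the number of background components remaining in each pixel is known from the area information and the amount of movement v:

```python
def correct_background_pixel(pixel_value, bg_component, n_present, v):
    """Add the (v - n_present) missing copies of the background component
    to a mixed-area pixel of the background component image (Figure 81).

    pixel_value:  pixel value after the foreground components were removed
    bg_component: one background component, i.e. B/v
    n_present:    number of background components the pixel still contains
    """
    return pixel_value + (v - n_present) * bg_component

# Example from the text: C02 retains four components B02/v out of v = 5,
# so exactly one more (B02/v)' is added.
print(correct_background_pixel(4 * 20.0, 20.0, 4, 5))  # 100.0 == 5 * (B02/v)
```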
An example of the results produced by the motion-blur adjusting unit 106 configured as shown in Figure 73 is given below.
Figure 82 illustrates an image obtained by capturing stationary black quadrangles. In contrast, Figure 83 illustrates an image obtained by capturing the black quadrangles of Figure 82 in motion. In the image shown in Figure 83, the moving black quadrangles are blurred by motion blur and interfere with each other.
Figure 84 shows an example of the results of the processing performed by the motion-blur adjusting unit 106 configured as shown in Figure 73 on the pixels on the straight line indicated by the dotted line in Figure 83.
In Figure 84, the solid line indicates the pixel values obtained by this processing performed by the motion-blur adjusting unit 106 configured as shown in Figure 73, the dotted line indicates the pixel values on the straight line in Figure 83, and the chain line indicates the pixel values on the corresponding straight line in Figure 82.
In the dotted line of Figure 84, the pixel values at both ends are almost flat (nearly equal); the motion-blur adjusting unit 106 therefore treated them as flat portions, eliminated these pixel values, and performed the processing described above on the remaining pixel values.
The results shown in Figure 84 demonstrate that the motion-blur adjusting unit 106 generates pixel values close to those of the stationary black quadrangles from an image whose pixel values are corrupted by the mutual interference of the moving black quadrangles.
The results shown in Figure 84 were obtained by applying the present invention to an image captured by a CCD, in which the incident light and the pixel values have a linear relationship; this image was not subjected to gamma correction. The effectiveness of the present invention for images that have undergone gamma correction has likewise been proved by experiment.
The processing for adjusting the amount of motion blur performed by the motion-blur adjusting unit 106 configured as shown in Figure 73 is described below with reference to Figure 85.
In step S801, the flat-portion extracting unit 801 extracts flat portions, in which the pixel values of adjacent pixels are equal, from the foreground component image supplied from the foreground/background separator 105. The flat-portion extracting unit 801 then supplies the flat-portion flags corresponding to the extracted flat portions to the processing-unit determining unit 802, and supplies the flat-portion image consisting only of the pixels belonging to flat portions to the motion-blur eliminating unit 803.
In step S802, the processing-unit determining unit 802 generates, based on the flat-portion flags, a unit of processing indicating the positions of the adjacent pixels on the straight line contained in the foreground component image, excluding the positions of the pixels belonging to flat portions, and supplies the unit of processing to the motion-blur eliminating unit 803.
In step S803, the motion-blur eliminating unit 803 calculates the foreground components corresponding to the flat-portion pixels based on the foreground component image supplied from the foreground/background separator 105 and the unit of processing supplied from the processing-unit determining unit 802, and calculates the foreground components corresponding to the unit of processing, thereby eliminating motion blur from the foreground components. The motion-blur eliminating unit 803 outputs the foreground components without motion blur to the motion-blur adder 804 and the selector 805. The details of the processing for eliminating motion blur in step S803 are described below with reference to the flowchart of Figure 86.
In step S804, the motion-blur adjusting unit 106 determines whether the processing has been completed for the entire foreground component image. If it is determined that the processing has not yet been completed for the entire foreground component image, the processing returns to step S803, and the processing for eliminating motion blur from the foreground components corresponding to the subsequent unit of processing is repeated.
If it is determined in step S804 that the processing has been completed for the entire foreground component image, the processing proceeds to step S805. In step S805, the motion-blur adder 804 of the motion-blur adjusting unit 106 calculates a foreground component image in which the amount of motion blur has been adjusted, and the selector 805 selects either the foreground component image from which motion blur has been eliminated or the foreground component image to which motion blur has been added, and outputs the selected image. The processing is then completed.
As described above, the motion-blur adjusting unit 106 can adjust the amount of motion blur of the input foreground component image.
The processing performed by the motion-blur eliminating unit 803 in step S803 of Figure 85 for eliminating motion blur from the foreground component image corresponding to the unit of processing is described below with reference to the flowchart of Figure 86.
In step S821, the model forming portion 821 of the motion-blur eliminating unit 803 generates a model corresponding to the amount of movement v and the unit of processing. In step S822, the equation generator 822 generates equations based on the generated model.
In step S823, the adder 823 sets, into the generated equations, the pixel values of the foreground component image from which the foreground components of the flat portions have been eliminated. In step S824, the adder 823 determines whether the pixel values of all the pixels corresponding to the unit of processing have been set. If it is determined that the pixel values have not yet been set in all the equations, the processing returns to step S823, and the processing for setting the pixel values in the equations is repeated.
If it is determined in step S824 that the pixel values have been set in all the equations, the processing proceeds to step S825, in which the calculator 824 calculates the pixel values of the foreground without motion blur based on the equations, supplied from the adder 823, in which the pixel values have been set.
In step S826, the calculator 824 combines the flat-portion image supplied from the flat-portion extracting unit 801 with the pixels set with the blur-free foreground pixel values calculated in step S825, thereby generating the foreground component image without motion blur.
In this manner, the motion-blur eliminating unit 803 can eliminate motion blur from a foreground component image containing motion blur, based on the amount of movement v and the unit of processing.
As described above, the motion-blur adjusting unit 106 configured as shown in Figure 73 can adjust the amount of motion blur contained in the input foreground component image.
Known techniques that partially eliminate motion blur, such as the Wiener filter, are effective under ideal conditions but are not effective for real, quantized images containing noise. In contrast, the motion-blur adjusting unit 106 configured as shown in Figure 73 has proved to be sufficiently effective for real, quantized images containing noise, so that motion blur can be eliminated with high accuracy.
In addition, since the flat portions are removed from the foreground component image and the foreground components are calculated only for the remaining pixels, the influence of quantization and noise can be suppressed, and the motion-blur adjusting unit 106 configured as shown in Figure 73 can obtain an image in which the amount of motion blur has been adjusted with higher precision.
Figure 87 is a block diagram illustrating another functional configuration of the signal processing apparatus.
Parts similar to those in Figure 2 are designated by like reference numerals, and an explanation thereof is omitted.
The area specifying unit 103 supplies the area information to the mixing ratio calculator 104 and the synthesizer 1001.
The mixing ratio calculator 104 supplies the mixing ratio α to the foreground/background separator 105 and the synthesizer 1001.
The foreground/background separator 105 supplies the foreground component image to the synthesizer 1001.
Based on the mixing ratio α supplied from the mixing ratio calculator 104 and the area information supplied from the area specifying unit 103, the synthesizer 1001 combines a certain background image with the foreground component image supplied from the foreground/background separator 105, and outputs the synthesized image in which the certain background image and the foreground component image are combined.
Figure 88 illustrates the configuration of the synthesizer 1001. The background component generator 1021 generates a background component image based on the mixing ratio α and the certain background image, and supplies the background component image to the mixed-area image synthesizing portion 1022.
The mixed-area image synthesizing portion 1022 combines the background component image supplied from the background component generator 1021 with the foreground component image so as to generate a mixed-area composite image, and supplies the generated mixed-area composite image to the image synthesizing portion 1023.
Based on the area information, the image synthesizing portion 1023 combines the foreground component image, the mixed-area composite image supplied from the mixed-area image synthesizing portion 1022, and the certain background image, so as to generate and output a composite image.
As described above, the synthesizer 1001 can combine the foreground component image with a certain background image.
An image obtained by combining the foreground component image with a certain background image based on the mixing ratio α, which is a feature quantity, appears more natural than an image obtained by simply combining pixels.
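A sketch of this synthesis under assumed region labels and array names; the background component generator's output (the mixing ratio times the new background) is folded into the mixed-area line:

```python
import numpy as np

BACKGROUND, COVERED, UNCOVERED, FOREGROUND = 0, 1, 2, 3  # assumed labels

def synthesize(fg_image, new_background, alpha, region):
    """Combine a foreground component image with a chosen background
    image using the mixing ratio (Figure 88)."""
    out = new_background.astype(float).copy()
    mixed = (region == COVERED) | (region == UNCOVERED)
    # Mixed-area composite: separated foreground components plus the
    # background component image generated as alpha * new background.
    out[mixed] = fg_image[mixed] + alpha[mixed] * new_background[mixed]
    # Foreground area: the foreground component image as-is.
    out[region == FOREGROUND] = fg_image[region == FOREGROUND]
    return out
```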
Figure 89 is a block diagram illustrating another functional configuration of the signal processing apparatus for adjusting the amount of motion blur. The signal processing apparatus shown in Figure 2 performs the area specification and the calculation of the mixing ratio α sequentially, whereas the signal processing apparatus shown in Figure 89 performs the area specification and the calculation of the mixing ratio α in parallel.
Functional parts similar to those in the block diagram of Figure 2 are designated by like reference numerals in Figure 89, and an explanation thereof is omitted.
The input image is supplied to the mixing ratio calculator 1101, the foreground/background separator 1102, the area specifying unit 103, and the object extracting unit 101.
Based on the input image, the mixing ratio calculator 1101 calculates, for each pixel contained in the input image, an estimated mixing ratio on the assumption that the pixel belongs to the covered background area and an estimated mixing ratio on the assumption that the pixel belongs to the uncovered background area, and supplies the estimated mixing ratios calculated in this manner to the foreground/background separator 1102.
Figure 90 is a block diagram illustrating an example of the configuration of the mixing ratio calculator 1101.
The estimated-mixing-ratio processor 401 shown in Figure 90 is the same as the estimated-mixing-ratio processor 401 shown in Figure 47, and the estimated-mixing-ratio processor 402 shown in Figure 90 is the same as the estimated-mixing-ratio processor 402 shown in Figure 47.
The estimated-mixing-ratio processor 401 calculates an estimated mixing ratio for each pixel from the input image by the calculation corresponding to the covered background area, and outputs the calculated estimated mixing ratio.
The estimated-mixing-ratio processor 402 calculates an estimated mixing ratio for each pixel from the input image by the calculation corresponding to the uncovered background area, and outputs the calculated estimated mixing ratio.
Based on the estimated mixing ratio calculated on the assumption that the pixel belongs to the covered background area and the estimated mixing ratio calculated on the assumption that the pixel belongs to the uncovered background area, both supplied from the mixing ratio calculator 1101, together with the area information supplied from the area specifying unit 103, the foreground/background separator 1102 generates a foreground component image from the input image, and supplies the generated foreground component image to the motion-blur adjusting unit 106 and the selector 107.
Figure 91 is a block diagram illustrating an example of the configuration of the foreground/background separator 1102.
Parts similar to those of the foreground/background separator 105 shown in Figure 65 are designated by like reference numerals in Figure 91, and an explanation thereof is omitted.
Based on the area information supplied from the area specifying unit 103, the selector 1121 selects either the estimated mixing ratio calculated on the assumption that the pixel belongs to the covered background area or the estimated mixing ratio calculated on the assumption that the pixel belongs to the uncovered background area, both supplied from the mixing ratio calculator 1101, and supplies the selected estimated mixing ratio to the separating portion 601 as the mixing ratio α.
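The selection performed by the selector 1121 can be sketched as a per-pixel choice between the two estimated mixing ratios (region labels and names are assumptions):

```python
import numpy as np

COVERED, UNCOVERED = 1, 2  # assumed encoding of the area information

def select_mixing_ratio(est_covered, est_uncovered, region):
    """Pick, per pixel, the estimated mixing ratio whose assumption
    matches the area the pixel actually belongs to."""
    alpha = np.zeros_like(est_covered, dtype=float)
    alpha[region == COVERED] = est_covered[region == COVERED]
    alpha[region == UNCOVERED] = est_uncovered[region == UNCOVERED]
    return alpha
```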
Based on the mixing ratio α supplied from the selector 1121 and the area information, the separating portion 601 extracts the foreground components and the background components from the pixel values of the pixels belonging to the mixed area, supplies the extracted foreground components to the synthesizer 603, and supplies the extracted background components to the synthesizer 605.
The separating portion 601 can be configured similarly to its counterpart shown in Figure 70.
The synthesizer 603 synthesizes and outputs the foreground component image. The synthesizer 605 synthesizes and outputs the background component image.
The motion-blur adjusting unit 106 shown in Figure 89 can be configured similarly to its counterpart shown in Figure 2; it adjusts the amount of motion blur contained in the foreground component image supplied from the foreground/background separator 1102 based on the area information and the motion vector, and outputs the foreground component image in which the amount of motion blur has been adjusted.
Based on a selection signal reflecting, for example, the user's choice, the selector 107 shown in Figure 89 selects either the foreground component image supplied from the foreground/background separator 1102 or the foreground component image with the adjusted amount of motion blur supplied from the motion-blur adjusting unit 106, and outputs the selected foreground component image.
As described above, the signal processing apparatus shown in Figure 89 can adjust the motion blur contained in the image of the foreground object in the input image and output the resulting foreground object image. As in the first embodiment, the signal processing apparatus shown in Figure 89 can calculate the mixing ratio α, which is embedded information, and output the calculated mixing ratio α.
Figure 92 is a block diagram illustrating another functional configuration of the signal processing apparatus for combining the foreground component image with a certain background image. The signal processing apparatus shown in Figure 87 performs the area specification and the calculation of the mixing ratio α serially, whereas the signal processing apparatus shown in Figure 92 performs the area specification and the calculation of the mixing ratio α in parallel.
Functional parts similar to those shown in the block diagram of Figure 89 are designated by like reference numerals in Figure 92, and an explanation thereof is omitted.
Based on the input image, the mixing ratio calculator 1101 shown in Figure 92 calculates, for each pixel contained in the input image, an estimated mixing ratio on the assumption that the pixel belongs to the covered background area and an estimated mixing ratio on the assumption that the pixel belongs to the uncovered background area, and supplies the estimated mixing ratios calculated in this manner to the foreground/background separator 1102 and the synthesizer 1201.
Based on the estimated mixing ratio calculated on the assumption that the pixel belongs to the covered background area and the estimated mixing ratio calculated on the assumption that the pixel belongs to the uncovered background area, both supplied from the mixing ratio calculator 1101, together with the area information supplied from the area specifying unit 103, the foreground/background separator 1102 shown in Figure 92 generates a foreground component image from the input image, and supplies the generated foreground component image to the synthesizer 1201.
Based on the estimated mixing ratio calculated on the assumption that the pixel belongs to the covered background area and the estimated mixing ratio calculated on the assumption that the pixel belongs to the uncovered background area, both supplied from the mixing ratio calculator 1101, together with the area information supplied from the area specifying unit 103, the synthesizer 1201 combines a certain background image with the foreground component image supplied from the foreground/background separator 1102, and outputs the composite image in which the certain background image and the foreground component image are combined.
Figure 93 illustrates the configuration of the synthesizer 1201. Functional parts similar to those in the block diagram of Figure 88 are designated by like reference numerals, and an explanation thereof is omitted.
Based on the area information supplied from the area specifying unit 103, the selector 1221 selects either the estimated mixing ratio calculated on the assumption that the pixel belongs to the covered background area or the estimated mixing ratio calculated on the assumption that the pixel belongs to the uncovered background area, both supplied from the mixing ratio calculator 1101, and supplies the selected estimated mixing ratio to the background component generator 1021 as the mixing ratio α.
The background component generator 1021 shown in Figure 93 generates a background component image based on the mixing ratio α supplied from the selector 1221 and a certain background image, and supplies the background component image to the mixed-area image synthesizing portion 1022.
The mixed-area image synthesizing portion 1022 shown in Figure 93 combines the background component image supplied from the background component generator 1021 with the foreground component image so as to generate a mixed-area composite image, and supplies the generated mixed-area composite image to the image synthesizing portion 1023.
Based on the area information, the image synthesizing portion 1023 combines the foreground component image, the mixed-area composite image supplied from the mixed-area image synthesizing portion 1022, and the background image, so as to generate and output a composite image.
In this manner, the synthesizer 1201 can combine the foreground component image with a certain background image.
Figure 94 is a block diagram illustrating another configuration of the signal processing apparatus.
Elements similar to those in Figure 2 are designated by like reference numerals in Figure 94, and an explanation thereof is omitted.
The input image supplied to the signal processing apparatus is supplied to the object extracting unit 101, the area specifying unit 103, the flat-portion extracting unit 1501, the separating unit 1503, and the synthesizer 1504.
The object extracting unit 101 coarsely extracts the image object corresponding to the foreground object contained in the input image, and supplies the extracted image object to the motion detector 102. The object extracting unit 101 also coarsely extracts the image object corresponding to the background object contained in the input image, and supplies the extracted image object to the motion detector 102.
The motion detector 102 calculates the motion vector of the coarsely extracted image object corresponding to the foreground object, and supplies the calculated motion vector and its positional information to the area specifying unit 103, the flat-portion extracting unit 1501, the processing-unit determining unit 1502, and the separating unit 1503.
The area specifying unit 103 assigns each pixel of the input image to one of the foreground area, the background area, or the mixed area, and supplies information indicating to which of these areas each pixel belongs to the flat-portion extracting unit 1501, the processing-unit determining unit 1502, and the synthesizer 1504.
The flat-portion extracting unit 1501 extracts a flat portion based on the input image, the motion vector and its positional information supplied from the motion detector 102, and the area information supplied from the area specifying unit 103. In this flat portion, the change in the pixel values of adjacent pixels belonging to the foreground area is very small, and the flat portion is selected from among the contiguous pixels arranged along the direction of motion from a pixel belonging to the uncovered background area to a pixel belonging to the covered background area. The flat portion extracted by the flat-portion extracting unit 1501 consists of pixels having equal pixel values.
For example, based on the input image, the motion vector and its positional information supplied from the motion detector 102, and the area information supplied from the area specifying unit 103, the flat-portion extracting unit 1501 extracts a flat portion in which the change in the pixel values of adjacent pixels belonging to the foreground area is smaller than a prestored threshold Thf1, selected from among the contiguous pixels arranged along the direction of motion from a pixel belonging to the uncovered background area to a pixel belonging to the covered background area.
The flat-portion extracting unit 1501 may also extract, for example, a flat portion in which the change in the pixel values of adjacent pixels of the foreground component image is within 1%. The ratio of the change in pixel values, which serves as the reference value for extracting flat portions, can be set to any desired value.
The flat-portion extracting unit 1501 may also extract a flat portion in which the standard deviation of the pixel values of adjacent pixels of the foreground component image is smaller than the prestored threshold Thf1.
In addition, the flat-portion extracting unit 1501 may extract a flat portion based on, for example, a regression line fitted to the pixel values of adjacent pixels of the foreground component image, such that the sum of the errors between the regression line and the individual pixel values is smaller than the prestored threshold Thf1.
The reference value for extracting flat portions, such as the threshold Thf1 or the ratio of the change in pixel values, can be set to any desired value, and the present invention is not limited by the reference value for extracting flat portions. The reference value for extracting flat portions can be changed adaptively.
The flat-portion extracting unit 1501 generates foreground flat-portion positional information, which is information indicating the positions of the extracted flat portions, and supplies the generated foreground flat-portion positional information to the processing-unit determining unit 1502.
The processing-unit determining unit 1502 determines a unit of processing that designates at least the pixels belonging to the foreground area or the mixed area, based on the foreground flat-portion positional information supplied from the flat-portion extracting unit 1501, the motion vector and its positional information supplied from the motion detector 102, and the area information supplied from the area specifying unit 103.
The processing-unit determining unit 1502 supplies the generated unit of processing to the separating unit 1503.
Based on the unit of processing supplied from the processing-unit determining unit 1502 and the motion vector and its positional information supplied from the motion detector 102, the separating unit 1503 generates, from the pixel values of the pixels of the input image designated by the unit of processing, a foreground component image without motion blur and a separated background component image, and supplies the generated foreground component image and background component image to the synthesizer 1504.
Based on the area information supplied from the area specifying unit 103, the synthesizer 1504 combines the foreground component image without motion blur, from which the motion blur of the image of the foreground object in the input image has been eliminated, with the separated background component image supplied from the separating unit 1503, and outputs the synthesized image without motion blur.
Figure 95 is a block diagram illustrating the configuration of the separating unit 1503. The motion vector and its positional information supplied from the motion detector 102 and the unit of processing supplied from the processing-unit determining unit 1502 are supplied to the model forming portion 1521.
The model forming portion 1521 generates a model based on the motion vector and its positional information supplied from the motion detector 102 and the unit of processing supplied from the processing-unit determining unit 1502, and supplies the generated model to the equation generator 1522.
The unit of processing and the model corresponding to the unit of processing are described below with reference to Figures 96 through 99.
Figure 96 illustrates an example of the pixels obtained when the occurrence of motion blur is prevented by a sufficiently high shutter speed of the sensor. F01 through F20 are image components corresponding to the foreground object.
The pixel value C04 corresponding to the foreground object is F01, the pixel value C05 corresponding to the foreground object is F02, and the pixel value C06 corresponding to the foreground object is F03; each of these pixel values consists of a component of the image of the foreground object. Similarly, the pixel values C07 through C23 correspond to F04 through F20, respectively.
In the example shown in Figure 96, since the background object is stationary, no motion blur occurs in the background.
The pixel value C01 corresponding to the background object is B01, the pixel value C02 corresponding to the background object is B02, and the pixel value C03 corresponding to the background object is B03. Similarly, the pixel value C24 corresponding to the background object is B24, the pixel value C25 corresponding to the background object is B25, and the pixel value C26 corresponding to the background object is B26.
Figure 97 illustrates a model obtained by expanding the pixel values in the time direction when motion blur occurs.
In the example shown in Figure 97, the amount of movement v is 5, and the foreground object moves from left to right in Figure 97.
In the example shown in Figure 97, the second through fifth pixels from the left belong to the mixed area, and the twenty-second through twenty-fifth pixels from the left belong to the mixed area. The sixth through twenty-first pixels from the left belong to the foreground area.
From among the contiguous pixels arranged along the direction of motion, from the pixel belonging to the uncovered background area toward the pixel belonging to the covered background area, the flat-portion extracting unit 1501 extracts contiguous pixels belonging to the foreground area whose change in pixel value is smaller than the prestored threshold Thf1.
The threshold Thf1 is a sufficiently small value. The number of contiguous pixels extracted by the flat-portion extracting unit 1501 must be greater than the amount of movement v of the foreground object. For example, when the amount of movement v within a frame is 5, the flat-portion extracting unit 1501 extracts five or more contiguous pixels in which the pixel values hardly change, that is, a flat portion.
For example, in the example shown in Figure 98, when equation (105) holds, the foreground components F06/v through F14/v are equal according to the relational expressions of equations (106) through (110).
C11=C12=C13=C14=C15 (105)
C11=F06/v+F07/v+F08/v+F09/v+F10/v (106)
C12=F07/v+F08/v+F09/v+F10/v+F11/v (107)
C13=F08/v+F09/v+F10/v+F11/v+F12/v (108)
C14=F09/v+F10/v+F11/v+F12/v+F13/v (109)
C15=F10/v+F11/v+F12/v+F13/v+F14/v (110)
That is, the relationship among the foreground components F06/v through F14/v indicated by equation (111) holds.
F06/v=F07/v=F08/v=F09/v=F10/v=F11/v=F12/v=F13/v=F14/v (111)
Accordingly, as shown in Figure 99, in the subsequent processing for calculating the foreground components and the background components, only the foreground components F01/v through F05/v and F15/v through F20/v and the background components B02/v through B05/v and B22/v through B25/v need to be calculated; the foreground components F06/v through F14/v need not be calculated.
For example, as shown in Figure 98, among the 24 pixels on the straight line in Figure 98, namely, the second through twenty-fifth pixels from the left, which are arranged continuously along the direction of motion from the pixels belonging to the uncovered background area to the pixels belonging to the covered background area, when the eleventh through fifteenth pixels from the left belong to a flat portion, the foreground components contained in the eleventh through fifteenth pixels can be removed. Accordingly, as shown in Figure 99, equations (112) through (120) are generated for the foreground components and the background components corresponding to the remaining contiguous pixels, namely, the foreground components F01/v through F05/v and the background components B02/v through B05/v.
C02′=4×B02/v+F01/v (112)
C03′=3×B03/v+F01/v+F02/v (113)
C04′=2×B04/v+F01/v+F02/v+F03/v (114)
C05′=B05/v+F01/v+F02/v+F03/v+F04/v (115)
C06′=F01/v+F02/v+F03/v+F04/v+F05/v (116)
C07′=F02/v+F03/v+F04/v+F05/v (117)
C08′=F03/v+F04/v+F05/v (118)
C09′=F04/v+F05/v (119)
C10′=F05/v (120)
There are nine variables here, namely, the foreground components F01/v through F05/v and the background components B02/v through B05/v, in the nine equations (112) through (120). Accordingly, the foreground components F01/v through F05/v and the background components B02/v through B05/v can be obtained by solving equations (112) through (120).
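For illustration, the nine equations can be solved numerically as one linear system. The following is a small sketch assuming v = 5; the observed values C02′ through C10′ are hypothetical and chosen only to be consistent with the model.

```python
import numpy as np

v = 5.0
# Coefficient matrix of equations (112)-(120); the unknowns are
# [F01/v, F02/v, F03/v, F04/v, F05/v, B02/v, B03/v, B04/v, B05/v].
A = np.array([
    [1, 0, 0, 0, 0, 4, 0, 0, 0],  # (112) C02' = 4*B02/v + F01/v
    [1, 1, 0, 0, 0, 0, 3, 0, 0],  # (113)
    [1, 1, 1, 0, 0, 0, 0, 2, 0],  # (114)
    [1, 1, 1, 1, 0, 0, 0, 0, 1],  # (115)
    [1, 1, 1, 1, 1, 0, 0, 0, 0],  # (116)
    [0, 1, 1, 1, 1, 0, 0, 0, 0],  # (117)
    [0, 0, 1, 1, 1, 0, 0, 0, 0],  # (118)
    [0, 0, 0, 1, 1, 0, 0, 0, 0],  # (119)
    [0, 0, 0, 0, 1, 0, 0, 0, 0],  # (120)
], dtype=float)

# Hypothetical observed pixel values C02' .. C10'.
C = np.array([26, 37, 48, 59, 70, 60, 48, 34, 18], dtype=float)

components = np.linalg.solve(A, C)  # F01/v..F05/v, B02/v..B05/v
pixel_values = v * components       # F01..F05, B02..B05 (compare Figure 100)
print(pixel_values)                 # [50. 60. 70. 80. 90. 20. 25. 30. 35.]
```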
Similarly, equations (121) through (130) can be generated for the foreground components F15/v through F20/v and the background components B22/v through B25/v.
C16′=F15/v (121)
C17′=F15/v+F16/v (122)
C18′=F15/v+F16/v+F17/v (123)
C19′=F15/v+F16/v+F17/v+F18/v (124)
C20′=F15/v+F16/v+F17/v+F18/v+F19/v (125)
C21′=F16/v+F17/v+F18/v+F19/v+F20/v (126)
C22′=F17/v+F18/v+F19/v+F20/v+B22/v (127)
C23′=F18/v+F19/v+F20/v+2×B23/v (128)
C24′=F19/v+F20/v+3×B24/v (129)
C25′=F20/v+4×B25/v (130)
There are ten variables here, namely, the foreground components F15/v through F20/v and the background components B22/v through B25/v, in the ten equations (121) through (130). Accordingly, the foreground components F15/v through F20/v and the background components B22/v through B25/v can be obtained by solving equations (121) through (130).
Referring back to Figure 95, the model forming portion 1521 determines, based on the motion vector and its positional information supplied from the motion detector 102 and the processing unit supplied from the processing-unit determining unit 1502, the number of divisions of the pixel values in the time direction, the number of foreground components of each pixel, and the number of background components of each pixel. The model forming portion 1521 then forms a model for generating the equations that calculate the above-described foreground components and background components, and supplies the formed model to the equation generator 1522.
The equation generator 1522 generates equations based on the model supplied from the model forming portion 1521. The equation generator 1522 sets the pixel values of the corresponding foreground or background in the generated equations, and supplies the equations in which the pixel values are set to the calculator 1523.
The calculator 1523 solves the equations supplied from the equation generator 1522 so as to calculate the foreground components and the background components.
For example, when the equations corresponding to equations (112) through (120) are supplied, the calculator 1523 computes the inverse of the matrix on the left side of the equation expressed by equation (131), and calculates the foreground components F01/v through F05/v and the background components B02/v through B05/v.
$$
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 3 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 & 0 & 2 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
F01/v \\ F02/v \\ F03/v \\ F04/v \\ F05/v \\ B02/v \\ B03/v \\ B04/v \\ B05/v
\end{bmatrix}
=
\begin{bmatrix}
C02' \\ C03' \\ C04' \\ C05' \\ C06' \\ C07' \\ C08' \\ C09' \\ C10'
\end{bmatrix}
\tag{131}
$$
Similarly, when the equations corresponding to equations (121) through (130) are supplied, the calculator 1523 computes the inverse of the matrix on the left side of the equation expressed by equation (132), and calculates the foreground components F15/v through F20/v and the background components B22/v through B25/v.
$$
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 4
\end{bmatrix}
\begin{bmatrix}
F15/v \\ F16/v \\ F17/v \\ F18/v \\ F19/v \\ F20/v \\ B22/v \\ B23/v \\ B24/v \\ B25/v
\end{bmatrix}
=
\begin{bmatrix}
C16' \\ C17' \\ C18' \\ C19' \\ C20' \\ C21' \\ C22' \\ C23' \\ C24' \\ C25'
\end{bmatrix}
\tag{132}
$$
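A corresponding sketch for equation (132) follows, computing the inverse of the left-side matrix explicitly, as the text describes; the observed values C16′ through C25′ are again hypothetical.

```python
import numpy as np

# Left-side matrix of equation (132); the unknowns are
# [F15/v..F20/v, B22/v, B23/v, B24/v, B25/v].
A = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # (121)
    [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],  # (122)
    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],  # (123)
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],  # (124)
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],  # (125)
    [0, 1, 1, 1, 1, 1, 0, 0, 0, 0],  # (126)
    [0, 0, 1, 1, 1, 1, 1, 0, 0, 0],  # (127)
    [0, 0, 0, 1, 1, 1, 0, 2, 0, 0],  # (128)
    [0, 0, 0, 0, 1, 1, 0, 0, 3, 0],  # (129)
    [0, 0, 0, 0, 0, 1, 0, 0, 0, 4],  # (130)
], dtype=float)

# Hypothetical observed pixel values C16' .. C25'.
C = np.array([12, 26, 42, 60, 80, 90, 82, 74, 66, 58], dtype=float)

components = np.linalg.inv(A) @ C  # F15/v..F20/v, B22/v..B25/v
print(components)                  # [12. 14. 16. 18. 20. 22.  6.  7.  8.  9.]
```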
The calculator 1523 generates the foreground component image from which motion blur has been eliminated and the separated background component image based on the calculated foreground components and background components, and outputs the foreground component image without motion blur and the separated background component image.
For example, when the equations are solved and the foreground components F01/v through F05/v and F15/v through F20/v and the background components B02/v through B05/v and B22/v through B25/v are determined, the calculator 1523 multiplies the foreground components F01/v through F05/v, the background components B02/v and B03/v, the foreground components F15/v through F20/v, and the background components B24/v and B25/v by the amount of movement v, as shown in Figure 100, thereby calculating the pixel values F01 through F05, the pixel values B02 and B03, the pixel values F15 through F20, and the pixel values B24 and B25, respectively.
The calculator 1523 generates, for example, a foreground component image without motion blur consisting of the pixel values F01 through F05 and the pixel values F15 through F20, and a separated background component image consisting of the pixel values B02, B03, B24, and B25.
An example of the actual processing results produced by the separating unit 1503 is described below with reference to Figures 101 and 102.
Figure 101 illustrates an example of an input image in which a foreground object and a background object are mixed. The image at the upper right of Figure 101 corresponds to the background object, and the image at the lower left of Figure 101 corresponds to the foreground object. The foreground object moves from left to right. The band-like portion between the two is the mixed area.
Figure 102 shows the result obtained by performing the above-described processing on the pixels on the center line of Figure 101. The fine dotted lines in Figure 102 indicate the pixel values of the input image.
The thick dashed lines in Figure 102 indicate the pixel values of the foreground object without motion blur, and the one-dot-chain lines indicate the pixel values of the background object without the foreground object.
The solid lines in Figure 102 represent the pixel values of the foreground component image from which motion blur has been eliminated and of the background component image separated by performing the above-described processing on the input image.
As can be seen from the above results, the signal processing apparatus configured as shown in Figure 94 can output pixel values close to those of the foreground object without motion blur, and pixel values of the background object that do not contain the foreground object.
The processing for eliminating motion blur performed by the signal processing apparatus configured as shown in Figure 94 is described below with reference to Figure 103. In step S1001, the area specifying unit 103 performs area specifying processing by generating, based on the input image, area information indicating to which of the foreground area, the background area, the covered background area, or the uncovered background area each pixel of the input image belongs. The area specifying unit 103 supplies the generated area information to the flat extraction unit 1501.
In step S1002, the flat extraction unit 1501 extracts, based on the motion vector and its positional information and the area information, a flat portion from among the pixels arranged consecutively in the direction of motion from the pixel belonging to the uncovered background area to the pixel belonging to the covered background area; in this flat portion, the variation of the pixel values of the pixels belonging to the foreground area is less than the threshold value Thf1. The flat extraction unit 1501 generates foreground flat-portion positional information indicating the position of the extracted flat portion, and supplies the generated foreground flat-portion positional information to the processing-unit determining unit 1502.
In step S1003, the processing-unit determining unit 1502 determines, based on the motion vector and its positional information and the area information, a processing unit indicating at least one pixel contained in the object corresponding to the foreground, and supplies the processing unit to the separating unit 1503.
In step S1004, the separating unit 1503 simultaneously performs, based on the processing unit supplied from the processing-unit determining unit 1502 and the motion vector and its positional information supplied from the motion detector 102, the processing for separating the foreground and the background and the processing for eliminating motion blur on the pixels of the input image designated by the processing unit, thereby calculating the foreground components and the background components corresponding to the pixels designated by the processing unit. The details of the processing for simultaneously separating the foreground and the background and eliminating motion blur are described below with reference to the flowchart of Figure 104.
In step S1005, the separating unit 1503 calculates the foreground components of the flat portion.
In step S1006, the separating unit 1503 calculates the pixel values of the foreground component image without motion blur and the pixel values of the background component image based on the foreground components and the background components calculated in the processing of step S1004 and the foreground components of the flat portion calculated in the processing of step S1005. The separating unit 1503 supplies the foreground component image without motion blur and the background component image to the synthesizer 1504.
In step S1007, the signal processing apparatus determines whether the processing has been finished for the whole screen. If the processing has not been finished for the whole screen, the process returns to step S1004, and the processing for separating the foreground and the background and eliminating motion blur is repeated.
If it is determined in step S1007 that the processing has been finished for the whole screen, the process proceeds to step S1008, in which the synthesizer 1504 combines the foreground component image without motion blur and the background component image. The processing is then completed.
As described above, the signal processing apparatus separates the foreground and the background so as to eliminate the motion blur contained in the foreground.
The processing for simultaneously separating the foreground and the background and eliminating motion blur performed by the separating unit 1503 is described below with reference to Figure 104.
In step S1021, the model forming portion 1521 forms a model based on the processing unit supplied from the processing-unit determining unit 1502 and the motion vector and its positional information supplied from the motion detector 102. The model forming portion 1521 supplies the formed model to the equation generator 1522.
In step S1022, the equation generator 1522 generates, based on the model supplied from the model forming portion 1521, simultaneous equations corresponding to the relationships among the pixel values, the foreground components, and the background components.
In step S1023, the equation generator 1522 sets the pixel values of the input image in the generated simultaneous equations.
In step S1024, the equation generator 1522 determines whether the pixel values of all the pixels have been set in the simultaneous equations. If it is determined that not all the pixel values have been set, the process returns to step S1023, and the processing for setting the pixel values is repeated.
If it is determined in step S1024 that all the pixel values have been set, the equation generator 1522 supplies the simultaneous equations in which the pixel values are set to the calculator 1523, and the calculator 1523 solves the simultaneous equations in which the pixel values are set so as to calculate the foreground components and the background components. The processing is then completed.
As described above, the separating unit 1503 can generate the foreground component image without motion blur and the separated background component image based on the calculated foreground components and background components.
Figure 105 is a block diagram illustrating another configuration of the signal processing apparatus. The components similar to those in Figure 94 are designated by like reference numerals, and an explanation thereof is thus omitted.
The processing-unit determining/classifying unit 1601 generates processing units based on the motion vector and its positional information supplied from the motion detector 102, the area information supplied from the area specifying unit 103, and the foreground flat-portion positional information supplied from the flat extraction unit 1501. The processing-unit determining/classifying unit 1601 also classifies the pixels of the input image, and supplies each classified pixel to one of the separating unit 1503, the motion-blur eliminating unit 1602, the foreground component image reproduction unit 1603, and the background component image reproduction unit 1604.
From among the pixels arranged consecutively in the direction of motion from the pixel belonging to the uncovered background area to the pixel belonging to the covered background area, the processing-unit determining/classifying unit 1601 eliminates the foreground components corresponding to the flat portion of the pixels belonging to the foreground area. The processing-unit determining/classifying unit 1601 supplies, to the separating unit 1503, the pixels belonging to the mixed area (from which the foreground components corresponding to the flat portion of the foreground area have been eliminated), the pixels belonging to the foreground area, and the corresponding processing units.
The processing-unit determining/classifying unit 1601 supplies the flat-portion images of the foreground area to the foreground component image reproduction unit 1603.
The processing-unit determining/classifying unit 1601 supplies, to the motion-blur eliminating unit 1602, the pixels belonging to the foreground area that are sandwiched between flat portions (from which the foreground components corresponding to the flat portions have been eliminated) and the corresponding processing units.
In a manner similar to the processing discussed with reference to the flowchart of Figure 104, the separating unit 1503 generates a foreground component image without motion blur and a separated background component image, corresponding to the pixels belonging to the foreground area and the pixels belonging to the mixed area. The separating unit 1503 supplies the foreground component image without motion blur to the foreground component image reproduction unit 1603, and supplies the separated background component image to the background component image reproduction unit 1604.
The motion-blur eliminating unit 1602 calculates, based on the processing units supplied from the processing-unit determining/classifying unit 1601, the foreground components corresponding to the pixels belonging to the foreground area that are sandwiched between flat portions, so as to generate a foreground component image without motion blur corresponding to the calculated foreground components. The motion-blur eliminating unit 1602 supplies the generated foreground component image to the foreground component image reproduction unit 1603.
Figure 106 is a block diagram illustrating the configuration of the motion-blur eliminating unit 1602.
The motion vector and its positional information supplied from the motion detector 102 and the processing units supplied from the processing-unit determining/classifying unit 1601 are supplied to the model forming portion 1621.
The model forming portion 1621 generates a model based on the motion vector and its positional information supplied from the motion detector 102 and the processing units supplied from the processing-unit determining/classifying unit 1601, and supplies the generated model to the equation generator 1622.
The model supplied to the equation generator 1622 is described below with reference to Figures 107 and 108.
Figure 107 illustrates a model obtained by expanding, in the time direction, the pixel values of the pixels belonging to the foreground area.
In a manner similar to the processing discussed with reference to Figure 98, the processing-unit determining/classifying unit 1601 eliminates the foreground components corresponding to the flat portions from the pixels belonging to the foreground area.
For example, in the example shown in Figure 107, when equation (133) holds, the foreground components F106/v through F114/v are equal, according to the relationships in equations (134) through (138). Accordingly, as shown in Figure 108, the foreground components F106/v through F114/v are eliminated from the pixels belonging to the foreground area.
C110=C111=C112=C113=C114 (133)
C110=F106/v+F107/v+F108/v+F109/v+F110/v (134)
C111=F107/v+F108/v+F109/v+F110/v+F111/v (135)
C112=F108/v+F109/v+F110/v+F111/v+F112/v (136)
C113=F109/v+F110/v+F111/v+F112/v+F113/v (137)
C114=F110/v+F111/v+F112/v+F113/v+F114/v (138)
Similarly, the foreground components F96/v through F100/v and the foreground components F120/v through F124/v, which correspond to other flat portions, are eliminated from the pixels belonging to the foreground area.
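Because all the foreground components inside a flat portion are equal, each of them equals the flat pixel value divided by v. The following is a minimal sketch of this elimination step for the pixels adjoining one flat portion; the variable names and numeric values are hypothetical.

```python
v = 5
flat_value = 150.0          # C110 = ... = C114 in the flat portion (hypothetical)
flat_comp = flat_value / v  # F106/v = ... = F114/v = 30.0

# Under the Figure 107 model, pixels C105 .. C109 contain 0, 1, 2, 3, and 4
# flat-portion components, respectively; removing them yields C105', C106', ...
C = [70.0, 90.0, 108.0, 124.0, 138.0]  # hypothetical C105 .. C109
C_prime = [c - k * flat_comp for k, c in enumerate(C)]
print(C_prime)  # [70.0, 60.0, 48.0, 34.0, 18.0]
```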
In this way, the processing-unit determining/classifying unit 1601 supplies, to the motion-blur eliminating unit 1602, the pixels belonging to the foreground area that are sandwiched between flat portions (from which the foreground components corresponding to the flat portions have been eliminated) and the corresponding processing units.
Based on the processing unit, the model forming portion 1621 of the motion-blur eliminating unit 1602 forms a model for generating equations corresponding to the relationships between the pixels belonging to the foreground area that are sandwiched between flat portions (from which the foreground components corresponding to the flat portions have been eliminated) and the remaining foreground components.
The model forming portion 1621 supplies the generated model to the equation generator 1622.
The equation generator 1622 generates, based on the model supplied from the model forming portion 1621, the equations corresponding to the relationships between the pixels belonging to the foreground area that are sandwiched between flat portions (from which the foreground components corresponding to the flat portions have been eliminated) and the remaining foreground components.
For example, the relationships between the foreground components F101/v through F105/v and the pixel values are expressed by equations (139) through (147).
C101′=F101/v (139)
C102′=F101/v+F102/v (140)
C103′=F101/v+F102/v+F103/v (141)
C104′=F101/v+F102/v+F103/v+F104/v (142)
C105′=F101/v+F102/v+F103/v+F104/v+F105/v (143)
C106′=F102/v+F103/v+F104/v+F105/v (144)
C107″=F103/v+F104/v+F105/v (145)
C108″=F104/v+F105/v (146)
C109″=F105/v (147)
Similarly, the relationships between the foreground components F115/v through F119/v and the pixel values are expressed by equations (148) through (156).
C115′=F115/v (148)
C116′=F115/v+F116/v (149)
C117′=F115/v+F116/v+F117/v (150)
C118′=F115/v+F116/v+F117/v+F118/v (151)
C119′=F115/v+F116/v+F117/v+F118/v+F119/v (152)
C120′=F116/v+F117/v+F118/v+F119/v (153)
C121″=F117/v+F118/v+F119/v (154)
C122″=F118/v+F119/v (155)
C123″=F119/v (156)
The equation generator 1622 applies the method of least squares to equations (139) through (147) and equations (148) through (156) so as to obtain normal equations, for example, equations (157) and (158).
$$
\begin{bmatrix}
5 & 4 & 3 & 2 & 1 \\
4 & 5 & 4 & 3 & 2 \\
3 & 4 & 5 & 4 & 3 \\
2 & 3 & 4 & 5 & 4 \\
1 & 2 & 3 & 4 & 5
\end{bmatrix}
\begin{bmatrix}
F101 \\ F102 \\ F103 \\ F104 \\ F105
\end{bmatrix}
= v \cdot
\begin{bmatrix}
C101' + C102' + C103' + C104' + C105' \\
C102' + C103' + C104' + C105' + C106' \\
C103' + C104' + C105' + C106' + C107' \\
C104' + C105' + C106' + C107' + C108' \\
C105' + C106' + C107' + C108' + C109'
\end{bmatrix}
\tag{157}
$$
$$
\begin{bmatrix}
5 & 4 & 3 & 2 & 1 \\
4 & 5 & 4 & 3 & 2 \\
3 & 4 & 5 & 4 & 3 \\
2 & 3 & 4 & 5 & 4 \\
1 & 2 & 3 & 4 & 5
\end{bmatrix}
\begin{bmatrix}
F115 \\ F116 \\ F117 \\ F118 \\ F119
\end{bmatrix}
= v \cdot
\begin{bmatrix}
C115' + C116' + C117' + C118' + C119' \\
C116' + C117' + C118' + C119' + C120' \\
C117' + C118' + C119' + C120' + C121' \\
C118' + C119' + C120' + C121' + C122' \\
C119' + C120' + C121' + C122' + C123'
\end{bmatrix}
\tag{158}
$$
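As an illustration of how the normal equation (157) arises, the sketch below forms A^T A and A^T C for the staircase system of equations (139) through (147) and solves it; the observed values are hypothetical and continue the example above.

```python
import numpy as np

v = 5.0
# Design matrix of equations (139)-(147); the unknowns are F101/v..F105/v.
A = np.array([
    [1, 0, 0, 0, 0],  # (139)
    [1, 1, 0, 0, 0],  # (140)
    [1, 1, 1, 0, 0],  # (141)
    [1, 1, 1, 1, 0],  # (142)
    [1, 1, 1, 1, 1],  # (143)
    [0, 1, 1, 1, 1],  # (144)
    [0, 0, 1, 1, 1],  # (145)
    [0, 0, 0, 1, 1],  # (146)
    [0, 0, 0, 0, 1],  # (147)
], dtype=float)

# Hypothetical observed values C101' .. C109''.
C = np.array([10, 22, 36, 52, 70, 60, 48, 34, 18], dtype=float)

# Normal equations of the method of least squares: (A^T A) x = A^T C.
AtA = A.T @ A  # the 5x5 matrix [[5,4,3,2,1],[4,5,4,3,2],...] of equation (157)
AtC = A.T @ C
components = np.linalg.solve(AtA, AtC)  # F101/v..F105/v
print(v * components)                   # pixel values F101..F105
```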
The equation generator 1622 generates the equations based on the model supplied from the model forming portion 1621, and supplies the generated equations, together with the foreground component image, to the adder 1623.
The adder 1623 sets the pixel values in the equations supplied from the equation generator 1622, that is, in the normal equations obtained by the method of least squares. The adder 1623 supplies the resulting normal equations to the calculator 1624.
The calculator 1624 solves the normal equations in which the pixel values are set by applying a solution such as Cholesky decomposition, thereby calculating the foreground components contained in the foreground component image, excluding the foreground components contained in the pixels belonging to the flat portions. The calculator 1624 generates a foreground component image without motion blur based on the calculated foreground components, and outputs the foreground component image without motion blur.
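Since A^T A is symmetric and positive definite, the normal equations can be solved by Cholesky decomposition, as mentioned above. A minimal sketch using SciPy follows; the right-side values are hypothetical and consistent with the example above.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

v = 5.0
AtA = np.array([[5, 4, 3, 2, 1],
                [4, 5, 4, 3, 2],
                [3, 4, 5, 4, 3],
                [2, 3, 4, 5, 4],
                [1, 2, 3, 4, 5]], dtype=float)  # matrix of equation (157)

# Right side of (157): v times the sums of the observed C' values (hypothetical).
rhs = v * np.array([190, 240, 266, 264, 230], dtype=float)

c, low = cho_factor(AtA)      # Cholesky factorization of A^T A
F = cho_solve((c, low), rhs)  # pixel values F101..F105 without motion blur
print(F)                      # [50. 60. 70. 80. 90.]
```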
For example, when the foreground components F101/v through F105/v and the foreground components F115/v through F119/v are determined, the calculator 1624 multiplies the foreground components F101/v through F105/v and the foreground components F115/v through F119/v by the amount of movement v, as shown in Figure 109, thereby calculating the pixel values F101 through F105 and the pixel values F115 through F119, respectively.
The calculator 1624 generates a foreground component image without motion blur consisting of, for example, the pixel values F101 through F105 and the pixel values F115 through F119.
The processing for eliminating motion blur performed by the signal processing apparatus configured as shown in Figure 105 is described below with reference to Figure 110.
The processing of steps S1101 through S1103 is similar to that of steps S1001 through S1003 in Figure 103, and an explanation thereof is thus omitted.
In step S1104, the processing-unit determining/classifying unit 1601 classifies the pixels of the input image based on the motion vector and its positional information supplied from the motion detector 102, the area information supplied from the area specifying unit 103, and the foreground flat-portion positional information supplied from the flat extraction unit 1501, and supplies each classified pixel to one of the separating unit 1503, the motion-blur eliminating unit 1602, the foreground component image reproduction unit 1603, and the background component image reproduction unit 1604.
In step S1105, the separating unit 1503 simultaneously performs the processing for separating the foreground and the background and the processing for eliminating motion blur on the pixels belonging to the mixed area, from which the foreground components corresponding to the flat portions of the foreground area have been eliminated, and on the pixels belonging to the foreground area, which are selected from among the pixels arranged consecutively in the direction of motion from the pixel belonging to the uncovered background area to the pixel belonging to the covered background area. The details of the processing in step S1105 are similar to those of step S1004 in Figure 103, and an explanation thereof is thus omitted.
In step S1106, the separating unit 1503 calculates the pixel values of the foreground component image without motion blur and the pixel values of the background component image based on the calculated foreground components and background components. The separating unit 1503 then supplies the foreground component image without motion blur to the foreground component image reproduction unit 1603, and supplies the background component image to the background component image reproduction unit 1604.
In step S1107, the signal processing apparatus determines whether the processing has been finished for the mixed area and the foreground area. If it is determined that the processing for the mixed area and the foreground area has not been finished, the process returns to step S1105, and the processing for separating the foreground and the background and eliminating motion blur is repeated.
If it is determined in step S1107 that the processing for the mixed area and the foreground area has been finished, the process proceeds to step S1108. In step S1108, the motion-blur eliminating unit 1602 performs the processing for eliminating motion blur on the pixels belonging to the foreground area that are sandwiched between flat portions, which are selected from among the pixels arranged consecutively in the direction of motion and which do not contain the foreground components corresponding to the flat portions. The details of the motion-blur eliminating processing are described below with reference to the flowchart of Figure 111.
In step S1109, the motion-blur eliminating unit 1602 calculates the foreground component image without motion blur based on the calculated foreground components. The motion-blur eliminating unit 1602 supplies the foreground component image without motion blur to the foreground component image reproduction unit 1603.
In step S1110, the signal processing apparatus determines whether the processing has been finished for the foreground area. If it is determined that the processing for the foreground area has not been finished, the process returns to step S1108, and the motion-blur eliminating processing is repeated.
If it is determined in step S1110 that the processing for the foreground area has been finished, the process proceeds to step S1111.
It should be noted that the processing of steps S1108 through S1110 and the processing of steps S1105 through S1107 may be performed in parallel.
In step S1111, the foreground component image reproduction unit 1603 reproduces the whole foreground component image without motion blur based on the flat-portion images supplied from the processing-unit determining/classifying unit 1601, the foreground component image without motion blur supplied from the separating unit 1503, and the foreground component image without motion blur supplied from the motion-blur eliminating unit 1602. The background component image reproduction unit 1604 reproduces the whole background component image based on the background area image supplied from the processing-unit determining/classifying unit 1601 and the separated background component image supplied from the separating unit 1503. The processing is then completed.
As described above, the signal processing apparatus configured as shown in Figure 105 can eliminate motion blur from the foreground object.
The processing for eliminating motion blur of the foreground component image corresponding to the processing unit, performed by the motion-blur eliminating unit 1602 in step S1108 of Figure 110, is described below with reference to the flowchart of Figure 111.
In step S1121, the model forming portion 1621 of the motion-blur eliminating unit 1602 forms a model corresponding to the amount of movement v and the processing unit. In step S1122, the equation generator 1622 generates equations based on the formed model.
In step S1123, the adder 1623 sets, in the generated equations, the pixel values of the foreground component image from which the foreground components of the flat portions have been eliminated. In step S1124, the adder 1623 determines whether the pixel values of all the pixels corresponding to the processing unit have been set. If it is determined that the pixel values have not been set in all the equations, the process returns to step S1123, and the processing for setting the pixel values in the equations is repeated.
If it is determined in step S1124 that all the pixel values have been set in all the equations, the process proceeds to step S1125. In step S1125, the calculator 1624 calculates the pixel values of the foreground without motion blur based on the equations in which the pixel values are set, supplied from the adder 1623.
As described above, the motion-blur eliminating unit 1602 can eliminate motion blur from the foreground components containing motion blur based on the amount of movement v and the processing unit.
The present invention has been discussed above by setting the mixture ratio α to the ratio of the background components contained in the pixel values. However, the mixture ratio α may be set to the ratio of the foreground components contained in the pixel values.
The present invention has been discussed above by setting the direction of motion of the foreground object from left to right. However, the direction of motion is not restricted to that direction.
A non-mixed area or a mixed area is specified, where the non-mixed area consists of a foreground area formed of the foreground object components of a foreground object forming the image data and a background area formed of the background object components of a background object forming the image data, and the mixed area is an area in which the foreground object components and the background object components are mixed. Based on the result obtained by specifying the area, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously. In this case, motion blur contained in a blurred image can be eliminated.
An equal portion consisting of adjacent pixel data of the foreground area whose values are substantially equal to each other is detected. Based on the detected equal portion and the result obtained by specifying the area, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components can at least be performed simultaneously.
A processing unit consisting of a plurality of foreground object components and background object components is determined based on the position of the equal portion. The processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously for each processing unit.
A processing unit corresponding to the pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion can be determined.
The equal portion can be detected by comparing the difference between items of the pixel data with a threshold value.
The equal portion, which consists of adjacent pixel data whose number of pixels is equal to or greater than the number of pixels corresponding to the amount of movement of the foreground object, can be detected.
By using calculations corresponding to the motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously.
A model corresponding to the processing unit and the motion vector can be obtained. Based on the obtained model, an equation corresponding to the relationships between the pixel data of the processing unit and the foreground object components and the background object components contained in the processing unit is generated. The foreground object components and the background object components contained in the processing unit can be calculated based on the generated equation.
Image data having an object area consisting of object components forming an object is input. Motion blur occurring in the object area is eliminated by assuming that the values of part of the pixel data in the object area of the input image data are substantially equal. In this case, motion blur contained in a blurred image can be eliminated.
Image data is input which has a foreground area consisting of foreground object components forming a foreground object, a background area consisting of background object components forming a background object, and a mixed area in which the foreground object components and the background object components are mixed. Motion blur occurring in the foreground area can be eliminated by assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal.
An equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal is detected. Motion blur occurring in the foreground area can be eliminated based on the detected equal portion.
A processing unit consisting of a plurality of foreground object components is determined based on the position of the equal portion. Motion blur in the foreground area can be eliminated for each processing unit.
A processing unit corresponding to the pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion can be determined.
The foreground area, the background area, or the mixed area can be specified.
The equal portion can be detected by comparing the difference between items of the pixel data with a threshold value.
The equal portion, which consists of adjacent pixel data whose number of pixels is equal to or greater than the number of pixels corresponding to the amount of movement of the foreground object, can be detected.
Motion blur occurring in the foreground area can be eliminated by using calculations corresponding to the motion vector.
A model corresponding to the processing unit and the motion vector can be obtained. Based on the obtained model, an equation corresponding to the relationships between the pixel data of the processing unit and the foreground object components contained in the processing unit is generated. The foreground object components contained in the processing unit can be calculated based on the generated equation.
The processing for separating the pixel data of the mixed area into the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously based on the equal portion and on area information indicating the non-mixed area consisting of the foreground area and the background area or indicating the mixed area.
A processing unit consisting of a plurality of foreground object components and background object components is determined based on the position of the equal portion. The processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously for each processing unit.
A processing unit corresponding to the pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion can be determined.
The foreground area, the background area, or the mixed area can be specified.
The equal portion can be detected by comparing the difference between items of the pixel data with a threshold value.
The equal portion, which consists of adjacent pixel data whose number of pixels is equal to or greater than the number of pixels corresponding to the amount of movement of the foreground object, can be detected.
By using calculations corresponding to the motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously.
A model corresponding to the processing unit and the motion vector can be obtained. Based on the obtained model, an equation corresponding to the relationships between the pixel data of the processing unit and the foreground object components and the background object components contained in the processing unit is generated. The foreground object components and the background object components contained in the processing unit can be calculated based on the generated equation.
An image of an object captured by an image-capturing device including a predetermined number of pixels having a time integrating function is output as image data consisting of a predetermined number of items of pixel data. A non-mixed area consisting of a foreground area and a background area, or a mixed area in which the foreground object components and the background object components are mixed, can be specified, where the foreground area consists of the foreground object components of a foreground object forming the image data, and the background area consists of the background object components of a background object forming the image data. Based on the result obtained by specifying the area, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously. In this case, an image without motion blur can be captured.
An equal portion consisting of adjacent pixel data whose values are substantially equal to each other is detected. Based on the detected equal portion and the result obtained by specifying the area, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components can at least be performed simultaneously.
A processing unit consisting of a plurality of foreground object components and background object components is determined based on the position of the equal portion. The processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously for each processing unit.
A processing unit corresponding to the pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion can be determined.
The equal portion can be detected by comparing the difference between items of the pixel data with a threshold value.
The equal portion, which consists of adjacent pixel data whose number of pixels is equal to or greater than the number of pixels corresponding to the amount of movement of the foreground object, can be detected.
By using calculations corresponding to the motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously.
A model corresponding to the processing unit and the motion vector can be obtained. Based on the obtained model, an equation corresponding to the relationships between the pixel data of the processing unit and the foreground object components and the background object components contained in the processing unit is generated. The foreground object components and the background object components contained in the processing unit can be calculated based on the generated equation.
An image of an object captured by an image-capturing device including a predetermined number of pixels having a time integrating function is output as image data which consists of a predetermined number of items of pixel data and which has an object area consisting of object components forming an object. Motion blur occurring in the object area is eliminated by assuming that the values of part of the pixel data in the object area of the image data are substantially equal. In this case, an image without motion blur can be captured.
Image data is input which has a foreground area consisting of foreground object components forming a foreground object, a background area consisting of background object components forming a background object, and a mixed area in which the foreground object components and the background object components are mixed.
Motion blur occurring in the foreground area can be eliminated by assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal.
An equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal is detected. Motion blur occurring in the foreground area can be eliminated based on the detected equal portion.
A processing unit consisting of a plurality of foreground object components and background object components is determined based on the position of the detected equal portion. Motion blur in the foreground area can be eliminated for each processing unit.
A processing unit corresponding to the pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion can be determined.
The foreground area, the background area, or the mixed area can be specified.
The equal portion can be detected by comparing the difference between items of the pixel data with a threshold value.
The equal portion, which consists of adjacent pixel data whose number of pixels is equal to or greater than the number of pixels corresponding to the amount of movement of the foreground object, can be detected.
Motion blur occurring in the foreground area can be eliminated by using calculations corresponding to the motion vector.
A model corresponding to the processing unit and the motion vector can be obtained. Based on the obtained model, an equation corresponding to the relationships between the pixel data of the processing unit and the foreground object components contained in the processing unit is generated. The foreground object components contained in the processing unit can be calculated based on the generated equation.
The processing for separating the pixel data of the mixed area into the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously based on the equal portion and on area information indicating the non-mixed area consisting of the foreground area and the background area or indicating the mixed area.
A processing unit consisting of a plurality of foreground object components and background object components is determined based on the position of the equal portion. The processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously for each processing unit.
A processing unit corresponding to the pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion can be determined.
The foreground area, the background area, or the mixed area can be specified.
The equal portion can be detected by comparing the difference between items of the pixel data with a threshold value.
The equal portion, which consists of adjacent pixel data whose number of pixels is equal to or greater than the number of pixels corresponding to the amount of movement of the foreground object, can be detected.
By using calculations corresponding to the motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components can be performed simultaneously.
A model corresponding to the processing unit and the motion vector can be obtained. Based on the obtained model, an equation corresponding to the relationships between the pixel data of the processing unit and the foreground object components and the background object components contained in the processing unit is generated. The foreground object components and the background object components contained in the processing unit can be calculated based on the generated equation.
In the above description, a real-space image having three-dimensional space and time-axis information is projected onto time space having two-dimensional space and time-axis information by using a video camera. However, the present invention is not restricted to this example, and may be applied to the following case: when a greater amount of first information in a first dimension is projected onto a smaller amount of second information in a second dimension, distortion generated by the projection can be corrected, significant information can be extracted, or a more natural image can be synthesized.
The sensor is not restricted to a CCD, and may be another type of sensor, such as a solid-state image-capturing device, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor, a BBD (Bucket Brigade Device), a CID (Charge Injection Device), or a CPD (Charge Priming Device). The sensor is not restricted to a sensor in which detection devices are arranged in a matrix, and may be a sensor in which detection devices are arranged in a single line.
The recording medium for recording the program for executing the image processing of the present invention may be formed of a package medium in which the program is recorded and which is distributed separately from a computer so as to provide the program to a user, as shown in Figure 1, such as a magnetic disk 51 (including a floppy disk), an optical disc 52 (including a CD-ROM (Compact Disc Read-Only Memory) and a DVD (Digital Versatile Disc)), a magneto-optical disk 53 (including an MD (Mini-Disc)), or a semiconductor memory 54. The recording medium may also be formed of the ROM 22 or a hard disk contained in the storage unit 28 in which the program is recorded, and such a recording medium is provided to the user while being prestored in the computer.
The steps forming the program recorded in the recording medium may be executed chronologically according to the order described in the specification. However, they do not have to be executed chronologically, and may be executed concurrently or individually.
Industrial applicability
According to the first invention, motion blur contained in a blurred image can be eliminated.
According to the second invention, motion blur contained in a blurred image can be eliminated.
According to the third invention, an image from which motion blur has been eliminated can be captured.
According to the fourth invention, an image from which motion blur has been eliminated can be captured.

Claims (130)

1. An image processing apparatus for performing processing on image data which consists of a predetermined number of items of pixel data obtained by an image-capturing device including a predetermined number of pixels having a time integrating function, said image processing apparatus comprising:
area specifying means for specifying a non-mixed area consisting of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components of a foreground object forming the image data, and the background area consisting of the background object components of a background object forming the image data; and
processing execution means for simultaneously performing, based on a result obtained by specifying the area by said area specifying means, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
2. An image processing apparatus according to claim 1, further comprising equal-portion detecting means for detecting an equal portion consisting of adjacent pixel data of the foreground area whose values are substantially equal to each other, wherein said processing execution means at least simultaneously performs, based on the detected equal portion and the result obtained by specifying the area by said area specifying means, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components.
3. An image processing apparatus according to claim 2, further comprising processing-unit determining means for determining, based on the position of the equal portion, a processing unit consisting of a plurality of foreground object components and background object components, wherein said processing execution means simultaneously performs, for each processing unit, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components.
4. An image processing apparatus according to claim 3, wherein said processing-unit determining means determines the processing unit corresponding to the pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
5. An image processing apparatus according to claim 2, wherein said equal-portion detecting means detects the equal portion by comparing the difference between items of the pixel data with a threshold value.
6. An image processing apparatus according to claim 2, wherein said equal-portion detecting means detects the equal portion consisting of adjacent pixel data whose number of pixels is equal to or greater than the number of pixels corresponding to the amount of movement of the foreground object.
7. An image processing apparatus according to claim 1, wherein said processing execution means simultaneously performs, by applying calculations corresponding to a motion vector, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components.
8. An image processing apparatus according to claim 1, wherein said processing execution means comprises:
model obtaining means for obtaining a model corresponding to a processing unit and a motion vector;
equation generating means for generating, based on the obtained model, an equation corresponding to the relationships between the pixel data of the processing unit and the foreground object components and the background object components contained in the processing unit; and
calculation means for calculating the foreground object components and the background object components contained in the processing unit based on the generated equation.
9. An image processing method for performing processing on image data which consists of a predetermined number of items of pixel data obtained by an image-capturing device including a predetermined number of pixels having a time integrating function, said image processing method comprising:
an area specifying step of specifying a non-mixed area consisting of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components of a foreground object forming the image data, and the background area consisting of the background object components of a background object forming the image data; and
a processing execution step of simultaneously performing, based on a result obtained by specifying the area in said area specifying step, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
10. An image processing method according to claim 9, further comprising an equal-portion detecting step of detecting an equal portion consisting of adjacent pixel data of the foreground area whose values are substantially equal to each other, wherein, in said processing execution step, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components are at least performed simultaneously based on the detected equal portion and the result obtained by specifying the area in said area specifying step.
11. An image processing method according to claim 10, further comprising a processing-unit determining step of determining, based on the position of the equal portion, a processing unit consisting of a plurality of foreground object components and background object components, wherein, in said processing execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously for each processing unit.
12. An image processing method according to claim 11, wherein, in said processing-unit determining step, the processing unit corresponding to the pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion is determined.
13. An image processing method according to claim 10, wherein, in said equal-portion detecting step, the equal portion is detected by comparing the difference between items of the pixel data with a threshold value.
14. An image processing method according to claim 10, wherein, in said equal-portion detecting step, the equal portion consisting of adjacent pixel data whose number of pixels is equal to or greater than the number of pixels corresponding to the amount of movement of the foreground object is detected.
15. An image processing method according to claim 9, wherein, in said processing execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously by applying calculations corresponding to a motion vector.
16. An image processing method according to claim 9, wherein said processing execution step comprises:
a model obtaining step of obtaining a model corresponding to a processing unit and a motion vector;
an equation generating step of generating, based on the obtained model, an equation corresponding to the relationships between the pixel data of the processing unit and the foreground object components and the background object components contained in the processing unit; and
a calculation step of calculating the foreground object components and the background object components contained in the processing unit based on the generated equation.
17, a kind of storage medium, storage is used for view data is carried out the computer-readable program of handling in this storage medium, described view data is made of the pixel data of the predetermined quantity that image capture apparatus is caught, described image capture apparatus comprises the pixel of predetermined quantity and has the time integral function that described computer-readable program comprises:
The zone given step, be used to specify the non-mixed region that constitutes by foreground area and background area, perhaps specify the Mixed Zone of having mixed foreground object component and background object component, here, described foreground area is made up of the foreground object component of the foreground object of composing images data, and described background area is made up of the background object component of the background object of composing images data; With
The processing execution step, be used for the result that basis is obtained by the processing appointed area by described regional given step, carry out the processing that from the pixel data of Mixed Zone, separates foreground object component and background object component simultaneously, and the processing of from the foreground object component that is separated, eliminating motion blur.
18. The storage medium according to claim 17, wherein the program further comprises an equal-portion detecting step of detecting an equal portion formed of adjacent pixel data of the foreground area whose values are substantially equal to each other, and wherein, in the processing-execution step, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components are performed at least simultaneously, based on the detected equal portion and on the result obtained by the area specifying processing of the area specifying step.
19. The storage medium according to claim 17, wherein the program further comprises a unit-of-processing determining step of determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components and background object components, and wherein, in the processing-execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously for each unit of processing.
20. The storage medium according to claim 19, wherein, in the unit-of-processing determining step, the unit of processing is determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
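Illustrative note (not part of the claims): a minimal reading of the unit-of-processing determination of claims 19 and 20 is sketched below. The area labels, helper name, and representation of the straight line as one pixel row in the motion direction are our own assumptions:

    def determine_units_of_processing(area_row, p_start, p_end):
        """area_row: labels of the pixels on one straight line in the
        motion direction (0 = background, 1 = foreground, 2 = mixed);
        (p_start, p_end): an equal portion detected on that line.
        Returns the index ranges left and right of the equal portion
        whose pixels belong to the foreground or mixed area; a range
        with start > end is empty."""
        left = p_start
        while left > 0 and area_row[left - 1] in (1, 2):
            left -= 1
        right = p_end
        while right < len(area_row) - 1 and area_row[right + 1] in (1, 2):
            right += 1
        # the equal portion itself is excluded, per claim 20
        return (left, p_start - 1), (p_end + 1, right)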
21. The storage medium according to claim 18, wherein, in the equal-portion detecting step, the equal portion is detected by comparing a difference of the pixel data with a threshold value.
22. The storage medium according to claim 18, wherein, in the equal-portion detecting step, the detected equal portion is formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
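Illustrative note (not part of the claims): the threshold comparison of claim 21 and the run-length condition of claim 22 admit a direct sketch; the threshold value and function name are our own assumptions:

    def detect_equal_portions(row, v, threshold=2.0):
        """row: foreground pixel values along the motion direction;
        v: number of pixels corresponding to the amount of movement.
        Returns (start, end) index pairs of detected equal portions."""
        portions, start = [], 0
        for i in range(1, len(row) + 1):
            # close the current run when the difference exceeds the threshold
            if i == len(row) or abs(float(row[i]) - float(row[i - 1])) > threshold:
                if i - start >= v:       # claim 22: at least v adjacent pixels
                    portions.append((start, i - 1))
                start = i
        return portions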
23. The storage medium according to claim 17, wherein, in the processing-execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously by performing a calculation corresponding to the motion vector.
24. The storage medium according to claim 17, wherein the processing-execution step comprises:
a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector;
an equation generating step of generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and
a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
25. A program allowing a computer to perform processing on image data which is made up of a predetermined number of pixel data captured by an image-capturing device having a predetermined number of pixels, each having a time integration function, the program comprising:
an area specifying step of specifying either a non-mixed area formed of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components forming a foreground object of the image data, and the background area consisting of the background object components forming a background object of the image data; and
a processing-execution step of simultaneously performing, based on a result obtained by the area specifying processing of the area specifying step, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
26. The program according to claim 25, further comprising an equal-portion detecting step of detecting an equal portion formed of adjacent pixel data of the foreground area whose values are substantially equal to each other, wherein, in the processing-execution step, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components are performed at least simultaneously, based on the detected equal portion and on the result obtained by the area specifying processing of the area specifying step.
27. The program according to claim 26, further comprising a unit-of-processing determining step of determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components and background object components, wherein, in the processing-execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously for each unit of processing.
28. The program according to claim 27, wherein, in the unit-of-processing determining step, the unit of processing is determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
29. The program according to claim 26, wherein, in the equal-portion detecting step, the equal portion is detected by comparing a difference of the pixel data with a threshold value.
30. The program according to claim 26, wherein, in the equal-portion detecting step, the detected equal portion is formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
31. The program according to claim 25, wherein, in the processing-execution step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously by performing a calculation corresponding to the motion vector.
32. The program according to claim 25, wherein the processing-execution step comprises:
a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector;
an equation generating step of generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and
a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
33. An image processing apparatus for performing processing on image data which is made up of a predetermined number of pixel data obtained by an image-capturing device having a predetermined number of pixels, each having a time integration function, the image processing apparatus comprising:
input means for inputting image data having an object area formed of object components forming an object; and
motion-blur eliminating means for eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the image data input by the input means are substantially equal.
34. The image processing apparatus according to claim 33, wherein:
the input means inputs image data having a foreground area formed of foreground object components forming a foreground object, a background area formed of background object components forming a background object, and a mixed area in which the foreground object components and the background object components are mixed; and
the motion-blur eliminating means eliminates motion blur occurring in the foreground area by assuming that the values of part of the pixel data in the foreground area of the image data input by the input means are substantially equal.
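Illustrative note (not part of the claims): the assumption in claims 33 and 34, that part of the pixel data in the (foreground) object area is substantially equal, is what makes the blur removable without iteration. Under the model x[i] = (F[i-v+1] + ... + F[i]) / v, a flat run of blurred pixels at least v pixels long pins the components beneath it to the observed value, and the remaining components follow by back-substitution. The sketch and its names are our own assumptions, and it presumes the input covers only foreground pixels (mixed-area pixels are handled separately, as in claim 43 below):

    import numpy as np

    def eliminate_blur(x, v, p_start, p_end):
        """x: blurred foreground pixels on one line; v: amount of
        movement; (p_start, p_end): an equal portion of length >= v.
        Returns the estimated unblurred components F on the same grid."""
        x = np.asarray(x, dtype=float)
        F = np.full(len(x), np.nan)
        F[p_start:p_end + 1] = x[p_start:p_end + 1]   # flat run: F equals x
        for i in range(p_end + 1, len(x)):            # march to the right
            F[i] = v * x[i] - F[i - v + 1:i].sum()
        for i in range(p_start - 1, -1, -1):          # march to the left,
            # using pixel i+v-1 so the equation's support stays on the grid
            F[i] = v * x[i + v - 1] - F[i + 1:i + v].sum()
        return F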
35. The image processing apparatus according to claim 34, further comprising equal-portion detecting means for detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal, wherein the motion-blur eliminating means eliminates the motion blur occurring in the foreground area based on the equal portion detected by the equal-portion detecting means.
36. The image processing apparatus according to claim 35, further comprising unit-of-processing determining means for determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components, wherein the motion-blur eliminating means eliminates the motion blur in the foreground area for each unit of processing.
37. The image processing apparatus according to claim 36, wherein the unit-of-processing determining means determines the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
38. The image processing apparatus according to claim 35, further comprising area specifying means for specifying the foreground area, the background area, or the mixed area.
39. The image processing apparatus according to claim 35, wherein the equal-portion detecting means detects the equal portion by comparing a difference of the pixel data with a threshold value.
40. The image processing apparatus according to claim 35, wherein the equal-portion detecting means detects an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
41. The image processing apparatus according to claim 35, wherein the motion-blur eliminating means eliminates the motion blur occurring in the foreground area by performing a calculation corresponding to the motion vector.
42. The image processing apparatus according to claim 35, wherein the motion-blur eliminating means comprises:
model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector;
equation generating means for generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and
calculating means for calculating the foreground object components contained in the unit of processing based on the generated equation.
43. The image processing apparatus according to claim 35, wherein the motion-blur eliminating means simultaneously performs, based on area information indicating the mixed area or the non-mixed area formed of the foreground area and the background area, and based on the equal portion, processing for separating the pixel data of the mixed area into the foreground object components and the background object components and processing for eliminating motion blur from the separated foreground object components.
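Illustrative note (not part of the claims): claim 43's simultaneous separation and blur elimination can be pictured as one least-squares solve covering the foreground-area and mixed-area pixels together. In the sketch below, background values are taken from an adjacent frame; the model indexing and all names are our own assumptions:

    import numpy as np

    def separate_and_deblur(x, b, v):
        """x: pixels of one unit of processing, including the covered and
        uncovered mixed areas at its ends; b: background values on the
        same grid (e.g. from an adjacent frame); v: amount of movement.
        Each pixel integrates the foreground components F[i-v+1..i] that
        exist; the remainder of the shutter time exposes the background."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        k = n - v + 1                       # unknown components F[0..k-1]
        A = np.zeros((n, k))
        rhs = x.copy()
        for i in range(n):
            cols = range(max(0, i - v + 1), min(k, i + 1))
            for j in cols:
                A[i, j] = 1.0 / v
            rhs[i] -= (1.0 - len(cols) / v) * b[i]   # subtract the background share
        # one solve yields separation and blur removal at the same time
        F, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return F

Interior pixels get a zero background share (len(cols) equals v there), so the same system degenerates to the pure deblurring case away from the mixed area.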
44. The image processing apparatus according to claim 43, further comprising unit-of-processing determining means for determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components and background object components, wherein the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously for each unit of processing.
45. The image processing apparatus according to claim 44, wherein the unit-of-processing determining means determines the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
46. The image processing apparatus according to claim 43, further comprising area specifying means for specifying the foreground area, the background area, or the mixed area.
47. The image processing apparatus according to claim 43, wherein the equal-portion detecting means detects the equal portion by comparing a difference of the pixel data with a threshold value.
48. The image processing apparatus according to claim 43, wherein the equal-portion detecting means detects an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
49. The image processing apparatus according to claim 43, wherein the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously by performing a calculation corresponding to the motion vector.
50. The image processing apparatus according to claim 43, wherein the motion-blur eliminating means comprises:
model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector;
equation generating means for generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and
calculating means for calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
51. An image processing method for performing processing on image data which is made up of a predetermined number of pixel data obtained by an image-capturing device having a predetermined number of pixels, each having a time integration function, the image processing method comprising:
an input step of inputting image data having an object area formed of object components forming an object; and
a motion-blur eliminating step of eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the image data input in the input step are substantially equal.
52. The image processing method according to claim 51, wherein:
in the input step, image data having a foreground area formed of foreground object components forming a foreground object, a background area formed of background object components forming a background object, and a mixed area in which the foreground object components and the background object components are mixed is input; and
in the motion-blur eliminating step, motion blur occurring in the foreground area is eliminated by assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal.
53. The image processing method according to claim 52, further comprising an equal-portion detecting step of detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal, wherein, in the motion-blur eliminating step, the motion blur occurring in the foreground area is eliminated based on the equal portion detected in the equal-portion detecting step.
54. The image processing method according to claim 53, further comprising a unit-of-processing determining step of determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components, wherein, in the motion-blur eliminating step, the motion blur in the foreground area is eliminated for each unit of processing.
55. The image processing method according to claim 54, wherein, in the unit-of-processing determining step, the unit of processing is determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
56. The image processing method according to claim 53, further comprising an area specifying step of specifying the foreground area, the background area, or the mixed area.
57. The image processing method according to claim 53, wherein, in the equal-portion detecting step, the equal portion is detected by comparing a difference of the pixel data with a threshold value.
58. The image processing method according to claim 53, wherein, in the equal-portion detecting step, the detected equal portion is formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
59. The image processing method according to claim 53, wherein, in the motion-blur eliminating step, the motion blur occurring in the foreground area is eliminated by performing a calculation corresponding to the motion vector.
60. The image processing method according to claim 53, wherein the motion-blur eliminating step comprises:
a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector;
an equation generating step of generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and
a calculating step of calculating the foreground object components contained in the unit of processing based on the generated equation.
61. The image processing method according to claim 53, wherein, in the motion-blur eliminating step, based on area information indicating the mixed area or the non-mixed area formed of the foreground area and the background area, and based on the equal portion, processing for separating the pixel data of the mixed area into the foreground object components and the background object components and processing for eliminating motion blur from the separated foreground object components are performed simultaneously.
62. The image processing method according to claim 61, further comprising a unit-of-processing determining step of determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components and background object components, wherein the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously for each unit of processing.
63. The image processing method according to claim 62, wherein, in the unit-of-processing determining step, the unit of processing is determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
64. The image processing method according to claim 61, further comprising an area specifying step of specifying the foreground area, the background area, or the mixed area.
65. The image processing method according to claim 61, wherein, in the equal-portion detecting step, the equal portion is detected by comparing a difference of the pixel data with a threshold value.
66. The image processing method according to claim 61, wherein, in the equal-portion detecting step, the detected equal portion is formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
67. The image processing method according to claim 61, wherein, in the unit-of-processing determining step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously by performing a calculation corresponding to the motion vector.
68. The image processing method according to claim 61, wherein the unit-of-processing determining step comprises:
a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector;
an equation generating step of generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and
a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
69. A storage medium storing a computer-readable program for performing processing on image data which is made up of a predetermined number of pixel data captured by an image-capturing device having a predetermined number of pixels, each having a time integration function, the computer-readable program comprising:
an input step of inputting image data having an object area formed of object components forming an object; and
a motion-blur eliminating step of eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the image data input in the input step are substantially equal.
70. The storage medium according to claim 69, wherein:
in the input step, image data having a foreground area formed of foreground object components forming a foreground object, a background area formed of background object components forming a background object, and a mixed area in which the foreground object components and the background object components are mixed is input; and
in the motion-blur eliminating step, motion blur occurring in the foreground area is eliminated by assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal.
71. The storage medium according to claim 70, wherein the program further comprises an equal-portion detecting step of detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal, and wherein, in the motion-blur eliminating step, the motion blur occurring in the foreground area is eliminated based on the equal portion detected in the equal-portion detecting step.
72. The storage medium according to claim 71, wherein the program further comprises a unit-of-processing determining step of determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components, and wherein, in the motion-blur eliminating step, the motion blur in the foreground area is eliminated for each unit of processing.
73. The storage medium according to claim 72, wherein, in the unit-of-processing determining step, the unit of processing is determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
74. The storage medium according to claim 71, wherein the program further comprises an area specifying step of specifying the foreground area, the background area, or the mixed area.
75. The storage medium according to claim 71, wherein, in the equal-portion detecting step, the equal portion is detected by comparing a difference of the pixel data with a threshold value.
76. The storage medium according to claim 71, wherein, in the equal-portion detecting step, the detected equal portion is formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
77. The storage medium according to claim 71, wherein, in the motion-blur eliminating step, the motion blur occurring in the foreground area is eliminated by performing a calculation corresponding to the motion vector.
78. The storage medium according to claim 71, wherein the motion-blur eliminating step comprises:
a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector;
an equation generating step of generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and
a calculating step of calculating the foreground object components contained in the unit of processing based on the generated equation.
79. The storage medium according to claim 71, wherein, in the motion-blur eliminating step, based on area information indicating the mixed area or the non-mixed area formed of the foreground area and the background area, and based on the equal portion, processing for separating the pixel data of the mixed area into the foreground object components and the background object components and processing for eliminating motion blur from the separated foreground object components are performed simultaneously.
80. The storage medium according to claim 79, further comprising a unit-of-processing determining step of determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components and background object components, wherein the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously for each unit of processing.
81. The storage medium according to claim 80, wherein, in the unit-of-processing determining step, the unit of processing is determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
82. The storage medium according to claim 79, further comprising an area specifying step of specifying the foreground area, the background area, or the mixed area.
83. The storage medium according to claim 79, wherein, in the equal-portion detecting step, the equal portion is detected by comparing a difference of the pixel data with a threshold value.
84. The storage medium according to claim 79, wherein, in the equal-portion detecting step, the detected equal portion is formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
85. The storage medium according to claim 79, wherein, in the unit-of-processing determining step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously by performing a calculation corresponding to the motion vector.
86. The storage medium according to claim 79, wherein the unit-of-processing determining step comprises:
a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector;
an equation generating step of generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and
a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
87. A program allowing a computer to perform processing on image data which is made up of a predetermined number of pixel data captured by an image-capturing device having a predetermined number of pixels, each having a time integration function, the program comprising:
an input step of inputting image data having an object area formed of object components forming an object; and
a motion-blur eliminating step of eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the image data input in the input step are substantially equal.
88. The program according to claim 87, wherein:
in the input step, image data having a foreground area formed of foreground object components forming a foreground object, a background area formed of background object components forming a background object, and a mixed area in which the foreground object components and the background object components are mixed is input; and
in the motion-blur eliminating step, motion blur occurring in the foreground area is eliminated by assuming that the values of part of the pixel data in the foreground area of the input image data are substantially equal.
89. The program according to claim 88, further comprising an equal-portion detecting step of detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal, wherein, in the motion-blur eliminating step, the motion blur occurring in the foreground area is eliminated based on the equal portion detected in the equal-portion detecting step.
90. The program according to claim 89, further comprising a unit-of-processing determining step of determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components, wherein, in the motion-blur eliminating step, the motion blur in the foreground area is eliminated for each unit of processing.
91. The program according to claim 90, wherein, in the unit-of-processing determining step, the unit of processing is determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
92. The program according to claim 89, further comprising an area specifying step of specifying the foreground area, the background area, or the mixed area.
93. The program according to claim 89, wherein, in the equal-portion detecting step, the equal portion is detected by comparing a difference of the pixel data with a threshold value.
94. The program according to claim 89, wherein, in the equal-portion detecting step, the detected equal portion is formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
95. The program according to claim 89, wherein, in the motion-blur eliminating step, the motion blur occurring in the foreground area is eliminated by performing a calculation corresponding to the motion vector.
96. The program according to claim 89, wherein the motion-blur eliminating step comprises:
a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector;
an equation generating step of generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and
a calculating step of calculating the foreground object components contained in the unit of processing based on the generated equation.
97. The program according to claim 89, wherein, in the motion-blur eliminating step, based on area information indicating the mixed area or the non-mixed area formed of the foreground area and the background area, and based on the equal portion, processing for separating the pixel data of the mixed area into the foreground object components and the background object components and processing for eliminating motion blur from the separated foreground object components are performed simultaneously.
98. The program according to claim 97, further comprising a unit-of-processing determining step of determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components and background object components, wherein the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously for each unit of processing.
99. The program according to claim 98, wherein, in the unit-of-processing determining step, the unit of processing is determined so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
100. The program according to claim 97, further comprising an area specifying step of specifying the foreground area, the background area, or the mixed area.
101. The program according to claim 97, wherein, in the equal-portion detecting step, the equal portion is detected by comparing a difference of the pixel data with a threshold value.
102. The program according to claim 97, wherein, in the equal-portion detecting step, the detected equal portion is formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
103. The program according to claim 97, wherein, in the unit-of-processing determining step, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously by performing a calculation corresponding to the motion vector.
104. The program according to claim 97, wherein the unit-of-processing determining step comprises:
a model acquiring step of acquiring a model corresponding to the unit of processing and the motion vector;
an equation generating step of generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and
a calculating step of calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
105. An image pickup apparatus comprising:
image-capturing means for outputting, as image data made up of a predetermined number of pixel data, a subject image captured by an image-capturing device having a predetermined number of pixels, each having a time integration function;
area specifying means for specifying either a non-mixed area formed of a foreground area and a background area, or a mixed area in which foreground object components and background object components are mixed, the foreground area consisting of the foreground object components forming a foreground object of the image data, and the background area consisting of the background object components forming a background object of the image data; and
processing-execution means for simultaneously performing, based on a result obtained by the area specifying means, processing for separating the foreground object components and the background object components from the pixel data of the mixed area and processing for eliminating motion blur from the separated foreground object components.
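Illustrative note (not part of the claims): the area specifying means of claim 105 must label each pixel as background, foreground, or mixed before the separation runs. A crude frame-difference rule conveys the idea; the three-frame scheme and threshold are our own assumptions, not the patent's concrete area specifying unit:

    import numpy as np

    def specify_areas(prev_frame, cur_frame, next_frame, th=10.0):
        """Consecutive frames as 2-D float arrays. Returns an int label
        per pixel: 0 = background area, 1 = foreground area, 2 = mixed area."""
        still_before = np.abs(cur_frame - prev_frame) <= th
        still_after = np.abs(next_frame - cur_frame) <= th
        labels = np.full(cur_frame.shape, 2, dtype=int)     # default: mixed
        labels[still_before & still_after] = 0              # static on both sides: background
        labels[~still_before & ~still_after] = 1            # changing on both sides: foreground
        return labels                                       # one-sided change stays mixed

The one-sided cases correspond to the covered and uncovered background at the leading and trailing edges of the moving object, which is where the mixed-area separation of the preceding claims applies.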
106. The image pickup apparatus according to claim 105, further comprising equal-portion detecting means for detecting an equal portion formed of adjacent pixel data of the foreground area whose values are substantially equal to each other, wherein the processing-execution means at least simultaneously performs, based on the detected equal portion and on the result obtained by the area specifying means, the processing for separating the foreground object components and the background object components from the pixel data of the mixed area and the processing for eliminating motion blur from the separated foreground object components.
107. The image pickup apparatus according to claim 106, further comprising unit-of-processing determining means for determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components and background object components, wherein the processing-execution means simultaneously performs, for each unit of processing, the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components.
108. The image pickup apparatus according to claim 107, wherein the unit-of-processing determining means determines the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
109. The image pickup apparatus according to claim 106, wherein the equal-portion detecting means detects the equal portion by comparing a difference of the pixel data with a threshold value.
110. The image pickup apparatus according to claim 106, wherein the equal-portion detecting means detects an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
111. The image pickup apparatus according to claim 105, wherein the processing-execution means simultaneously performs the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components by performing a calculation corresponding to the motion vector.
112. The image pickup apparatus according to claim 105, wherein the processing-execution means comprises:
model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector;
equation generating means for generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and
calculating means for calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
113. An image pickup apparatus comprising:
image-capturing means for outputting, as image data which is made up of a predetermined number of pixel data and which has an object area formed of object components forming an object, a subject image captured by an image-capturing device having a predetermined number of pixels, each having a time integration function; and
motion-blur eliminating means for eliminating motion blur occurring in the object area by assuming that the values of part of the pixel data in the object area of the image data are substantially equal.
114. The image pickup apparatus according to claim 113, wherein:
the image data output by the image-capturing means has a foreground area formed of foreground object components forming a foreground object, a background area formed of background object components forming a background object, and a mixed area in which the foreground object components and the background object components are mixed; and
the motion-blur eliminating means eliminates motion blur occurring in the foreground area by assuming that the values of part of the pixel data in the foreground area of the image data are substantially equal.
115. The image pickup apparatus according to claim 114, further comprising equal-portion detecting means for detecting an equal portion in which the values of the pixel data in the foreground area of the image data are substantially equal, wherein the motion-blur eliminating means eliminates the motion blur occurring in the foreground area based on the equal portion detected by the equal-portion detecting means.
116. The image pickup apparatus according to claim 115, further comprising unit-of-processing determining means for determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components, wherein the motion-blur eliminating means eliminates the motion blur in the foreground area for each unit of processing.
117. The image pickup apparatus according to claim 116, wherein the unit-of-processing determining means determines the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
118. The image pickup apparatus according to claim 115, further comprising area specifying means for specifying the foreground area, the background area, or the mixed area.
119. The image pickup apparatus according to claim 115, wherein the equal-portion detecting means detects the equal portion by comparing a difference of the pixel data with a threshold value.
120. The image pickup apparatus according to claim 115, wherein the equal-portion detecting means detects an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
121. The image pickup apparatus according to claim 115, wherein the motion-blur eliminating means eliminates the motion blur occurring in the foreground area by performing a calculation corresponding to the motion vector.
122. The image pickup apparatus according to claim 115, wherein the motion-blur eliminating means comprises:
model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector;
equation generating means for generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components contained in the unit of processing; and
calculating means for calculating the foreground object components contained in the unit of processing based on the generated equation.
123. The image pickup apparatus according to claim 115, wherein the motion-blur eliminating means simultaneously performs, based on area information indicating the mixed area or the non-mixed area formed of the foreground area and the background area, and based on the equal portion, processing for separating the pixel data of the mixed area into the foreground object components and the background object components and processing for eliminating motion blur from the separated foreground object components.
124. The image pickup apparatus according to claim 123, further comprising unit-of-processing determining means for determining, based on the position of the equal portion, a unit of processing formed of a plurality of foreground object components and background object components, wherein the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously for each unit of processing.
125. The image pickup apparatus according to claim 124, wherein the unit-of-processing determining means determines the unit of processing so as to correspond to pixel data which belongs to the mixed area or the foreground area, which is positioned on a straight line, and which is different from the pixel data of the equal portion.
126. The image pickup apparatus according to claim 123, further comprising area specifying means for specifying the foreground area, the background area, or the mixed area.
127. The image pickup apparatus according to claim 123, wherein the equal-portion detecting means detects the equal portion by comparing a difference of the pixel data with a threshold value.
128. The image pickup apparatus according to claim 123, wherein the equal-portion detecting means detects an equal portion formed of adjacent pixel data whose number is greater than or equal to the number of pixels corresponding to the amount of movement of the foreground object.
129. The image pickup apparatus according to claim 123, wherein the processing for separating the foreground object components and the background object components and the processing for eliminating motion blur from the separated foreground object components are performed simultaneously by performing a calculation corresponding to the motion vector.
130. The image pickup apparatus according to claim 123, wherein the motion-blur eliminating means comprises:
model acquiring means for acquiring a model corresponding to the unit of processing and the motion vector;
equation generating means for generating an equation according to the acquired model, the equation corresponding to the relationships between the pixel data of the unit of processing and the foreground object components and background object components contained in the unit of processing; and
calculating means for calculating the foreground object components and the background object components contained in the unit of processing based on the generated equation.
CNB028018222A 2001-04-10 2002-04-01 Image processing apparatus and method, and image pickup apparatus Expired - Fee Related CN100512390C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2001111437A JP4674408B2 (en) 2001-04-10 2001-04-10 Image processing apparatus and method, recording medium, and program
JP111437/2001 2001-04-10

Publications (2)

Publication Number Publication Date
CN1672402A true CN1672402A (en) 2005-09-21
CN100512390C CN100512390C (en) 2009-07-08

Family

ID=18963039

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB028018222A Expired - Fee Related CN100512390C (en) 2001-04-10 2002-04-01 Image processing apparatus and method, and image pickup apparatus

Country Status (8)

Country Link
EP (2) EP1379080B1 (en)
JP (1) JP4674408B2 (en)
KR (1) KR100875780B1 (en)
CN (1) CN100512390C (en)
CA (1) CA2412304A1 (en)
DE (2) DE60239301D1 (en)
MX (1) MXPA02011949A (en)
WO (1) WO2002085001A1 (en)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7715643B2 (en) 2001-06-15 2010-05-11 Sony Corporation Image processing apparatus and method, and image pickup apparatus
KR100904340B1 (en) * 2001-06-15 2009-06-23 소니 가부시끼 가이샤 Image processing apparatus and method and image pickup apparatus
JP4596220B2 (en) 2001-06-26 2010-12-08 ソニー株式会社 Image processing apparatus and method, recording medium, and program
KR100895744B1 (en) * 2001-06-26 2009-04-30 소니 가부시끼 가이샤 Image processing apparatus, method and recording medium, and image pickup apparatus
WO2005079062A1 (en) * 2004-02-13 2005-08-25 Sony Corporation Image processing apparatus, image processing method and program
KR20060119707A (en) * 2004-02-13 2006-11-24 소니 가부시끼 가이샤 Image processing device, image processing method, and program
KR20060073040A (en) * 2004-12-24 2006-06-28 삼성전자주식회사 Display apparatus and control method therof
US7602418B2 (en) * 2006-10-11 2009-10-13 Eastman Kodak Company Digital image with reduced object motion blur
KR101348596B1 (en) 2008-01-22 2014-01-08 삼성전자주식회사 Apparatus and method for immersive generation
KR101013668B1 (en) * 2008-10-17 2011-02-10 엘에스산전 주식회사 Coil holding apparatus of switching mechanism for air circuit breaker
JP7057079B2 (en) * 2017-09-01 2022-04-19 キヤノン株式会社 Image processing device, image pickup device, image processing method, and program
KR102637105B1 (en) 2018-07-13 2024-02-15 삼성전자주식회사 Method and apparatus for processing image data

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2280812B (en) * 1993-08-05 1997-07-30 Sony Uk Ltd Image enhancement
JPH08111810A (en) * 1994-10-07 1996-04-30 Canon Inc Image pickup device with shake correction function and method for correcting shake of image pickup device
JPH1093957A (en) * 1996-09-12 1998-04-10 Hitachi Ltd Mobile object detecting method, device, system and storage medium
US6404901B1 (en) 1998-01-29 2002-06-11 Canon Kabushiki Kaisha Image information processing apparatus and its method
JP2000030040A (en) * 1998-07-14 2000-01-28 Canon Inc Image processor and computer readable recording medium
US7583292B2 (en) 1999-12-28 2009-09-01 Sony Corporation Signal processing device and method, and recording medium
JP4596203B2 (en) * 2001-02-19 2010-12-08 ソニー株式会社 Image processing apparatus and method, recording medium, and program

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101330555B (en) * 2007-06-08 2011-05-25 株式会社理光 Security processing apparatus
CN108141576A (en) * 2015-09-30 2018-06-08 三星电子株式会社 Display device and its control method
US10542242B2 (en) 2015-09-30 2020-01-21 Samsung Electronics Co., Ltd. Display device and method for controlling same
CN108141576B (en) * 2015-09-30 2020-09-04 三星电子株式会社 Display device and control method thereof
CN116156089A (en) * 2023-04-21 2023-05-23 摩尔线程智能科技(北京)有限责任公司 Method, apparatus, computing device and computer readable storage medium for processing image

Also Published As

Publication number Publication date
KR100875780B1 (en) 2008-12-24
CA2412304A1 (en) 2002-10-24
WO2002085001A1 (en) 2002-10-24
EP1833243A1 (en) 2007-09-12
DE60239268D1 (en) 2011-04-07
MXPA02011949A (en) 2003-06-19
CN100512390C (en) 2009-07-08
EP1833243B1 (en) 2011-02-23
KR20030012878A (en) 2003-02-12
EP1379080A4 (en) 2006-11-08
JP4674408B2 (en) 2011-04-20
JP2002312782A (en) 2002-10-25
EP1379080A1 (en) 2004-01-07
DE60239301D1 (en) 2011-04-07
EP1379080B1 (en) 2011-02-23

Similar Documents

Publication Publication Date Title
CN1293517C (en) Image processing device
CN1313974C (en) Image processing apparatus and method, and image pickup apparatus
CN1248164C (en) Image procesisng apparatus and method, and image pickup apparatus
CN1237488C (en) Image processing apparatus and method and image pickup apparatus
CN1248162C (en) Image processing apparatus and method and image pickup apparatus
CN1251148C (en) Image processor
CN1241147C (en) Image processing apparatus and method, and image pickup apparatus
CN1248163C (en) Image processing apparatus and method
CN1269075C (en) Image processing apparatus
CN1267856C (en) Image processing device
CN1465196A (en) Image processing apparatus and method and image pickup apparatus
CN1279489C (en) Signal processing device and method, and recording medium
CN1672402A (en) Image processing apparatus and method, and image pickup apparatus
CN1969297A (en) Image processing apparatus and method and image pickup apparatus
CN1251149C (en) Communication apparatus and method
CN1269080C (en) Image processing apparatus and method, and image pickup apparatus
CN1267857C (en) Image processing apparatus and method, and image pickup apparatus
CN1754384A (en) Image processing device and method, learning device and method, recording medium, and program
CN1313975C (en) Image processing apparatus and method, and image pickup apparatus
CN1947152A (en) Image processing apparatus and method, and recording medium and program
CN1324531C (en) Image processor and image processing method
CN1950850A (en) Image processing device and method, recording medium, and program
CN1248161C (en) Image processing apparatus
CN1461555A (en) Image processing device and method, and imaging device
CN1249630C (en) Image processing apparatus and method, and image pickup apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090708

Termination date: 20130401