WO2005001763A1 - Signal processing device and signal processing method, and program and recording medium - Google Patents
- Publication number: WO2005001763A1
- Application: PCT/JP2004/008691
- Authority: WO (WIPO/PCT)
- Prior art keywords: pixel, pixels, data, pixel value, image
Classifications
- G06T1/00—General purpose image data processing (G—Physics; G06—Computing; calculating or counting; G06T—Image data processing or generation, in general)
- G06T5/73—Deblurring; Sharpening (under G06T5/00—Image enhancement or restoration)
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components; by matching or filtering (under G06V10/00—Arrangements for image or video recognition or understanding; G06V10/40—Extraction of image or video features)
- G06T2207/20201—Motion blur correction (under G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20—Special algorithmic details; G06T2207/20172—Image enhancement details)
Definitions
- The present invention relates to a signal processing device and a signal processing method, and to a program and a recording medium, and more particularly to a signal processing device and a signal processing method, and to a program and a recording medium, capable of obtaining an image or the like closer to the real-world signal.
- A first signal, which is a real-world signal having a first dimension, is detected by a sensor, yielding a second signal that has a second dimension smaller than the first dimension and that contains distortion with respect to the first signal; by performing signal processing on the second signal, a third signal with reduced distortion compared to the second signal is generated.
- The present invention has been made in view of such a situation, and it is an object of the present invention to make it possible to obtain an image or the like that is closer to the signal of the real world.
- The signal processing device of the present invention includes: processing-area setting means for setting a processing area in image data in which a real-world optical signal has been projected onto a plurality of pixels each having a time integration effect and in which a part of the continuity of the real-world optical signal is missing; motion-vector setting means for setting the motion vector of an object in the image data that corresponds to the continuity of the real-world optical signal; model generation means for modeling the relationship between the pixel value of each pixel in the processing area and the pixel value of each pixel free of motion blur, on the assumption that the pixel value of each pixel in the processing area is a value obtained by integrating the pixel values of the motion-blur-free pixels corresponding to the object while they move in accordance with the motion vector; normal-equation generation means for generating a normal equation from a first equation, which substitutes the pixel value of each pixel in the processing area into the model generated by the model generation means, and a second equation, which constrains the relationship between motion-blur-free pixels (for example, that the difference between adjacent motion-blur-free pixels is 0); and real-world estimation means for estimating the pixel value of each motion-blur-free pixel by solving the generated normal equation.
- The signal processing method of the present invention includes: a processing-area setting step of setting a processing area in image data in which a real-world optical signal has been projected onto a plurality of pixels each having a time integration effect and in which a part of the continuity of the real-world optical signal is missing; a motion-vector setting step of setting the motion vector of an object in the image data that corresponds to the continuity of the real-world optical signal; a model generation step of modeling the relationship between the pixel value of each pixel in the processing area and the pixel value of each pixel free of motion blur, on the assumption that the pixel value of each pixel in the processing area is a value obtained by integrating the pixel values of the motion-blur-free pixels corresponding to the object while they move in accordance with the motion vector; a normal-equation generation step of generating a normal equation from a first equation, which substitutes the pixel value of each pixel in the processing area into the model generated in the model generation step, and a second equation, which constrains the relationship between motion-blur-free pixels; and a real-world estimation step of estimating the pixel value of each motion-blur-free pixel by solving the normal equation generated in the normal-equation generation step.
- The program of the recording medium of the present invention causes a computer to execute: a processing-area setting step of setting a processing area in image data in which a real-world optical signal has been projected onto a plurality of pixels each having a time integration effect and in which a part of the continuity of the real-world optical signal is missing; a motion-vector setting step of setting the motion vector of an object in the image data that corresponds to the continuity of the real-world optical signal; a model generation step of modeling the relationship between the pixel value of each pixel in the processing area and the pixel value of each pixel free of motion blur, on the assumption that the pixel value of each pixel in the processing area is a value obtained by integrating the pixel values of the motion-blur-free pixels corresponding to the object while they move in accordance with the motion vector; a normal-equation generation step of generating a normal equation from a first equation, which substitutes the pixel value of each pixel in the processing area into the model generated in the model generation step, and a second equation, which constrains the relationship between motion-blur-free pixels; and a real-world estimation step of estimating the pixel value of each motion-blur-free pixel by solving the normal equation generated in the normal-equation generation step.
- The program of the present invention likewise causes a computer to execute: a processing-area setting step of setting a processing area in image data in which a real-world optical signal has been projected onto a plurality of pixels each having a time integration effect and in which a part of the continuity of the real-world optical signal is missing; a motion-vector setting step of setting the motion vector of an object in the image data that corresponds to the continuity of the real-world optical signal; a model generation step of modeling the relationship between the pixel value of each pixel in the processing area and the pixel value of each motion-blur-free pixel, on the assumption that the pixel value of each pixel in the processing area is a value obtained by integrating the pixel values of the motion-blur-free pixels corresponding to the object while they move in accordance with the motion vector; a normal-equation generation step of generating a normal equation from the first equation, which substitutes the pixel value of each pixel in the processing area into the generated model, and the second equation, which constrains the relationship between motion-blur-free pixels; and a real-world estimation step of estimating the pixel value of each motion-blur-free pixel by solving the generated normal equation.
- In the present invention, a processing area is set in image data in which a real-world optical signal has been projected onto a plurality of pixels each having a time integration effect and in which a part of the continuity of the real-world optical signal is missing; the motion vector of the object in the image data corresponding to the continuity of the real-world optical signal is set; on the assumption that the pixel value of each pixel in the processing area is a value obtained by integrating the pixel values of the motion-blur-free pixels corresponding to the object while they move in accordance with the motion vector, the relationship between the pixel value of each pixel in the processing area and the pixel value of each motion-blur-free pixel is modeled; a normal equation is generated from the model; and the pixel value of each motion-blur-free pixel is estimated by solving the generated normal equation.
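To make the structure of the claimed procedure concrete, the following is a minimal numerical sketch, not the patent's implementation. Under assumed notation (v for the motion amount in pixels per shutter time, P for observed pixel values, Q for motion-blur-free pixel values, and an illustrative constraint weight), it builds the first equations, appends the second (difference = 0) equations, and solves the resulting normal equation.

```python
# A minimal sketch of the claimed least-squares structure (illustrative
# names; not the patent's implementation). Observed pixel values P are
# modeled as averages of v motion-blur-free values Q (the time integration
# effect while the object moves), and weighted "difference = 0" constraints
# between adjacent Q values make the normal equation uniquely solvable.
import numpy as np

def deblur_line(observed, v, weight=0.1):
    """Estimate motion-blur-free values Q for one line of observed values P,
    assuming horizontal motion of v pixels per shutter time."""
    n_obs = len(observed)
    n_q = n_obs + v - 1                      # number of unknown Q values

    # First equations: substitute each observed pixel value into the model.
    A1 = np.zeros((n_obs, n_q))
    for i in range(n_obs):
        A1[i, i:i + v] = 1.0 / v             # average of v moving values
    b1 = np.asarray(observed, dtype=float)

    # Second equations: constrain adjacent blur-free pixels (difference = 0).
    A2 = np.zeros((n_q - 1, n_q))
    for k in range(n_q - 1):
        A2[k, k], A2[k, k + 1] = weight, -weight
    b2 = np.zeros(n_q - 1)

    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])

    # Normal equation: (A^T A) Q = A^T b.
    return np.linalg.solve(A.T @ A, A.T @ b)

# A step edge blurred over v = 3 pixels is largely recovered.
q_true = np.array([0, 0, 0, 1, 1, 1, 1], dtype=float)
v = 3
p_observed = np.convolve(q_true, np.ones(v) / v, mode="valid")
print(np.round(deblur_line(p_observed, v), 2))
```

Without the second equations, the first block alone is underdetermined (more unknowns than observations), which is why the claims pair the substitution equations with the constraint equations before forming the normal equation.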
- FIG. 1 is a diagram illustrating the principle of the present invention.
- FIG. 2 is a block diagram illustrating an example of a hardware configuration of the signal processing device 4.
- FIG. 3 is a block diagram showing a configuration example of one embodiment of the signal processing device 4 of FIG.
- FIG. 4 is a diagram for more specifically explaining the principle of signal processing of the signal processing device 4.
- FIG. 5 is a diagram for explaining an example of the arrangement of pixels on the image sensor.
- FIG. 6 is a diagram for explaining the operation of a detection element that is a CCD.
- FIG. 7 is a diagram illustrating a relationship between light incident on the detection elements corresponding to the pixels D to F and pixel values.
- FIG. 8 is a diagram illustrating the relationship between the passage of time, the light incident on the detection element corresponding to one pixel, and the pixel value.
- FIG. 9 is a diagram illustrating an example of an image of a linear object in the real world 1.
- FIG. 10 is a diagram illustrating an example of pixel values of image data obtained by actual imaging.
- FIG. 11 is a diagram showing an example of an image of the real world 1 of an object having a single color and a straight edge, which is a color different from the background.
- FIG. 12 is a diagram illustrating an example of pixel values of image data obtained by actual imaging.
- FIG. 13 is a schematic diagram of image data.
- FIG. 14 is a diagram for explaining the estimation of the model 161 based on M pieces of data 162.
- FIG. 15 is a diagram for explaining the relationship between the signal of the real world 1 and the data 3.
- FIG. 16 is a diagram showing an example of data 3 of interest when formulating an equation.
- FIG. 17 is a diagram for explaining signals for two objects in the real world 1 and values belonging to a mixed region when an equation is formed.
- FIG. 18 is a diagram for explaining the stationarity expressed by Expression (18), Expression (19), and Expression (22).
- FIG. 19 is a diagram illustrating an example of M pieces of data 162 extracted from the data 3.
- FIG. 20 is a diagram for explaining integration of signals of the real world 1 in data 3 in the time direction and the two-dimensional spatial direction.
- FIG. 21 is a diagram illustrating an integration region when generating high-resolution data having a higher resolution in the spatial direction.
- FIG. 22 is a diagram for explaining an integration area when generating high-resolution data having a higher resolution in the time direction.
- FIG. 23 is a diagram illustrating an integration area when generating high-resolution data having a higher resolution in the time-space direction.
- FIG. 24 is a diagram showing the original image of the input image.
- FIG. 25 is a diagram illustrating an example of an input image.
- FIG. 26 is a diagram showing an image obtained by applying the conventional classification adaptive processing.
- FIG. 27 is a diagram showing a result of detecting a thin line region.
- FIG. 28 is a diagram illustrating an example of an output image output from the signal processing device 4.
- FIG. 29 is a flowchart illustrating signal processing by the signal processing device 4.
- FIG. 30 is a block diagram showing a configuration of the data continuity detecting unit 101.
- FIG. 31 is a diagram showing an image of the real world 1 with a thin line in front of the background.
- FIG. 32 is a diagram for explaining the approximation of the background by a plane.
- FIG. 33 is a diagram showing a cross-sectional shape of image data onto which a thin line image is projected.
- FIG. 34 is a diagram showing a cross-sectional shape of image data on which a thin line image is projected.
- FIG. 35 is a diagram showing a cross-sectional shape of image data on which a thin line image is projected.
- FIG. 36 is a diagram for explaining a process of detecting a vertex and detecting a monotonously increasing / decreasing region.
- FIG. 37 is a diagram illustrating a process of detecting a thin line region in which the pixel value of the vertex exceeds the threshold value and the pixel value of an adjacent pixel is equal to or less than the threshold value.
- FIG. 38 is a diagram illustrating the pixel values of the pixels arranged in the direction indicated by the dotted line AA'.
- FIG. 39 is a diagram illustrating a process of detecting the continuity of the monotone increase / decrease region.
- FIG. 40 is a diagram illustrating an example of another process of detecting a region where a thin line image is projected.
- FIG. 41 is a flowchart for explaining the continuity detection process.
- FIG. 42 is a diagram illustrating a process of detecting continuity of data in the time direction.
- FIG. 43 is a block diagram showing the configuration of the non-stationary component extraction unit 201.
- Figure 44 illustrates the number of rejections.
- FIG. 45 is a flowchart illustrating the process of extracting a non-stationary component.
- FIG. 46 is a flowchart for explaining the process of extracting the stationary component.
- FIG. 47 is a flowchart illustrating another process of extracting a steady component.
- FIG. 48 is a flowchart illustrating still another process of extracting a steady component.
- FIG. 49 is a block diagram showing another configuration of the data continuity detecting unit 101.
- FIG. 50 is a diagram for explaining activity in an input image having data continuity.
- FIG. 51 is a diagram for explaining a block for detecting an activity.
- FIG. 52 is a diagram illustrating an angle of data continuity with respect to activity.
- FIG. 53 is a block diagram showing a more detailed configuration of the data continuity detector 101.
- FIG. 54 is a diagram illustrating a set of pixels.
- FIG. 55 is a view for explaining the relationship between the position of a set of pixels and the angle of data continuity.
- FIG. 56 is a flowchart illustrating a process of detecting data continuity.
- FIG. 57 is a diagram illustrating a set of pixels extracted when detecting the continuity angle of the data in the time direction and the spatial direction.
- FIG. 58 is a diagram illustrating the principle of the function approximation method, which is an example of an embodiment of the real world estimation unit in FIG. 3.
- FIG. 59 is a view for explaining the integration effect when the sensor is CCD.
- FIG. 60 is a view for explaining a specific example of the integration effect of the sensor of FIG. 59.
- FIG. 61 is a view for explaining another specific example of the integration effect of the sensor of FIG. 59.
- FIG. 62 is a diagram showing the fine-line-containing real world region shown in FIG. 60.
- FIG. 63 is a diagram for explaining the principle of an example of the embodiment of the real world estimation unit in FIG. 3, in comparison with the example of FIG. 58.
- FIG. 64 is a diagram showing the thin-line-containing data area shown in FIG. 63.
- FIG. 65 is a diagram in which each of the pixel values included in the thin line containing data area in FIG. 64 is graphed.
- FIG. 66 is a graph showing an approximate function obtained by approximating each pixel value included in the thin line containing data area in FIG. 65.
- FIG. 67 is a view for explaining the spatial continuity of the fine-line-containing real world region shown in FIG. 62.
- FIG. 68 is a diagram in which each of the pixel values included in the thin line containing data area in FIG. 64 is graphed.
- FIG. 69 is a view for explaining a state where each of the input pixel values shown in FIG. 68 is shifted by a predetermined shift amount.
- FIG. 70 is a graph showing an approximate function that approximates each pixel value included in the thin-line-containing data area in FIG. 65 in consideration of the spatial continuity.
- FIG. 71 is a diagram illustrating a spatial mixing region.
- FIG. 72 is a diagram illustrating an approximation function that approximates a real-world signal in the spatial mixing region.
- Fig. 73 is a graph showing an approximation function that approximates the real-world signal corresponding to the thin-line-containing data area in Fig. 65, taking into account both the integration characteristics of the sensor and the stationarity in the spatial direction. .
- FIG. 74 is a block diagram illustrating a configuration example of a real world estimation unit that uses a first-order polynomial approximation method among the function approximation methods having the principle shown in FIG. 58.
- FIG. 75 is a flowchart illustrating the real world estimation processing executed by the real world estimation unit having the configuration of FIG. 74.
- FIG. 76 is a diagram illustrating the tap range.
- FIG. 77 is a diagram for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 78 is a view for explaining the integration effect when the sensor is CCD.
- FIG. 79 is a view for explaining the distance in the sectional direction.
- FIG. 80 is a block diagram illustrating a configuration example of a real world estimation unit that uses a second-order polynomial approximation method among the function approximation methods having the principle shown in FIG. 58.
- FIG. 81 is a flowchart illustrating the real world estimation processing executed by the real world estimation unit having the configuration of FIG. 80.
- FIG. 82 is a diagram illustrating the tap range.
- FIG. 83 is a diagram illustrating the direction of continuity in the spatiotemporal direction.
- FIG. 84 is a view for explaining the integration effect when the sensor is CCD.
- FIG. 85 is a diagram for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 86 is a diagram for explaining signals in the real world having continuity in the spatio-temporal direction.
- FIG. 87 is a block diagram illustrating a configuration example of a real world estimation unit that uses a three-dimensional approximation method among the function approximation methods having the principle shown in FIG. 58.
- FIG. 88 is a flowchart illustrating the real world estimation processing executed by the real world estimation unit having the configuration of FIG. 87.
- FIG. 89 is a diagram illustrating the principle of the reintegration method, which is an example of an embodiment of the image generation unit in FIG. 3.
- FIG. 90 is a diagram illustrating an example of an input pixel and an approximation function that approximates a real-world signal corresponding to the input pixel.
- FIG. 91 is a diagram illustrating an example of creating four high-resolution pixels in one input pixel shown in FIG. 90 from the approximation function shown in FIG. 90.
- FIG. 92 is a block diagram illustrating a configuration example of an image generation unit that uses a one-dimensional reintegration method among the reintegration methods having the principle shown in FIG. 89.
- FIG. 93 is a flowchart illustrating the image generation processing executed by the image generation unit having the configuration of FIG. 92.
- FIG. 94 is a diagram illustrating an example of the original image of the input image.
- FIG. 95 is a diagram illustrating an example of image data corresponding to the image in FIG. 94.
- FIG. 96 is a diagram illustrating an example of an input image.
- FIG. 97 is a diagram illustrating an example of image data corresponding to the image in FIG. 96.
- FIG. 98 is a diagram illustrating an example of an image obtained by performing a conventional classification adaptive process on an input image.
- FIG. 99 is a diagram illustrating an example of image data corresponding to the image of FIG. 98.
- FIG. 100 is a diagram illustrating an example of an image obtained by performing a one-dimensional reintegration method process on an input image.
- FIG. 101 is a diagram illustrating an example of image data corresponding to the image of FIG. 100.
- FIG. 102 is a view for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 103 is a block diagram illustrating a configuration example of an image generation unit that uses a two-dimensional reintegration method among the reintegration methods having the principle shown in FIG. 89.
- FIG. 104 is a diagram for explaining the distance in the cross-sectional direction.
- FIG. 105 is a flowchart illustrating the image generation processing executed by the image generation unit having the configuration of FIG. 103.
- FIG. 106 is a diagram illustrating an example of the input pixel.
- FIG. 107 is a diagram illustrating an example of creating four high-resolution pixels in one input pixel shown in FIG. 106 by a two-dimensional reintegration method.
- FIG. 108 is a diagram illustrating the direction of continuity in the spatiotemporal direction.
- FIG. 109 is a block diagram illustrating a configuration example of an image generation unit that uses a three-dimensional reintegration method among the reintegration methods having the principle shown in FIG. 89.
- FIG. 110 is a flowchart illustrating the image generation processing executed by the image generation unit having the configuration of FIG. 109.
- FIG. 111 is a block diagram showing a configuration example of another embodiment of the signal processing device 4 of FIG. 1.
- FIG. 112 is a flowchart for explaining the processing of the signal processing device 4 of FIG. 111.
- FIG. 113 is a block diagram showing a configuration example of an embodiment of an application example of the signal processing device 4 of FIG. 1.
- FIG. 114 is a flowchart for explaining the processing of the signal processing device 4 of FIG. 113.
- FIG. 115 is a diagram for explaining an optical signal of the real world 1.
- FIG. 116 is a diagram for explaining the integration effect when the sensor 2 is CCD.
- FIG. 117 is a diagram illustrating the approximate function f (x, y) of the real world 1.
- FIG. 118 is a diagram illustrating an input image input to the signal processing device 4 in FIG. 113 and a processing region.
- FIG. 119 is a diagram illustrating an example of a processing area set by the signal processing device 4 of FIG.
- FIG. 120 is a diagram showing an approximation function f(x) of the X cross section obtained by fixing y in the approximation function f(x, y) to a predetermined value y_c (y = y_c).
- FIG. 121 is a diagram showing the approximation function f(x) of FIG. 120 after 1/v of the shutter time has elapsed.
- FIG. 122 is a diagram showing the approximation function f(x) of FIG. 121 after a further 1/v of the shutter time.
- FIG. 123 is a diagram for explaining how an object captured in an input image moves at v pixels per shutter time.
- FIG. 125 is a diagram showing the pixel values P_0 through P_9 in the X-t plane.
- FIG. 126 is a diagram and a table showing the pixel values P_0 to P_9 of the input image and the motion-blur-free pixel values Q_0 to Q_9.
- FIG. 127 is a diagram for explaining a method of filling in the blank area of FIG. 126 on the assumption that the ends of the processing area are "flat".
- FIG. 128 is a diagram illustrating the line of interest in the processing region.
- FIG. 129 is a block diagram illustrating a configuration example of the real world estimation unit 15013 in FIG. 113.
- FIG. 130 is a flowchart for explaining the real world estimation processing in step S15008 of FIG. 114.
- FIG. 131 is a flowchart for explaining the real world estimation processing in step S15008 of FIG. 114.
- FIG. 132 is a diagram showing an input image input to the signal processing device 4 of FIG. 113.
- FIG. 133 is a diagram showing an example of an output image obtained when the signal processing device 4 of FIG. 113 processes the input image of FIG. 132 by a first method.
- FIG. 134 is a diagram showing an example of an output image obtained when the signal processing device 4 of FIG. 113 processes the input image of FIG. 132.
- FIG. 135 is a block diagram showing a configuration example of another embodiment of an application example of the signal processing device 4 of FIG.
- FIG. 136 is a flowchart for explaining the processing of the signal processing device 4 in FIG. 135.
- FIG. 137 is a block diagram illustrating a configuration example of the real world estimation unit 15083 in FIG. 135.
- FIG. 138 is a flowchart illustrating the real world estimation processing in step S15088 of FIG. 136.
- FIG. 139 is a flowchart illustrating the real world estimation processing in step S15088 of FIG. 136.
- FIG. 140 is a block diagram showing a configuration example of another embodiment of an application example of the signal processing device 4 of FIG. 1.
- FIG. 141 is a flowchart for explaining the processing of the signal processing device 4 of FIG. 140.
- FIG. 142 is a block diagram illustrating a configuration example of the real world estimation unit 15113 in FIG. 140.
- FIG. 143 is a flowchart for explaining the real world estimation processing in step S15168 of FIG. 141.
- FIG. 144 is a diagram illustrating an input image input to the signal processing device 4 of FIG.
- FIG. 144 shows an output image in which the input image of FIG. 144 is processed by the signal processing device 4 of FIG. 135, with the weight W bj for the constraint condition being the same for all the constraint conditions. It is a figure showing the example of.
- FIG. 144 shows an output image obtained by processing the input image of FIG. 144 by the signal processing device 4 of FIG. 140 so that the weight W for the constraint condition expression is set to 0 or 1 according to the activity of the input image. It is a figure showing the example of.
- FIG. 147 is a block diagram showing a configuration example of another embodiment of an application example of the signal processing device 4 of FIG.
- FIG. 148 is a flowchart for explaining the processing of the signal processing device 4 in FIG. 147.
- FIG. 149 is a block diagram illustrating a configuration example of the real world estimation unit 15153 in FIG. 147.
- FIG. 150 is a flowchart illustrating the real world estimation processing in step S15218 of FIG. 148.
- FIG. 151 is a flowchart illustrating the real world estimation processing in step S15218 of FIG. 148.
- FIG. 152 is a block diagram showing a configuration example of the continuity setting unit 15012 of FIG. 113.
- FIG. 153 is a diagram for explaining the amount of movement.
- FIG. 154 is a diagram illustrating pixel values of an image output from a camera when a camera captures a foreground object moving in front of a background object.
- FIG. 155 is a diagram showing difference values between the pixel values of the pixels of the image shown in FIG. 154.
- FIG. 156 is a flowchart for explaining the process of detecting the amount of motion.
- FIG. 157 is a flowchart for explaining the correlation detection process.
- FIG. 1 illustrates the principle of the present invention.
- Events (phenomena) in the real world 1 include light (images), sound, pressure, temperature, mass, density, brightness/darkness, and smell.
- Events in the real world 1 are distributed in the spatiotemporal direction.
- the image of the real world 1 is the distribution of the light intensity of the real world 1 in the spatiotemporal direction.
- the events of real world 1 that can be acquired by sensor 2 are converted into data 3 by sensor 2. It can be said that the sensor 2 obtains information indicating an event in the real world 1.
- The sensor 2 converts information indicating an event of the real world 1 into data 3. It can be said that the sensor 2 acquires a signal, which is information indicating an event (phenomenon) of the real world 1 having dimensions of space, time, and mass, and converts it into data.
- Hereinafter, a signal that is information indicating an event (phenomenon) of the real world 1 is also simply referred to as a signal of the real world 1.
- In this specification, a signal is taken to cover phenomena and events, and also includes signals that the transmitting side does not intend.
- Data 3 (the detection signal) output from the sensor 2 is information obtained by projecting the information indicating an event of the real world 1 onto a space-time of lower dimensionality than the real world 1.
- For example, data 3 that is image data of a moving image is obtained by projecting an image of the real world 1, which has three spatial dimensions and one temporal dimension, onto a space-time having two spatial dimensions and one temporal dimension.
- For example, when data 3 is digital data, its values are rounded according to the sampling unit; when data 3 is analog data, the information in data 3 is compressed according to the dynamic range, or a part of the information is deleted by a limiter or the like.
- Data 3 nevertheless contains significant information for estimating the signal that is information indicating an event (phenomenon) of the real world 1.
- In the present invention, information having continuity, contained in the real world 1 or in data 3, is used as the significant information for estimating the signal that is information of the real world 1.
- Continuity is a concept newly defined here.
- the event of the real world 1 includes a certain feature in a direction of a predetermined dimension.
- a shape, a pattern, a color, or the like is continuous in a spatial direction or a time direction, or a pattern of a shape, a pattern, or a color is repeated.
- the information indicating the event of the real world 1 includes a certain feature in the direction of the predetermined dimension.
- For example, a linear object such as a thread, a string, or a rope has a constant feature in the spatial direction: the cross-sectional shape is the same at any position in the length direction. This constant feature arises from the feature that a linear object is long. Accordingly, an image of a linear object likewise has the constant feature, in the length direction (that is, in the spatial direction), that the cross-sectional shape is the same at any position in the length direction.
- Similarly, a single-color object, which is a tangible object extending in the spatial direction, has a constant feature in the spatial direction of having the same color regardless of position, and an image of such a single-color object likewise has a constant feature in the spatial direction of having the same color regardless of position.
- the signal of the real world 1 has a certain characteristic in the direction of the predetermined dimension.
- Such a feature that is constant in the direction of a predetermined dimension is called continuity.
- The continuity of a signal of the real world 1 (the real world) refers to a feature, constant in the direction of a predetermined dimension, of a signal indicating an event of the real world 1 (the real world).
- Data 3 is obtained when the sensor 2 projects the signal, which is information indicating an event of the real world 1 having predetermined dimensions; accordingly, data 3 contains continuity corresponding to the continuity of the real-world signal. Data 3 can be said to contain the continuity of the projected real-world signal.
- However, since the data 3 output by the sensor 2 lacks a part of the information of the real world 1, a part of the continuity contained in the signal of the real world 1 (the real world) may be missing from data 3.
- the data 3 includes at least a part of the continuity of the signal of the real world 1 (real world) as the continuity of the data.
- the data continuity is a feature of data 3 that is constant in a predetermined dimension.
- In the present invention, the continuity of the signal of the real world 1, or the continuity of the data contained in data 3, is used as the significant information for estimating the signal that is information indicating an event of the real world 1.
- For example, by performing signal processing on data 3 using the continuity of the data, the missing information indicating an event of the real world 1 is generated.
- In the signal processing device 4, of the dimensions of length (space), time, and mass of the signal that is information indicating an event of the real world 1, the continuity in the spatial or temporal direction is used.
- The sensor 2 is composed of, for example, a digital still camera or a video camera; it captures an image of the real world 1 and outputs the obtained image data, which is data 3, to the signal processing device 4.
- the sensor 2 can be, for example, a thermographic device or a pressure sensor using photoelasticity.
- the signal processing device 4 is composed of, for example, a personal computer, and performs signal processing on the data 3.
- The signal processing device 4 is configured, for example, as shown in FIG. 2. A CPU (Central Processing Unit) 21 executes various kinds of processing according to programs stored in a ROM (Read Only Memory) 22 or a storage unit 28.
- Programs executed by the CPU 21 and data are stored in a RAM (Random Access Memory) 23 as appropriate.
- The CPU 21, the ROM 22, and the RAM 23 are interconnected by a bus 24.
- the CPU 21 is also connected to an input / output interface 25 via a bus 24.
- the input / output interface 25 is connected to an input unit 26 including a keyboard, a mouse, and a microphone, and an output unit 27 including a display, a speaker, and the like.
- the CPU 21 executes various processes in response to a command input from the input unit 26. Then, the CPU 21 outputs an image, a sound, or the like obtained as a result of the processing to the output unit 27.
- the storage unit 28 connected to the input / output interface 25 is composed of, for example, a hard disk and stores programs executed by the CPU 21 and various data.
- The communication unit 29 communicates with external devices via the Internet or other networks. In this example, the communication unit 29 functions as an acquisition unit that takes in the data 3 output from the sensor 2.
- a program may be acquired via the communication unit 29 and stored in the storage unit 28.
- When a magnetic disk 51, an optical disk 52, a magneto-optical disk 53, or a semiconductor memory 54 is mounted, the drive 30 connected to the input/output interface 25 drives it and acquires the programs and data recorded on it.
- the acquired programs and data are transferred to and stored in the storage unit 28 as necessary.
- FIG. 3 is a block diagram showing the signal processing device 4.
- each function of the signal processing device 4 is realized by hardware or software. That is, each block diagram in this specification may be considered as a hardware block diagram or a function block diagram by software.
- FIG. 3 is a diagram showing a configuration of the signal processing device 4 which is an image processing device.
- the input image (image data as an example of the data 3) input to the signal processing device 4 is supplied to the data continuity detecting unit 101 and the real world estimating unit 102.
- the data continuity detection unit 101 detects data continuity from the input image and supplies data continuity information indicating the detected continuity to the real world estimation unit 102 and the image generation unit 103.
- The data continuity information includes, for example, the position of a region of pixels having data continuity in the input image, the direction of that region (the angle or gradient in the temporal and spatial directions), or the length of the region of pixels having data continuity. Details of the configuration of the data continuity detecting unit 101 will be described later.
- The real world estimation unit 102 estimates the signal of the real world 1 based on the input image and the data continuity information supplied from the data continuity detecting unit 101.
- the real-world estimating unit 102 estimates an image, which is a real-world signal, incident on the sensor 2 when the input image is acquired.
- the real world estimating unit 102 supplies the real world estimation information indicating the result of the estimation of the signal of the real world 1 to the image generating unit 103. Details of the configuration of the real world estimation unit 102 will be described later.
- The image generation unit 103 generates a signal more closely approximating the signal of the real world 1 based on the real world estimation information, which indicates the estimated signal of the real world 1 and is supplied from the real world estimation unit 102, and outputs the generated signal.
- For example, based on the data continuity information supplied from the data continuity detecting unit 101 and the real world estimation information supplied from the real world estimation unit 102, the image generation unit 103 generates a signal more closely approximating the signal of the real world 1, and outputs the generated signal.
- the image generation unit 103 generates an image that is closer to the image of the real world 1 based on the real world estimation information, and outputs the generated image as an output image.
- Based on the data continuity information and the real world estimation information, the image generation unit 103 generates an image closer to the image of the real world 1, and outputs the generated image as an output image.
- For example, based on the real world estimation information, the image generation unit 103 integrates the estimated image of the real world 1 over a desired range in the spatial or temporal direction, thereby generating an image with higher resolution in the spatial or temporal direction than the input image, and outputs the generated image as an output image.
- The image generation unit 103 can also generate an image by extrapolation or interpolation, and output the generated image as an output image.
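The division of labor among the three blocks can be summarized in a structural sketch. All names below are illustrative stand-ins rather than identifiers from the patent, and the stage bodies are deliberately left as stubs:

```python
# Structural sketch of the signal processing device 4 of FIG. 3 (names are
# illustrative): continuity detection feeds real world estimation, whose
# result drives image generation.
from dataclasses import dataclass
import numpy as np

@dataclass
class ContinuityInfo:
    region: tuple    # position/length of the region having data continuity
    angle: float     # direction (angle or gradient) of the data continuity

def detect_continuity(image: np.ndarray) -> ContinuityInfo:
    """Stand-in for the data continuity detecting unit 101."""
    ...

def estimate_real_world(image: np.ndarray, info: ContinuityInfo):
    """Stand-in for the real world estimation unit 102, e.g. fitting an
    approximation function to the real world 1 signal."""
    ...

def generate_image(model, info: ContinuityInfo) -> np.ndarray:
    """Stand-in for the image generation unit 103, e.g. re-integrating the
    approximation function over finer pixel areas."""
    ...

def signal_processing_device_4(input_image: np.ndarray) -> np.ndarray:
    info = detect_continuity(input_image)
    model = estimate_real_world(input_image, info)
    return generate_image(model, info)
```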
- A signal of the real world 1, which is an image, is formed on the light receiving surface of a CCD (Charge Coupled Device), which is an example of the sensor 2. Since the CCD has an integration characteristic, the data 3 output from the CCD differs from the image of the real world 1. Details of the integration characteristic of the sensor 2 will be described later.
- In the signal processing performed by the signal processing device 4, the relationship between the image of the real world 1 acquired by the CCD and the data 3 captured and output by the CCD is explicitly taken into account. That is, the relationship between data 3 and the signal that is the real-world information acquired by the sensor 2 is explicitly considered.
- More specifically, the signal processing device 4 approximates (describes) the real world 1 using a model 161. The model 161 is represented by, for example, N variables; more precisely, the model 161 approximates (describes) the signal of the real world 1.
- In order to predict the model 161, the signal processing device 4 extracts M pieces of data 162 from the data 3. In extracting the M pieces of data 162, the signal processing device 4 uses, for example, the continuity of the data contained in data 3. In other words, the signal processing device 4 extracts the data 162 for predicting the model 161 based on the continuity of the data contained in data 3.
- By predicting the model 161 from the M pieces of data 162, the signal processing device 4 can take into account the signal that is the information of the real world 1.
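Read as a least-squares problem (standard notation assumed here; the patent states this only in prose), predicting the N variables of the model 161 from the M pieces of data 162 amounts to:

```latex
% Least-squares reading of the model prediction (assumed notation): the
% model 161 has N variables w_1, ..., w_N, and the M pieces of data 162
% supply M linear conditions collected in A and b.
\[
  \hat{\mathbf{w}}
    = \arg\min_{\mathbf{w}\in\mathbb{R}^{N}} \lVert A\mathbf{w}-\mathbf{b}\rVert^{2},
  \qquad A\in\mathbb{R}^{M\times N},\; M \ge N,
\]
\[
  (A^{\top}A)\,\hat{\mathbf{w}} = A^{\top}\mathbf{b}.
\]
```

The second line is the normal equation form that also appears in the motion-blur claims above.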
- An image sensor that captures images, such as a CCD or a CMOS (Complementary Metal-Oxide Semiconductor) sensor, projects the signal that is information of the real world onto two-dimensional data when imaging the real world.
- Each pixel of the image sensor has a predetermined area as a so-called light receiving surface (light receiving area). Light incident on a light receiving surface having a predetermined area is integrated in the spatial direction and the temporal direction for each pixel, and is converted into one pixel value for each pixel.
- the image sensor captures an image of an object in the real world, and outputs image data obtained as a result of the capture in units of one frame. That is, the image sensor acquires the signal of the real world 1, which is the light reflected by the object of the real world 1, and outputs the data 3.
- an image sensor outputs 30 frames of image data per second.
- For example, the exposure time of the image sensor can be set to 1/30 second.
- the exposure time is a period from the time when the image sensor starts converting the incident light into electric charges to the time when the conversion of the incident light into electric charges ends.
- the exposure time is also referred to as a shutter time.
- FIG. 5 is a diagram illustrating an example of an arrangement of pixels on an image sensor.
- In FIG. 5, A through I indicate individual pixels.
- the pixels are arranged on a plane corresponding to the image displayed by the image data.
- One detection element corresponding to one pixel is arranged on the image sensor.
- one detection element outputs one pixel value corresponding to one pixel constituting the image data.
- The position of a detection element in the spatial direction X (the X coordinate) corresponds to the horizontal position on the image displayed by the image data, and its position in the spatial direction Y (the Y coordinate) corresponds to the vertical position on that image.
- The distribution of light intensity in the real world 1 spreads in three spatial dimensions and the temporal direction, but the image sensor acquires the light of the real world 1 in two spatial dimensions and the temporal direction, and generates data 3 representing the distribution of light intensity in two spatial dimensions and the temporal direction.
- The detection element, which is a CCD, converts the light input to its light receiving surface (light receiving area, or detection area) into electric charge for a period corresponding to the shutter time, and accumulates the converted charge.
- Light is the information (signal) in the real world 1 whose intensity is determined by its position in three-dimensional space and time.
- the distribution of light intensity in the real world 1 is a function with variables x, y, and z in three-dimensional space, and time t.
- The amount of charge accumulated in the detection element, which is a CCD, is approximately proportional to the intensity of the light incident on the entire light receiving surface, which has a two-dimensional spatial extent, and to the time for which the light is incident.
- In the period corresponding to the shutter time, the detection element adds the charge converted from the light incident on the entire light receiving surface to the charge already accumulated. That is, the detection element integrates the light incident on the entire light receiving surface, which has a two-dimensional spatial extent, over the period corresponding to the shutter time, and accumulates an amount of charge corresponding to the integrated light. The detection element can thus be said to have an integration effect with respect to space (the light receiving surface) and time (the shutter time).
- The charge accumulated in the detection element is converted into a voltage value by a circuit (not shown), and the voltage value is further converted into a pixel value, such as digital data, and output as data 3. Therefore, each pixel value output from the image sensor is a value projected onto a one-dimensional space: the result of integrating the portion of the real world 1 signal (light) that has temporal and spatial extent, over the shutter time in the temporal direction and over the light receiving surface of the detection element in the spatial direction.
- the pixel value of one pixel is represented by integration of F (x, y, t).
- F (x, y, t) is a function representing the distribution of light intensity on the light receiving surface of the detection element.
- In this integral, x_1 is the spatial coordinate (X coordinate) of the left boundary of the light receiving surface of the detection element, and x_2 is the spatial coordinate (X coordinate) of its right boundary; y_1 is the spatial coordinate (Y coordinate) of the upper boundary of the light receiving surface, and y_2 is the spatial coordinate (Y coordinate) of its lower boundary; t_1 is the time at which the conversion of the incident light into charge starts, and t_2 is the time at which the conversion of the incident light into charge ends.
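From these definitions, the integral in question (presumably Expression (1)) is:

```latex
% Reconstruction of the pixel-value integral (presumably Expression (1)):
% spatial integration over the light receiving surface and temporal
% integration over the shutter time.
\[
  P \;=\; \int_{t_1}^{t_2}\int_{y_1}^{y_2}\int_{x_1}^{x_2}
          F(x,\,y,\,t)\; dx\, dy\, dt
\]
```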
- the gain of the pixel value of the image data output from the image sensor is corrected, for example, for the entire frame.
- Each pixel value of the image data is an integral value of the light incident on the light receiving surface of each detection element of the image sensor; in the light incident on the image sensor, the waveform of the real world 1 light that is finer than the light receiving surface of the detection element is hidden within the pixel value obtained as the integral value.
- the waveform of a signal expressed with reference to a predetermined dimension is also simply referred to as a waveform.
- As described above, because the image (optical signal) of the real world 1 is integrated in the spatial and temporal directions in units of pixels, a part of the continuity of the real world 1 image is lost from the image data, and only another part of that continuity remains in the image data.
- Alternatively, the image data may contain continuity that has changed from the continuity of the real world 1 image.
- FIG. 7 is a diagram for explaining the relationship between the light incident on the detection elements corresponding to the pixels D to F and the pixel value.
- F (x) in FIG. 7 is an example of a function representing the distribution of light intensity in the real world 1 with the coordinate X in the spatial direction X in space (on the detection element) as a variable.
- F (x) is an example of a function that represents the distribution of light intensity in the real world 1 when it is constant in the spatial direction Y and the time direction.
- L indicates the length in the spatial direction X of the light receiving surface of the detection element corresponding to the pixels D to F.
- the pixel value of one pixel is represented by the integral of F (x).
- the pixel value P of the pixel E is represented by Expression (2).
- In Expression (2), x_1 is the spatial coordinate, in the spatial direction X, of the left boundary of the light receiving surface of the detection element corresponding to the pixel E, and x_2 is the spatial coordinate, in the spatial direction X, of the right boundary of that light receiving surface.
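From the description, Expression (2) is presumably the one-dimensional spatial counterpart of the integral above:

```latex
% Reconstruction of Expression (2): the pixel value P of pixel E as the
% spatial integral of F(x) across its light receiving surface.
\[
  P \;=\; \int_{x_1}^{x_2} F(x)\, dx
\]
```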
- FIG. 8 is a diagram illustrating the relationship between the passage of time, light incident on a detection element corresponding to one pixel, and a pixel value.
- F (t) in FIG. 8 is a function representing the distribution of light intensity in the real world 1 with time t as a variable.
- F (t) is an example of a function representing the distribution of light intensity in the real world 1 when it is constant in the spatial direction Y and the spatial direction X.
- In FIG. 8, t_s indicates the shutter time.
- Frame #n-1 is the frame temporally preceding frame #n, and frame #n+1 is the frame temporally following frame #n; that is, the frames are displayed in the order #n-1, #n, #n+1.
- In the example shown in FIG. 8, the shutter time t_s and the frame interval are equal.
- the pixel value of one pixel is represented by the integral of F (t).
- the pixel value P of the pixel in frame #n is represented by Expression (3).
- In Expression (3), t_1 is the time at which the conversion of the incident light into charge starts, and t_2 is the time at which the conversion of the incident light into charge ends.
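Correspondingly, Expression (3) is presumably the temporal counterpart, integrating over the charge accumulation period:

```latex
% Reconstruction of Expression (3): the pixel value P of a pixel in frame #n
% as the integral of F(t) over the charge accumulation period t_1 to t_2.
\[
  P \;=\; \int_{t_1}^{t_2} F(t)\, dt
\]
```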
- the integration effect in the spatial direction by the sensor 2 is simply referred to as the spatial integration effect
- the integration effect in the time direction by the sensor 2 is simply referred to as the time integration effect
- the spatial integration effect or the time integration effect is also simply referred to as an integration effect.
- FIG. 9 is a diagram showing an image of a linear object (for example, a thin line) in the real world 1, that is, an example of the distribution of light intensity.
- In FIG. 9, the position toward the top of the figure indicates the light intensity (level), the position toward the upper right indicates the position in the spatial direction X, which is one direction in the spatial directions of the image, and the position toward the right indicates the position in the spatial direction Y, which is another direction in the spatial directions of the image.
- The image of the linear object of the real world 1 contains a certain continuity. That is, the image shown in FIG. 9 has the continuity that the cross-sectional shape (the change in level with respect to the change in position in the direction orthogonal to the length direction) is the same at any position in the length direction.
- FIG. 10 is a diagram showing an example of pixel values of image data obtained by actual imaging, corresponding to the image shown in FIG.
- FIG. 10 is a schematic diagram of image data obtained by capturing, with an image sensor, an image of a linear object whose diameter is shorter than the length of the light receiving surface of each pixel and which extends in a direction deviating from the pixel array (the vertical or horizontal array of pixels) of the image sensor. The image incident on the image sensor when the image data shown in FIG. 10 was acquired is the image of the linear object of the real world 1 shown in FIG. 9.
- In FIG. 10, the position toward the top of the figure indicates the pixel value, the position toward the upper right indicates the position in the spatial direction X, which is one direction in the spatial directions of the image, and the position toward the right indicates the position in the spatial direction Y, which is another direction in the spatial directions of the image. The direction indicating the pixel value in FIG. 10 corresponds to the direction of the level in FIG. 9, and the spatial direction X and the spatial direction Y in FIG. 10 are the same as the directions in FIG. 9.
- When an image of a linear object whose diameter is shorter than the length of the light receiving surface of each pixel is captured by the image sensor, the linear object is represented in the resulting image data schematically as, for example, a plurality of arc shapes (kamaboko shapes) of a predetermined length arranged obliquely. The arc shapes are almost identical, and each arc shape is formed vertically on one column of pixels or horizontally on one row of pixels. For example, each arc shape in FIG. 10 is formed vertically on one column of pixels.
- In this way, in the image data captured and obtained by the image sensor, the continuity that the image of the linear object of the real world 1 possessed, namely that the cross-sectional shape in the spatial direction Y is the same at any position in the length direction, is lost. It can also be said that this continuity has changed into a different continuity: identical arc shapes, formed vertically on one column of pixels or horizontally on one row of pixels, are arranged at regular intervals.
- FIG. 11 is a diagram showing an example of an image of the real world 1 of an object having a single color and a straight edge, which is a color different from the background, that is, an example of light intensity distribution.
- the upper position in the figure indicates the light intensity (level)
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image.
- the position to the right of indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the image of the real world 1 of an object which has a color different from the background and has a straight edge has a predetermined stationarity. That is, the image shown in FIG. 11 has the stationarity that the cross-sectional shape (the change in level with respect to the change in position in the direction perpendicular to the edge) is the same at an arbitrary position in the length direction of the edge.
- FIG. 12 is a diagram showing an example of pixel values of image data obtained by actual imaging corresponding to the image shown in FIG. As shown in FIG. 12, since the image data is composed of pixel values in units of pixels, the image data has a step shape.
- FIG. 13 is a schematic diagram of the image data shown in FIG.
- FIG. 13 is a schematic diagram of image data obtained by capturing, with the image sensor, an image of the real world 1 of an object which has a color different from the background and has a single color and a linear edge, the edge extending in a direction deviating from the pixel arrangement (the vertical or horizontal arrangement of pixels) of the image sensor. The image incident on the image sensor when the image data shown in FIG. 13 was acquired is the image of the real world 1, shown in FIG. 11, of the object which has a color different from the background and has a single color and a linear edge.
- In FIG. 13, the position toward the top of the figure indicates the pixel value, the position toward the upper right indicates the position in the spatial direction X, which is one direction in the spatial directions of the image, and the position toward the right indicates the position in the spatial direction Y, which is another direction in the spatial directions of the image. The direction indicating the pixel value in FIG. 13 corresponds to the direction of the level in FIG. 11, and the spatial direction X and the spatial direction Y in FIG. 13 are the same as the directions in FIG. 11.
- When an image of the real world 1 of an object which has a color different from the background and has a single color and a linear edge is captured by the image sensor, the linear edge is, schematically, represented in the image data obtained as a result of the imaging by, for example, a plurality of claw shapes of a predetermined length arranged diagonally. The claw shapes are almost the same shape, and each is formed vertically on one column of pixels or horizontally on one row of pixels. For example, in FIG. 13, each claw shape is formed vertically on one column of pixels.
- In this way, in the image data captured and obtained by the image sensor, the stationarity that the cross-sectional shape is the same at an arbitrary position along the length direction of the edge, which the image of the real world 1 of the object having a color different from the background and having a single color and a linear edge had, is lost. It can also be said that the continuity which that image had has changed into the stationarity that claw shapes of the same shape, formed vertically on one column of pixels or horizontally on one row of pixels, are arranged at regular intervals.
- the data continuity detecting unit 101 detects such continuity of data included in, for example, the data 3 which is an input image. For example, the data continuity detecting unit 101 detects data continuity by detecting a region having a constant characteristic in a predetermined dimensional direction. For example, the data continuity detecting unit 101 detects the region, shown in FIG. 10, in which the same arc shapes are arranged at regular intervals. Further, for example, the data continuity detecting unit 101 detects the region, shown in FIG. 13, in which the same claw shapes are arranged at regular intervals.
- the data continuity detecting unit 101 detects data continuity by detecting an angle (inclination) in the spatial direction indicating a similar shape arrangement.
- Further, for example, the data continuity detecting unit 101 detects data continuity by detecting angles (movements) in the spatial direction and the temporal direction, which indicate how similar shapes are arranged in the spatial direction and the temporal direction.
- the data continuity detecting unit 101 detects data continuity by detecting a length of an area having a certain characteristic in a direction of a predetermined dimension.
- the portion of the data 3 in which the image of the real world 1 of the object having a single color and having a linear edge and different from the background is projected by the sensor 2 is also referred to as a binary edge.
- desired high-resolution data is generated from the data 3.
- the real world 1 is estimated from the data 3, and high-resolution data is generated based on the estimation result. That is, the real world 1 is estimated from the data 3, and high-resolution data is generated from the estimated real world 1 in consideration of the data 3.
- the sensor 2, which is a CCD, has an integration characteristic as described above. That is, one unit (for example, a pixel value) of the data 3 can be calculated by integrating the signal of the real world 1 over the detection region (for example, the light receiving surface) of the detection element (for example, a CCD) of the sensor 2.
- Applying this to high-resolution data, if the signal of the real world 1 can be estimated from the data 3, one value contained in the high-resolution data can be obtained by integrating the estimated signal of the real world 1 (in the spatiotemporal direction) over each detection region of the detection elements of a virtual high-resolution sensor. On the other hand, compared with the change of the signal of the real world 1, the data 3 cannot represent small changes of the signal of the real world 1. Therefore, by integrating the signal of the real world 1 estimated from the data 3 over smaller regions (in the spatiotemporal direction), high-resolution data indicating smaller changes of the signal of the real world 1 can be obtained. That is, for each detection element of the virtual high-resolution sensor, high-resolution data can be obtained by integrating the estimated signal of the real world 1 over its detection region.
- the image generation unit 103 integrates, for example, the estimated signal of the real world 1 in the space-time region of each detection element of the virtual high-resolution sensor, Generate high resolution data.
- In this way, when generating high-resolution data, the signal processing device 4 uses the relationship between the data 3 and the real world 1, the stationarity, and the spatial or temporal mixture in the data 3 (spatial mixing or time mixing).
- mixing means that in data 3, signals for two objects in the real world 1 are mixed into one value.
- Spatial mixing refers to the mixing, in the spatial direction, of the signals for two objects due to the spatial integration effect of the sensor 2. Time mixing will be described later.
- Real world 1 itself consists of an infinite number of phenomena, so in order to express real world 1 itself, for example, by mathematical formulas, an infinite number of variables are needed. From Data 3, it is not possible to predict all events in the real world 1.
- Therefore, the signal processing device 4 focuses on the part of the signal of the real world 1 which has stationarity and can be represented by a function f(x, y, z, t), and approximates that part of the signal of the real world 1 by a model 161 represented by N variables. Then, as shown in FIG. 14, the model 161 is predicted from the M pieces of data 162 in the data 3.
- To allow the model 161 to be predicted from the M pieces of data 162, first, the model 161 is represented by N variables based on the stationarity, and second, an equation using the N variables, which indicates the relationship between the model 161 represented by the N variables and the M pieces of data 162, is formulated based on the integration characteristics of the sensor 2. Since the model 161 is represented by the N variables based on the stationarity, it can be said that the equation using the N variables describes the relationship between the part of the signal of the real world 1 having stationarity and the part of the data 3 having data continuity.
- In other words, the part of the signal of the real world 1 having stationarity, which is approximated by the model 161 represented by the N variables, gives rise to data continuity in the data 3. The data continuity detecting unit 101 detects the region of the data 3 where the data continuity has arisen, and the features of that region.
- the edge has a slope.
- the arrow B in FIG. 15 indicates the edge inclination.
- the inclination of the predetermined edge can be represented by an angle with respect to a reference axis or a direction with respect to a reference position.
- the inclination of a predetermined edge can be represented by the angle between the coordinate axis in the spatial direction X and the edge.
- the inclination of the predetermined edge can be represented by a direction indicated by the length in the spatial direction X and the length in the spatial direction Y.
- In the data 3 output from the sensor 2, claw shapes corresponding to the edge are arranged, corresponding to the position of interest (A) of the edge of the image of the real world 1 in FIG. 15, at the position indicated by A', and, corresponding to the inclination of the edge of the image of the real world 1 in FIG. 15, in the direction of the inclination indicated by B'.
- the model 161 represented by the N variables approximates the part of the signal of the real world that gives rise to data continuity in the data 3. In formulating the equation using the N variables, which indicates the relationship between the model 161 represented by the N variables and the M pieces of data 162, the values of the part of the data 3 where data continuity arises are used. In this case, in the data 3 shown in FIG. 16, focusing on the values belonging to the mixed region, where data continuity arises, equations are established such that the value obtained by integrating the signal of the real world 1 is equal to the value output from the detection element of the sensor 2. For example, multiple equations can be established for the multiple values in the data 3 where data continuity arises.
- In FIG. 16, A indicates the position of interest of the edge, and A' indicates the pixels (positions) corresponding to the position of interest (A) of the edge in the image of the real world 1.
- the mixed area refers to an area of data in which the signals for two objects in the real world 1 are mixed into one value in data 3.
- For example, in the data 3 for an image of the real world 1 of an object which has a color different from the background and has a single color and a straight edge, pixel values in which the image of the object having the straight edge and the image of the background are integrated belong to the mixed region.
- FIG. 17 is a diagram illustrating signals for two objects in the real world 1 and values belonging to a mixed area when an equation is formed.
- the left side of FIG. 17 shows the signals of the real world 1 for two objects in the real world 1, acquired in the detection region of one detection element of the sensor 2 and having a predetermined spread in the spatial direction X and the spatial direction Y. The right side of FIG. 17 shows the pixel value P of one pixel of the data 3 onto which the signals of the real world 1 shown on the left side of FIG. 17 are projected. That is, it shows the pixel value P of one pixel of the data 3 onto which the signals of the real world 1 for the two objects in the real world 1, acquired by one detection element of the sensor 2 and having a predetermined spread in the spatial direction X and the spatial direction Y, are projected.
- L in FIG. 17 indicates the level of the signal of the real world 1 in the white portion of FIG. 17 with respect to one object in the real world 1.
- R in FIG. 17 indicates the level of the signal of the real world 1 in the shaded portion in FIG. 17 with respect to another object in the real world 1.
- Here, the mixture ratio α is the ratio of the signals (areas) for the two objects that are incident on the detection region, having a predetermined spread in the spatial direction X and the spatial direction Y, of one detection element of the sensor 2. For example, the mixture ratio α indicates the ratio of the area over which the signal for one object is incident, to the area of the detection region, having a predetermined spread in the spatial direction X and the spatial direction Y, of one detection element of the sensor 2.
- the relationship between the level L, the level R, and the pixel value P can be expressed by Expression (4).
- Here, in some cases, the pixel value of the pixel of the data 3 located on the right side of the pixel of interest can be used as the level R, and the pixel value of the pixel of the data 3 located on the left side of the pixel of interest can be used as the level L.
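- Expression (4) is referenced above but not reproduced in this text. On the standard two-component mixing model implied by the surrounding description, it presumably takes the form

$$P = \alpha \times L + (1 - \alpha) \times R$$

so that, when the level L and the level R are known (for example, taken from the neighboring pixels as described above) and L ≠ R, the mixture ratio can be recovered as α = (P − R) / (L − R).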
- the mixture ratio α and the mixed region can be considered in the time direction in the same way as in the spatial direction. For example, when an object of the real world 1 that is the target of imaging is moving relative to the sensor 2, the ratio of the signals for the two objects incident on the detection region of one detection element of the sensor 2 changes in the time direction. The signals for the two objects, which are incident on the detection region of one detection element of the sensor 2 with a ratio changing in the time direction, are projected to one value of the data 3 by the detection element of the sensor 2. The mixing of the signals for the two objects in the time direction due to the time integration effect of the sensor 2 is called time mixing.
- the data continuity detecting unit 101 detects, for example, a region of pixels in the data 3 onto which the signals of the real world 1 for two objects in the real world 1 are projected. Further, the data continuity detecting unit 101 detects, for example, the inclination in the data 3 corresponding to the inclination of the edge of the image of the real world 1.
- Then, based on, for example, the region of pixels having the predetermined mixture ratio detected by the data continuity detecting unit 101 and the inclination of the region, the real world estimating unit 102 establishes an equation using the N variables, which indicates the relationship between the model 161 represented by the N variables and the M pieces of data 162, and estimates the signal of the real world 1 by solving the equation. A specific example of estimating the real world 1 will now be described.
- Consider approximating the signal of the real world represented by the function F(x, y, z, t), in the cross section at the position of the sensor 2 in the spatial direction Z, with an approximation function f(x, y, t) determined by the position x in the spatial direction X, the position y in the spatial direction Y, and the time t.
- the detection area of the sensor 2 has a spread in the spatial direction X and the spatial direction Y.
- the approximation function f (x, y, t) is a function that approximates the signal of the real world 1 acquired by the sensor 2 and having a spatial and temporal spread.
- the value P (x, y, t) of the data 3 is obtained by the projection of the signal of the real world 1 by the sensor 2.
- the value P (x, y, t) of the data 3 is, for example, a pixel value output from the sensor 2 which is an image sensor.
- the value obtained by projecting the approximation function f (x, y, t) can be expressed as a projection function S (x, y, t).
- the function F (x, y, Z , t) representing the real world 1 signal can be a function of infinite order.
- the function S_i(x, y, t) can be described from the description of the function f_i(x, y, t).
- By formulating the projection by the sensor 2 as equation (6), the relationship between the data 3 and the signal of the real world can be formulated from equation (5) as equation (7).
- j is the data index.
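- Equations (5) to (7) are referenced but not reproduced in this text; consistent with the surrounding description (the approximation by N variables, the projection by the sensor 2, and the relationship to the M pieces of data 162), they presumably take forms along the following lines, where the w_i are the N variables:

$$f(x, y, t) = \sum_{i=1}^{N} w_i \, f_i(x, y, t) \qquad \text{(cf. equation (5))}$$

$$S_i(x_j, y_j, t_j) = \iiint f_i(x, y, t) \, dx \, dy \, dt \qquad \text{(integration over a detection region; cf. equation (6))}$$

$$P_j(x_j, y_j, t_j) = \sum_{i=1}^{N} w_i \, S_i(x_j, y_j, t_j) \qquad \text{(cf. equation (7))}$$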
- the model 161 of the real world 1 can thus be obtained. Here, N is the number of variables representing the model 161 approximating the real world 1, and M is the number of pieces of data 162 included in the data 3.
- By expressing the approximation function in the form of equation (5), the variable part w_i can be made independent. In this case, i directly indicates the number of the variable. In addition, the form of the function represented by f_i can be made independent, and a desired function can be used as f_i. Accordingly, the number N of the variables w_i can be defined without depending on the form of the functions, and the variables w_i can be obtained from the relationship between the number N of the variables w_i and the number M of pieces of data.
- the real world 1 can be estimated from the data 3.
- That is, the N variables are defined; in other words, equation (5) is defined. This is made possible by describing the real world 1 using the stationarity. For example, the signal of the real world 1 can be described by a model 161 in which a cross section is represented by a polynomial and the same cross-sectional shape continues in a constant direction.
- Then, the projection by the sensor 2 is formulated, and equation (7) is described. That is, the result of integrating the signal of the real world 1 is formulated as the data 3.
- data 162 is collected from a region having data continuity detected by the data continuity detecting unit 101.
- For example, the data 162 of a region where a constant cross section continues, which is an example of stationarity, is collected.
- When N = M, the number N of variables equals the number M of equations, so the variables w_i can be obtained by establishing simultaneous equations. When the number M of data is larger than N, the variables w_i can be obtained, for example, by the least squares method.
- In equation (9), P'_j(x_j, y_j, t_j) are the predicted values. Equation (12) is derived from the condition that the sum of squared differences between the predicted values P'_j and the measured values P_j of the data 3 is minimized, and rewriting equation (12) yields the matrix form shown in equation (13).
- Here, S_i(x_j, y_j, t_j) is abbreviated as S_i(j).
- In equation (13), S_i(j) represents the projection of the real world 1, P_j represents the data 3, and w_i are the variables that describe the characteristics of the signal of the real world 1 and are to be obtained. Therefore, the real world 1 can be estimated by inputting the data 3 into equation (13) and obtaining W_MAT by a matrix solution method or the like. That is, the real world 1 can be estimated by calculating expression (17).
- For example, W_MAT can be obtained using the transpose of S_MAT.
- In this way, the real world estimating unit 102 estimates the real world 1 by, for example, inputting the data 3 into equation (13) and obtaining W_MAT by a matrix solution method or the like.
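- As a concrete illustration of this matrix solution step, the following is a minimal sketch in Python with NumPy, assuming the M×N matrix S of projected components S_i(j) and the vector P of M observed values have already been constructed elsewhere; the synthetic values below are illustrative only, not values from the text.

```python
import numpy as np

def estimate_weights(S: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Solve S w = P in the least-squares sense (covers N < M)."""
    w, *_ = np.linalg.lstsq(S, P, rcond=None)
    return w

# Synthetic demonstration: M = 27 data values, N = 5 model variables.
rng = np.random.default_rng(0)
S = rng.standard_normal((27, 5))
true_w = np.array([1.0, -0.5, 0.25, 0.0, 2.0])
P = S @ true_w
print(estimate_weights(S, P))  # recovers true_w up to rounding error
```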
- The cross-sectional shape of the signal of the real world 1, that is, the change in level with respect to the change in position, is described by a polynomial. It is assumed that the cross-sectional shape of the signal of the real world 1 is constant and that the cross section moves at a constant velocity. Then, the projection of the signal of the real world 1 onto the data 3 by the sensor 2 is formulated by three-dimensional integration of the signal of the real world 1 in the spatiotemporal direction.
- Equations (18) and (19) are obtained from the assumption that the cross-sectional shape of the signal of the real world 1 moves at a constant velocity.
- S(x, y, t) indicates the integrated value over the region from the position x_s to the position x_e in the spatial direction X, from the position y_s to the position y_e in the spatial direction Y, and from the time t_s to the time t_e in the time direction t, that is, over the region represented by a spatiotemporal rectangular parallelepiped.
- By solving equation (13) using a desired function f(x', y') with which equation (21) can be defined, the signal of the real world 1 can be estimated.
- In this way, it is assumed that the signal of the real world 1 includes the stationarity represented by equation (18), equation (19), and equation (22). This expresses that a cross section of a constant shape is moving in the spatiotemporal direction, as shown in the figures.
- FIG. 19 is a diagram illustrating an example of the M pieces of data 162 extracted from the data 3. For example, 27 pixel values are extracted as the data 162, and the extracted pixel values are denoted P_j(x, y, t). In this case, j is 0 to 26.
- For example, the pixel value of the pixel corresponding to the position of interest at the time t = n is P_13(x, y, t), and the pixel values of the pixels having data continuity are arranged around it.
- The region from which a pixel value, as the data 3 output from the image sensor which is the sensor 2, is obtained has a spread in the time direction and the two-dimensional spatial directions. Therefore, for example, the center of gravity of the rectangular parallelepiped corresponding to a pixel (the region from which the pixel value is obtained) can be used as the position of the pixel in the spatiotemporal direction.
- the real world estimating unit 102 generates equation (13) from, for example, the 27 pixel values P_0(x, y, t) through P_26(x, y, t) and equation (23), and obtains W, thereby estimating the signal of the real world 1.
- a Gaussian function, a sigmoid function, or the like can be used as the function f_i(x, y, t).
- the data 3 has values obtained by integrating the signal of the real world 1 in the time direction and the two-dimensional spatial directions. For example, a pixel value of the data 3 output from the image sensor which is the sensor 2 is a value in which the light incident on the detection element, that is, the signal of the real world 1, is integrated in the time direction over the detection time, which is the shutter time, and integrated in the spatial directions over the light receiving region of the detection element.
- In contrast, high-resolution data with higher resolution in the spatial direction is generated by integrating the estimated signal of the real world 1, in the time direction, over the same time as the detection time of the sensor 2 that output the data 3, and, in the spatial direction, over regions narrower than the light receiving region of the detection element of the sensor 2 that output the data 3.
- When generating high-resolution data with higher resolution in the spatial direction, the regions over which the estimated signal of the real world 1 is integrated can be set completely independently of the light receiving region of the detection element of the sensor 2 that output the data 3. For example, the high-resolution data can be given a resolution that is an integer multiple of that of the data 3 in the spatial direction, and also a resolution that is a rational multiple of that of the data 3 in the spatial direction, such as 5/3 times.
- Similarly, high-resolution data with higher resolution in the time direction is generated by integrating the estimated signal of the real world 1, in the spatial direction, over the same region as the light receiving region of the detection element of the sensor 2 that output the data 3, and, in the time direction, over times shorter than the detection time of the sensor 2 that output the data 3.
- When generating high-resolution data with higher resolution in the time direction, the times over which the estimated signal of the real world 1 is integrated can be set completely independently of the detection time of the detection element of the sensor 2 that output the data 3. For example, the high-resolution data can be given a resolution that is an integer multiple of that of the data 3 in the time direction, and also a resolution that is a rational multiple of that of the data 3 in the time direction, such as 7/4 times.
- High-resolution data from which motion blur has been removed is generated by integrating the estimated signal of the real world 1 only in the spatial direction, without integrating it in the time direction. Furthermore, high-resolution data with higher resolution in both the time direction and the spatial direction is generated by integrating the estimated signal of the real world 1, in the spatial direction, over regions narrower than the light receiving region of the detection element of the sensor 2 that output the data 3, and, in the time direction, over times shorter than the detection time of the sensor 2 that output the data 3. In this case, the regions and times over which the estimated signal of the real world 1 is integrated can be set completely independently of the light receiving region and the shutter time of the detection element of the sensor 2 that output the data 3.
- In this way, the image generation unit 103 generates data with higher resolution in the time direction or the spatial direction by, for example, integrating the estimated signal of the real world 1 over a desired spatiotemporal region.
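- The following is a minimal sketch, in Python, of this re-integration idea in one spatial dimension: once an approximation f of the real-world signal has been estimated, pixel values at any resolution follow from integrating f over detection regions of the chosen size. The polynomial coefficients and the doubling factor are illustrative assumptions, not values from the text.

```python
from numpy.polynomial import polynomial as poly

coeffs = [0.2, 1.5, -0.3]   # hypothetical estimated model: f(x) = 0.2 + 1.5x - 0.3x^2
F = poly.polyint(coeffs)    # antiderivative of f

def pixel_value(x_start: float, x_end: float) -> float:
    """Mean of f over [x_start, x_end], imitating a detector's integration."""
    return (poly.polyval(x_end, F) - poly.polyval(x_start, F)) / (x_end - x_start)

original = [pixel_value(x, x + 1.0) for x in range(4)]             # unit-width pixels
doubled = [pixel_value(x / 2.0, x / 2.0 + 0.5) for x in range(8)]  # half-width pixels
print(original)
print(doubled)
```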
- FIGS. 24 to 28 show an example of an input image to the signal processing of the signal processing device 4 and examples of results of the processing.
- FIG. 24 is a diagram showing the original image of the input image (corresponding to the optical signal of the real world 1).
- FIG. 25 is a diagram illustrating an example of an input image.
- the input image shown in FIG. 25 is an image in which the average of the pixel values of the pixels belonging to each 2×2-pixel block of the image shown in FIG. 24 is generated as the pixel value of one pixel. That is, the input image is an image obtained by applying, to the image shown in FIG. 24, spatial integration imitating the integration characteristics of the sensor.
- FIG. 26 is a diagram showing an image obtained by applying the conventional classification adaptive processing to the input image shown in FIG.
- the class classification adaptive processing consists of class classification processing and adaptive processing: the class classification processing classifies data into classes based on their properties, and the adaptive processing is performed for each class. In the adaptive processing, for example, a low-quality or standard-quality image is converted into a high-quality image by mapping using predetermined tap coefficients. That is, in the adaptive processing, first data is converted into second data by mapping using predetermined tap coefficients.
- As the mapping method using the tap coefficients, for example, a linear first-order combination model is adopted. In this model, the pixel value y of an HD pixel constituting a high-resolution HD (High Definition) image is obtained, from multiple SD pixels extracted from a standard-resolution SD (Standard Definition) image as prediction taps for predicting the HD pixel, by the linear first-order equation (linear combination) of equation (24), y = w_1 x_1 + w_2 x_2 + ... + w_N x_N. In equation (24), x_n represents (the pixel value of) the n-th SD pixel constituting the prediction tap for the HD pixel y, and w_n represents the n-th tap coefficient multiplied by that SD pixel. Here, the prediction tap is assumed to consist of N SD pixels x_1, x_2, ..., x_N.
- Note that the pixel value y of the HD pixel can also be obtained not by the linear first-order equation shown in equation (24) but by a higher-order equation of second order or higher.
- Now, the true value of (the pixel value of) the k-th HD pixel is denoted y_k, and the predicted value of the true value y_k obtained by equation (24) is denoted y_k'. The prediction error e_k between them is expressed, for example, as e_k = y_k − y_k' (equation (25)). Since the predicted value y_k' is obtained according to equation (24), substituting it yields e_k = y_k − (w_1 x_{1,k} + w_2 x_{2,k} + ... + w_N x_{N,k}) (equation (26)). In equation (26), x_{n,k} represents the n-th SD pixel constituting the prediction tap for the k-th HD pixel, and K represents the number of samples of sets of the HD pixel y_k and the SD pixels x_{1,k}, x_{2,k}, ..., x_{N,k} constituting the prediction tap for the HD pixel y_k.
- The tap coefficients w_n that minimize the sum E of the squared errors of equation (27) must satisfy the condition that the partial derivative of the sum E with respect to each tap coefficient w_n is 0 (n = 1, 2, ..., N). By substituting equation (26) for e_k in equation (30), equation (30) can be represented by the normal equations shown in equation (31), whose right-hand side consists of the terms Σ_k x_{1,k} y_k, Σ_k x_{2,k} y_k, ..., Σ_k x_{N,k} y_k. By preparing a certain number of sets of the HD pixels y_k and the SD pixels x_{n,k}, as many normal equations of equation (31) as the number of tap coefficients w_n to be obtained can be established, and by solving equation (31), the optimal tap coefficients w_n can be obtained. In solving equation (31), it is possible to adopt, for example, a sweeping-out method (Gauss-Jordan elimination).
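- As a concrete illustration, the following minimal Python/NumPy sketch learns tap coefficients by solving the normal equation from teacher/student pairs; the array names, shapes, and the synthetic data are assumptions for illustration, not part of the described method.

```python
import numpy as np

def learn_tap_coefficients(x_taps: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve (X^T X) w = X^T y, the normal equation, for the tap coefficients."""
    return np.linalg.solve(x_taps.T @ x_taps, x_taps.T @ y)

def predict_hd_pixel(w: np.ndarray, taps: np.ndarray) -> float:
    """The linear first-order combination of equation (24)."""
    return float(w @ taps)

# Synthetic demonstration: K = 1000 samples, N = 9 prediction taps.
rng = np.random.default_rng(1)
x = rng.standard_normal((1000, 9))   # student (SD) prediction taps
w_true = rng.standard_normal(9)
y = x @ w_true                        # teacher (HD) pixels
print(np.allclose(learn_tap_coefficients(x, y), w_true))  # True
```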
- As the SD pixels x_{1,k}, x_{2,k}, ..., x_{N,k} constituting the prediction tap for the HD pixel y_k, SD pixels located spatially or temporally close to the position on the SD image corresponding to the HD pixel y_k can be adopted.
- In the class classification adaptive processing, the learning of the tap coefficients w_n and the mapping using the tap coefficients w_n are performed for each class. That is, class classification processing is applied to the HD pixel y_k of interest, and the learning of the tap coefficients w_n and the mapping using the tap coefficients w_n are performed for each class obtained by the class classification processing.
- As the class classification processing applied to the HD pixel y_k, for example, a plurality of SD pixels are extracted from the SD image as a class tap used for the class classification of the HD pixel y_k, and M-bit ADRC (Adaptive Dynamic Range Coding) using the class tap composed of the plurality of SD pixels can be adopted.
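- The ADRC computation itself is not spelled out in this text; the following minimal Python sketch shows the commonly described 1-bit variant, in which each class-tap pixel is requantized to one bit at the midpoint of the tap's dynamic range and the bits are packed into a class code. The tap contents are illustrative assumptions.

```python
import numpy as np

def adrc_class_code(class_tap: np.ndarray) -> int:
    """1-bit ADRC: threshold each tap pixel at mid-range, pack bits into a code."""
    lo, hi = int(class_tap.min()), int(class_tap.max())
    mid = (lo + hi) / 2.0
    code = 0
    for bit in (class_tap.ravel() > mid):
        code = (code << 1) | int(bit)
    return code

print(adrc_class_code(np.array([[10, 200], [120, 90]])))  # 0b0110 = 6
```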
- the class classification adaptive processing differs from, for example, simple interpolation processing in that components which are not included in the SD pixels but are included in the HD pixels are reproduced. That is, as far as equation (24) alone is concerned, the class classification adaptive processing is the same as interpolation processing using a so-called interpolation filter; however, since the tap coefficients w_n, which correspond to the tap coefficients of the interpolation filter, are obtained by learning using HD pixels as teacher data and SD pixels as student data, the components contained in the HD pixels can be reproduced.
- Depending on how the teacher data y and the student data x are selected, tap coefficients w_n for performing various conversions can be obtained. For example, when the learning is performed using an HD image as the teacher data y and an SD image obtained by reducing the number of pixels of that HD image as the student data x, tap coefficients w_n can be obtained that perform mapping which improves resolution, that is, which increases the number of pixels constituting an image.
- FIG. 26 shows an image obtained by performing mapping by the class classification adaptive processing described above on the input image of FIG. 25. In the image of FIG. 26, it can be seen that the image of the thin line differs from that of the original image in FIG. 24.
- FIG. 27 is a diagram illustrating a result of detecting a thin line region from the input image illustrated in the example of FIG. 25 by the data continuity detection unit 101.
- a white area indicates a thin line area, that is, an area where the arc shapes shown in FIG. 10 are arranged.
- FIG. 28 is a diagram illustrating an example of an output image obtained by performing signal processing in the signal processing device 4 using the image illustrated in FIG. 25 as an input image. As shown in FIG. 28, the signal processing device 4 can obtain an image closer to the thin line image of the original image shown in FIG.
- FIG. 29 is a flowchart for explaining signal processing by the signal processing device 4.
- the data continuity detecting unit 101 executes a process of detecting continuity.
- That is, the data continuity detecting unit 101 detects the continuity of the data included in the input image, which is the data 3, and supplies data continuity information indicating the detected continuity of the data to the real world estimating unit 102 and the image generating unit 103.
- the data continuity detecting unit 101 detects the continuity of data corresponding to the continuity of a signal in the real world.
- the continuity of the data detected by the data continuity detecting unit 101 is a part of the continuity of the image of the real world 1 contained in the data 3, or a stationarity that has changed from the stationarity of the signal of the real world 1.
- For example, the data continuity detecting unit 101 detects data continuity by detecting a region having a constant characteristic in a predetermined dimensional direction. Also, for example, the data continuity detecting unit 101 detects data continuity by detecting an angle (inclination) in the spatial direction indicating an arrangement of similar shapes.
- The details of the processing for detecting the continuity in step S101 will be described later.
- the data continuity information can be used as a feature quantity indicating the feature of data 3.
- step S102 the real world estimating unit 102 executes a process of estimating the real world. That is, the real world estimating unit 102 estimates the signal of the real world 1 based on the input image and the data continuity information supplied from the data continuity detecting unit 101. For example, in the processing of step S102, the real world estimating unit 102 estimates the signal of the real world 1 by predicting a model 161 that approximates (describes) the real world 1. The real world estimating unit 102 supplies the real world estimation information indicating the estimated signal of the real world 1 to the image generating unit 103.
- the real world estimating unit 102 estimates the signal of the real world 1 by estimating the width of a linear object. Also, for example, the real world estimating unit 102 estimates the signal of the real world 1 by predicting a level indicating the color of a linear object.
- The details of the processing for estimating the real world in step S102 will be described later.
- the real world estimation information can be used as a feature amount indicating the feature of the data 3.
- In step S103, the image generating unit 103 executes processing for generating an image, and the processing ends. That is, the image generating unit 103 generates an image based on the real world estimation information and outputs the generated image. Alternatively, the image generating unit 103 generates an image based on the data continuity information and the real world estimation information and outputs the generated image.
- For example, based on the real world estimation information, the image generating unit 103 integrates the estimated real-world light in the spatial direction, thereby generating an image with higher resolution in the spatial direction as compared with the input image, and outputs the generated image. For example, based on the real world estimation information, the image generating unit 103 integrates the estimated real-world light in the spatiotemporal direction, thereby generating an image with higher resolution in the time direction and the spatial direction as compared with the input image, and outputs the generated image. The details of the image generation processing in step S103 will be described later.
- the signal processing device 4 detects the continuity of the data from the data 3 and estimates the real world 1 based on the continuity of the detected data. Then, the signal processing device 4 generates a signal that is closer to the real world 1 based on the estimated real world 1.
- That is, from a second signal of a second dimension, which is obtained from a first signal that is a real-world signal having a first dimension, in which the second dimension is smaller than the first dimension and a part of the stationarity of the real-world signal is missing, the signal processing device 4 detects the data continuity corresponding to the missing stationarity and estimates the first signal based on the detected data continuity.
- FIG. 30 is a block diagram showing the configuration of the data continuity detecting unit 101. As shown in FIG.
- the data continuity detection unit 101 shown in FIG. 30 is included in the data 3 resulting from the continuity that the cross-sectional shape of the object is the same when a thin line object is imaged. Detect data continuity. That is, the data continuity detection unit 101 shown in FIG. 30 is configured to change the position in the direction orthogonal to the length direction at an arbitrary position in the length direction of the image of the real world 1 as a thin line. Detects the stationarity of the data contained in Data 3, resulting from the stationarity that the change in light level with respect to is the same.
- More specifically, the data continuity detecting unit 101 shown in FIG. 30 detects, in the data 3 obtained by capturing an image of a thin line with the sensor 2 having the spatial integration effect, a region in which a plurality of arc shapes (kamaboko shapes) of a predetermined length are arranged adjacently and diagonally.
- That is, the data continuity detecting unit 101 extracts, from the input image which is the data 3, the part of the image data (hereinafter also referred to as the non-stationary component) other than the part of the image data onto which the thin line image having data continuity is projected (hereinafter also referred to as the stationary component); from the extracted non-stationary component and the input image, it detects the pixels onto which the image of the thin line of the real world 1 is projected, and detects the region of the input image consisting of the pixels onto which the image of the thin line of the real world 1 is projected.
- the non-stationary component extraction unit 201 extracts the non-stationary component from the input image, and supplies non-stationary component information indicating the extracted non-stationary component, together with the input image, to the vertex detecting unit 202 and the monotone increase/decrease detecting unit 203.
- For example, the non-stationary component extraction unit 201 extracts the non-stationary component by approximating the background in the input image, which is the data 3, with a plane.
- In the figure, the solid line indicates the pixel values of the data 3, the dotted line indicates the approximate values given by the plane approximating the background, A indicates the pixel value of the pixel onto which the thin line image is projected, and PL indicates the plane approximating the background.
- the pixel values of a plurality of pixels in the image data portion having data continuity are discontinuous with respect to the non-stationary component.
- In this way, the non-stationary component extraction unit 201 detects the discontinuous portions of the pixel values of the plurality of pixels of the image data which is the data 3, in which an image that is an optical signal of the real world 1 is projected and in which a part of the stationarity of the image of the real world 1 is missing.
- the vertex detection unit 202 and the monotone increase / decrease detection unit 203 remove non-stationary components from the input image based on the non-stationary component information supplied from the non-stationary component extraction unit 201. For example, the vertex detection unit 202 and the monotone increase / decrease detection unit 203 set the pixel value of a pixel on which only the background image is projected to 0 in each pixel of the input image, thereby To remove unsteady components.
- For example, the vertex detecting unit 202 and the monotone increase/decrease detecting unit 203 can also remove the non-stationary component from the input image by subtracting, from the pixel value of each pixel of the input image, the value approximated by the plane PL.
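- A minimal sketch, in Python, of this plane approximation and subtraction follows; fitting a least-squares plane a·x + b·y + c to all pixels as the background is an assumption made here for illustration (in practice, robust reweighting that down-weights thin-line pixels would be preferable).

```python
import numpy as np

def remove_background_plane(image: np.ndarray) -> np.ndarray:
    """Fit a plane a*x + b*y + c to the image and subtract it."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, image.ravel().astype(float), rcond=None)
    plane = (A @ coeffs).reshape(h, w)    # approximate non-stationary component
    return image - plane                  # stationary component remains
```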
- Since the vertex detecting unit 202 through the continuity detecting unit 204 can thereby process only the part of the image data onto which the thin line is projected, the processing in the vertex detecting unit 202 through the continuity detecting unit 204 becomes easier.
- the non-stationary component extraction unit 201 may supply the image data obtained by removing the non-stationary component from the input image to the vertex detection unit 202 and the monotone increase / decrease detection unit 203.
- The following description concerns the image data, processed by the vertex detecting unit 202 through the continuity detecting unit 204, in which the non-stationary component has been removed from the input image, that is, the image data consisting only of pixels containing the stationary component, onto which the thin line image is projected.
- The cross-sectional shape in the spatial direction Y (the change in pixel value with respect to the change in position in the spatial direction) of the image data onto which the thin line image shown in FIG. 31 is projected can be considered, from the spatial integration effect of the image sensor which is the sensor 2, to be the trapezoid shown in FIG. 33 or the triangle shown in FIG. 34 when there is no optical LPF. However, a normal image sensor has an optical LPF; the image sensor acquires the image that has passed through the optical LPF and projects the acquired image onto the data 3, so that, in reality, the cross-sectional shape of the thin line image data in the spatial direction Y resembles a Gaussian distribution, as shown in FIG. 35.
- The vertex detecting unit 202 through the continuity detecting unit 204 detect regions consisting of pixels onto which the thin line image is projected, in which the same cross-sectional shape (the same change in pixel value with respect to the change in position in the spatial direction) is arranged at regular intervals in the vertical direction of the screen, and further detect the connection of the detected regions corresponding to the length direction of the thin line of the real world 1, thereby detecting the region having data continuity which consists of the pixels onto which the thin line image is projected. That is, the vertex detecting unit 202 through the continuity detecting unit 204 detect regions in which an arc shape (kamaboko shape) is formed on one vertical column of pixels in the input image, and judge whether the detected regions are adjacent in the horizontal direction, thereby detecting the connection of the regions in which the arc shapes are formed, corresponding to the length direction of the thin line image which is the signal of the real world 1. Similarly, the vertex detecting unit 202 through the continuity detecting unit 204 detect regions consisting of pixels onto which the thin line image is projected, in which the same cross-sectional shape is arranged at regular intervals in the horizontal direction of the screen, and further detect the connection of the detected regions corresponding to the length direction of the thin line of the real world 1; that is, they detect regions in which an arc shape is formed on one horizontal row of pixels in the input image, and judge whether the detected regions are adjacent in the vertical direction, thereby detecting the connection of the regions in which the arc shapes are formed, corresponding to the length direction of the thin line image which is the signal of the real world 1.
- the vertex detecting unit 202 detects a pixel having a larger pixel value than the surrounding pixels, that is, the vertex, and supplies vertex information indicating the position of the vertex to the monotone increase / decrease detecting unit 203.
- For example, for pixels arranged in one vertical column on the screen, the vertex detecting unit 202 compares the pixel value of each pixel with the pixel value of the pixel located above it and the pixel value of the pixel located below it, and detects a pixel having the larger pixel value as a vertex.
- the vertex detection unit 202 detects one or a plurality of vertices from one image, for example, an image of one frame.
- One screen contains frames or fields. The same applies to the following description.
- That is, the vertex detecting unit 202 selects a pixel of interest from the pixels of one frame image that have not yet been selected as the pixel of interest, compares the pixel value of the pixel of interest with the pixel value of the pixel above it, and compares the pixel value of the pixel of interest with the pixel value of the pixel below it; it detects a pixel of interest having a pixel value larger than the pixel value of the upper pixel and larger than the pixel value of the lower pixel, and sets the detected pixel of interest as a vertex.
- the vertex detecting unit 202 supplies vertex information indicating the detected vertex to the monotone increase/decrease detecting unit 203.
- the vertex detecting unit 202 may detect no vertex in some cases. For example, when the pixel values of the pixels of one image are all the same, or when the pixel values decrease monotonically in one or two directions, no vertex is detected. In these cases, no thin line image is projected on the image data.
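- A minimal sketch, in Python, of the vertical vertex detection just described: a pixel is taken as a vertex when its value exceeds both the pixel above it and the pixel below it. Skipping the border rows is a simplification made here.

```python
import numpy as np

def detect_vertices_vertical(image: np.ndarray) -> list[tuple[int, int]]:
    """Return (row, column) positions whose value exceeds both vertical neighbors."""
    vertices = []
    for y in range(1, image.shape[0] - 1):
        for x in range(image.shape[1]):
            if image[y, x] > image[y - 1, x] and image[y, x] > image[y + 1, x]:
                vertices.append((y, x))
    return vertices
```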
- Based on the vertex information indicating the position of the vertex supplied from the vertex detecting unit 202, the monotone increase/decrease detecting unit 203 detects a candidate for a region which consists of pixels arranged in one vertical column with respect to the vertex detected by the vertex detecting unit 202 and onto which the thin line image is projected.
- the monotonous increase / decrease detection unit 203 detects a region composed of pixels having a monotonically increasing pixel value as a candidate for a region composed of pixels onto which a thin line image is projected, based on the pixel value of the vertex.
- Monotonically increasing means that the pixel value of the pixel at a longer distance from the vertex is larger than the pixel value of the pixel at a shorter distance from the vertex.
- The processing for a region consisting of pixels having monotonically increasing pixel values is the same as the processing for a region consisting of pixels having monotonically decreasing pixel values, and its description is omitted.
- For example, the monotone increase/decrease detecting unit 203 obtains, for each pixel in one vertical column with respect to the vertex, the difference between its pixel value and the pixel value of the pixel above it, and the difference between its pixel value and the pixel value of the pixel below it. Then, the monotone increase/decrease detecting unit 203 detects the region where the pixel value monotonically decreases by detecting the pixels at which the sign of the difference changes.
- Further, from the region where the pixel value monotonically decreases, the monotone increase/decrease detecting unit 203 detects a region consisting of pixels having pixel values of the same sign as the pixel value of the vertex, with the sign of the pixel value of the vertex as the reference, as a candidate for a region consisting of pixels onto which the thin line image is projected.
- the monotone increase / decrease detection unit 203 compares the sign of the pixel value of each pixel with the sign of the pixel value of the upper pixel and the sign of the pixel value of the lower pixel, and determines the sign of the pixel value. By detecting the pixel where the pixel value changes, an area consisting of pixels having the pixel value of the same sign as the vertex is detected from the area where the pixel value monotonously decreases.
- the monotonous increase / decrease detection unit 203 detects an area composed of pixels arranged in the up-down direction, the pixel value of which monotonously decreases with respect to the vertex, and which has the pixel value of the same sign as the vertex.
- FIG. 36 is a diagram illustrating a process of detecting a vertex and detecting a monotonously increasing / decreasing region, which detects a pixel region on which a thin line image is projected, from a pixel value with respect to a position in the spatial direction Y.
- P indicates a vertex.
- the vertex detecting unit 202 compares the pixel value of each pixel with the pixel values of the pixels adjacent to it in the spatial direction Y, and detects the vertex P by detecting a pixel having a pixel value larger than the pixel values of the two pixels adjacent in the spatial direction Y.
- the region consisting of the vertex P and the pixels on both sides of the vertex P in the spatial direction Y is a monotonically decreasing region in which the pixel values of the pixels on both sides in the spatial direction Y monotonically decrease with respect to the pixel value of the vertex P.
- the arrow indicated by A and the arrow indicated by B indicate monotonically decreasing regions existing on both sides of the vertex P.
- the monotone increase / decrease detection unit 203 finds a difference between the pixel value of each pixel and the pixel value of a pixel adjacent to the pixel in the spatial direction Y, and detects a pixel whose sign of the difference changes.
- Then, the monotone increase/decrease detecting unit 203 takes the boundary between each detected pixel at which the sign of the difference changes and the pixel on its near side (the vertex P side) as a boundary of the thin line region consisting of pixels onto which the thin line image is projected.
- the boundary of the thin line region which is the boundary between the pixel whose sign of the difference changes and the pixel on the near side (vertex P side), is indicated by C.
- the monotonous increase / decrease detection unit 203 compares the sign of the pixel value of each pixel with the sign of the pixel value of the pixel adjacent to the pixel in the spatial direction Y in the monotonically decreasing region, and determines the sign of the pixel value. A changing pixel is detected.
- the monotonous increase / decrease detection unit 203 sets the boundary between the detected pixel whose sign of the pixel value changes and the pixel on the near side (vertex P side) as the boundary of the thin line area.
- the boundary of the thin line area that is the boundary between the pixel whose sign of the pixel value changes and the pixel on the near side (vertex P side) is indicated by D.
- a thin line area F composed of pixels onto which a thin line image is projected is an area sandwiched between a thin line area boundary C and a thin line area boundary D.
- Further, the monotone increase/decrease detecting unit 203 finds, among the thin line regions F consisting of such monotone increase/decrease regions, a thin line region F longer than a predetermined threshold, that is, a thin line region F including a larger number of pixels than the threshold.
- the monotonous increase / decrease detection unit 203 detects a thin line region F including four or more pixels.
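- A minimal sketch, in Python, of extracting the monotonically decreasing run around a vertex in one pixel column and applying the pixel-count threshold; detecting the boundary by strict monotonicity rather than by the sign of successive differences is a simplification made here.

```python
import numpy as np

def thin_line_region(column: np.ndarray, vertex: int, min_pixels: int = 4):
    """Return (top, bottom) row range of the monotone run around the vertex, or None."""
    top = vertex
    while top > 0 and column[top - 1] < column[top]:
        top -= 1            # extend upward while values keep decreasing away from the vertex
    bottom = vertex
    while bottom < len(column) - 1 and column[bottom + 1] < column[bottom]:
        bottom += 1         # extend downward likewise
    return (top, bottom) if bottom - top + 1 >= min_pixels else None
```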
- Furthermore, the monotone increase/decrease detecting unit 203 compares each of the pixel value of the vertex P, the pixel value of the pixel on the right side of the vertex P, and the pixel value of the pixel on the left side of the vertex P with a threshold; it detects a thin line region F to which belongs a vertex P whose pixel value exceeds the threshold and for which the pixel value of the pixel on the right side of the vertex P and the pixel value of the pixel on the left side of the vertex P are equal to or smaller than the threshold, and sets the detected thin line region F as a candidate for a region consisting of pixels containing the component of the thin line image.
- Conversely, a thin line region F to which belongs a vertex P whose pixel value is equal to or smaller than the threshold, or for which the pixel value of the pixel on the right side of the vertex P exceeds the threshold, or the pixel value of the pixel on the left side of the vertex P exceeds the threshold, is determined not to contain the component of the thin line image, and is removed from the candidates for a region consisting of pixels containing the component of the thin line image.
- That is, the monotone increase/decrease detecting unit 203 compares the pixel value of the vertex P with the threshold, compares the pixel values of the pixels adjacent to the vertex P in the spatial direction X (the direction indicated by the dotted line AA') with the threshold, and detects the thin line region F to which belongs a vertex P whose pixel value exceeds the threshold and for which the pixel values of the pixels adjacent in the spatial direction X are equal to or smaller than the threshold.
- FIG. 38 is a diagram illustrating the pixel values of the pixels arranged in the spatial direction X indicated by the dotted line AA'. A thin line region F to which belongs a vertex P whose pixel value exceeds the threshold Th_s and for which the pixel values of the pixels adjacent to the vertex P in the spatial direction X are equal to or smaller than the threshold Th_s contains the component of the thin line.
- Alternatively, the monotone increase/decrease detecting unit 203 may compare with a threshold, with the pixel value of the background as the reference, the difference between the pixel value of the vertex P and the pixel value of the background, as well as the differences between the pixel values of the pixels adjacent to the vertex P in the spatial direction X and the pixel value of the background; it may then detect a thin line region F to which belongs a vertex P for which the difference between the pixel value of the vertex P and the pixel value of the background exceeds the threshold while the differences between the pixel values of the pixels adjacent in the spatial direction X and the pixel value of the background are equal to or smaller than the threshold.
- The monotone increase/decrease detecting unit 203 supplies to the continuity detecting unit 204 monotone increase/decrease region information indicating a region which consists of pixels whose pixel values monotonically decrease with respect to the vertex P with the same sign as the vertex P, and in which the pixel value of the vertex P exceeds the threshold while the pixel value of the pixel on the right side of the vertex P and the pixel value of the pixel on the left side of the vertex P are equal to or smaller than the threshold.
- When detecting a region consisting of pixels which are arranged in one vertical column on the screen and onto which the thin line image is projected, the pixels belonging to the region indicated by the monotone increase/decrease region information are arranged in the vertical direction and include the pixels onto which the thin line image is projected. That is, the region indicated by the monotone increase/decrease region information includes a region formed of pixels which are arranged in one vertical column on the screen and onto which the thin line image is projected.
- In this way, the vertex detecting unit 202 and the monotone increase/decrease detecting unit 203 detect a stationary region consisting of pixels onto which the thin line image is projected, using the property that, in the pixels onto which the thin line image is projected, the change in pixel value in the spatial direction Y resembles a Gaussian distribution.
- The continuity detecting unit 204 detects, as continuous regions, regions which include pixels adjacent in the horizontal direction among the regions consisting of vertically arranged pixels indicated by the monotone increase/decrease region information supplied from the monotone increase/decrease detecting unit 203, that is, regions which have similar changes in pixel value and overlap in the vertical direction; it outputs the vertex information and data continuity information indicating the detected continuous regions.
- the data continuity information includes monotonically increasing / decreasing area information, information indicating the connection of areas, and the like.
- The detected continuous region includes the pixels onto which the thin line is projected. Since the detected continuous region includes the pixels onto which the thin line is projected, arranged at regular intervals so that the arc shapes are adjacent, the detected continuous region is treated as the stationary region, and the continuity detecting unit 204 outputs data continuity information indicating the detected continuous region.
- That is, the continuity detecting unit 204 uses the fact that, in the data 3 obtained by imaging the thin line, the arc shapes are arranged adjacently at regular intervals, which arises from the continuity of the image of the thin line of the real world 1 being continuous in the length direction, to further narrow down the candidates of the regions detected by the vertex detecting unit 202 and the monotone increase/decrease detecting unit 203.
- FIG. 39 is a diagram illustrating the processing for detecting the continuity of monotone increase/decrease regions. As shown in FIG. 39, when two thin line regions F, each consisting of pixels arranged in one vertical column on the screen, include pixels adjacent to each other in the horizontal direction, the continuity detecting unit 204 regards the two monotone increase/decrease regions as continuous; that is, a thin line region consisting of pixels arranged in one vertical column on the screen is assumed to be continuous with the thin line region F_0 when it includes a pixel horizontally adjacent to a pixel of the thin line region F_0, which likewise consists of pixels arranged in one vertical column on the screen.
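- A minimal sketch, in Python, of this continuity test: two vertical thin line regions, each one pixel wide, are regarded as continuous when they occupy adjacent columns and their row ranges overlap, so that some pixel of one is horizontally adjacent to a pixel of the other. The region representation is an assumption for illustration.

```python
def regions_continuous(col_a: int, rows_a: tuple[int, int],
                       col_b: int, rows_b: tuple[int, int]) -> bool:
    """True when the two one-pixel-wide column regions touch horizontally."""
    adjacent_columns = abs(col_a - col_b) == 1
    rows_overlap = rows_a[0] <= rows_b[1] and rows_b[0] <= rows_a[1]
    return adjacent_columns and rows_overlap

print(regions_continuous(3, (10, 14), 4, (13, 17)))  # True: the arcs connect diagonally
```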
- In this way, the vertex detecting unit 202 through the continuity detecting unit 204 detect regions consisting of pixels which are arranged in one vertical column on the screen and onto which the thin line image is projected, and further detect regions consisting of pixels which are arranged in one horizontal row on the screen and onto which the thin line image is projected.
- the order of the processing is not particularly limited, and it goes without saying that the processing may be performed in parallel.
- That is, the vertex detection unit 202 compares, for pixels arranged in one row in the horizontal direction of the screen, the pixel value of each pixel with the pixel values of the pixels located on its left side and on its right side, detects a pixel whose pixel value is larger than both as a vertex, and supplies vertex information indicating the position of the detected vertex to the monotone increase / decrease detection unit 203.
- the vertex detection unit 202 detects one or a plurality of vertices from one image, for example, one frame image.
- For example, the vertex detection unit 202 selects a pixel of interest from the pixels of one frame image that have not yet been set as the pixel of interest, compares the pixel value of the pixel of interest with the pixel value of the pixel on the left side of the pixel of interest, and compares the pixel value of the pixel of interest with the pixel value of the pixel on the right side of the pixel of interest. The vertex detection unit 202 thereby detects a pixel of interest having a pixel value larger than the pixel value of the left pixel and larger than the pixel value of the right pixel, and sets the detected pixel of interest as a vertex.
- the vertex detecting unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detecting unit 203.
- Note that the vertex detector 202 may not detect any vertex in some cases.
- The monotone increase / decrease detection unit 203 detects candidates for an area that is composed of pixels arranged in a line in the left-and-right direction with respect to the vertex detected by the vertex detection unit 202 and onto which the thin line image is projected, and supplies monotone increase / decrease area information indicating the detected area to the continuity detector 204 together with the vertex information.
- More specifically, the monotone increase / decrease detection unit 203 detects an area composed of pixels whose pixel values monotonically decrease with respect to the pixel value of the vertex, as a candidate for an area composed of pixels onto which the thin line image is projected.
- For example, the monotone increase / decrease detection unit 203 obtains, for each pixel in one horizontal row with respect to the vertex, the difference between the pixel value of that pixel and the pixel value of the pixel on its left side, and the difference between the pixel value of that pixel and the pixel value of the pixel on its right side. Then, the monotone increase / decrease detection unit 203 detects the area where the pixel value monotonically decreases by detecting the pixels at which the sign of the difference changes.
- Further, the monotone increase / decrease detection unit 203 detects, from the area where the pixel value monotonically decreases, an area composed of pixels having pixel values of the same sign as the pixel value of the vertex, with the sign of the pixel value of the vertex as a reference, as a candidate for an area composed of pixels onto which the thin line image is projected.
- For example, the monotone increase / decrease detection unit 203 compares the sign of the pixel value of each pixel with the sign of the pixel value of the pixel on its left side and with the sign of the pixel value of the pixel on its right side, and detects the pixels at which the sign of the pixel value changes, thereby detecting, within the area where the pixel value monotonically decreases, the area composed of pixels having pixel values of the same sign as the vertex.
- In this way, the monotone increase / decrease detection unit 203 detects an area that is arranged in the left-and-right direction, that is composed of pixels whose pixel values monotonically decrease with respect to the vertex, and whose pixel values have the same sign as the vertex.
- the monotone increase / decrease detection unit 203 obtains a thin line region longer than a predetermined threshold, that is, a thin line region including a number of pixels larger than the threshold, from the thin line region composed of such a monotone increase / decrease region.
- Further, the monotone increase / decrease detection unit 203 compares the pixel value of the vertex, the pixel value of the pixel above the vertex, and the pixel value of the pixel below the vertex with a threshold value. The monotone increase / decrease detection unit 203 sets, as a candidate for an area composed of pixels including components of the thin line image, the detected thin line region to which a vertex belongs whose pixel value exceeds the threshold value and for which the pixel value of the pixel above the vertex is equal to or less than the threshold value and the pixel value of the pixel below the vertex is equal to or less than the threshold value. A thin line region to which a vertex belongs whose pixel value is equal to or less than the threshold value, or for which the pixel value of the pixel above the vertex exceeds the threshold value, or the pixel value of the pixel below the vertex exceeds the threshold value, is determined not to include the components of the thin line image and is removed from the candidates for the area composed of pixels including the components of the thin line image.
- Note that the monotone increase / decrease detection unit 203 may instead, with the pixel value of the background as a reference, compare the difference between the pixel value of the vertex and the pixel value of the background with a threshold value, compare the differences between the pixel values of the pixels vertically adjacent to the vertex and the pixel value of the background with the threshold value, and set, as a candidate for an area composed of pixels including the components of the thin line image, the detected thin line region in which the difference between the pixel value of the vertex and the pixel value of the background exceeds the threshold value and the differences between the pixel values of the vertically adjacent pixels and the pixel value of the background are equal to or less than the threshold value.
- The monotone increase / decrease detection unit 203 supplies to the continuity detection unit 204 monotone increase / decrease area information indicating an area that is composed of pixels whose pixel values monotonically decrease with the vertex as the reference and whose pixel values have the same sign as the vertex, and in which the pixel value of the vertex exceeds the threshold value, the pixel value of the pixel above the vertex is equal to or less than the threshold value, and the pixel value of the pixel below the vertex is equal to or less than the threshold value.
- The pixels belonging to the area indicated by the monotone increase / decrease area information are arranged in the horizontal direction and include pixels onto which the thin line image is projected. That is, the area indicated by the monotone increase / decrease area information consists of pixels arranged in one row in the horizontal direction of the screen and includes an area formed by projecting a thin line image.
- The continuity detection unit 204 detects, as continuous regions, regions that are composed of horizontally arranged pixels, that are indicated by the monotone increase / decrease area information supplied from the monotone increase / decrease detection unit 203, and that include pixels vertically adjacent to each other, that is, regions that have similar changes in pixel values and that overlap in the horizontal direction, and outputs vertex information and data continuity information indicating the detected continuous regions.
- the data continuity information includes information indicating the connection between the areas.
- The detected continuous area includes the pixels onto which the thin lines are projected, arranged at regular intervals so that arc shapes are adjacent to each other; therefore, the detected continuous area is regarded as a steady area, and the continuity detection unit 204 outputs data continuity information indicating the detected continuous area.
- That is, utilizing the stationarity that the arc shapes in the data 3 obtained by imaging the thin line, which arises from the continuity of the image of the thin line in the real world 1 that is continuous in the length direction, are arranged at regular intervals so as to be adjacent to each other, the continuity detecting unit 204 further narrows down the candidates of the regions detected by the vertex detection unit 202 and the monotone increase / decrease detection unit 203.
- the data continuity detecting unit 101 can detect the continuity included in the data 3 as the input image. That is, the data continuity detecting unit 101 can detect the continuity of the data included in the data 3 that is generated by projecting the image of the real world 1 as a thin line onto the data 3.
- the data continuity detecting unit 101 detects, from the data 3, an area composed of pixels onto which the image of the real world 1 as a thin line is projected.
- FIG. 40 is a diagram illustrating an example of another process in which the data continuity detecting unit 101 detects a region having continuity onto which a thin line image is projected.
- For example, the data continuity detecting unit 101 calculates, for each pixel, the absolute value of the difference between its pixel value and the pixel value of the adjacent pixel.
- When, among the absolute values of the differences arranged corresponding to the pixels, two adjacent absolute difference values are the same, the data continuity detecting unit 101 determines that the pixel located between the two differences (the pixel corresponding to both absolute difference values) contains a thin line component. Note that when the absolute value of the difference is small, the data continuity detecting unit 101 does not detect a thin line; the data continuity detecting unit 101 determines that a pixel includes a thin line component, for example, when the absolute value of the difference is equal to or greater than a threshold value.
- The data continuity detecting unit 101 can also detect a thin line by such a simple method.
- FIG. 41 is a flowchart for explaining the processing of the stationarity detection.
- In step S201, the non-stationary component extracting unit 201 extracts the non-stationary component, which is the portion other than the portion onto which the thin line is projected, from the input image.
- the non-stationary component extraction unit 201 supplies, together with the input image, the non-stationary component information indicating the extracted non-stationary component to the vertex detection unit 202 and the monotone increase / decrease detection unit 203. Details of the process of extracting the unsteady component will be described later.
- In step S202, the vertex detection unit 202 removes the non-stationary component from the input image based on the non-stationary component information supplied from the non-stationary component extraction unit 201, leaving only the pixels containing the stationary component in the input image. Further, in step S202, the vertex detection unit 202 detects vertices. That is, when executing the processing with the vertical direction of the screen as a reference, the vertex detection unit 202 compares, for the pixels containing the stationary component, the pixel value of each pixel with the pixel values of the pixels above and below it, and detects a vertex by detecting a pixel having a pixel value larger than the pixel value of the upper pixel and the pixel value of the lower pixel.
- When executing the processing with the horizontal direction of the screen as a reference, the vertex detection unit 202 compares, for the pixels containing the stationary component, the pixel value of each pixel with the pixel values of the pixels on its left side and right side, and detects a vertex by detecting a pixel having a pixel value larger than the pixel value of the left pixel and the pixel value of the right pixel.
- the vertex detection unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detection unit 203.
- In step S203, the monotone increase / decrease detection unit 203 removes the non-stationary component from the input image based on the non-stationary component information supplied from the non-stationary component extraction unit 201, leaving only the pixels containing the stationary component in the input image. Further, in step S203, the monotone increase / decrease detection unit 203 detects an area composed of pixels having data continuity by detecting monotone increase / decrease with respect to the vertex, based on the vertex information indicating the position of the vertex supplied from the vertex detection unit 202.
- When executing the processing with the vertical direction of the screen as a reference, the monotone increase / decrease detection unit 203 detects an area composed of pixels having data continuity by detecting, based on the pixel value of the vertex and the pixel values of the pixels arranged vertically in one column with respect to the vertex, the monotone increase / decrease of the pixel values of the pixels in the one column onto which one thin line image is projected. That is, in step S203, when executing the processing with the vertical direction of the screen as a reference, the monotone increase / decrease detection unit 203 obtains, for the vertex and the pixels arranged vertically in one column with respect to the vertex, the difference between the pixel value of each pixel and the pixel value of the pixel above or below it, and detects the pixels at which the sign of the difference changes.
- Further, the monotone increase / decrease detection unit 203 compares, for the vertex and the pixels arranged vertically in one column with respect to the vertex, the sign of the pixel value of each pixel with the sign of the pixel value of the pixel above or below it, and detects the pixels at which the sign of the pixel value changes. Furthermore, the monotone increase / decrease detection unit 203 compares the pixel value of the vertex and the pixel values of the pixels on the right and left sides of the vertex with a threshold value, and detects an area composed of pixels in which the pixel value of the vertex exceeds the threshold value and the pixel values of the right and left pixels are equal to or less than the threshold value.
- the monotone increase / decrease detection unit 203 supplies the continuity detection unit 204 with monotone increase / decrease region information indicating the monotone increase / decrease region, using the region thus detected as a monotone increase / decrease region.
- When executing the processing with the horizontal direction of the screen as a reference, the monotone increase / decrease detection unit 203 detects an area composed of pixels having data continuity by detecting, based on the pixel value of the vertex and the pixel values of the pixels arranged horizontally in one row with respect to the vertex, the monotone increase / decrease of the pixel values of the pixels in the one row onto which one thin line image is projected. That is, in step S203, when executing the processing with the horizontal direction of the screen as a reference, the monotone increase / decrease detection unit 203 obtains, for the vertex and the pixels arranged horizontally in one row with respect to the vertex, the difference between the pixel value of each pixel and the pixel value of the pixel on its left or right side, and detects the pixels at which the sign of the difference changes.
- Further, the monotone increase / decrease detection unit 203 compares, for the vertex and the pixels arranged horizontally in one row with respect to the vertex, the sign of the pixel value of each pixel with the sign of the pixel value of the pixel on its left or right side, and detects the pixels at which the sign of the pixel value changes. Furthermore, the monotone increase / decrease detection unit 203 compares the pixel value of the vertex and the pixel values of the pixels above and below the vertex with a threshold value, and detects an area composed of pixels in which the pixel value of the vertex exceeds the threshold value and the pixel values of the pixels above and below are equal to or less than the threshold value.
- the monotone increase / decrease detection unit 203 supplies the continuity detection unit 204 with monotone increase / decrease region information indicating the monotone increase / decrease region, using the region thus detected as a monotone increase / decrease region.
- In step S204, the monotone increase / decrease detection unit 203 determines whether or not the processing of all pixels has been completed.
- That is, the monotone increase / decrease detection unit 203 determines whether vertices have been detected and monotone increase / decrease areas have been detected for all the pixels of one screen (for example, a frame or a field) of the input image.
- If it is determined in step S204 that the processing of all the pixels has not been completed, that is, that there are still pixels that have not been subjected to the vertex detection and monotone increase / decrease area detection processing, the process returns to step S202, a pixel to be processed is selected from the pixels that have not been subjected to the vertex detection and monotone increase / decrease area detection processing, and the vertex detection and monotone increase / decrease area detection processing is repeated.
- If it is determined in step S204 that the processing of all the pixels has been completed, that is, that vertices and monotone increase / decrease regions have been detected for all the pixels, the process proceeds to step S205, in which the continuity detection unit 204 detects the continuity of the detected areas based on the monotone increase / decrease area information. For example, when a monotone increase / decrease area indicated by the monotone increase / decrease area information and composed of pixels arranged in one column in the vertical direction of the screen includes pixels horizontally adjacent to another such area, the continuity detection unit 204 assumes that there is continuity between the two monotone increase / decrease regions, and when horizontally adjacent pixels are not included, assumes that there is no continuity between the two monotone increase / decrease regions.
- For example, when a monotone increase / decrease area indicated by the monotone increase / decrease area information and composed of pixels arranged in one row in the horizontal direction of the screen includes pixels vertically adjacent to another such area, the continuity detection unit 204 assumes that there is continuity between the two monotone increase / decrease regions, and when vertically adjacent pixels are not included, assumes that there is no continuity between the two monotone increase / decrease regions.
- the continuity detecting unit 204 sets the detected continuous area as a steady area having data continuity, and outputs data continuity information indicating the position of the vertex and the steady area.
- the data continuity information includes information indicating the connection between the areas.
- the data continuity information output from the continuity detection unit 204 indicates a thin line region that is a steady region and includes pixels onto which a thin line image of the real world 1 is projected.
- In step S206, the continuity direction detection unit 205 determines whether or not the processing of all the pixels has been completed. That is, the continuity direction detection unit 205 determines whether or not the continuity of the area has been detected for all the pixels of the predetermined frame of the input image.
- If it is determined in step S206 that the processing of all the pixels has not been completed, that is, that there are still pixels that have not been subjected to the processing for detecting the continuity of the area, the process returns to step S205, a pixel to be processed is selected from the pixels that have not been subjected to that processing, and the processing for detecting the continuity of the area is repeated. If it is determined in step S206 that the processing of all the pixels has been completed, that is, that the continuity of the area has been detected for all the pixels, the processing ends. In this way, the continuity included in the data 3, which is the input image, is detected.
- In addition, the data continuity detector 101 shown in FIG. 30 can detect the continuity of the data in the time direction, based on the regions of data continuity detected from the frames of the data 3.
- For example, the continuity detecting unit 204 detects the continuity of the data in the time direction by connecting the ends of the region having the detected data continuity in frame #n, the region having the detected data continuity in frame #n-1, and the region having the detected data continuity in frame #n+1.
- Frame #n-1 is a frame temporally preceding frame #n, and frame #n+1 is a frame temporally following frame #n. That is, frame #n-1, frame #n, and frame #n+1 are displayed in the order of frame #n-1, frame #n, and frame #n+1.
- In the figure, G denotes the motion vector obtained by connecting one end of each of the region having the detected data continuity in frame #n, the region having the detected data continuity in frame #n-1, and the region having the detected data continuity in frame #n+1, and G' denotes the motion vector obtained by connecting the other end of each of those regions.
- the motion vector G and the motion vector G ' are examples of the continuity of data in the time direction.
- the data continuity detection unit 101 shown in FIG. 30 can output information indicating the length of the data continuity region as data continuity information.
- FIG. 43 is a block diagram showing the configuration of the non-stationary component extraction unit 201, which extracts the non-stationary component by approximating, with a plane, the non-stationary component that is the part of the image data having no data continuity.
- The non-stationary component extraction unit 201 shown in FIG. 43 extracts blocks, each consisting of a predetermined number of pixels, from the input image, and extracts the non-stationary component by approximating each block with a plane so that the error between the block and the values indicated by the plane becomes smaller than a predetermined threshold.
- the input image is supplied to the block extraction unit 221 and output as it is.
- The block extraction unit 221 extracts a block consisting of a predetermined number of pixels from the input image. For example, the block extraction unit 221 extracts a block composed of 7 × 7 pixels and supplies the extracted block to the plane approximation unit 222. For example, the block extraction unit 221 moves the pixel at the center of the block to be extracted in raster scan order, thereby sequentially extracting blocks from the input image.
- The plane approximation unit 222 approximates the pixel values of the pixels included in the block with a predetermined plane. For example, the plane approximation unit 222 approximates the pixel values of the pixels included in the block with the plane represented by equation (32):

z = ax + by + c ... (32)

- In equation (32), x indicates the position of the pixel in one direction on the screen (spatial direction X), and y indicates the position of the pixel in the other direction on the screen (spatial direction Y). z indicates the approximate value given by the plane. a indicates the inclination of the plane in the spatial direction X, and b indicates the inclination of the plane in the spatial direction Y. c indicates the offset (intercept) of the plane.
- For example, the plane approximation unit 222 obtains the slope a, the slope b, and the offset c by regression processing, thereby approximating the pixel values of the pixels included in the block with the plane represented by equation (32). The plane approximation unit 222 obtains the slope a, the slope b, and the offset c by regression processing involving rejection, thereby approximating the pixel values of the pixels included in the block with the plane represented by equation (32).
- For example, the plane approximation unit 222 obtains, by the least squares method, the plane represented by equation (32) that minimizes the error with respect to the pixel values of the pixels of the block, thereby approximating the pixel values of the pixels of the block with the plane.
- Note that although the plane approximation unit 222 has been described as approximating the block with the plane represented by equation (32), it is not limited to the plane represented by equation (32); the block may be approximated by a function having a higher degree of freedom, for example, a surface represented by a polynomial of degree n (n is an arbitrary integer).
- the repetition determination unit 223 calculates an error between the approximate value indicated by the plane approximating the pixel value of the block and the pixel value of the corresponding pixel of the block.
- Equation (33) represents the error e_i, which is the difference between the approximate value indicated by the plane approximating the pixel values of the block and the pixel value z_i of the corresponding pixel of the block:

e_i = z_i − ẑ_i = z_i − (â x_i + b̂ y_i + ĉ) ... (33)

- In equation (33), ẑ (a letter with a caret is referred to as z-hat; hereinafter, the same applies in this specification) indicates the approximate value given by the approximating plane, â indicates the inclination in the spatial direction X of the plane approximating the pixel values of the block, b̂ indicates the inclination in the spatial direction Y of the plane approximating the pixel values of the block, and ĉ indicates the offset (intercept) of the plane approximating the pixel values of the block.
- The repetition determination unit 223 rejects the pixel whose error e_i, given by equation (33), between the approximate value and the pixel value of the corresponding pixel of the block is the largest. In this way, the pixels onto which the thin line is projected, that is, the pixels having continuity, come to be rejected.
- The repetition determination unit 223 supplies rejection information indicating the rejected pixels to the plane approximation unit 222.
- Further, the repetition determination unit 223 calculates a standard error, and when the standard error is equal to or more than a predetermined threshold value for approximation end determination and half or more of the pixels of the block have not been rejected, the repetition determination unit 223 causes the plane approximation unit 222 to repeat the plane approximation processing on the pixels included in the block excluding the rejected pixels.
- Since the pixels having continuity are rejected in this way, approximating the pixels excluding the rejected pixels with a plane means that the plane approximates the non-stationary component.
- When the standard error falls below the threshold value for approximation end determination, or when half or more of the pixels of the block have been rejected, the repetition determination unit 223 terminates the approximation using a plane.
- The standard error e_s is calculated by, for example, equation (34):

e_s = √( Σ (z_i − ẑ_i)² / (n − 3) ) ... (34)

- Here, n is the number of pixels that have not been rejected; the denominator n − 3 corresponds to the three estimated plane parameters.
- Note that the repetition determination unit 223 may calculate, instead of the standard error, the sum of the squares of the errors of all the pixels included in the block, and execute the following processing.
- Then, the repetition determination unit 223 outputs information indicating the plane approximating the pixel values of the block (the slope and intercept of the plane of equation (32)) as the non-stationary component information.
- Note that the repetition determination unit 223 may compare the number of rejections for each pixel with a predetermined threshold value, determine that a pixel whose number of rejections is equal to or greater than the threshold value is a pixel including a stationary component, and output information indicating the pixels including the stationary component as stationary component information.
- the vertex detection unit 202 to the continuity direction detection unit 205 execute the respective processes on the pixels including the stationary component indicated by the stationary component information.
- The number of rejections, the inclination in the spatial direction X of the plane approximating the pixel values of the block, the inclination in the spatial direction Y of the plane approximating the pixel values of the block, the approximate values given by the plane approximating the pixel values of the block, and the error e_i can also be used as feature quantities of the input image.
- FIG. 45 is a flowchart corresponding to step S201 and illustrating the processing of extracting the non-stationary component by the non-stationary component extraction unit 201 having the configuration shown in FIG. 43.
- In step S221, the block extraction unit 221 extracts a block consisting of a predetermined number of pixels from the input image, and supplies the extracted block to the plane approximation unit 222.
- For example, the block extraction unit 221 selects one pixel from the pixels of the input image that have not yet been selected, and extracts a block composed of 7 × 7 pixels centered on the selected pixel.
- the block extracting unit 221 can select pixels in a raster scan order.
- In step S222, the plane approximation unit 222 approximates the extracted block with a plane.
- For example, the plane approximation unit 222 approximates the pixel values of the pixels of the extracted block with a plane by regression processing. When there are rejected pixels, the plane approximation unit 222 approximates, with a plane, the pixel values of the pixels of the extracted block excluding the rejected pixels by the regression processing.
- In step S223, the repetition determination unit 223 performs the repetition determination. For example, the repetition determination unit 223 calculates the standard error from the pixel values of the pixels of the block and the approximate values of the approximating plane, and counts the number of rejected pixels.
- In step S224, the repetition determination unit 223 determines whether or not the standard error is equal to or larger than the threshold value. When it is determined that the standard error is equal to or larger than the threshold value, the process proceeds to step S225.
- Note that, in step S224, the repetition determination unit 223 may determine whether or not half or more of the pixels of the block have been rejected and whether or not the standard error is equal to or larger than the threshold value; when half or more of the pixels have not been rejected and the standard error is equal to or larger than the threshold value, the process may proceed to step S225.
- In step S225, the repetition determination unit 223 calculates, for each pixel of the block, the error between the pixel value of the pixel and the approximate value of the approximating plane, rejects the pixel with the largest error, and notifies the plane approximation unit 222.
- Then, the procedure returns to step S222, and the approximation processing using a plane and the repetition determination processing are repeated for the pixels of the block excluding the rejected pixels.
- In this case, when blocks shifted by one pixel in the raster scan direction are extracted in the processing of step S221, as shown in FIG. 44, pixels including thin line components (the black circles in the figure) are rejected multiple times.
- If it is determined in step S224 that the standard error is not equal to or larger than the threshold value, the block has been approximated by the plane, and the process proceeds to step S226.
- Note that, in step S224, the repetition determination unit 223 may determine whether or not half or more of the pixels of the block have been rejected and whether or not the standard error is equal to or larger than the threshold value; when half or more of the pixels have been rejected, or when it is determined that the standard error is not equal to or larger than the threshold value, the process may proceed to step S226.
- In step S226, the repetition determination unit 223 outputs the slope and intercept of the plane approximating the pixel values of the block as the non-stationary component information.
- In step S227, the block extraction unit 221 determines whether or not the processing has been completed for all the pixels of one screen of the input image; when it is determined that there are pixels that have not yet been processed, the process returns to step S221, a block is extracted for the pixels that have not yet been processed, and the above processing is repeated.
- If it is determined in step S227 that the processing has been completed for all the pixels of one screen of the input image, the processing ends.
- As described above, the non-stationary component extraction unit 201 having the configuration shown in FIG. 43 can extract the non-stationary component from the input image. Since the non-stationary component extraction unit 201 extracts the non-stationary component of the input image, the vertex detection unit 202 and the monotone increase / decrease detection unit 203 can calculate the difference between the input image and the non-stationary component extracted by the non-stationary component extraction unit 201, and perform their processing on the difference, which includes the stationary component.
- In this case, the standard error when pixels are rejected, the standard error when pixels are not rejected, the number of rejected pixels, the inclination of the plane in the spatial direction X (â in equation (32)), the inclination of the plane in the spatial direction Y (b̂ in equation (32)), the level when replaced with the plane (ĉ in equation (32)), and the difference between the pixel values of the input image and the approximate values given by the plane can be used as feature quantities.
- FIG. 46 is a flowchart illustrating the processing of extracting the stationary component by the non-stationary component extraction unit 201 having the configuration shown in FIG. 43, executed instead of the processing of extracting the non-stationary component corresponding to step S201.
- The processing in steps S241 to S245 is the same as the processing in steps S221 to S225, and a description thereof will be omitted.
- In step S246, the repetition determination unit 223 outputs the difference between the approximate value indicated by the plane and the pixel values of the input image as the stationary component of the input image. That is, the repetition determination unit 223 outputs the difference between the approximate values based on the plane and the true pixel values. Note that the repetition determination unit 223 may output, as the stationary component of the input image, the pixel values of the pixels whose difference between the approximate value indicated by the plane and the pixel value of the input image is equal to or greater than a predetermined threshold value.
- The processing in step S247 is the same as the processing in step S227, and a description thereof will be omitted.
- Since the plane approximates the non-stationary component, the non-stationary component extraction unit 201 can remove the non-stationary component from the input image by subtracting the approximate value indicated by the plane approximating the pixel values from the pixel value of each pixel of the input image.
- In this case, the vertex detection unit 202 to the continuity detection unit 204 can process only the stationary component of the input image, that is, the values onto which the thin line image is projected, so that the processing by the vertex detection unit 202 to the continuity detection unit 204 becomes easier.
- FIG. 47 is a flowchart illustrating another processing of extracting the stationary component by the non-stationary component extraction unit 201 having the configuration shown in FIG. 43, executed instead of the processing of extracting the non-stationary component corresponding to step S201. The processing in steps S261 to S265 is the same as the processing in steps S221 to S225, and a description thereof will be omitted.
- In step S266, the repetition determination unit 223 stores the number of rejections for each pixel; the process then returns to step S262, and the processing is repeated.
- If it is determined in step S264 that the standard error is not equal to or larger than the threshold value, the block has been approximated by the plane, and the process proceeds to step S267, in which the repetition determination unit 223 determines whether or not the processing has been completed for all the pixels of one screen of the input image. If it is determined that there are pixels that have not yet been processed, the process returns to step S261, a block is extracted for the pixels that have not yet been processed, and the above processing is repeated.
- If it is determined in step S267 that the processing has been completed for all the pixels of one screen of the input image, the process proceeds to step S268, in which the repetition determination unit 223 selects one pixel from the pixels that have not yet been selected and determines, for the selected pixel, whether or not the number of rejections is equal to or greater than the threshold value. For example, in step S268, the repetition determination unit 223 determines whether or not the number of rejections for the selected pixel is equal to or greater than a threshold value stored in advance.
- If it is determined in step S268 that the number of rejections for the selected pixel is equal to or greater than the threshold value, the selected pixel includes a stationary component, so the process proceeds to step S269, in which the repetition determination unit 223 outputs the pixel value of the selected pixel (the pixel value in the input image) as the stationary component of the input image, and the process proceeds to step S270. If it is determined in step S268 that the number of rejections for the selected pixel is not equal to or greater than the threshold value, the selected pixel does not include a stationary component, so the processing in step S269 is skipped and the procedure proceeds to step S270. That is, no pixel value is output for a pixel for which it is determined that the number of rejections is not equal to or greater than the threshold value. Note that the repetition determination unit 223 may output a pixel value set to 0 for the pixels for which it is determined that the number of rejections is not equal to or greater than the threshold value.
- In step S270, the repetition determination unit 223 determines whether or not the processing of determining whether the number of rejections is equal to or greater than the threshold value has been completed for all the pixels of one screen of the input image. If it is determined that the processing has not been completed for all the pixels, there are pixels that have not yet been processed, so the process returns to step S268, one pixel is selected from the pixels that have not yet been processed, and the above processing is repeated. If it is determined in step S270 that the processing has been completed for all the pixels of one screen of the input image, the processing ends.
- the non-stationary component extraction unit 201 can output the pixel value of the pixel including the stationary component among the pixels of the input image as the stationary component information. That is, the non-stationary component extracting unit 201 can output the pixel value of the pixel including the component of the thin line image among the pixels of the input image.
- FIG. 48 is a flowchart illustrating still another processing of extracting the stationary component by the non-stationary component extraction unit 201 having the configuration shown in FIG. 43, executed instead of the processing of extracting the non-stationary component corresponding to step S201.
- The processing from step S281 to step S288 is the same as the processing from step S261 to step S268, and a description thereof will be omitted.
- In step S289, the repetition determination unit 223 outputs the difference between the approximate value indicated by the plane and the pixel value of the selected pixel as the stationary component of the input image. That is, the repetition determination unit 223 outputs an image obtained by removing the non-stationary component from the input image as the stationarity information.
- The processing in step S290 is the same as the processing in step S270, and a description thereof will be omitted.
- the non-stationary component extraction unit 201 can output an image obtained by removing the non-stationary component from the input image as the stationarity information.
- As described above, when a real-world optical signal is projected and a part of the continuity of the real-world optical signal is lost, the continuity of the data is detected from the data in which that continuity has been partly lost, the continuity of the real-world optical signal is estimated based on the detected continuity of the data, and a model (function) that approximates the optical signal is generated. When second image data is then generated based on the generated function, a processing result that is more accurate with respect to the events of the real world can be obtained.
- FIG. 49 is a block diagram showing another configuration of the data continuity detecting unit 101.
- The data continuity detecting unit 101 shown in FIG. 49 detects, for the pixel of interest, the change in the pixel value in the spatial direction of the input image, that is, the activity in the spatial direction of the input image; extracts, for each angle with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one column in the vertical direction or one row in the horizontal direction; detects the correlation of the extracted pixel sets; and detects the angle of data continuity with respect to the reference axis in the input image based on the correlation.
- The angle of data continuity refers to the angle formed by the reference axis and the direction of the predetermined dimension, possessed by the data 3, in which certain features repeatedly appear. The repeated appearance of certain features means, for example, that the value changes in the same way with respect to the change in position in the data 3, that is, that the cross-sectional shapes are the same.
- the reference axis may be, for example, an axis indicating the spatial direction X (horizontal direction of the screen) or an axis indicating the spatial direction Y (vertical direction of the screen).
- the input image is supplied to the activity detection unit 401 and the data selection unit 402.
- The activity detection unit 401 detects the change in the pixel value in the spatial direction of the input image, that is, the activity in the spatial direction, and supplies activity information indicating the detection result to the data selection unit 402 and the steady direction derivation unit 404.
- More specifically, the activity detection unit 401 detects the change in the pixel value in the horizontal direction of the screen and the change in the pixel value in the vertical direction of the screen, and compares the detected change in the pixel value in the horizontal direction with the detected change in the pixel value in the vertical direction, thereby detecting whether the change in the pixel value in the horizontal direction is larger than the change in the pixel value in the vertical direction, or the change in the pixel value in the vertical direction is larger than the change in the pixel value in the horizontal direction.
- The activity detection unit 401 supplies, to the data selection unit 402 and the steady direction derivation unit 404, activity information indicating the result of the detection, namely that the change in the pixel value in the horizontal direction is larger than the change in the pixel value in the vertical direction, or that the change in the pixel value in the vertical direction is larger than the change in the pixel value in the horizontal direction.
- For example, when the change in the pixel value in the horizontal direction is larger than the change in the pixel value in the vertical direction, an arc shape (kamaboko shape) or a claw shape is formed on one column of pixels in the vertical direction, and the arc shape or the claw shape is repeatedly formed in a direction closer to vertical. That is, when the change in the pixel value in the horizontal direction is larger than the change in the pixel value in the vertical direction, assuming that the reference axis is the axis indicating the spatial direction X, the angle of data continuity with respect to the reference axis is any value between 45 degrees and 90 degrees.
- Conversely, when the change in the pixel value in the vertical direction is larger than the change in the pixel value in the horizontal direction, for example, an arc shape or a claw shape is formed on one row of pixels in the horizontal direction, and the arc shape or the claw shape is repeatedly formed in a direction closer to horizontal. That is, when the change in the pixel value in the vertical direction is larger than the change in the pixel value in the horizontal direction, assuming that the reference axis is the axis indicating the spatial direction X, the angle of data continuity with respect to the reference axis is any value between 0 degrees and 45 degrees.
- For example, the activity detection unit 401 extracts, from the input image, a block composed of 3 × 3 pixels (nine pixels) centered on the pixel of interest, as shown in FIG. The activity detection unit 401 then calculates the sum of the differences between the pixel values of vertically adjacent pixels and the sum of the differences between the pixel values of horizontally adjacent pixels.
- The sum h_diff of the differences between the pixel values of horizontally adjacent pixels is obtained by equation (35), and the sum v_diff of the differences between the pixel values of vertically adjacent pixels is obtained by equation (36):

h_diff = Σ (P_{i+1,j} − P_{i,j}) ... (35)
v_diff = Σ (P_{i,j+1} − P_{i,j}) ... (36)

- In equations (35) and (36), P indicates the pixel value, i indicates the horizontal position of the pixel, and j indicates the vertical position of the pixel.
- The activity detection unit 401 compares the calculated sum h_diff of the differences between the pixel values of horizontally adjacent pixels with the calculated sum v_diff of the differences between the pixel values of vertically adjacent pixels, thereby determining the range of the angle of data continuity with respect to the reference axis in the input image. That is, in this case, the activity detection unit 401 determines whether the shape indicated by the change in the pixel value with respect to the position in the spatial direction is repeatedly formed in the horizontal direction or in the vertical direction.

- For example, for an arc formed on one row of pixels arranged in the horizontal direction, the change in the pixel value in the vertical direction is larger than the change in the pixel value in the horizontal direction, and for an arc formed on one column of pixels arranged in the vertical direction, the change in the pixel value in the horizontal direction is larger than the change in the pixel value in the vertical direction. In other words, the change along the direction of data continuity, that is, the direction of the predetermined dimension in which certain features of the input image, which is the data 3, repeatedly appear, is small compared to the change in the direction orthogonal to the data continuity. Put differently, the difference in the direction orthogonal to the direction of data continuity (hereinafter also referred to as the non-stationary direction) is larger than the difference in the direction of data continuity.

- When the sum h_diff of the differences between the pixel values of horizontally adjacent pixels is larger, the activity detection unit 401 determines that the angle of data continuity with respect to the reference axis is any value between 45 degrees and 135 degrees; when the sum v_diff of the differences between the pixel values of vertically adjacent pixels is larger, it determines that the angle of data continuity with respect to the reference axis is any value between 0 degrees and 45 degrees or any value between 135 degrees and 180 degrees.
- The activity detection unit 401 supplies activity information indicating the result of the determination to the data selection unit 402 and the steady direction derivation unit 404.
- Note that the activity detection unit 401 can detect the activity by extracting a block of an arbitrary size, such as a block composed of 5 × 5 pixels (25 pixels) or a block composed of 7 × 7 pixels (49 pixels).
- The data selection unit 402 sequentially selects the pixel of interest from the pixels of the input image and, based on the activity information supplied from the activity detection unit 401, extracts, for each angle with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one column in the vertical direction or one row in the horizontal direction.
- For example, when the activity information indicates that the change in the pixel value in the horizontal direction is larger than the change in the pixel value in the vertical direction, the angle of data continuity is any value between 45 degrees and 135 degrees, so the data selection unit 402 extracts, for each predetermined angle in the range of 45 degrees to 135 degrees with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one column in the vertical direction. When the activity information indicates that the change in the pixel value in the vertical direction is larger than the change in the pixel value in the horizontal direction, the angle of data continuity is any value between 0 degrees and 45 degrees or between 135 degrees and 180 degrees, so the data selection unit 402 extracts, for each predetermined angle in the range of 0 degrees to 45 degrees or 135 degrees to 180 degrees with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one row in the horizontal direction.
- That is, when the activity information indicates that the change in the pixel value in the horizontal direction is larger than the change in the pixel value in the vertical direction, the data selection unit 402 extracts, for each predetermined angle in the range of 45 degrees to 135 degrees with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one column in the vertical direction. When the activity information indicates that the change in the pixel value in the vertical direction is larger than the change in the pixel value in the horizontal direction, the data selection unit 402 extracts, for each predetermined angle in the range of 0 degrees to 45 degrees or 135 degrees to 180 degrees with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one row in the horizontal direction.
- The data selection unit 402 supplies the plurality of sets of extracted pixels to the error estimation unit 403.
- The error estimation unit 403 detects, for each angle, the correlation of the pixel sets among the plurality of sets of extracted pixels.
- For example, for a plurality of sets of pixels each consisting of a predetermined number of pixels in one column in the vertical direction corresponding to one angle, the error estimation unit 403 detects the correlation of the pixel values of the pixels at corresponding positions in the sets. For a plurality of sets of pixels each consisting of a predetermined number of pixels in one row in the horizontal direction corresponding to one angle, the error estimation unit 403 detects the correlation of the pixel values of the pixels at corresponding positions in the sets. The error estimation unit 403 supplies correlation information indicating the detected correlation to the steady direction derivation unit 404.
- For example, the error estimation unit 403 calculates, as the value indicating the correlation, the sum of the absolute values of the differences between the pixel values of the pixels of the set including the pixel of interest supplied from the data selection unit 402 and the pixel values of the pixels at corresponding positions in the other sets, and supplies the sum of the absolute values of the differences to the steady direction derivation unit 404 as the correlation information.
- Based on the correlation information supplied from the error estimation unit 403, the steady direction derivation unit 404 detects the angle of data continuity with respect to the reference axis in the input image, corresponding to the lost continuity of the optical signal of the real world 1, and outputs data continuity information indicating the angle. For example, the steady direction derivation unit 404 detects, as the angle of data continuity, the angle for the set of pixels having the strongest correlation, and outputs data continuity information indicating the angle for the detected set of pixels having the strongest correlation.
- FIG. 53 is a block diagram showing a more detailed configuration of the data continuity detecting unit 101 shown in FIG. 49.
- The data selection unit 402 includes pixel selection units 411-1 to 411-L. The error estimation unit 403 includes estimation error calculation units 412-1 to 412-L. The steady direction derivation unit 404 includes a minimum error angle selection unit 413.
- The processing of the pixel selection units 411-1 to 411-L will be described.
- Each of the pixel selection units 411-1 to 411-L sets a straight line passing through the pixel of interest at a different predetermined angle, with the axis indicating the spatial direction X as the reference axis.
- The pixel selection units 411-1 to 411-L select, as a set of pixels, from the pixels belonging to the one vertical column of pixels to which the pixel of interest belongs, a predetermined number of pixels above the pixel of interest, a predetermined number of pixels below the pixel of interest, and the pixel of interest. For example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel of interest from the pixels belonging to the one vertical column of pixels to which the pixel of interest belongs.
- In FIG. 54, one square of the grid indicates one pixel. The circle shown at the center indicates the pixel of interest.
- The pixel selection units 411-1 to 411-L select, from the pixels belonging to the one vertical column of pixels on the left side of the column to which the pixel of interest belongs, the pixel closest to the set straight line. In FIG. 54, the circle at the lower left of the pixel of interest indicates an example of the selected pixel.
- Then, the pixel selection units 411-1 to 411-L select, as a set of pixels, from the pixels belonging to the one vertical column of pixels on the left side of the column to which the pixel of interest belongs, a predetermined number of pixels above the selected pixel, a predetermined number of pixels below the selected pixel, and the selected pixel. For example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel closest to the straight line from the pixels belonging to the one vertical column of pixels on the left side of the column to which the pixel of interest belongs.
- The pixel selection units 411-1 to 411-L select, from the pixels belonging to the second vertical column of pixels on the left side of the column to which the pixel of interest belongs, the pixel closest to the straight line set for each unit. In FIG. 54, the leftmost circle indicates an example of the selected pixel.
- Then, the pixel selection units 411-1 to 411-L select, as a set of pixels, from the pixels belonging to the second vertical column of pixels on the left side of the column to which the pixel of interest belongs, a predetermined number of pixels above the selected pixel, a predetermined number of pixels below the selected pixel, and the selected pixel. For example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel closest to the straight line from the pixels belonging to the second vertical column of pixels on the left side of the column to which the pixel of interest belongs.
- The pixel selection units 411-1 to 411-L select, from the pixels belonging to the one vertical column of pixels on the right side of the column to which the pixel of interest belongs, the pixel closest to the set straight line. In FIG. 54, the circle at the upper right of the pixel of interest indicates an example of the selected pixel.
- Then, the pixel selection units 411-1 to 411-L select, as a set of pixels, from the pixels belonging to the one vertical column of pixels on the right side of the column to which the pixel of interest belongs, a predetermined number of pixels above the selected pixel, a predetermined number of pixels below the selected pixel, and the selected pixel. For example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel closest to the straight line from the pixels belonging to the one vertical column of pixels on the right side of the column to which the pixel of interest belongs.
- The pixel selection units 411-1 to 411-L select, from the pixels belonging to the second vertical column of pixels on the right side of the column to which the pixel of interest belongs, the pixel closest to the straight line set for each unit. In FIG. 54, the rightmost circle indicates an example of the pixel selected in this manner. Then, the pixel selection units 411-1 to 411-L select, as a set of pixels, from the pixels belonging to the second vertical column of pixels on the right side of the column to which the pixel of interest belongs, a predetermined number of pixels above the selected pixel, a predetermined number of pixels below the selected pixel, and the selected pixel. For example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel closest to the straight line from the pixels belonging to the second vertical column of pixels on the right side of the column to which the pixel of interest belongs. In this manner, each of the pixel selection units 411-1 to 411-L selects five sets of pixels.
- The pixel selection units 411-1 through 411-L select sets of pixels for mutually different angles (straight lines set at mutually different angles). For example, the pixel selection unit 411-1 selects sets of pixels for 45 degrees, the pixel selection unit 411-2 selects sets of pixels for 47.5 degrees, and the pixel selection unit 411-3 selects sets of pixels for 50 degrees.
- The pixel selection units 411-4 through 411-L select sets of pixels for the angles from 52.5 degrees to 135 degrees, in steps of 2.5 degrees.
- Note that the number of sets of pixels can be any number, for example three or seven, and the number of pixels selected as one set can be any number, for example five or thirteen.
- Further, the pixel selection units 411-1 through 411-L can select the sets of pixels from pixels within a predetermined range in the vertical direction. For example, the pixel selection units 411-1 through 411-L select the sets of pixels from 121 pixels in the vertical direction (60 pixels upward and 60 pixels downward with respect to the pixel of interest). In this case, the data continuity detecting unit 101 can detect angles of data continuity of up to 88.09 degrees with respect to the axis indicating the spatial direction X.
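- This 88.09-degree limit follows from the selection geometry, assuming (as the description above suggests) that the reference pixels lie at most 60 pixels away vertically in columns two pixels away horizontally:

```latex
\theta_{\max} = \tan^{-1}\!\left(\frac{60}{2}\right) = \tan^{-1} 30 \approx 88.09^{\circ}
```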
- The pixel selection unit 411-1 supplies the selected sets of pixels to the estimation error calculation unit 412-1, and the pixel selection unit 411-2 supplies the selected sets of pixels to the estimation error calculation unit 412-2. Similarly, each of the pixel selection units 411-3 through 411-L supplies the selected sets of pixels to the corresponding one of the estimation error calculation units 412-3 through 412-L.
- The estimation error calculation units 412-1 through 412-L detect the correlation of the pixel values of the pixels at corresponding positions in the plurality of sets supplied from the respective pixel selection units 411-1 through 411-L.
- For example, as the value indicating the correlation, the estimation error calculation units 412-1 through 412-L calculate the sum of the absolute values of the differences between the pixel values of the pixels of the set including the pixel of interest and the pixel values of the pixels at the corresponding positions in the other sets, supplied from the respective pixel selection units 411-1 through 411-L.
- More specifically, based on the pixel values of the set of pixels including the pixel of interest and the pixel values of the set of pixels belonging to the vertical column one column to the left of the pixel of interest, supplied from any of the pixel selection units 411-1 through 411-L, the estimation error calculation units 412-1 through 412-L calculate the difference between the pixel values of the uppermost pixels, then the difference between the pixel values of the second pixels from the top, and so on, calculating the absolute values of the differences in order from the top pixel, and then calculate the sum of the calculated absolute values of the differences.
- Based on the pixel values of the set of pixels including the pixel of interest and the pixel values of the set of pixels belonging to the vertical column two columns to the left of the pixel of interest, supplied from any of the pixel selection units 411-1 through 411-L, the estimation error calculation units 412-1 through 412-L calculate the absolute values of the differences between the pixel values in order from the top pixel, and calculate the sum of the calculated absolute values of the differences.
- Based on the pixel values of the set of pixels including the pixel of interest and the pixel values of the set of pixels belonging to the vertical column one column to the right of the pixel of interest, supplied from any of the pixel selection units 411-1 through 411-L, the estimation error calculation units 412-1 through 412-L likewise calculate the absolute values of the differences between the pixel values in order from the top pixel, and calculate the sum of the calculated absolute values of the differences.
- Based on the pixel values of the set of pixels including the pixel of interest and the pixel values of the set of pixels belonging to the vertical column two columns to the right of the pixel of interest, supplied from any of the pixel selection units 411-1 through 411-L, the estimation error calculation units 412-1 through 412-L calculate the absolute values of the differences between the pixel values in order from the top pixel, and calculate the sum of the calculated absolute values of the differences.
- The estimation error calculation units 412-1 through 412-L then add up all of the sums of the absolute values of the pixel value differences calculated in this way, thereby calculating the total sum of the absolute values of the pixel value differences.
- The estimation error calculation units 412-1 through 412-L supply information indicating the detected correlation to the minimum error angle selection unit 413. For example, they supply the calculated total sum of the absolute values of the differences between the pixel values to the minimum error angle selection unit 413.
- Note that the estimation error calculation units 412-1 through 412-L are not limited to the sum of the absolute values of the pixel value differences; they can calculate other values as the correlation value, such as the sum of the squares of the pixel value differences, or a correlation coefficient based on the pixel values.
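- As an illustrative sketch (not the embodiment itself), the correlation value for one candidate angle can be computed as follows in Python; the helper name is an assumption, and the per-set top-to-bottom differencing is condensed into a vectorized absolute difference:

```python
import numpy as np

def correlation_value(center_set: np.ndarray, other_sets: list[np.ndarray]) -> float:
    """Sum, over all other sets, of the absolute differences between
    pixel values at corresponding positions (top to bottom) in the set
    containing the pixel of interest and in each other set.
    A smaller value indicates a stronger correlation."""
    center = center_set.astype(float)
    return float(sum(np.abs(center - s.astype(float)).sum() for s in other_sets))
```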
- The minimum error angle selection unit 413 detects, based on the correlations detected by the estimation error calculation units 412-1 through 412-L for the mutually different angles, the angle of data continuity with respect to the reference axis in the input image, corresponding to the continuity of the image that is the missing optical signal of the real world 1.
- That is, based on the correlations detected by the estimation error calculation units 412-1 through 412-L for the mutually different angles, the minimum error angle selection unit 413 selects the strongest correlation, and takes the angle for which the selected correlation was detected as the angle of data continuity.
- For example, the minimum error angle selection unit 413 selects the smallest of the sums of the absolute values of the pixel value differences supplied from the estimation error calculation units 412-1 through 412-L. For the selected set of pixels for which that sum was calculated, the minimum error angle selection unit 413 refers to the position of the pixel that belongs to the vertical column two columns to the left of the pixel of interest and is closest to the straight line, and to the position of the pixel that belongs to the vertical column two columns to the right of the pixel of interest and is closest to the straight line.
- The minimum error angle selection unit 413 calculates the vertical distance S between the positions of those reference pixels and the position of the pixel of interest. Then, since the reference pixels lie two columns away horizontally, the minimum error angle selection unit 413 detects, from the following equation (37), the angle θ of the continuity of the data with respect to the reference axis in the input image, which is image data, corresponding to the missing optical signal of the real world 1:

\theta = \tan^{-1}\left(\frac{S}{2}\right) \tag{37}
- Next, the processing of the pixel selection units 411-1 through 411-L in the case where the angle of data continuity is in the range of 0 to 45 degrees or 135 to 180 degrees will be described.
- The pixel selection units 411-1 through 411-L set straight lines at predetermined angles passing through the pixel of interest, with the axis indicating the spatial direction X as the reference axis, and select, as a set of pixels, from the pixels belonging to the horizontal row of pixels to which the pixel of interest belongs, a predetermined number of pixels to the left of the pixel of interest, a predetermined number of pixels to the right of the pixel of interest, and the pixel of interest.
- The pixel selection units 411-1 through 411-L select, from the pixels belonging to the horizontal row of pixels one row above the row to which the pixel of interest belongs, the pixel closest to the straight line set for each unit.
- Then, the pixel selection units 411-1 through 411-L select, from the pixels belonging to the horizontal row one row above the row to which the pixel of interest belongs, a predetermined number of pixels to the left of the selected pixel, a predetermined number of pixels to the right of it, and the selected pixel itself, as a set of pixels.
- The pixel selection units 411-1 through 411-L select, from the pixels belonging to the horizontal row of pixels two rows above the row to which the pixel of interest belongs, the pixel closest to the straight line set for each unit.
- Then, the pixel selection units 411-1 through 411-L select, from the pixels belonging to the horizontal row two rows above the row to which the pixel of interest belongs, a predetermined number of pixels to the left of the selected pixel, a predetermined number of pixels to the right of it, and the selected pixel itself, as a set of pixels.
- The pixel selection units 411-1 through 411-L select, from the pixels belonging to the horizontal row of pixels one row below the row to which the pixel of interest belongs, the pixel closest to the straight line set for each unit.
- Then, the pixel selection units 411-1 through 411-L select, from the pixels belonging to the horizontal row one row below the row to which the pixel of interest belongs, a predetermined number of pixels to the left of the selected pixel, a predetermined number of pixels to the right of it, and the selected pixel itself, as a set of pixels.
- The pixel selection units 411-1 through 411-L select, from the pixels belonging to the horizontal row of pixels two rows below the row to which the pixel of interest belongs, the pixel closest to the straight line set for each unit.
- Then, the pixel selection units 411-1 through 411-L select, from the pixels belonging to the horizontal row two rows below the row to which the pixel of interest belongs, a predetermined number of pixels to the left of the selected pixel, a predetermined number of pixels to the right of it, and the selected pixel itself, as a set of pixels.
- In this way, each of the pixel selection units 411-1 through 411-L selects five sets of pixels.
- The pixel selection units 411-1 through 411-L select sets of pixels for mutually different angles.
- For example, the pixel selection unit 411-1 selects sets of pixels for 0 degrees, the pixel selection unit 411-2 selects sets of pixels for 2.5 degrees, and the pixel selection unit 411-3 selects sets of pixels for 5 degrees.
- The pixel selection units 411-4 through 411-L select sets of pixels for the angles from 7.5 degrees to 45 degrees and from 135 degrees to 180 degrees, in steps of 2.5 degrees.
- The pixel selection unit 411-1 supplies the selected sets of pixels to the estimation error calculation unit 412-1, and the pixel selection unit 411-2 supplies the selected sets of pixels to the estimation error calculation unit 412-2.
- Similarly, each of the pixel selection units 411-3 through 411-L supplies the selected sets of pixels to the corresponding one of the estimation error calculation units 412-3 through 412-L.
- The estimation error calculation units 412-1 through 412-L detect the correlation of the pixel values of the pixels at corresponding positions in the plurality of sets supplied from the respective pixel selection units 411-1 through 411-L.
- The estimation error calculation units 412-1 through 412-L supply information indicating the detected correlation to the minimum error angle selection unit 413.
- The minimum error angle selection unit 413 detects, based on the correlations detected by the estimation error calculation units 412-1 through 412-L, the angle of data continuity with respect to the reference axis in the input image corresponding to the continuity of the missing optical signal of the real world 1.
- Next, the processing of detecting the continuity of data in step S101 will be described.
- In step S401, the activity detection unit 401 and the data selection unit 402 select the pixel of interest from the input image.
- the activity detector 401 and the data selector 402 select the same target pixel.
- the activity detection unit 401 and the data selection unit 402 select a pixel of interest from the input image in raster scan order.
- In step S402, the activity detection unit 401 detects the activity for the pixel of interest. For example, the activity detection unit 401 detects the activity from the difference between the pixel values of pixels aligned in the vertical direction and the difference between the pixel values of pixels aligned in the horizontal direction, in a block composed of a predetermined number of pixels centered on the pixel of interest. The activity detection unit 401 detects the activity in the spatial direction with respect to the pixel of interest, and supplies activity information indicating the detection result to the data selection unit 402 and the stationary direction deriving unit 404. In step S403, the data selection unit 402 selects, from the column of pixels including the pixel of interest, a predetermined number of pixels centered on the pixel of interest as a set of pixels.
- For example, the data selection unit 402 selects, as a set of pixels, from the pixels belonging to the one vertical or horizontal column of pixels to which the pixel of interest belongs, a predetermined number of pixels above or to the left of the pixel of interest, a predetermined number of pixels below or to the right of the pixel of interest, and the pixel of interest.
- In step S404, the data selection unit 402 selects, based on the activity detected in the processing of step S402, a predetermined number of pixels from each of a predetermined number of pixel columns, as sets of pixels, for each angle in a predetermined range.
- For example, the data selection unit 402 sets straight lines passing through the pixel of interest, each with an angle in the predetermined range, taking the axis indicating the spatial direction X as the reference axis; selects, from the pixels one or two columns away horizontally or vertically, the pixel closest to each straight line; and selects, as a set of pixels, a predetermined number of pixels above or to the left of the selected pixel, a predetermined number of pixels below or to the right of the selected pixel, and the selected pixel closest to the line.
- the data selection unit 402 selects a set of pixels for each angle.
- the data selection unit 402 supplies the selected pixel set to the error estimation unit 403.
- In step S405, the error estimation unit 403 calculates the correlation between the set of pixels centered on the pixel of interest and the sets of pixels selected for each angle. For example, the error estimation unit 403 calculates, for each angle, the sum of the absolute values of the differences between the pixel values of the pixels of the set including the pixel of interest and the pixel values of the pixels at the corresponding positions in the other sets.
- the continuity angle of the data may be detected based on the mutual correlation of a set of pixels selected for each angle.
- the error estimating unit 403 supplies information indicating the calculated correlation to the stationary direction deriving unit 404.
- In step S406, based on the correlations calculated in the processing of step S405, the stationary direction deriving unit 404 detects, from the position of the set of pixels having the strongest correlation, the angle of data continuity with respect to the reference axis in the input image, which is image data, corresponding to the continuity of the missing optical signal.
- For example, the stationary direction deriving unit 404 selects the smallest of the sums of the absolute values of the pixel value differences, and detects the angle θ of data continuity from the position of the set of pixels for which the selected sum was calculated.
- the stationary direction deriving unit 404 outputs data continuity information indicating the continuity angle of the detected data.
- In step S407, the data selection unit 402 determines whether or not the processing of all pixels has been completed. If it is determined that the processing of all pixels has not been completed, the processing returns to step S401, a pixel of interest is selected from the pixels not yet selected as the pixel of interest, and the above-described processing is repeated.
- If it is determined in step S407 that the processing of all pixels has been completed, the processing ends.
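- The overall flow of steps S401 through S407 can be sketched as follows, under simplifying assumptions: every set is taken as nine pixels in a vertical column (the 45-to-135-degree case), the image is indexed as image[y, x], and boundary handling is reduced to a shape check; all names are illustrative, not the embodiment's:

```python
import numpy as np

def detect_continuity_angle(image: np.ndarray, x0: int, y0: int) -> float:
    """For each candidate angle, select the set containing the pixel of
    interest plus four sets on neighboring columns along the straight
    line at that angle, score the sets by the sum of absolute
    differences, and return the angle with the smallest score."""
    half = 4                                   # 9 pixels per set
    best_angle, best_score = 0.0, np.inf
    center = image[y0 - half:y0 + half + 1, x0].astype(float)
    for angle in np.arange(45.0, 135.0 + 2.5, 2.5):
        slope = np.tan(np.radians(angle))      # gradient corresponding to the angle
        score = 0.0
        for dx in (-2, -1, 1, 2):              # columns one and two away
            yc = y0 + int(round(slope * dx))   # pixel closest to the line
            other = image[yc - half:yc + half + 1, x0 + dx]
            if other.shape != center.shape:    # set ran off the image
                score = np.inf
                break
            score += np.abs(center - other.astype(float)).sum()
        if score < best_score:
            best_angle, best_score = angle, score
    return best_angle
```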
- In this way, the data continuity detecting unit 101 can detect the angle of data continuity with respect to the reference axis in the image data, corresponding to the continuity of the missing optical signal of the real world 1.
- The data continuity detecting unit 101 shown in FIG. 49 may also detect the activity in the spatial direction of the input image with respect to the pixel of interest of the frame of interest and, according to the detected activity, extract, for each angle with respect to the pixel of interest and the reference axis in the spatial direction and for each motion vector, a plurality of sets of pixels each consisting of a predetermined number of pixels in one vertical column or one horizontal row, from the frame of interest and from the frames temporally before and after the frame of interest; detect the correlation of the extracted sets of pixels; and, based on the correlation, detect the angle of data continuity in the time direction and the spatial direction.
- For example, based on the detected activity, the data selection unit 402 extracts, for each angle with respect to the pixel of interest and the reference axis in the spatial direction and for each motion vector, a plurality of sets of pixels each consisting of a predetermined number of pixels in one vertical column or one horizontal row, from each of frame #n, which is the frame of interest, frame #n-1, and frame #n+1.
- Frame #n-1 is the frame temporally preceding frame #n, and frame #n+1 is the frame temporally following frame #n. That is, frame #n-1, frame #n, and frame #n+1 are displayed in the order frame #n-1, frame #n, frame #n+1.
- The error estimation unit 403 detects the correlation of the sets of pixels, for each angle and for each motion vector, for the plurality of sets of extracted pixels.
- The stationary direction deriving unit 404 detects, based on the correlation of the sets of pixels, the angle of data continuity in the time direction and the spatial direction in the input image, corresponding to the continuity of the missing optical signal of the real world 1, and outputs data continuity information indicating the angle.
- Next, another example of the embodiment of the real world estimating unit 102 (FIG. 3) will be described.
- FIG. 58 is a view for explaining the principle of the embodiment of this example.
- In FIG. 58, the signal of the real world 1 (the distribution of light intensity), which is the image incident on the sensor 2, is represented by the predetermined function F.
- Hereinafter, in the description of the embodiment of this example, the signal of the real world 1, which is an image, is particularly referred to as the optical signal, and the function F is particularly referred to as the optical signal function F.
- In the embodiment of this example, when the optical signal of the real world 1 represented by the optical signal function F has a predetermined continuity, the real world estimating unit 102 estimates the optical signal function F by approximating the optical signal function F with a predetermined function f, using the input image from the sensor 2 and the data continuity information from the data continuity detecting unit 101.
- the function f is particularly referred to as an approximate function f.
- That is, the real world estimating unit 102 uses the model 161 (FIG. 4) represented by the approximation function f to estimate the image (the optical signal of the real world 1) represented by the optical signal function F. Therefore, the embodiment of this example is hereinafter referred to as the function approximation method.
- FIG. 59 is a view for explaining the integration effect when the sensor 2 is a CCD. As shown in FIG. 59, a plurality of detection elements 2-1 are arranged on a plane of the sensor 2.
- In FIG. 59, the direction parallel to a predetermined side of the detection elements 2-1 is taken as the X direction, one of the spatial directions, and the direction perpendicular to the X direction is taken as the Y direction, the other spatial direction. The direction perpendicular to the X-Y plane is taken as the t direction, the time direction.
- In FIG. 59, the spatial shape of each of the detection elements 2-1 of the sensor 2 is a square whose sides have a length of 1, and the shutter time (exposure time) of the sensor 2 is 1.
- Further, the center of one detection element 2-1 of the sensor 2 is taken as the origin in the spatial directions (the position where x is 0 and y is 0), and the intermediate time of the exposure time is taken as the origin in the time direction (the t direction) (the position where t is 0).
- In this case, the pixel value P output from the detection element 2-1 whose center is at the origin in the spatial directions (x = 0, y = 0) is expressed by the following equation (38):

P = \int_{-0.5}^{0.5} \int_{-0.5}^{0.5} \int_{-0.5}^{0.5} F(x, y, t) \, dx \, dy \, dt \tag{38}
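- A numerical sketch of equation (38) follows; the intensity function F and the grid resolution are illustrative, and the mean over the unit cube equals the integral because the integration volume is 1:

```python
import numpy as np

def pixel_value(F, n: int = 50) -> float:
    """Approximate P: integrate the light intensity F(x, y, t) over the
    unit pixel area and the unit shutter time, both centered on the origin."""
    g = np.linspace(-0.5, 0.5, n)
    X, Y, T = np.meshgrid(g, g, g, indexing="ij")
    return float(F(X, Y, T).mean())

# A step edge in the X direction: the output pixel value mixes both levels.
print(pixel_value(lambda x, y, t: np.where(x < 0.1, 0.2, 0.8)))
```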
- FIG. 60 is a view for explaining a specific example of the integration effect of the sensor 2.
- the X direction and the Y direction represent the X direction and the Y direction of the sensor 2 (FIG. 59).
- One portion 2301 of the optical signal of the real world 1 (hereinafter, such a portion is referred to as a region) represents an example of a region having a predetermined continuity.
- Note that the region 2301 is actually one portion (a continuous region) of the continuous optical signal.
- In the figure, the region 2301 is shown divided into 20 small regions (square regions). This is to indicate that the size of the region 2301 corresponds to the size of four detection elements (pixels) of the sensor 2 arranged in the X direction and five arranged in the Y direction. That is, each of the 20 small regions (virtual regions) in the region 2301 corresponds to one pixel.
- The white portion of the region 2301 in the figure represents the optical signal corresponding to a thin line. Accordingly, the region 2301 has continuity in the direction in which the thin line continues; hereinafter, the region 2301 is referred to as the thin-line-containing real world region 2301.
- When the thin-line-containing real world region 2301 is detected by the sensor 2, a region 2302 of the input image (pixel values) (hereinafter referred to as the thin-line-containing data area 2302) is output by the integration effect.
- Each pixel of the thin-line-containing data area 2302 is shown as an image in the figure, but is actually data representing one predetermined value. That is, due to the integration effect of the sensor 2, the thin-line-containing real world region 2301 changes into (is distorted into) the thin-line-containing data area 2302, divided into 20 pixels each having one predetermined pixel value (4 pixels in the X direction and 5 pixels in the Y direction, a total of 20 pixels).
- FIG. 61 is a diagram for explaining another specific example of the integration effect of the sensor 2 (an example different from FIG. 60).
- the X and Y directions represent the X and Y directions of the sensor 2 (FIG. 59).
- One portion (region) 2303 of the optical signal of the real world 1 represents another example of a region having a predetermined continuity (an example different from the thin-line-containing real world region 2301 in FIG. 60).
- The region 2303 is a region having the same size as the thin-line-containing real world region 2301. That is, like the thin-line-containing real world region 2301, the region 2303 is actually one portion (a continuous region) of the continuous optical signal of the real world 1, but in FIG. 61 it is shown divided into 20 small regions (square regions) each corresponding to one pixel of the sensor 2.
- The region 2303 includes the edge between a first portion having a predetermined first light intensity (value) and a second portion having a predetermined second light intensity (value). Therefore, the region 2303 has continuity in the direction in which the edge continues. Hence, hereinafter, the region 2303 is referred to as the binary-edge-containing real world region 2303.
- When the binary-edge-containing real world region 2303 (one portion of the optical signal of the real world 1) is detected by the sensor 2, a region 2304 of the input image (pixel values) (hereinafter referred to as the binary-edge-containing data area 2304) is output by the integration effect.
- Each pixel value of the binary-edge-containing data area 2304 is represented as an image in the figure, similarly to the thin-line-containing data area 2302, but is actually data representing a predetermined value. That is, due to the integration effect of the sensor 2, the binary-edge-containing real world region 2303 changes into (is distorted into) the binary-edge-containing data area 2304, divided into 20 pixels each having one predetermined pixel value (4 pixels in the X direction and 5 pixels in the Y direction, a total of 20 pixels).
- Conventional image processing apparatuses have taken the image data output from the sensor 2, such as the thin-line-containing data area 2302 and the binary-edge-containing data area 2304, as the origin (reference), and subjected that image data to the subsequent image processing. In other words, although the image data output from the sensor 2 differs from (is distorted relative to) the optical signal of the real world 1 due to the integration effect, conventional image processing apparatuses have performed image processing treating data different from the optical signal of the real world 1 as correct.
- In contrast, in the function approximation method, the real world estimating unit 102 estimates the optical signal function F by approximating the optical signal function F (the optical signal of the real world 1) with the approximation function f, based on the image data (input image) output from the sensor 2, such as the thin-line-containing data area 2302 and the binary-edge-containing data area 2304.
- FIG. 62 is a diagram again showing the thin-line-containing real world region 2301 shown in FIG. 60 described above.
- the X direction and the Y direction represent the X direction and the Y direction of the sensor 2 (FIG. 59).
- The first function approximation method approximates the one-dimensional waveform obtained by projecting the optical signal function F(x, y, t) corresponding to, for example, the thin-line-containing real world region 2301 shown in FIG. 62 in the X direction (hereinafter, such a waveform is referred to as the X cross-sectional waveform F(x)) with an approximation function f(x) such as an n-th degree polynomial (n is an arbitrary integer). Therefore, hereinafter, the first function approximation method is particularly referred to as the one-dimensional approximation method.
- In the one-dimensional approximation method, the X cross-sectional waveform F(x) to be approximated is of course not limited to the one corresponding to the thin-line-containing real world region 2301 in FIG. 62. That is, as will be described later, in the one-dimensional approximation method it is possible to approximate the X cross-sectional waveform F(x) corresponding to any optical signal of the real world 1 having continuity.
- Further, the direction of projection of the optical signal function F(x, y, t) is not limited to the X direction; the Y direction or the t direction may be used. That is, in the one-dimensional approximation method, the function F(y) obtained by projecting the optical signal function F(x, y, t) in the Y direction can be approximated by a predetermined approximation function f(y), and the function F(t) obtained by projecting the optical signal function F(x, y, t) in the t direction can be approximated by a predetermined approximation function f(t).
- Thus, the one-dimensional approximation method is a method of approximating the X cross-sectional waveform F(x) with an approximation function f(x) such as an n-th degree polynomial, as shown in the following equation (39):

f(x) = w_0 + w_1 x + w_2 x^2 + \cdots + w_n x^n = \sum_{i=0}^{n} w_i x^i \tag{39}
- That is, the real world estimating unit 102 estimates the X cross-sectional waveform F(x) by calculating the coefficients (features) w_i of x^i in equation (39).
- The method of calculating the features w_i is not particularly limited; for example, the following first through third methods can be used.
- the first method is a method conventionally used.
- the second method is a method newly invented by the applicant of the present invention, and is a method in which spatial continuity is further taken into consideration with respect to the first method.
- However, as will be described later, neither the first method nor the second method takes the integration effect of the sensor 2 into consideration. Therefore, the approximation function obtained by substituting the features w_i calculated by the first method or the second method into equation (39) above is an approximation function of the input image, but, strictly speaking, is not an approximation function of the X cross-sectional waveform F(x).
- Accordingly, the present applicant has invented a third method that calculates the features w_i by further taking the integration effect of the sensor 2 into consideration, relative to the second method.
- The approximation function f_3(x) obtained by substituting the features w_i calculated by this third method into equation (39) above can be said to be an approximation function of the X cross-sectional waveform F(x), in that the integration effect of the sensor 2 is taken into consideration.
- In this sense, strictly speaking, the first method and the second method are not one-dimensional approximation methods; only the third method is a one-dimensional approximation method.
- FIG. 63 is a view for explaining the principle of the embodiment corresponding to the second method.
- That is, in the second method, instead of approximating the X cross-sectional waveform F(x), the real world estimating unit 102 approximates the input image from the sensor 2 with a predetermined approximation function f_2(x), using the input image from the sensor 2 (image data having data continuity corresponding to the continuity) and the data continuity information from the data continuity detecting unit 101 (information indicating the continuity of the data of the input image).
- Thus, the second method only approximates the input image, without considering the integration effect of the sensor 2, and therefore cannot be said to be the same method as the third method. However, the second method is superior to the conventional first method in that it takes the continuity in the spatial direction into consideration.
- Hereinafter, the details of the first method, the second method, and the third method will be described individually, in that order.
- In the following, when the approximation functions f(x) generated by the first method, the second method, and the third method are distinguished from one another, they are particularly referred to as the approximation function f_1(x), the approximation function f_2(x), and the approximation function f_3(x), respectively.
- First, the first method assumes that the following equation (40) holds:

P(x, y) = f_1(x) + e \tag{40}

- In equation (40), x represents the pixel position in the X direction relative to the pixel of interest, y represents the pixel position in the Y direction relative to the pixel of interest, and e represents the error.
- Specifically, assume now that the pixel of interest is, in the thin-line-containing data area 2302 (the data output when the sensor 2 detects the thin-line-containing real world region 2301 (FIG. 62)), the second pixel in the X direction from the left and the third pixel in the Y direction from the bottom in the figure.
- Also assume that the center of the pixel of interest is taken as the origin (0, 0), and that a coordinate system whose x and y axes are parallel to the X and Y directions of the sensor 2 (FIG. 59) (hereinafter referred to as the pixel-of-interest coordinate system) is set. In this case, the coordinate values (x, y) in the pixel-of-interest coordinate system represent the relative pixel position.
- Further, P(x, y) represents the pixel value at the relative pixel position (x, y). Specifically, in this case, the pixel values P(x, y) in the thin-line-containing data area 2302 are as shown in the graphs of FIG. 65.
- In FIG. 65, the vertical axis of each graph represents the pixel value, and the horizontal axis represents the relative position x in the X direction from the pixel of interest. The dotted line in the first graph from the top represents the input pixel values P(x, -2), the three-dot chain line in the second graph from the top represents the input pixel values P(x, -1), the solid line in the third graph from the top represents the input pixel values P(x, 0), the one-dot chain line in the fourth graph from the top represents the input pixel values P(x, 1), and the two-dot chain line in the fifth graph from the top (the first from the bottom) represents the input pixel values P(x, 2).
- When the 20 input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), P(x, 2) (where x is an integer from -1 to 2) shown in FIG. 65 are each substituted into equation (40) above, the 20 equations shown in the following equation (41) are generated, where each e_k (k is an integer from 1 to 20) represents the error:

P(-1, -2) = f_1(-1) + e_1, \quad P(0, -2) = f_1(0) + e_2, \quad \ldots, \quad P(2, 2) = f_1(2) + e_{20} \tag{41}
- Since equation (41) consists of 20 equations, if the number of features w_i of the approximation function f_1(x) is less than 20, that is, if the approximation function f_1(x) is a polynomial of degree less than 19, the features w_i can be calculated, for example, using the least squares method. The specific solution of the least squares method will be described later.
- For example, if the degree of the approximation function f_1(x) is set to 5, the approximation function f_1(x) calculated from equation (41) by the least squares method (the approximation function f_1(x) generated from the calculated features w_i) is like the curve shown in FIG. 66.
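- A minimal sketch of the first method, assuming stand-in pixel values (np.polynomial.polynomial.polyfit returns the coefficients w_0 through w_5 in ascending order):

```python
import numpy as np

# First method: fit a 5th-degree polynomial f1(x) to the 20 input pixel
# values P(x, y), using only x and ignoring y entirely -- i.e. assuming
# the continuity direction is the Y direction (an angle of 90 degrees).
xs = np.tile(np.arange(-1, 3), 5).astype(float)   # x = -1..2 on each of 5 rows
P = np.random.rand(20)                            # stand-in for P(x, y)
w = np.polynomial.polynomial.polyfit(xs, P, deg=5)
```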
- In FIG. 66, the vertical axis represents the pixel value, and the horizontal axis represents the relative position x from the pixel of interest.
- That is, if the 20 pixel values P(x, y) constituting the thin-line-containing data area 2302 in FIG. 64 (the input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), P(x, 2)) are added along the x axis as they are (superimposing the five graphs shown in FIG. 65 while regarding the relative position y in the Y direction as constant), a number of lines parallel to the x axis (dotted line, three-dot chain line, solid line, one-dot chain line, and two-dot chain line) are distributed, as shown in FIG. 66.
- However, in FIG. 66, the dotted line represents the input pixel values P(x, -2), the three-dot chain line the input pixel values P(x, -1), the solid line the input pixel values P(x, 0), the one-dot chain line the input pixel values P(x, 1), and the two-dot chain line the input pixel values P(x, 2). Furthermore, in the case of the same pixel value two or more lines actually overlap, but the lines are drawn so as not to overlap so that each line can be distinguished.
- The approximation function f_1(x) generated in this state merely represents a curve connecting, in the X direction, the averages of the pixel values in the Y direction (the pixel values having the same relative position x in the X direction from the pixel of interest) P(x, -2), P(x, -1), P(x, 0), P(x, 1), and P(x, 2). That is, the approximation function f_1(x) is generated without considering the continuity in the spatial direction of the optical signal.
- For example, in this case, the object of approximation is the thin-line-containing real world region 2301 (FIG. 62).
- As shown in FIG. 67, the thin-line-containing real world region 2301 has continuity in the spatial direction, represented by the gradient G_F.
- the X and Y directions represent the X and Y directions of the sensor 2 (FIG. 59).
- Therefore, the data continuity detecting unit 101 (FIG. 58) can output the angle θ (the angle formed between the direction of data continuity, represented by the gradient G_f corresponding to the gradient G_F, and the X direction), as shown in FIG. 67, as the data continuity information corresponding to the continuity in the spatial direction represented by the gradient G_F.
- However, in the first method, the data continuity information output from the data continuity detecting unit 101 is not used at all.
- In other words, the direction of continuity in the spatial direction of the thin-line-containing real world region 2301 is substantially the direction of the angle θ. Nevertheless, the first method calculates the features w_i of the approximation function f_1(x) on the assumption that the direction of continuity in the spatial direction of the thin-line-containing real world region 2301 is the Y direction (that is, that the angle θ is 90 degrees).
- For this reason, the approximation function f_1(x) becomes a function whose waveform is dulled and whose detail is reduced relative to the original pixel values. In other words, although not shown, the approximation function f_1(x) generated by the first method has a waveform significantly different from the actual X cross-sectional waveform F(x).
- Accordingly, the present applicant has invented a second method that calculates the features w_i by further taking the continuity in the spatial direction into consideration (by using the angle θ), in addition to the first method.
- That is, the second method calculates the features w_i of the approximation function f_2(x) on the assumption that the direction of continuity of the thin-line-containing real world region 2301 is substantially the direction of the angle θ.
- Specifically, the gradient G_f representing the continuity of the data corresponding to the continuity in the spatial direction is expressed by the following equation (42):

G_f = \tan\theta = \frac{dy}{dx} \tag{42}

- In equation (42), dx represents a minute movement in the X direction, as shown in FIG. 67, and dy represents the minute movement in the Y direction with respect to dx, as shown in FIG. 67. In this case, the shift amount C_x(y) is defined by the following equation (43):

C_x(y) = \frac{y}{G_f} \tag{43}
- Equation (40) used in the first method means that the pixel values P(x, y) of the pixels whose center positions (x, y) have the same position x in the X direction are all the same, regardless of y.
- In other words, equation (40) means that pixels having the same pixel value continue in the Y direction (that there is continuity in the Y direction).
- In contrast, the equation used in the second method is the following equation (44):

P(x, y) = f_2(x + C_x(y)) + e \tag{44}

- Equation (44) means that the pixel value P(x, y) of the pixel whose center position is (x, y) does not match the pixel value (≈ f_2(x)) of the pixel located a distance x in the X direction away from the pixel of interest (the pixel whose center is at the origin (0, 0)), but matches the pixel value (≈ f_2(x + C_x(y))) of the pixel located a further C_x(y) away in the X direction (the pixel located x + C_x(y) away from the pixel of interest in the X direction).
- That is, equation (44) means that pixels having the same pixel value continue in the direction of the angle θ corresponding to the shift amount C_x(y) (that there is continuity in the direction of substantially the angle θ).
- Thus, the shift amount C_x(y) is the amount of correction that accounts for the continuity in the spatial direction (in this case, the continuity represented by the gradient G_F in FIG. 67 (strictly speaking, the continuity of the data represented by the gradient G_f)), and equation (44) is obtained by correcting equation (40) with the shift amount C_x(y).
- When the 20 input pixel values P(x, y) of the thin-line-containing data area 2302 (FIG. 64) described above are each substituted into equation (44) above, the 20 equations of equation (45) are generated; equation (45) consists of 20 equations, like equation (41) described above. Therefore, in the second method, as in the first method, if the number of features w_i of the approximation function f_2(x) is less than 20, that is, if the approximation function f_2(x) is a polynomial of degree less than 19, the features w_i can be calculated, for example, using the least squares method. The specific solution of the least squares method will be described later. For example, if the degree of the approximation function f_2(x) is set to 5 as in the first method, the features w_i are calculated as follows in the second method.
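- A minimal sketch of the second method under the same stand-in data; the only change from the first method is that each sample's x coordinate is shifted by C_x(y) = y / G_f (equations (42) and (43)) before the least squares fit, so that samples lying along the continuity direction line up (the 60-degree angle is illustrative):

```python
import numpy as np

G_f = np.tan(np.radians(60.0))                    # gradient from the angle, equation (42)
xs = np.tile(np.arange(-1, 3), 5).astype(float)   # x = -1..2 on each of 5 rows
ys = np.repeat(np.arange(-2, 3), 4).astype(float) # y = -2..2, four pixels per row
P = np.random.rand(20)                            # stand-in for P(x, y)
shift = ys / G_f                                  # C_x(y), equation (43)
w = np.polynomial.polynomial.polyfit(xs + shift, P, deg=5)  # features of f2
```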
- FIG. 68 shows a graph of the pixel value P (x, y) shown on the left side of Expression (45).
- Each of the five graphs shown in FIG. 68 is basically the same as that shown in FIG.
- As shown in FIG. 68, the maximum pixel values (the pixel values corresponding to the thin line) continue in the direction of data continuity represented by the gradient G_f.
- Therefore, in the second method, when the input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), P(x, 2) are, for example, added along the x axis, they are added not as they are as in the first method (superimposing the five graphs while keeping the state shown in FIG. 68 and regarding y as constant), but after changing to the state shown in FIG. 69.
- FIG. 69 shows the state in which the input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), P(x, 2) are each shifted by the shift amount C_x(y) shown in equation (43) above. In other words, FIG. 69 shows the state in which the five graphs shown in FIG. 68 are moved as if the gradient G_F representing the actual direction of data continuity were regarded as the gradient G_F' (in the figure, as if the straight dotted line became a straight solid line).
- In the state of FIG. 69, when the input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), P(x, 2) are, for example, each added along the x axis (when the five graphs are superimposed in the state shown in FIG. 69), a number of lines parallel to the x axis (dotted line, three-dot chain line, solid line, one-dot chain line, and two-dot chain line) are distributed, as shown in FIG. 70.
- In FIG. 70, the vertical axis represents the pixel value, and the horizontal axis represents the relative position x from the pixel of interest. The dotted line represents the input pixel values P(x, -2), the three-dot chain line the input pixel values P(x, -1), the solid line the input pixel values P(x, 0), the one-dot chain line the input pixel values P(x, 1), and the two-dot chain line the input pixel values P(x, 2). Furthermore, in the case of the same pixel value two or more lines actually overlap, but in FIG. 70 the lines are drawn so as not to overlap so that each line can be distinguished.
- The regression curve that minimizes the error against each of the 20 input pixel values P(x, y) distributed in this way (where x is an integer from -1 to 2, and y is an integer from -2 to 2) — that is, the curve f_2(x + C_x(y)) obtained by substituting the features w_i calculated by the least squares method into equation (39) — is the curve f_2(x) shown by the solid line in FIG. 70.
- In this way, the approximation function f_2(x) generated by the second method represents a curve connecting, in the X direction, the averages of the input pixel values P(x, y) in the direction of the angle θ output from the data continuity detecting unit 101 (FIG. 58) (that is, the direction of continuity in substantially the spatial direction).
- In contrast, the approximation function f_1(x) generated by the first method merely represents a curve connecting, in the X direction, the averages of the input pixel values P(x, y) in the Y direction (that is, in a direction different from the continuity in the spatial direction).
- Therefore, as shown in FIG. 70, the approximation function f_2(x) generated by the second method is a function in which the degree of dulling of the waveform is reduced, and in which the degree of reduction of detail relative to the original pixel values is also reduced, compared with the approximation function f_1(x) generated by the first method. In other words, although not shown, the approximation function f_2(x) generated by the second method has a waveform closer to the actual X cross-sectional waveform F(x) than the approximation function f_1(x) generated by the first method.
- However, as described above, although the approximation function f_2(x) takes the continuity in the spatial direction into consideration, it is nothing but a function generated with the input image (input pixel values) as the origin (reference). That is, as shown in FIG. 63 described above, the approximation function f_2(x) merely approximates the input image, which is different from the X cross-sectional waveform F(x), and it is hard to say that it approximates the X cross-sectional waveform F(x). In other words, the second method calculates the features w_i on the assumption that equation (44) above holds, and does not consider the relationship of equation (38) above (does not take the integration effect of the sensor 2 into account).
- Accordingly, the present applicant has invented a third method that calculates the features w_i of the approximation function f_3(x) by further taking the integration effect of the sensor 2 into consideration, relative to the second method. That is, the third method introduces the concept of spatial mixing or temporal mixing. Since considering both spatial mixing and temporal mixing would complicate the description, here, of the two, spatial mixing, for example, is considered and temporal mixing is ignored.
- In FIG. 71, a portion 2321 of the optical signal of the real world 1 (hereinafter referred to as the region 2321) represents a region having the same area as one detection element (pixel) of the sensor 2.
- When the sensor 2 detects the region 2321, it outputs the value 2322 (one pixel value) obtained by integrating the region 2321 in the spatio-temporal directions (the X direction, the Y direction, and the t direction). The pixel value 2322 is represented as an image in the figure, but is actually data representing a predetermined value.
- The region 2321 of the real world 1 is clearly divided into the optical signal corresponding to the foreground (for example, the thin line described above) (the white area in the figure) and the optical signal corresponding to the background (the black area in the figure).
- In contrast, the pixel value 2322 is the value obtained by integrating the optical signal of the real world 1 corresponding to the foreground and the optical signal of the real world 1 corresponding to the background. In other words, the pixel value 2322 is a value corresponding to a level in which the light level corresponding to the foreground and the light level corresponding to the background are spatially mixed.
- In this way, when the portion of the optical signal of the real world 1 corresponding to one pixel (one detection element of the sensor 2) is not a portion in which an optical signal of the same level is spatially uniformly distributed, but a portion in which optical signals of different levels, such as a foreground and a background, are distributed, then when that region is detected by the sensor 2, the different light levels become, so to speak, one pixel value, spatially mixed (integrated in the spatial direction) by the integration effect of the sensor 2. A region composed of pixels in which the image for the foreground (the optical signal of the real world 1) and the image for the background (the optical signal of the real world 1) are spatially integrated, that is, mixed in this way, is here referred to as a spatial mixing region.
- Accordingly, in the third method, the real world estimating unit 102 (FIG. 58) estimates the X cross-sectional waveform F(x) by approximating the original region 2321 of the real world 1 (the portion 2321 of the optical signal of the real world 1 corresponding to one pixel of the sensor 2) with the approximation function f_3(x), which is a one-dimensional polynomial, as shown in FIG. 72.
- That is, FIG. 72 shows an example of the approximation function f_3(x) corresponding to the pixel value 2322 (FIG. 71), which is the pixel value of a spatial mixing region, that is, an example of the approximation function f_3(x) that approximates the X cross-sectional waveform F(x) corresponding to the solid line in the region 2321 of the real world 1 (FIG. 71).
- In FIG. 72, the horizontal axis represents the axis parallel to the side running from the lower left corner x_s to the lower right corner x_e of the pixel corresponding to the pixel value 2322 (FIG. 71), taken as the x axis. The vertical axis represents the pixel value.
- In the third method, it is assumed that the following equation (46) holds for each input pixel value P(x, y):

P(x, y) = \int_{x_s}^{x_e} f_3(x')\, dx' + e \tag{46}

- Here, the start position x_s and the end position x_e of the integration range in equation (46) are determined by the shift amount C_x(y). That is, the start position x_s and the end position x_e of the integration range of equation (46) are each expressed by the following equation (47):

x_s = x - C_x(y) - 0.5, \qquad x_e = x - C_x(y) + 0.5 \tag{47}
- When each pixel value of the thin-line-containing data area 2302 shown in FIG. 67, that is, each of the input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), P(x, 2) (where x is an integer from -1 to 2), is substituted into equation (46) above (with the integration range of equation (47) above substituted in), the 20 equations shown in the following equation (48) are generated, where each e_k (k is an integer from 1 to 20) represents the error:

P(x, y) = \int_{x - C_x(y) - 0.5}^{x - C_x(y) + 0.5} f_3(x')\, dx' + e_k \tag{48}
- Equation (48) consists of 20 equations, like equation (45) described above. Therefore, in the third method, as in the second method, if the number of features w_i of the approximation function f_3(x) is less than 20, that is, if the approximation function f_3(x) is a polynomial of degree less than 19, the features w_i can be calculated, for example, using the least squares method. The specific solution of the least squares method will be described later.
- For example, if the degree of the approximation function f_3(x) is set to 5, the approximation function f_3(x) calculated from equation (48) by the least squares method (the approximation function f_3(x) generated from the calculated features w_i) is like the curve shown by the solid line in FIG. 73.
- In FIG. 73, the vertical axis represents the pixel value, and the horizontal axis represents the relative position x from the pixel of interest.
- The approximation function f_3(x) generated by the third method approximates the X cross-sectional waveform F(x) itself. Therefore, although not shown, the approximation function f_3(x) has a waveform closer to the X cross-sectional waveform F(x) than the approximation function f_2(x).
- FIG. 74 shows a configuration example of the real world estimating unit 102 that uses such a one-dimensional approximation method.
- In FIG. 74, the real world estimating unit 102 estimates the X cross-sectional waveform F(x) by calculating the features w_i by, for example, the third method (the least squares method) described above, and generating the approximation function f(x) of equation (39) above using the calculated features w_i.
- As shown in FIG. 74, the real world estimating unit 102 includes a condition setting unit 2331, an input image storage unit 2332, an input pixel value acquisition unit 2333, an integral component calculation unit 2334, a normal equation generation unit 2335, and an approximation function generation unit 2336.
- The condition setting unit 2331 sets the range of pixels used for estimating the X cross-sectional waveform F(x) corresponding to the pixel of interest (hereinafter referred to as the tap range), and the degree n of the approximation function f(x).
- the input image storage unit 2332 temporarily stores the input image (pixel value) from the sensor 2.
- The input pixel value acquisition unit 2333 acquires the region of the input image stored in the input image storage unit 2332 that corresponds to the tap range set by the condition setting unit 2331, and supplies it to the normal equation generation unit 2335 as the input pixel value table. That is, the input pixel value table is a table describing each pixel value of the pixels included in that region of the input image. A specific example of the input pixel value table will be described later.
- Here, the real world estimating unit 102 calculates the features w_i of the approximation function f(x) by the least squares method, using equations (46) and (47) above. Equation (46) can be expressed as the following equation (49):

P(x, y) = \sum_{i=0}^{n} w_i \, S_i(x_s, x_e) + e \tag{49}

- In equation (49), S_i(x_s, x_e) represents the integral component of the i-th term. That is, the integral component S_i(x_s, x_e) is expressed by the following equation (50):

S_i(x_s, x_e) = \frac{x_e^{\,i+1} - x_s^{\,i+1}}{i + 1} \tag{50}
- the integral component calculation unit 2334 calculates the integral component Si (x s , x e ).
- The integral component S_i(x_s, x_e) expressed by equation (50) (where the values x_s and x_e are the values given by equation (47) above) can be calculated if the relative pixel position (x, y), the shift amount C_x(y), and the i of the i-th term are known.
- Of these, the relative pixel position (x, y) is determined by the pixel of interest and the tap range, the shift amount C_x(y) is determined by the angle θ (from equations (42) and (43) above), and the range of i is determined by the degree n.
- Accordingly, the integral component calculation unit 2334 calculates the integral components S_i(x_s, x_e) based on the tap range and degree set by the condition setting unit 2331 and the angle θ of the data continuity information output from the data continuity detecting unit 101, and supplies the calculation results to the normal equation generation unit 2335 as the integral component table.
- The normal equation generation unit 2335 generates the normal equation for obtaining the features w_i on the right side of equation (46) above, that is, of equation (49), by the least squares method, using the input pixel value table supplied from the input pixel value acquisition unit 2333 and the integral component table supplied from the integral component calculation unit 2334, and supplies the generated normal equation to the approximation function generation unit 2336 as the normal equation table.
- The approximation function generation unit 2336 calculates each of the features w_i of equation (49) above (that is, each of the coefficients w_i of the approximation function f(x), which is a one-dimensional polynomial) by solving the normal equation included in the normal equation table supplied from the normal equation generation unit 2335 by a matrix solution method, and outputs them to the image generation unit 103.
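- The pipeline just described can be sketched compactly as follows: integration limits per equation (47), integral components per equation (50), and the normal equation solved as in equations (59) and (60) below. The tap layout, the 60-degree angle, and the function name are illustrative assumptions, not the embodiment itself:

```python
import numpy as np

def estimate_features(P, xs, ys, angle_deg, degree=5):
    """Solve the least-squares normal equation for the features w_i of f3(x)."""
    G_f = np.tan(np.radians(angle_deg))
    shift = ys / G_f                      # shift amounts C_x(y)
    x_s = xs - shift - 0.5                # integration limits, equation (47)
    x_e = xs - shift + 0.5
    i = np.arange(degree + 1)
    # Integral components S_i(x_s, x_e) = (x_e^(i+1) - x_s^(i+1)) / (i + 1)
    S = (x_e[:, None] ** (i + 1) - x_s[:, None] ** (i + 1)) / (i + 1)
    # Normal equation (S^T S) w = S^T P, i.e. W_MAT = S_MAT^{-1} P_MAT
    return np.linalg.solve(S.T @ S, S.T @ P)

# 20-pixel tap range: 4 pixels in X (-1..2), 5 in Y (-2..2), pixel of
# interest at the origin; P stands in for the input pixel value table.
xs = np.tile(np.arange(-1, 3), 5).astype(float)
ys = np.repeat(np.arange(-2, 3), 4).astype(float)
w = estimate_features(np.random.rand(20), xs, ys, angle_deg=60.0)
```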
- Next, the processing of estimating the real world (the processing in step S102 in FIG. 29) by the real world estimating unit 102 (FIG. 74) that uses the one-dimensional approximation method will be described with reference to the flowchart of FIG. 75.
- For example, assume that the input image of one frame output from the sensor 2, including the thin-line-containing data area 2302 in FIG. 60 described above, is already stored in the input image storage unit 2332. Also assume that, in the continuity detection processing in step S101 (FIG. 29), the data continuity detecting unit 101 has already performed its processing on the thin-line-containing data area 2302 and output the angle θ as the data continuity information.
- In this case, in step S2301 in FIG. 75, the condition setting unit 2331 sets the conditions (the tap range and the degree). For example, assume that the tap range 2351 shown in FIG. 76 is set, and that 5 is set as the degree.
- FIG. 76 is a diagram illustrating an example of the tap range.
- the X direction and the Y direction represent the X direction and the Y direction of the sensor 2 (FIG. 59).
- The tap range 2351 represents a pixel group consisting of a total of 20 pixels (20 squares in the figure): 4 pixels in the X direction and 5 pixels in the Y direction.
- Furthermore, the pixel of interest is set to the second pixel from the left and the third pixel from the bottom of the tap range 2351 in the figure. It is assumed that each pixel is assigned a number l (l is an integer from 0 to 19) according to its relative pixel position (x, y) from the pixel of interest, as shown in FIG. 76.
- In step S2302, the condition setting unit 2331 sets the pixel of interest.
- In step S2303, the input pixel value acquisition unit 2333 acquires the input pixel values based on the condition (the tap range) set by the condition setting unit 2331, and generates the input pixel value table. That is, in this case, the input pixel value acquisition unit 2333 acquires the thin-line-containing data area 2302 (FIG. 64) and generates, as the input pixel value table, a table consisting of the 20 input pixel values P(l).
- In this case, the relationship between the input pixel values P(l) and the input pixel values P(x, y) described above is the relationship shown in equation (51), whose left side represents the input pixel values P(l) and whose right side represents the input pixel values P(x, y); for example, P(0) = P(0, 0).
- In step S2304, the integral component calculation unit 2334 calculates the integral components based on the conditions (the tap range and the degree) set by the condition setting unit 2331 and the data continuity information (the angle θ) supplied from the data continuity detecting unit 101, and generates the integral component table.
- In this case, since the input pixel values are acquired not as P(x, y) but as P(l), the values of the pixel number l, the integral component calculation unit 2334 calculates the integral components S_i(x_s, x_e) of equation (50) above as a function of l, namely the integral components S_i(l) shown on the left side of equations (52) and (53), whose right side represents the integral components S_i(x_s, x_e). That is, in this case, since i runs from 0 to 5, the 20 values S_0(l), the 20 values S_1(l), the 20 values S_2(l), the 20 values S_3(l), the 20 values S_4(l), and the 20 values S_5(l), that is, a total of 120 values S_i(l), are calculated.
- More specifically, first, the integral component calculation unit 2334 calculates each of the shift amounts C_x(-2), C_x(-1), C_x(1), and C_x(2) using the angle θ supplied from the data continuity detecting unit 101.
- Note that the order of the processing in step S2303 and the processing in step S2304 is not limited to the example of FIG. 75: the processing in step S2304 may be executed first, or the processing in step S2303 and the processing in step S2304 may be executed simultaneously.
- In step S2305, the normal equation generation unit 2335 generates the normal equation table based on the input pixel value table generated by the input pixel value acquisition unit 2333 in the processing of step S2303 and the integral component table generated by the integral component calculation unit 2334 in the processing of step S2304.
- Specifically, the features w_i of the following equation (54), corresponding to the above equation (49), are calculated by the least squares method.
- The corresponding normal equation is expressed by the following equation (55).
- In equation (55), L represents the maximum value of the pixel numbers l in the tap range.
- Each component of the matrix W_MAT is the feature w_i to be obtained. Therefore, if the matrix S_MAT on the left side and the matrix P_MAT on the right side of equation (59) are determined, the matrix W_MAT (that is, the features w_i) can be calculated by matrix solution.
- More specifically, each component of the matrix S_MAT can be calculated if the above-described integral components S_i(l) are known. Since the integral components S_i(l) are included in the integral component table supplied from the integral component calculation unit 2334, the normal equation generation unit 2335 can calculate each component of the matrix S_MAT using the integral component table.
- Similarly, each component of the matrix P_MAT can be calculated if the integral components S_i(l) and the input pixel values P(l) are known. The integral components S_i(l) are the same as those contained in the components of the matrix S_MAT, and the input pixel values P(l) are included in the input pixel value table supplied from the input pixel value acquisition unit 2333; therefore, the normal equation generation unit 2335 can calculate each component of the matrix P_MAT using the integral component table and the input pixel value table.
- In this manner, the normal equation generation unit 2335 calculates each component of the matrix S_MAT and the matrix P_MAT, and outputs the calculation results (the components of the matrix S_MAT and the matrix P_MAT) to the approximation function generation unit 2336 as a normal equation table.
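- The component layout of equations (55) to (59) is not reproduced here; the following is a minimal sketch, assuming the standard least-squares normal-equation form for fitting P(l) ≈ Σ_i w_i S_i(l), of how the normal equation table (the components of S_MAT and P_MAT) could be assembled.

```python
import numpy as np

# Sketch of the normal-equation construction for P(l) ~ sum_i w_i * S_i(l):
#   S_MAT[i][j] = sum_l S_i(l) * S_j(l),  P_MAT[i] = sum_l S_i(l) * P(l).
def build_normal_equation(S, P):
    """S: (num_pixels, n+1) array of integral components S_i(l);
    P: (num_pixels,) array of input pixel values P(l)."""
    S = np.asarray(S, dtype=float)
    P = np.asarray(P, dtype=float)
    S_MAT = S.T @ S    # left-side matrix of equation (59)
    P_MAT = S.T @ P    # right-side vector of equation (59)
    return S_MAT, P_MAT
```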
- In step S2306, the approximation function generation unit 2336 calculates, based on the normal equation table, each component of the matrix W_MAT of the above equation (59), that is, the features w_i (the coefficients of the approximation function f(x), which is a one-dimensional polynomial).
- In equation (60), each component of the matrix W_MAT on the left side is the feature w_i to be obtained.
- Each component of the matrices S_MAT and P_MAT is included in the normal equation table supplied from the normal equation generation unit 2335. Accordingly, the approximation function generation unit 2336 calculates the matrix W_MAT by performing the matrix operation on the right side of equation (60) using the normal equation table, and outputs the calculation results (the features w_i) to the image generation unit 103.
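- As a sketch of the matrix operation of equation (60) (W_MAT = S_MAT⁻¹ P_MAT), the features can be obtained by solving the linear system directly, which is numerically preferable to forming the inverse explicitly; the helper names are illustrative.

```python
import numpy as np

def solve_features(S_MAT, P_MAT):
    """Matrix operation of equation (60): W_MAT = S_MAT^{-1} * P_MAT."""
    return np.linalg.solve(S_MAT, P_MAT)

def f_approx(w, x):
    """Evaluate the 1-D polynomial approximation f(x) = sum_i w_i * x**i."""
    return sum(w_i * x**i for i, w_i in enumerate(w))
```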
- In step S2307, the approximation function generation unit 2336 determines whether or not the processing of all pixels has been completed.
- If it is determined in step S2307 that the processing of all pixels has not been completed, the process returns to step S2302, and the subsequent processing is repeated. That is, pixels that have not yet been set as the pixel of interest are sequentially set as the pixel of interest, and the processing of steps S2302 to S2307 is repeated.
- The waveform of the approximation function f(x) generated with the coefficients (features) w_i calculated as described above becomes a waveform like the approximation function f(x) shown in FIG. 73 described above.
- As described above, the one-dimensional approximation method assumes that waveforms having the same shape as the one-dimensional X cross-sectional waveform F(x) are connected in the direction of the continuity, and under that assumption calculates the features of the approximation function f(x), for example a one-dimensional polynomial. Therefore, the one-dimensional approximation method can calculate the features of the approximation function f(x) with a smaller amount of computation than the other function approximation methods.
- Next, the second function approximation method will be described with reference to FIG. 77.
- In the second function approximation method, the optical signal of the real world 1 having continuity in the spatial direction represented by the gradient G_F is regarded as a waveform F(x, y) on the X-Y plane (the plane spanned by the X direction, which is one direction in the spatial direction, and the Y direction perpendicular to the X direction), and the waveform F(x, y) is approximated by an approximation function f(x, y) such as a two-dimensional polynomial; that is, this method estimates the waveform F(x, y). Therefore, hereinafter, the second function approximation method is referred to as the two-dimensional approximation method.
- the horizontal direction is the X direction which is one direction in the spatial direction
- the upper right direction is the Y direction which is the other direction in the spatial direction
- the vertical direction is the light level.
- G_F represents the gradient of the continuity in the spatial direction.
- In the two-dimensional approximation method, it is assumed that the sensor 2 is a CCD configured by arranging a plurality of detection elements 2-1 on its plane, as shown in FIG. 78.
- In the example of FIG. 78, the direction parallel to a predetermined side of the detection element 2-1 is defined as the X direction, which is one direction in the spatial direction; the direction perpendicular to the X direction is defined as the Y direction, which is the other direction in the spatial direction; and the direction perpendicular to the X-Y plane is defined as the t direction, which is the time direction.
- Furthermore, it is assumed that the spatial shape of each detection element 2-1 of the sensor 2 is a square whose sides each have a length of 1, and that the shutter time (exposure time) of the sensor 2 is 1.
- Under these assumptions, the pixel value P output from the detection element 2-1 having its center at the origin in the spatial direction is expressed by the following equation (61).
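- A numerical sketch of equation (61) follows, assuming the detection element is a unit square centered at the origin with the shutter time 1, so that the pixel value P is the light signal integrated over the pixel area (the test signal F below is a stand-in, not the real-world waveform):

```python
from scipy import integrate

def pixel_value(F):
    """P = integral of F(x, y) over the unit pixel centered at the origin."""
    val, _err = integrate.dblquad(F, -0.5, 0.5,
                                  lambda x: -0.5, lambda x: 0.5)
    return val

# Uniform-plus-gradient test signal; the gradient term integrates to zero.
print(pixel_value(lambda y, x: 1.0 + 0.5 * x))  # -> 1.0
```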
- The two-dimensional approximation method treats the optical signal of the real world 1 as, for example, a waveform F(x, y) as shown in FIG. 77, and approximates this two-dimensional waveform F(x, y) by an approximation function f(x, y) such as a two-dimensional polynomial.
- As described above, the optical signal of the real world 1 is represented by the optical signal function F(x, y, t) having the positions x and y in the spatial directions and the time t as variables.
- Here, the one-dimensional waveform obtained by projecting this optical signal function F(x, y, t) in the X direction at an arbitrary position y in the Y direction is referred to as the X cross-sectional waveform F(x).
- When the signal of the real world 1 has continuity in a given direction in the spatial direction, it can be considered that waveforms having the same shape as the X cross-sectional waveform F(x) are connected in that direction of the continuity.
- For example, in the example of FIG. 77, waveforms having the same shape as the X cross-sectional waveform F(x) are connected in the direction of the gradient G_F; in other words, it can be said that the waveform F(x, y) is formed by waveforms having the same shape as the X cross-sectional waveform F(x) connected in the direction of the gradient G_F.
- Accordingly, the waveform of the approximation function f(x, y) that approximates the waveform F(x, y) is formed by a series of waveforms having the same shape as the approximation function f(x) that approximates the X cross-sectional waveform F(x), and as a result the approximation function f(x, y) can be represented by a two-dimensional polynomial.
- In the example of FIG. 79, the optical signal of the real world 1, that is, an optical signal having continuity in the spatial direction represented by the gradient G_F, is detected by the sensor 2 (FIG. 78) and output as an input image (pixel values).
- The data continuity detection unit 101 (FIG. 3) executes its processing on a region 2401 of the input image composed of a total of 20 pixels (20 squares represented by dotted lines in the figure), 4 pixels in the X direction and 5 pixels in the Y direction, and outputs the angle θ (the angle between the X direction and the direction of the continuity of the data represented by the gradient G_f corresponding to the gradient G_F) as one piece of the data continuity information.
- the horizontal direction in the figure represents the X direction, which is one direction in the spatial direction
- the vertical direction in the figure represents the Y direction, which is the other direction in the spatial direction.
- In FIG. 79, the second pixel from the left and the third pixel from the bottom is the pixel of interest, and an (x, y) coordinate system is set with its origin (0, 0) at the center of the pixel of interest.
- The relative distance in the X direction from the straight line with the angle θ passing through the origin (0, 0) (the straight line with the gradient G_f representing the direction of the data continuity) is denoted x' and is hereinafter referred to as the cross-sectional direction distance.
- Furthermore, in FIG. 79, the graph on the right shows the approximation function f(x'), an n-th order polynomial (n is an arbitrary integer) that approximates the X cross-sectional waveform F(x').
- the horizontal axis in the figure represents the distance in the cross-sectional direction
- the vertical axis in the figure represents the pixel value.
- In equation (63), the cross-sectional direction distance x' is expressed as in the following equation (64).
- Here, w_i represents the coefficients of the approximation function f(x, y).
- Incidentally, the coefficients w_i of the approximation function f, including the approximation function f(x, y), can be regarded as the features of the approximation function f. Therefore, hereinafter, the coefficients of the approximation function f are also referred to as the features w_i of the approximation function f.
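- A minimal sketch of evaluating such an approximation function follows, assuming it is a one-dimensional polynomial in the cross-sectional direction distance x' with x' = x - s·y and s = cot θ (the sign convention is an assumption of this sketch, following the geometry of FIG. 79):

```python
import math

def f_2d(w, x, y, theta_deg):
    """Evaluate f(x, y) = sum_i w_i * x'**i with x' = x - s*y, s = cot(theta)."""
    s = 1.0 / math.tan(math.radians(theta_deg))
    x_prime = x - s * y   # cross-sectional direction distance (assumed sign)
    return sum(w_i * x_prime**i for i, w_i in enumerate(w))
```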
- As described above, if the real world estimation unit 102 can calculate the features w_i of equation (65), it can estimate a waveform F(x, y) as shown in FIG. 77.
- The following equation (66) uses the positions x and y in the spatial directions (the X direction and the Y direction) as variables.
- In equation (66), P(x, y) represents the pixel value of the pixel of the input image from the sensor 2 whose center is located at the position (x, y) (the relative position from the pixel of interest), and e represents an error.
- Thus, the relationship between the input pixel value P(x, y) and the approximation function f(x, y), which is a two-dimensional polynomial, is expressed by equation (66).
- Accordingly, the real world estimation unit 102 can estimate the two-dimensional function F(x, y) (the waveform F(x, y) that expresses, with attention to the spatial directions, the optical signal of the real world 1 having continuity in the spatial direction represented by the gradient G_F (FIG. 77)) by calculating the features w_i from equation (66), for example by the least squares method, and generating the approximation function f(x, y) by substituting the calculated features w_i into equation (64).
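- A hedged sketch of the least-squares estimation of equation (66) follows: each of the 20 pixels contributes one row whose entries are the integral components over that pixel (an integral_component helper like the sketch following equation (72) below is assumed; all names here are illustrative).

```python
import numpy as np

def estimate_features(pixels, values, integral_component, order):
    """pixels: [(x, y)] relative positions; values: input pixel values P(x, y);
    integral_component(i, x, y): integral of x'**i over the pixel at (x, y)."""
    A = np.array([[integral_component(i, x, y) for i in range(order + 1)]
                  for (x, y) in pixels])
    w, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    return w  # the features w_i of the approximation function f(x, y)
```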
- FIG. 80 shows an example of the configuration of the real world estimator 102 using such a two-dimensional approximation method.
- As shown in FIG. 80, the real world estimator 102 is provided with a condition setting unit 2421, an input image storage unit 2422, an input pixel value acquisition unit 2423, an integral component calculation unit 2424, a normal equation generation unit 2425, and an approximation function generation unit 2426.
- The condition setting unit 2421 sets the pixel range (tap range) used to estimate the function F(x, y) corresponding to the pixel of interest, and the order n of the approximation function f(x, y).
- The input image storage unit 2422 temporarily stores the input image (pixel values) from the sensor 2.
- The input pixel value acquisition unit 2423 acquires, of the input images stored in the input image storage unit 2422, the area of the input image corresponding to the tap range set by the condition setting unit 2421, and supplies it to the normal equation generation unit 2425 as an input pixel value table. That is, the input pixel value table is a table in which the pixel values of the pixels included in the area of the input image are described. A specific example of the input pixel value table will be described later.
- Here, the real world estimator 102 using the two-dimensional approximation method calculates the features w_i of the approximation function f(x, y) expressed by the above equation (65) by solving the above equation (66) by the least squares method.
- Equation (66) can be expressed as the following equation (71) by using the following equation (70), which is obtained from the following equations (67) to (69).
- In equation (71), S_i(x - 0.5, x + 0.5, y - 0.5, y + 0.5) represents the integral component of the i-th order term. That is, the integral component S_i(x - 0.5, x + 0.5, y - 0.5, y + 0.5) is as shown in the following equation (72).
- The integral component calculation unit 2424 calculates these integral components S_i(x - 0.5, x + 0.5, y - 0.5, y + 0.5).
- Specifically, the integral component S_i(x - 0.5, x + 0.5, y - 0.5, y + 0.5) can be calculated if the relative pixel position (x, y), the variable s in the above equation (65), and the order i of the i-th term are known.
- Of these, the relative pixel position (x, y) is determined by the pixel of interest and the tap range, the variable s is cot θ and is therefore determined by the angle θ, and the range of i is determined by the order n.
- Accordingly, the integral component calculation unit 2424 calculates the integral components S_i(x - 0.5, x + 0.5, y - 0.5, y + 0.5) based on the tap range and order set by the condition setting unit 2421 and the angle θ of the data continuity information output from the data continuity detection unit 101, and supplies the calculation results to the normal equation generation unit 2425 as an integral component table.
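- A numerical sketch of this integral component follows, assuming it is the i-th power of the cross-sectional direction distance x' = x - s·y (s = cot θ; the sign convention is assumed) integrated over the unit-square pixel centered at the relative position (px, py); a closed-form evaluation as in equation (72) would avoid the numerical quadrature used here.

```python
import math
from scipy import integrate

def integral_component(i, px, py, theta_deg=60.0):
    """S_i(px-0.5, px+0.5, py-0.5, py+0.5), computed by numerical quadrature."""
    s = 1.0 / math.tan(math.radians(theta_deg))   # s = cot(theta)
    integrand = lambda y, x: (x - s * y) ** i
    val, _err = integrate.dblquad(integrand, px - 0.5, px + 0.5,
                                  lambda x: py - 0.5, lambda x: py + 0.5)
    return val

print(integral_component(0, 0, 0))  # i = 0 gives the pixel area: 1.0
```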
- The normal equation generation unit 2425 generates, using the input pixel value table supplied from the input pixel value acquisition unit 2423 and the integral component table supplied from the integral component calculation unit 2424, the normal equation for obtaining the above equation (66) by the least squares method, and outputs it to the approximation function generation unit 2426 as a normal equation table.
- The approximation function generation unit 2426 calculates each of the features w_i of the above equation (66) (that is, the coefficients w_i of the approximation function f(x, y), which is a two-dimensional polynomial) by solving the normal equation included in the normal equation table supplied from the normal equation generation unit 2425, and outputs them to the image generation unit 103.
- It is assumed that an optical signal of the real world 1 having continuity in the spatial direction represented by the gradient G_F has been detected by the sensor 2 (FIG. 78) and that the input image corresponding to one frame has already been stored in the input image storage unit 2422. It is also assumed that, in the continuity detection processing of step S101 (FIG. 29), the data continuity detection unit 101 has already executed its processing on the above-described region 2401 of the input image shown in FIG. 79 and output the angle θ as the data continuity information.
- In step S2401, the condition setting unit 2421 sets the conditions (tap range and order).
- FIG. 82 is a diagram illustrating an example of the tap range.
- the X direction and the Y direction represent the X direction and the Y direction of the sensor 2 (FIG. 78).
- The tap range 2441 represents a pixel group consisting of a total of 20 pixels (20 squares in the figure), 4 pixels in the X direction and 5 pixels in the Y direction.
- The target pixel is the second pixel from the left in the figure and the third pixel from the bottom in the tap range 2441.
- Furthermore, it is assumed that each pixel is assigned a number l (l is any integer value from 0 to 19) according to its relative pixel position (x, y) from the target pixel (the coordinate values in the target-pixel coordinate system whose origin (0, 0) is the center of the target pixel), as shown in FIG. 82.
- In step S2402, the condition setting unit 2421 sets the target pixel.
- In step S2403, the input pixel value acquisition unit 2423 acquires the input pixel values based on the condition (tap range) set by the condition setting unit 2421 and generates an input pixel value table. That is, in this case, the input pixel value acquisition unit 2423 acquires the region 2401 (FIG. 79) of the input image and generates a table consisting of the 20 input pixel values P(l) as the input pixel value table.
- In this case, the relationship between the input pixel value P(l) and the above-described input pixel value P(x, y) is represented by the following equation (73), in which the left side represents the input pixel value P(l) and the right side represents the input pixel value P(x, y).
- In step S2404, the integral component calculation unit 2424 calculates the integral components based on the conditions (tap range and order) set by the condition setting unit 2421 and the data continuity information (angle θ) supplied from the data continuity detection unit 101, and generates an integral component table.
- As described above, the input pixel values are acquired as the values of the pixel number l, such as P(l), instead of P(x, y).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Television Systems (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/562,176 US7684635B2 (en) | 2003-06-27 | 2004-06-15 | Signal processing device, and signal processing method, and program, and recording medium |
US11/626,556 US7512285B2 (en) | 2003-06-27 | 2007-01-24 | Signal processing device and signal processing method, and program and recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003184015A JP4392583B2 (ja) | 2003-06-27 | 2003-06-27 | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 |
JP2003-184015 | 2003-06-27 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/562,176 A-371-Of-International US7684635B2 (en) | 2003-06-27 | 2004-06-15 | Signal processing device, and signal processing method, and program, and recording medium |
US11/626,556 Continuation US7512285B2 (en) | 2003-06-27 | 2007-01-24 | Signal processing device and signal processing method, and program and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005001763A1 true WO2005001763A1 (ja) | 2005-01-06 |
Family
ID=33549596
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/008691 WO2005001763A1 (ja) | 2003-06-27 | 2004-06-15 | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 |
Country Status (5)
Country | Link |
---|---|
US (2) | US7684635B2 (ja) |
JP (1) | JP4392583B2 (ja) |
KR (1) | KR101016355B1 (ja) |
CN (1) | CN100433058C (ja) |
WO (1) | WO2005001763A1 (ja) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4392583B2 (ja) * | 2003-06-27 | 2010-01-06 | ソニー株式会社 | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 |
JP4861854B2 (ja) * | 2007-02-15 | 2012-01-25 | 株式会社バンダイナムコゲームス | 指示位置演算システム、指示体及びゲームシステム |
JP4915341B2 (ja) * | 2007-12-20 | 2012-04-11 | ソニー株式会社 | 学習装置および方法、画像処理装置および方法、並びにプログラム |
US9721362B2 (en) * | 2013-04-24 | 2017-08-01 | Microsoft Technology Licensing, Llc | Auto-completion of partial line pattern |
US9275480B2 (en) | 2013-04-24 | 2016-03-01 | Microsoft Technology Licensing, Llc | Encoding of line pattern representation |
US9317125B2 (en) | 2013-04-24 | 2016-04-19 | Microsoft Technology Licensing, Llc | Searching of line pattern representations using gestures |
WO2017187966A1 (ja) * | 2016-04-27 | 2017-11-02 | 富士フイルム株式会社 | 指標生成方法、測定方法、及び指標生成装置 |
US20210287573A1 (en) * | 2018-05-25 | 2021-09-16 | Nippon Telegraph And Telephone Corporation | Secret batch approximation system, secure computation device, secret batch approximation method, and program |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0455444B1 (en) * | 1990-04-29 | 1997-10-08 | Canon Kabushiki Kaisha | Movement detection device and focus detection apparatus using such device |
US5030984A (en) * | 1990-07-19 | 1991-07-09 | Eastman Kodak Company | Method and associated apparatus for minimizing the effects of motion in the recording of an image |
JPH0583696A (ja) | 1991-06-07 | 1993-04-02 | Sony Corp | 画像符号化装置 |
JP3364939B2 (ja) | 1991-12-18 | 2003-01-08 | ソニー株式会社 | 画像符号化装置 |
FR2730571B1 (fr) * | 1995-02-10 | 1997-04-04 | Controle Dimensionnel Optique | Procede et dispositif de mesure de la distribution de la mobilite d'elements particulaires dans un milieu |
JP3444005B2 (ja) * | 1995-02-24 | 2003-09-08 | ミノルタ株式会社 | 撮像装置 |
KR100414432B1 (ko) * | 1995-03-24 | 2004-03-18 | 마츠시타 덴끼 산교 가부시키가이샤 | 윤곽추출장치 |
DE19536691B4 (de) * | 1995-09-30 | 2008-04-24 | Bts Holding International B.V. | Verfahren und Anordnung zur Korrektur von Bildstandsfehlern bei der fernsehmäßigen Filmabtastung |
GB2311183A (en) * | 1996-03-13 | 1997-09-17 | Innovision Plc | Gradient based motion estimation |
JP4282113B2 (ja) * | 1998-07-24 | 2009-06-17 | オリンパス株式会社 | 撮像装置および撮像方法、並びに、撮像プログラムを記録した記録媒体 |
JP4193233B2 (ja) | 1998-08-12 | 2008-12-10 | ソニー株式会社 | 動き判定装置、その方法および画像情報変換装置 |
US6678405B1 (en) * | 1999-06-08 | 2004-01-13 | Sony Corporation | Data processing apparatus, data processing method, learning apparatus, learning method, and medium |
JP4491965B2 (ja) | 1999-12-28 | 2010-06-30 | ソニー株式会社 | 信号処理装置および方法、並びに記録媒体 |
JP4596212B2 (ja) | 2001-06-15 | 2010-12-08 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP4596225B2 (ja) | 2001-06-27 | 2010-12-08 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
KR100415313B1 (ko) * | 2001-12-24 | 2004-01-16 | 한국전자통신연구원 | 동영상에서 상관 정합과 시스템 모델을 이용한 광류와카메라 움직임 산출 장치 |
US7218353B2 (en) * | 2001-12-26 | 2007-05-15 | Nikon Corporation | Electronic camera that selectively performs different exposure calculation routines |
US7164800B2 (en) * | 2003-02-19 | 2007-01-16 | Eastman Kodak Company | Method and system for constraint-consistent motion estimation |
JP4392583B2 (ja) * | 2003-06-27 | 2010-01-06 | ソニー株式会社 | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 |
- 2003
- 2003-06-27 JP JP2003184015A patent/JP4392583B2/ja not_active Expired - Fee Related
- 2004
- 2004-06-15 KR KR1020057023910A patent/KR101016355B1/ko not_active IP Right Cessation
- 2004-06-15 CN CNB2004800178117A patent/CN100433058C/zh not_active Expired - Fee Related
- 2004-06-15 WO PCT/JP2004/008691 patent/WO2005001763A1/ja active Application Filing
- 2004-06-15 US US10/562,176 patent/US7684635B2/en not_active Expired - Fee Related
- 2007
- 2007-01-24 US US11/626,556 patent/US7512285B2/en not_active Expired - Fee Related
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08237476A (ja) * | 1994-11-22 | 1996-09-13 | Xerox Corp | データ処理方法及びデジタル出力データ生成方法 |
JPH10200753A (ja) * | 1997-01-14 | 1998-07-31 | Fuji Xerox Co Ltd | 画像処理装置および画像処理方法 |
JP2000201283A (ja) * | 1999-01-07 | 2000-07-18 | Sony Corp | 画像処理装置および方法、並びに提供媒体 |
JP2001084368A (ja) * | 1999-09-16 | 2001-03-30 | Sony Corp | データ処理装置およびデータ処理方法、並びに媒体 |
JP2002112025A (ja) * | 2000-10-03 | 2002-04-12 | Fujitsu Ltd | 画像補正装置および補正方法 |
JP2003016456A (ja) * | 2001-06-27 | 2003-01-17 | Sony Corp | 画像処理装置および方法、記録媒体、並びにプログラム |
JP2003018578A (ja) * | 2001-06-27 | 2003-01-17 | Sony Corp | 通信装置および方法、通信システム、記録媒体、並びにプログラム |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105913454A (zh) * | 2016-04-06 | 2016-08-31 | 东南大学 | 一种视频图像中运动目标的像素坐标轨迹预测方法 |
CN105913454B (zh) * | 2016-04-06 | 2018-05-15 | 东南大学 | 一种视频图像中运动目标的像素坐标轨迹预测方法 |
Also Published As
Publication number | Publication date |
---|---|
US7512285B2 (en) | 2009-03-31 |
CN1816826A (zh) | 2006-08-09 |
JP4392583B2 (ja) | 2010-01-06 |
CN100433058C (zh) | 2008-11-12 |
KR20060021374A (ko) | 2006-03-07 |
JP2005018533A (ja) | 2005-01-20 |
US7684635B2 (en) | 2010-03-23 |
US20070116372A1 (en) | 2007-05-24 |
KR101016355B1 (ko) | 2011-02-21 |
US20070098289A1 (en) | 2007-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4148041B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4392584B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
WO2004077352A1 (ja) | 画像処理装置および方法、並びにプログラム | |
WO2004072898A1 (ja) | 信号処理装置および方法、並びにプログラム | |
US20070116372A1 (en) | Signal processing device, signal processing method, program, and recording medium | |
JP4143916B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4423537B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4423535B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4182827B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4325296B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4419453B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4423536B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4182826B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4419454B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1020057023910 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007098289 Country of ref document: US Ref document number: 10562176 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20048178117 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057023910 Country of ref document: KR |
|
122 | Ep: pct application non-entry in european phase | ||
WWP | Wipo information: published in national office |
Ref document number: 10562176 Country of ref document: US |