WO2004077354A1 - Image processing apparatus and method, and program - Google Patents
Image processing apparatus and method, and program
- Publication number
- WO2004077354A1 (PCT/JP2004/001585; application JP2004001585W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- image
- continuity
- data
- pixels
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
Definitions
- The present invention relates to an image processing apparatus and method, and a program, and more particularly to an image processing apparatus and method, and a program, that take into account the real world from which the data was acquired.
- Conventionally, a second signal is obtained by detecting with a sensor a first signal, which is a real-world signal having first dimensions, and the second signal is processed to generate a third signal with less distortion than the second signal.
- However, the second signal, obtained by projecting the first signal, has fewer dimensions than the first dimensions and lacks part of the continuity of the real-world signal, while retaining data continuity that corresponds to the lost continuity of the real-world signal. Signal processing that estimates the first signal from the second signal by taking this fact into account had not been considered before. Disclosure of the Invention
- The present invention has been made in view of such circumstances, and aims to make it possible to obtain processing results that are more accurate and more precise with respect to events in the real world, by taking into account the real world from which the data was acquired.
- The image processing apparatus of the present invention is an image processing apparatus for image data in which a real-world optical signal has been projected onto a plurality of detection elements each having a spatio-temporal integration effect and part of the continuity of the real-world optical signal is lost. The apparatus includes: discontinuity detecting means for detecting discontinuities in the pixel values of a plurality of pixels in the data; vertex detecting means for detecting, from the discontinuities, vertices of change in the pixel values; monotone increase/decrease region detecting means for detecting monotone increase/decrease regions in which the pixel value monotonically increases or decreases from a vertex; continuity detecting means for detecting, as a stationary region having continuity of the image data, a monotone increase/decrease region for which another of the detected monotone increase/decrease regions exists at an adjacent position on the image data; direction detecting means for detecting the direction of continuity of the stationary region; and real-world estimating means for estimating the real-world optical signal by estimating its continuity based on the stationary region detected by the continuity detecting means and the direction of continuity detected by the direction detecting means.
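The vertex and monotone increase/decrease region detection described above can be sketched on a single 1-D slice of pixel values. A minimal illustration in Python, assuming the slice has already been extracted (the helper name and the grow-outward strategy are illustrative, not the patent's exact procedure):

```python
import numpy as np

def vertex_and_monotone_region(row):
    """Illustrative sketch of the claimed detection on a 1-D slice:
    the vertex is the peak of the pixel-value change, and the region is
    grown outward from it for as long as the pixel values keep
    decreasing monotonically."""
    vertex = int(np.argmax(row))
    left = vertex
    while left > 0 and row[left - 1] < row[left]:
        left -= 1
    right = vertex
    while right < len(row) - 1 and row[right + 1] < row[right]:
        right += 1
    return vertex, (left, right)
```

Two such regions found in adjacent pixel columns are then checked for adjacency on the image data to decide whether together they form a single stationary (continuity) region.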
- The direction detecting means may detect the direction of continuity of the stationary region based on a change in the pixel values of a plurality of first pixels arranged in a first monotone increase/decrease region among the monotone increase/decrease regions detected by the continuity detecting means, and a change in the pixel values of a plurality of second pixels, adjacent to the first pixels, arranged in an adjacent second monotone increase/decrease region.
- The direction detecting means may detect the direction of continuity of the stationary region based on increments of the pixel values of the plurality of first pixels arranged in the first monotone increase/decrease region and decrements of the pixel values of the plurality of second pixels arranged in the second monotone increase/decrease region.
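One way to realize such an increment/decrement comparison is to correlate the two change patterns; the lag at which they best match indicates how far the projected thin line shifts between adjacent pixel columns, i.e. the direction of the data continuity. A sketch under that assumption (the correlation approach is chosen here for illustration, not taken from the patent):

```python
import numpy as np

def continuity_direction_lag(first_region, second_region):
    """Correlate the pixel-value increments along the first monotone
    region with the decrements along the adjacent second region; the
    best-matching lag approximates the per-column shift of the thin
    line, i.e. the direction of continuity (hypothetical helper)."""
    inc = np.diff(np.asarray(first_region, dtype=float))
    dec = -np.diff(np.asarray(second_region, dtype=float))
    corr = np.correlate(inc, dec, mode="full")
    return int(np.argmax(corr)) - (len(dec) - 1)
```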
- The discontinuity detecting means may find a regression plane corresponding to the pixel values of a plurality of pixels of the image data and detect, as a discontinuity, a region consisting of pixels whose pixel values lie at a distance equal to or greater than a threshold from the regression plane.
- In that case, a difference value is calculated by subtracting the value approximated by the regression plane from the pixel value of each pixel in the discontinuity; the vertex detecting means may detect vertices based on the difference values, the monotone increase/decrease region detecting means may detect monotone increase/decrease regions based on the difference values, and the direction detecting means may detect the direction of continuity of the stationary region based on the difference values.
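A least-squares sketch of this regression-plane step (the function and parameter names are assumptions made for illustration):

```python
import numpy as np

def detect_discontinuity(block, threshold):
    """Fit a regression plane z = a*x + b*y + c to the pixel values of
    a block by least squares, subtract the plane from each pixel value,
    and flag pixels whose difference is at or above the threshold as
    the discontinuous portion.  The returned difference values are what
    the later vertex / monotone-region / direction steps operate on."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    diff = block - (A @ coeffs).reshape(h, w)   # pixel value minus plane
    return np.abs(diff) >= threshold, diff
```

On flat image data with one bright outlier, only the outlier exceeds the threshold, which matches the role of the regression plane as an approximation of the non-stationary (background) component.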
- The image processing method of the present invention applies to image data in which a real-world optical signal has been projected onto a plurality of detection elements each having a spatio-temporal integration effect and part of the continuity of the real-world optical signal is lost, and includes steps corresponding to the means of the apparatus described above.
- The program of the present invention causes a computer to execute, for such image data: a discontinuity detection step of detecting discontinuities in the pixel values of a plurality of pixels; a vertex detection step of detecting vertices of change in the pixel values from the discontinuities; a monotone increase/decrease region detection step of detecting monotone increase/decrease regions in which the pixel value monotonically increases or decreases from a vertex; a continuity detection step of detecting, as a stationary region, a monotone increase/decrease region for which another detected monotone increase/decrease region exists at an adjacent position on the image data; a direction detection step of detecting the direction of continuity of the stationary region; and a real-world estimation step of estimating the real-world optical signal by estimating its continuity based on the stationary region detected in the continuity detection step and the direction of continuity detected in the direction detection step.
- That is, in the present invention, discontinuities are detected in the pixel values of a plurality of pixels in image data that was obtained by projecting a real-world optical signal onto a plurality of detection elements each having a spatio-temporal integration effect and that lacks part of the continuity of the real-world optical signal; vertices of change in the pixel values are detected from the discontinuities; monotone increase/decrease regions in which the pixel value monotonically increases or decreases from a vertex are detected; among the detected monotone increase/decrease regions, those for which another monotone increase/decrease region exists at an adjacent position on the image data are detected as stationary regions; the direction of continuity of each stationary region is detected; and the real-world optical signal is estimated by estimating its continuity based on the detected stationary regions and the detected direction of continuity.
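The estimated real-world signal can then be integrated again over finer intervals to generate higher-resolution pixels, as in the one-dimensional reintegration method of FIGS. 151 to 155. A minimal 1-D sketch, assuming the signal over one input pixel has been approximated by a polynomial (the function name and polynomial form are assumptions):

```python
import numpy as np

def reintegrate_double_density(poly_coeffs, x0, x1):
    """1-D reintegration sketch: given the coefficients (highest power
    first) of an approximation function f(x) covering one input pixel
    [x0, x1], integrate f over each half of the interval to obtain two
    double-density pixel values."""
    F = np.polyint(np.poly1d(poly_coeffs))     # antiderivative of f
    xm = (x0 + x1) / 2.0
    left = (F(xm) - F(x0)) / (xm - x0)         # mean of f over [x0, xm]
    right = (F(x1) - F(xm)) / (x1 - xm)        # mean of f over [xm, x1]
    return left, right
```

Creating four high-resolution pixels per input pixel (FIG. 153) follows the same idea, with the interval split into quarters or with a second split applied in the other spatial direction.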
- The image processing apparatus may be an independent apparatus or a block that performs image processing.
- FIG. 1 is a diagram illustrating the principle of the present invention.
- FIG. 2 is a block diagram illustrating an example of the configuration of the signal processing device.
- FIG. 3 is a block diagram showing a signal processing device.
- FIG. 4 is a diagram illustrating the principle of processing of a conventional signal processing device.
- FIG. 5 is a diagram illustrating the principle of processing of the signal processing device.
- FIG. 6 is a diagram for more specifically explaining the principle of the present invention.
- FIG. 7 is a diagram for more specifically explaining the principle of the present invention.
- FIG. 8 is a diagram illustrating an example of the arrangement of pixels on the image sensor.
- FIG. 9 is a diagram for explaining the operation of the detection element which is a CCD.
- FIG. 10 is a diagram for explaining the relationship between the light incident on the detection elements corresponding to the pixels D to F and the pixel value.
- FIG. 11 is a diagram illustrating a relationship with a pixel value.
- FIG. 12 is a diagram illustrating an example of an image of a linear object in the real world.
- FIG. 13 is a diagram illustrating an example of pixel values of image data obtained by actual imaging.
- FIG. 14 is a schematic diagram of image data.
- FIG. 15 is a diagram showing an example of an image of the real world 1 of an object having a single color and a straight edge, which is a color different from the background.
- FIG. 16 is a diagram illustrating an example of pixel values of image data obtained by actual imaging.
- FIG. 17 is a schematic diagram of image data.
- FIG. 18 is a diagram illustrating the principle of the present invention.
- FIG. 19 is a diagram illustrating the principle of the present invention.
- FIG. 20 is a diagram illustrating an example of generation of high-resolution data.
- FIG. 21 is a diagram illustrating approximation by a model.
- FIG. 22 is a diagram illustrating model estimation based on M pieces of data.
- FIG. 23 is a diagram illustrating the relationship between real-world signals and data.
- FIG. 24 is a diagram showing an example of data of interest when formulating an equation.
- FIG. 25 is a diagram illustrating signals for two objects in the real world and values belonging to a mixed region when formulating is performed.
- FIG. 26 is a diagram for explaining the stationarity expressed by Expression (18), Expression (19), and Expression (22).
- FIG. 27 is a diagram illustrating an example of M pieces of data extracted from the data.
- FIG. 28 is a diagram illustrating an area where a pixel value that is data is obtained.
- FIG. 29 is a diagram illustrating approximation of the position of a pixel in the spatiotemporal direction.
- FIG. 30 is a diagram for explaining integration of real-world signals in the time direction and the two-dimensional spatial direction in data.
- FIG. 31 is a diagram illustrating an integration area when generating high-resolution data having a higher resolution in the spatial direction.
- FIG. 32 is a diagram illustrating an integration area when generating high-resolution data having a higher resolution in the time direction.
- FIG. 33 is a diagram for explaining an integration area when generating high-resolution data from which motion blur has been removed.
- FIG. 34 is a diagram illustrating an integration area when generating high-resolution data having a higher resolution in the time-space direction.
- FIG. 35 shows the original image of the input image.
- FIG. 36 is a diagram illustrating an example of the input image.
- FIG. 37 is a diagram showing an image obtained by applying the conventional classification adaptive processing.
- FIG. 38 is a diagram showing a result of detecting a thin line region.
- FIG. 39 is a diagram illustrating an example of an output image output from the signal processing device.
- FIG. 40 is a flowchart illustrating signal processing by the signal processing device.
- FIG. 41 is a block diagram illustrating the configuration of the data continuity detecting unit.
- FIG. 42 is a diagram showing an image of the real world with a thin line in front of the background.
- FIG. 43 is a view for explaining the approximation of the background by a plane.
- FIG. 44 is a diagram showing a cross-sectional shape of image data on which a thin line image is projected.
- FIG. 45 is a diagram showing a cross-sectional shape of image data on which a thin line image is projected.
- FIG. 46 is a diagram illustrating a cross-sectional shape of image data on which a thin line image is projected.
- FIG. 47 is a diagram for describing processing of detecting a vertex and detecting a monotonous increase / decrease region.
- FIG. 48 is a diagram illustrating a process of detecting a thin line region in which the pixel value of the vertex exceeds the threshold value and the pixel value of an adjacent pixel is equal to or less than the threshold value.
- FIG. 49 is a diagram illustrating the pixel values of the pixels arranged in the direction indicated by the dotted line AA ′ in FIG.
- FIG. 50 is a diagram illustrating a process of detecting the continuity of the monotone increase / decrease region.
- FIG. 51 is a diagram illustrating an example of an image in which a stationary component is extracted by approximation on a plane.
- FIG. 52 is a diagram showing a result of detecting a monotonically decreasing region.
- FIG. 53 is a diagram showing an area where continuity is detected.
- FIG. 54 is a diagram illustrating pixel values of an area where continuity is detected.
- FIG. 55 is a diagram illustrating an example of another process of detecting a region where a thin line image is projected.
- FIG. 56 is a flowchart for explaining the processing of the continuity detection.
- FIG. 57 is a diagram illustrating a process of detecting the continuity of data in the time direction.
- FIG. 58 is a block diagram illustrating a configuration of the non-stationary component extraction unit.
- FIG. 59 is a diagram illustrating the number of rejections.
- FIG. 60 is a diagram illustrating an example of an input image.
- FIG. 61 is a diagram showing an image in which a standard error obtained as a result of approximation by a plane without rejection is used as a pixel value.
- FIG. 62 is a diagram illustrating an image in which the standard error obtained as a result of rejection and approximation by a plane is used as a pixel value.
- FIG. 63 is a diagram illustrating an image in which the number of rejections is set as a pixel value.
- FIG. 64 is a diagram illustrating an image in which the inclination of the plane in the spatial direction X is a pixel value.
- FIG. 65 is a diagram illustrating an image in which the inclination of the plane in the spatial direction Y is a pixel value.
- FIG. 66 is a diagram showing an image composed of approximate values indicated by a plane.
- FIG. 67 is a diagram illustrating an image including a difference between an approximate value indicated by a plane and a pixel value.
- FIG. 68 is a flowchart illustrating the process of extracting the unsteady component.
- FIG. 69 is a flowchart for explaining the process of extracting the stationary component.
- FIG. 70 is a flowchart illustrating another process of extracting a steady component.
- FIG. 71 is a flowchart illustrating still another process of extracting a steady component.
- FIG. 72 is a block diagram illustrating another configuration of the data continuity detecting unit.
- FIG. 73 is a block diagram illustrating a configuration of the data continuity direction detection unit.
- FIG. 74 is a diagram illustrating an example of an input image including moiré.
- FIG. 75 is a diagram illustrating an example of an input image including moiré.
- FIG. 76 is a diagram showing pixels of data on which a thin line image is projected.
- FIG. 77 is a diagram illustrating the pixel values of the pixels in three columns in the data onto which the image of the thin line is projected.
- FIG. 78 is a diagram showing pixels of data on which a thin line image is projected.
- FIG. 79 is a diagram illustrating pixel values of pixels in three columns in data obtained by projecting a thin line image.
- FIG. 80 is a diagram showing pixels of data on which a thin line image is projected.
- FIG. 81 is a diagram showing pixel values of pixels in three columns in data onto which a thin line image is projected.
- FIG. 82 is a diagram illustrating an example of an input image.
- FIG. 83 is a diagram illustrating an example of a processing result when an image is processed by adopting an incorrect direction.
- FIG. 84 is a diagram illustrating an example of a processing result when a correct direction of continuity is detected.
- FIG. 85 is a flowchart illustrating the process of detecting data continuity.
- FIG. 86 is a flowchart for explaining the process of detecting the direction of data continuity.
- FIG. 87 is a block diagram illustrating another configuration of the data continuity detecting unit.
- FIG. 88 is a block diagram showing the configuration of the real world estimation unit.
- FIG. 89 is a diagram illustrating a process of detecting the width of a thin line in a signal in the real world.
- FIG. 90 is a diagram illustrating a process of detecting the width of a thin line in a signal in the real world.
- FIG. 91 is a diagram illustrating a process of estimating the level of a thin-line signal in a real-world signal.
- FIG. 92 is a flowchart illustrating the process of estimating the real world.
- FIG. 93 is a block diagram illustrating another configuration of the real world estimation unit.
- FIG. 94 is a block diagram illustrating a configuration of the boundary detection unit.
- FIG. 95 is a diagram for explaining the process of calculating the distribution ratio.
- FIG. 96 is a diagram for explaining the process of calculating the distribution ratio.
- FIG. 97 is a view for explaining the process of calculating the distribution ratio.
- FIG. 98 is a diagram illustrating a process of calculating a regression line indicating a boundary of a monotonous increase / decrease region.
- FIG. 99 is a diagram illustrating a process of calculating a regression line indicating a boundary of a monotonous increase / decrease region.
- FIG. 100 is a flowchart illustrating the process of estimating the real world.
- FIG. 101 is a flowchart illustrating the process of boundary detection.
- FIG. 102 is a block diagram illustrating a configuration of a real world estimating unit that estimates a differential value in a spatial direction as real world estimation information.
- FIG. 103 is a flowchart illustrating a process of real world estimation by the real world estimation unit in FIG. 102.
- FIG. 104 is a diagram illustrating a reference pixel.
- FIG. 105 is a view for explaining positions where differential values in the spatial direction are obtained.
- FIG. 106 is a diagram illustrating the relationship between the differential value in the spatial direction and the shift amount.
- FIG. 107 is a block diagram illustrating a configuration of a real world estimating unit that estimates the inclination in the spatial direction as real world estimation information.
- FIG. 108 is a flowchart for explaining the process of real world estimation by the real world estimation unit in FIG. 107.
- FIG. 109 is a view for explaining the processing for obtaining the inclination in the spatial direction.
- FIG. 110 is a diagram illustrating a process of obtaining a spatial inclination.
- FIG. 111 is a block diagram illustrating a configuration of a real world estimating unit that estimates a differential value in the frame direction as real world estimation information.
- FIG. 112 is a flowchart for explaining the processing of the real world estimation by the real world estimating unit of FIG. 111.
- FIG. 113 is a diagram illustrating a reference pixel.
- FIG. 114 is a diagram for explaining a position where a differential value in the frame direction is obtained.
- FIG. 115 is a diagram for explaining the relationship between the differential value in the frame direction and the shift amount.
- FIG. 116 is a block diagram illustrating a configuration of a real world estimating unit that estimates the inclination in the frame direction as real world estimation information.
- FIG. 117 is a flowchart for explaining the processing of real world estimation by the real world estimation unit in FIG. 116.
- FIG. 118 is a view for explaining the processing for obtaining the inclination in the frame direction.
- FIG. 119 is a view for explaining the processing for obtaining the inclination in the frame direction.
- FIG. 120 is a diagram for explaining the principle of the function approximation method, which is an example of the embodiment of the real world estimation unit in FIG. 3.
- FIG. 121 is a view for explaining the integration effect when the sensor is CCD.
- FIG. 122 is a view for explaining a specific example of the integration effect of the sensor of FIG. 121.
- FIG. 123 is a diagram for explaining another specific example of the integration effect of the sensor of FIG. 121.
- FIG. 124 is a diagram showing the real world region containing fine lines shown in FIG.
- FIG. 125 is a diagram for explaining the principle of an example of the embodiment of the real world estimation unit in FIG. 3 in comparison with the example in FIG.
- FIG. 126 is a diagram showing the thin-line-containing data area shown in FIG.
- FIG. 127 is a graph in which each pixel value included in the thin line containing data area of FIG. 126 is graphed.
- FIG. 128 is a graph of an approximation function approximating each pixel value included in the thin line containing data area of FIG. 127.
- FIG. 129 is a diagram for describing the continuity in the spatial direction of the fine-line-containing real world region shown in FIG.
- FIG. 130 is a graph in which each pixel value included in the thin line containing data area of FIG. 126 is graphed.
- FIG. 131 is a diagram illustrating a state in which each of the input pixel values shown in FIG. 130 is shifted by a predetermined shift amount.
- FIG. 132 is a graph showing an approximation function that approximates each pixel value included in the thin-line-containing data region of FIG. 127 in consideration of the spatial continuity.
- FIG. 133 is a diagram illustrating the spatial mixing region.
- FIG. 134 is a view for explaining an approximation function that approximates a real-world signal in the spatial mixing region.
- FIG. 135 is a graph of an approximation function that approximates the real-world signal corresponding to the thin-line-containing data area in FIG. 127, taking into account both the integration characteristics of the sensor and the continuity in the spatial direction.
- FIG. 136 is a block diagram illustrating a configuration example of a real-world estimator that uses a first-order polynomial approximation method among function approximation methods having the principle shown in FIG.
- FIG. 137 is a flowchart for explaining the real world estimation process executed by the real world estimation unit having the configuration of FIG. 136.
- FIG. 138 is a view for explaining the tap range.
- FIG. 139 is a view for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 140 is a view for explaining the integration effect when the sensor is CCD.
- FIG. 141 illustrates the distance in the cross-sectional direction.
- FIG. 142 is a block diagram illustrating a configuration example of a real-world estimator that uses a quadratic polynomial approximation method among function approximation methods having the principle shown in FIG.
- FIG. 143 is a flowchart illustrating the estimation processing of the real world executed by the real world estimation unit having the configuration of FIG. 142.
- FIG. 144 is a view for explaining the tap range.
- FIG. 145 is a diagram for explaining the direction of continuity in the spatiotemporal direction.
- FIG. 146 is a diagram for explaining the integration effect when the sensor is a CCD.
- FIG. 147 is a view for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 148 is a view for explaining signals in the real world having stationarity in the space-time direction.
- FIG. 149 is a block diagram illustrating a configuration example of a real-world estimator that uses a three-dimensional function approximation method among function approximation methods having the principle shown in FIG.
- FIG. 150 is a flowchart illustrating the estimation processing of the real world executed by the real world estimation unit having the configuration of FIG. 149.
- FIG. 151 illustrates the principle of the reintegration method, which is an example of the embodiment of the image generation unit in FIG.
- FIG. 152 is a diagram illustrating an example of an input pixel and an approximation function that approximates a real-world signal corresponding to the input pixel.
- FIG. 153 is a view for explaining an example of creating four high-resolution pixels in one input pixel shown in FIG. 152 from the approximation function shown in FIG. 152.
- FIG. 154 is a block diagram illustrating a configuration example of an image generation unit that uses a one-dimensional reintegration method among the reintegration methods having the principle shown in FIG.
- FIG. 155 is a flowchart illustrating an image generation process performed by the image generation unit having the configuration of FIG. 154.
- FIG. 156 is a diagram illustrating an example of the original image of the input image.
- FIG. 157 is a diagram illustrating an example of image data corresponding to the image of FIG. 156.
- FIG. 158 is a diagram illustrating an example of an input image.
- FIG. 159 is a diagram illustrating an example of image data corresponding to the image of FIG. 158.
- FIG. 160 is a diagram illustrating an example of an image obtained by performing a conventional classification adaptive process on an input image.
- FIG. 161 is a diagram illustrating an example of image data corresponding to the image in FIG. 160.
- FIG. 162 is a diagram illustrating an example of an image obtained by performing processing of the one-dimensional reintegration method of the present invention on an input image.
- FIG. 163 is a diagram illustrating an example of image data corresponding to the image of FIG. 162.
- FIG. 164 is a view for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 165 is a block diagram illustrating a configuration example of an image generation unit that uses a two-dimensional reintegration method among the reintegration methods having the principle shown in FIG.
- FIG. 166 is a diagram for explaining the distance in the cross-sectional direction.
- FIG. 167 is a flowchart illustrating the image generation processing executed by the image generation unit having the configuration of FIG. 165.
- FIG. 168 is a diagram illustrating an example of the input pixel.
- FIG. 169 is a view for explaining an example of creating four high-resolution pixels in one input pixel shown in FIG. 168 by the two-dimensional reintegration method.
- FIG. 170 is a diagram for explaining the direction of continuity in the spatiotemporal direction.
- FIG. 171 is a block diagram illustrating a configuration example of an image generation unit that uses a three-dimensional reintegration method among the reintegration methods having the principle shown in FIG.
- FIG. 172 is a flowchart illustrating an image generation process performed by the image generation unit having the configuration of FIG. 171.
- FIG. 173 is a block diagram showing another configuration of the image generating unit to which the present invention is applied.
- FIG. 174 is a flowchart illustrating the process of generating an image by the image generating unit in FIG. 173.
- FIG. 175 is a view for explaining a process of generating a quadruple-density pixel from input pixels.
- FIG. 176 is a diagram illustrating a relationship between an approximate function indicating a pixel value and a shift amount.
- FIG. 177 is a block diagram illustrating another configuration of the image generation unit to which the present invention has been applied.
- FIG. 178 is a flowchart illustrating the process of generating an image by the image generating unit in FIG. 177.
- FIG. 179 is a diagram illustrating a process of generating a quadruple-density pixel from an input pixel.
- FIG. 180 is a diagram illustrating a relationship between an approximate function indicating a pixel value and a shift amount.
- FIG. 181 is a block diagram illustrating an example of the configuration of an image generation unit that uses the one-dimensional reintegration method of the class classification adaptive processing correction method, which is an example of the embodiment of the image generation unit in FIG. 3.
- FIG. 182 is a block diagram illustrating a configuration example of the class classification adaptive processing unit of the image generation unit in FIG. 181.
- FIG. 183 is a block diagram illustrating a configuration example of a class classification adaptive processing unit in FIG. 181, and a learning device that determines coefficients used by the class classification adaptive processing correction unit by learning.
- FIG. 184 illustrates a detailed configuration example of the learning unit for class classification adaptive processing in FIG. 183.
- FIG. 185 is a diagram illustrating an example of a processing result of the class classification adaptive processing unit in FIG. 182.
- FIG. 186 is a diagram showing a difference image between the predicted image in FIG. 185 and the HD image.
- FIG. 187 is a diagram showing plots of specific pixel values of the HD image of FIG. 185, specific pixel values of the SD image, and actual waveforms (signals of the real world), corresponding to the four HD pixels from the left in the figure among the six HD pixels continuous in the X direction included in the region shown in FIG. 186.
- FIG. 188 is a diagram showing a difference image between the predicted image shown in FIG. 185 and the HD image.
- FIG. 189 is a diagram showing plots of specific pixel values of the HD image of FIG. 185, specific pixel values of the SD image, and actual waveforms (signals of the real world), corresponding to the four HD pixels from the left in the figure among the six HD pixels continuous in the X direction included in the region shown in FIG. 188.
- FIG. 190 is a view for explaining the knowledge obtained based on the contents shown in FIG. 187 to FIG. 189.
- FIG. 191 is a block diagram illustrating a configuration example of the class classification adaptive processing correction unit of the image generation unit in FIG.
- FIG. 192 is a block diagram illustrating a detailed configuration example of the class classification adaptive processing correction learning unit in FIG.
- FIG. 193 is a view for explaining the in-pixel inclination.
- FIG. 194 is a diagram illustrating the SD image in FIG. 185 and a feature amount image in which the in-pixel inclination of each pixel of the SD image is used as a pixel value.
- FIG. 195 is a view for explaining a method of calculating the in-pixel inclination.
- FIG. 196 is a view for explaining a method for calculating the in-pixel inclination.
- FIG. 197 is a flowchart illustrating an image generation process performed by the image generation unit having the configuration in FIG.
- FIG. 198 is a flowchart for explaining the details of the input image class classification adaptive processing in the image generation processing in FIG. 197.
- FIG. 199 is a flowchart for explaining the details of the correction processing of the class classification adaptive processing in the image generation processing of FIG. 197.
- FIG. 200 is a diagram illustrating an example of the arrangement of class taps.
- FIG. 201 is a diagram illustrating an example of the class classification.
- FIG. 202 is a diagram illustrating an example of a prediction tap arrangement.
- FIG. 203 is a flowchart illustrating the learning processing of the learning device in FIG.
- FIG. 204 is a flowchart illustrating details of the learning process for the class classification adaptive process in the learning process of FIG.
- FIG. 205 is a flowchart for explaining the details of the learning process for class classification adaptive processing correction in the learning processing of FIG.
- FIG. 206 is a diagram showing the predicted image of FIG. 185 and an image obtained by adding the corrected image to the predicted image (the image generated by the image generation unit of FIG. 181).
- FIG. 207 is a block diagram illustrating a first configuration example of a signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG. 1.
- FIG. 208 is a block diagram illustrating a configuration example of an image generation unit that performs the classification adaptive processing in the signal processing device of FIG.
- FIG. 209 is a block diagram illustrating a configuration example of a learning device for the image generation unit in FIG.
- FIG. 210 is a flowchart illustrating signal processing executed by the signal processing device having the configuration of FIG.
- FIG. 211 is a flowchart illustrating the details of the execution processing of the class classification adaptive processing of the signal processing of FIG.
- FIG. 212 is a flowchart illustrating the learning processing of the learning device in FIG. 209.
- FIG. 213 is a block diagram illustrating a second configuration example of the signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG. 1.
- FIG. 214 is a flowchart illustrating signal processing executed by the signal processing device having the configuration of FIG.
- FIG. 215 is a block diagram illustrating a third configuration example of the signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG.
- FIG. 216 is a flowchart illustrating signal processing executed by the signal processing device having the configuration of FIG.
- FIG. 217 is a block diagram illustrating a fourth example of the configuration of the signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG.
- FIG. 218 is a flowchart illustrating signal processing executed by the signal processing device having the configuration of FIG.
- FIG. 219 is a block diagram illustrating a fifth configuration example of the signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG.
- FIG. 220 is a flowchart illustrating signal processing executed by the signal processing device having the configuration of FIG.
- FIG. 221 is a block diagram showing a configuration of another embodiment of the data continuity detecting unit.
- FIG. 222 is a flowchart illustrating the data continuity detection processing by the data continuity detection unit in FIG. 221.

BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 illustrates the principle of the present invention.
- Real world 1 events include light (image), sound, pressure, temperature, mass, density, lightness / darkness, or smell.
- Events in the real world 1 are distributed in the spatiotemporal direction.
- the image of the real world 1 is the distribution of the light intensity of the real world 1 in the spatiotemporal direction.
- The sensor 2 converts information indicating an event of the real world 1 into data 3. That is, a signal that is information indicating an event (phenomenon) in the real world 1 having dimensions such as space, time, and mass is acquired by the sensor 2 and converted into data.
- the distribution of events such as images, sound, pressure, temperature, mass, density, brightness / darkness, or smell in the real world 1 is also referred to as a signal that is information indicating an event of the real world 1.
- a signal that is information indicating an event in the real world 1 is also simply referred to as a signal in the real world 1.
- a signal includes a phenomenon or an event, and includes a signal that the transmission side does not intend.
- the data 3 (detection signal) output from the sensor 2 is information obtained by projecting information indicating an event of the real world 1 to a lower-dimensional space-time than the real world 1.
- data 3 which is image data of a moving image, is obtained by projecting an image in the three-dimensional spatial direction and the temporal direction of the real world 1 into a two-dimensional spatial direction and a spatiotemporal direction consisting of the temporal direction.
- For example, when data 3 is digital data, the information in data 3 is rounded according to the sampling unit.
- When data 3 is analog data, the information in data 3 is compressed, or a part of the information is deleted, by a limiter or the like according to the dynamic range.
- Nevertheless, data 3 contains significant information for estimating the signal that is information indicating an event (phenomenon) in the real world 1.
- information having stationarity included in data 3 is used as significant information for estimating a signal which is information of the real world 1.
- Stationarity is a newly defined concept.
- the event of the real world 1 includes a certain feature in a direction of a predetermined dimension.
- a shape, a pattern, a color, or the like is continuous in a space direction or a time direction, or a pattern of a shape, a pattern, or a color is repeated.
- the information indicating the event of the real world 1 includes a certain feature in the direction of the predetermined dimension.
- A linear object such as a thread, a string, or a rope has a constant cross-sectional shape at an arbitrary position in the longitudinal direction, that is, a feature that is constant in the longitudinal direction.
- the constant feature in the spatial direction that the cross-sectional shape is the same at an arbitrary position in the length direction arises from the feature that the linear object is long. Therefore, the image of the linear object has a certain feature in the longitudinal direction, that is, in the spatial direction, that the cross-sectional shape is the same at an arbitrary position in the longitudinal direction.
- Similarly, a single-color object, which is a tangible object extending in the spatial direction, and an image of such an object, have the constant feature in the spatial direction of being the same color at an arbitrary position.
- the signal of the real world 1 has a certain characteristic in the direction of the predetermined dimension.
- continuity such a feature that is fixed in the direction of the predetermined dimension is called continuity.
- the continuity of a signal in the real world 1 (real world) refers to a characteristic of a signal indicating an event in the real world 1 (real world), which is constant in a predetermined dimension.
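This "feature constant in a predetermined dimension" can be sketched numerically. The cross-section function, the slope, and all numbers below are illustrative assumptions, not taken from the patent: a thin-line image is modeled as F(x, y) = f(x - slope * y), so that its cross-sectional shape is identical at an arbitrary position along its length.

```python
import math

# Illustrative sketch (not from the patent): a thin-line image whose
# cross-sectional shape is the same at an arbitrary position in the
# length direction, modeled as F(x, y) = f(x - slope * y).
def cross_section(x):
    # narrow bump standing in for the line's intensity profile
    return math.exp(-(x ** 2) / 0.02)

slope = 0.5  # assumed drift of the line in x per unit y

def F(x, y):
    return cross_section(x - slope * y)

xs = [i * 0.01 - 2.0 for i in range(401)]
# Sampling at y = 1 with the x axis shifted by slope * 1 reproduces the
# cross-section at y = 0: the feature that is constant in the spatial
# direction, which the text calls continuity (stationarity).
p0 = [F(x, 0.0) for x in xs]
p1 = [F(x + slope * 1.0, 1.0) for x in xs]
same = all(abs(a - b) < 1e-12 for a, b in zip(p0, p1))
print(same)  # True
```

The point of the sketch is only that the two sampled profiles coincide after the shift, which is exactly the constancy in a predetermined dimension described above.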
- Data 3 is obtained by the sensor 2 projecting a signal indicating information of an event of the real world 1 having a predetermined dimension; thus, the continuity of the real-world signal is included in the data 3.
- Data 3 can also be said to include the stationarity of the real-world signal projected.
- However, a part of the continuity included in the signal of the real world 1 (real world) is missing from the data 3.
- data 3 includes, as data continuity, a part of the continuity of the signal of the real world 1 (real world).
- the data continuity is a feature of data 3 that is constant in a predetermined dimension direction.
- data continuity of data 3 is used as significant information for estimating a signal that is information indicating an event in the real world 1.
- information indicating a missing event of the real world 1 is generated by performing signal processing on the data 3 using the stationarity of the data.
- the continuity in the spatial direction or the temporal direction among the length (space), time, and mass of a signal which is information indicating an event in the real world 1, is used.
- The sensor 2 is composed of, for example, a digital still camera or a video camera, captures an image of the real world 1, and outputs the obtained data 3, which is image data, to the signal processing device 4.
- the sensor 2 can be, for example, a thermography device or a pressure sensor using photoelasticity.
- the signal processing device 4 is composed of, for example, a personal computer.
- The signal processing device 4 is configured, for example, as shown in FIG. 2. A CPU (Central Processing Unit) 21 executes various processes according to a program stored in a ROM (Read Only Memory) 22 or a storage unit 28.
- A RAM (Random Access Memory) 23 appropriately stores programs executed by the CPU 21 and data.
- ROM 22 and RAM 23 are interconnected by a bus 24.
- An input / output interface 25 is also connected to the CPU 21 via a bus 24.
- The input/output interface 25 is connected to an input unit 26 composed of a keyboard, a mouse, a microphone, and the like, and an output unit 27 composed of a display, a speaker, and the like.
- the CPU 21 executes various processes in response to a command input from the input unit 26. Then, the CPU 21 outputs an image, a sound, or the like obtained as a result of the processing to the output unit 27.
- the storage unit 28 connected to the input / output interface 25 is composed of, for example, a hard disk, and stores programs executed by the CPU 21 and various data.
- the communication unit 29 communicates with external devices via the Internet or other networks. In the case of this example, the communication unit 29 functions as an acquisition unit that takes in the data 3 output from the sensor 2.
- a program may be acquired via the communication unit 29 and stored in the storage unit 28.
- The drive 30 connected to the input/output interface 25 drives a magnetic disk 51, an optical disk 52, a magneto-optical disk 53, or a semiconductor memory 54 when one of them is mounted, and acquires programs and data recorded thereon.
- the acquired programs and data are transferred to and stored in the storage unit 28 as necessary.
- FIG. 3 is a block diagram showing the signal processing device 4.
- each function of the signal processing device 4 is realized by hardware or software. That is, each block diagram in this specification may be considered as a hardware block diagram or a function block diagram by software.
- FIG. 3 is a diagram showing a configuration of the signal processing device 4 which is an image processing device.
- the input image (image data as an example of the data 3) input to the signal processing device 4 is supplied to the data continuity detecting unit 101 and the real world estimating unit 102.
- The data continuity detecting unit 101 detects data continuity from the input image, and supplies data continuity information indicating the detected continuity to the real world estimating unit 102 and the image generating unit 103.
- The data continuity information includes, for example, the position of a pixel region having data continuity in the input image, the direction of that pixel region (the angle or inclination in the time direction and the spatial direction), or the length of the pixel region having data continuity. Details of the configuration of the data continuity detecting unit 101 will be described later.
- the real world estimating unit 102 estimates the signal of the real world 1 based on the input image and the data continuity information supplied from the data continuity detecting unit 101.
- the real-world estimating unit 102 estimates an image, which is a real-world signal, incident on the sensor 2 when the input image is acquired.
- the real world estimation unit 102 supplies real world estimation information indicating the result of estimation of the signal of the real world 1 to the image generation unit 103. Details of the configuration of the real world estimation unit 102 will be described later.
- The image generation unit 103 generates a signal closer to the signal of the real world 1 based on the real world estimation information indicating the estimated signal of the real world 1 supplied from the real world estimation unit 102, and outputs the generated signal.
- The image generation unit 103 generates a signal closer to the signal of the real world 1 based on the data continuity information supplied from the data continuity detection unit 101 and the real world estimation information indicating the estimated signal of the real world 1 supplied from the real world estimation unit 102, and outputs the generated signal.
- the image generation unit 103 generates an image that is closer to the image of the real world 1 based on the real world estimation information, and outputs the generated image as an output image.
- For example, the image generation unit 103 generates an image that approximates the image of the real world 1 based on the data continuity information and the real world estimation information, and outputs the generated image as an output image.
- For example, based on the real world estimation information, the image generation unit 103 integrates the estimated image of the real world 1 over a desired range in the spatial direction or the time direction, thereby generating an image with a higher resolution in the spatial direction or the time direction than the input image, and outputs the generated image as an output image. For example, the image generation unit 103 generates an image by extrapolation, and outputs the generated image as an output image.
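The integration-based generation described above can be sketched numerically. The function f, the pixel width L, and all numbers below are illustrative assumptions, not the patent's actual estimated real-world signal: an input pixel value is taken as the normalized integral of f over the pixel width, and integrating f over each half of that width yields two double-density output pixels consistent with the input pixel.

```python
# Hypothetical sketch: given an approximating function f(x) standing in
# for the estimated real-world light signal, an input pixel value is the
# normalized integral of f over the pixel width L; integrating over the
# two halves of that range yields two double-density output pixels.
def integrate(f, a, b, n=1000):
    # simple midpoint-rule numeric integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 3.0 * x * x + 1.0  # stand-in for the estimated signal
L = 1.0
x0 = 2.0

input_pixel = integrate(f, x0, x0 + L) / L            # sensor's value
left_half = integrate(f, x0, x0 + L / 2) / (L / 2)    # output pixel 1
right_half = integrate(f, x0 + L / 2, x0 + L) / (L / 2)  # output pixel 2

# The two high-resolution pixels average back to the input pixel value.
print(round(input_pixel, 3), round(left_half, 3), round(right_half, 3))
```

The consistency shown here (the halves averaging to the original value) is the property that lets integration over smaller ranges produce a higher-resolution image than the input.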
- FIG. 4 is a diagram for explaining the principle of processing in the conventional signal processing device 121.
- The conventional signal processing device 121 uses data 3 as the reference for processing, and performs processing such as high-resolution processing on data 3 as the processing target.
- In the conventional signal processing device 121, the real world 1 is not considered; the data 3 is the final criterion, and it is not possible to obtain, as output, more information than the information contained in the data 3.
- In addition, the conventional signal processing device 121 does not consider at all the distortion caused by the sensor 2 (the difference between the signal which is the information of the real world 1 and the data 3) existing in the data 3, so the conventional signal processing device 121 outputs a signal containing that distortion. Further, depending on the content of the processing of the signal processing device 121, the distortion caused by the sensor 2 existing in the data 3 is further amplified, and data including the amplified distortion is output.
- the processing is executed in consideration of (the signal of) the real world 1 itself.
- FIG. 5 is a diagram illustrating the principle of processing in the signal processing device 4 according to the present invention.
- The point that the sensor 2 acquires a signal which is information indicating an event of the real world 1, and outputs data 3 obtained by projecting that signal, is the same as in the conventional case.
- a signal which is information indicating an event of the real world 1 acquired by the sensor 2 is explicitly considered.
- the signal processing is performed while being aware that the data 3 includes the distortion caused by the sensor 2 (the difference between the signal which is the information of the real world 1 and the data 3).
- As a result, the result of the processing is not limited by the information and distortion included in the data 3; it is possible to obtain a more accurate and higher-precision processing result for the event. That is, according to the present invention, a more accurate and higher-precision processing result can be obtained for a signal, input to the sensor 2, that is information indicating an event of the real world 1.
- FIGS. 6 and 7 are diagrams for explaining the principle of the present invention more specifically.
- A signal of the real world 1, which is an image, is formed, by an optical system 141 composed of a lens, an optical LPF (Low Pass Filter), and the like, on the light receiving surface of a CCD (Charge Coupled Device), which is an example of the sensor 2. Since the CCD, which is an example of the sensor 2, has an integration characteristic, the data 3 output from the CCD differs from the image of the real world 1. Details of the integration characteristic of the sensor 2 will be described later.
- the relationship between the image of the real world 1 acquired by the CCD and the data 3 captured and output by the CCD is clearly considered. That is, the relationship between the data 3 and the signal that is the real-world information acquired by the sensor 2 is clearly considered.
- The signal processing device 4 approximates (describes) the real world 1 using a model 161.
- The model 161 is represented by, for example, N variables. More precisely, the model 161 approximates (describes) the signal of the real world 1.
- In order to predict the model 161, the signal processing device 4 extracts M pieces of data 162 from the data 3.
- the signal processing device 4 uses the continuity of the data included in the data 3.
- The signal processing device 4 extracts the data 162 for predicting the model 161 based on the stationarity of the data included in the data 3.
- The model 161 is constrained by the stationarity of the data. That is, the model 161 approximates information (a signal) of an event of the real world 1 having stationarity (a feature that is constant in a predetermined dimension) which gives rise to the data stationarity in the data 3 when acquired by the sensor 2.
- If the number M of the data 162 is N or greater, the model 161 represented by the N variables can be predicted from the M pieces of data 162. By thus predicting the model 161 that approximates (describes) (the signal of) the real world 1, the signal processing device 4 can take into account the signal that is the information of the real world 1.
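Predicting an N-variable model from M pieces of data can be sketched as an ordinary least-squares problem. Everything below is an illustrative assumption, not the patent's actual model: the signal is approximated by a line with N = 2 variables, f(x) = w0 + w1 * x, and those variables are predicted from M = 3 observed pixel values, each taken as the average of f over a unit-width pixel (for a linear f, that average equals f at the pixel center).

```python
# Hypothetical sketch: model with N = 2 variables, f(x) = w0 + w1 * x,
# predicted from M = 3 pixel values (averages of f over unit pixels).
# For a linear f, the average over [a, a+1] equals f(a + 0.5).

pixels = [(0.0, 1.5), (1.0, 2.5), (2.0, 3.5)]  # (left edge a, observed value)

# Normal equations for least squares: minimize sum (w0 + w1*c - p)^2
# over the pixel centers c = a + 0.5.
s00 = s01 = s11 = b0 = b1 = 0.0
for a, p in pixels:
    c = a + 0.5
    s00 += 1.0; s01 += c; s11 += c * c
    b0 += p;   b1 += c * p

det = s00 * s11 - s01 * s01
w0 = (s11 * b0 - s01 * b1) / det
w1 = (s00 * b1 - s01 * b0) / det
print(round(w0, 6), round(w1, 6))  # recovers w0 = 1.0, w1 = 1.0
```

With M = 3 greater than N = 2, the system is overdetermined and least squares recovers the underlying model exactly for this noiseless illustration, which is the M >= N condition stated above.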
- An image sensor such as a CCD or a complementary metal-oxide semiconductor (CMOS) sensor, which captures an image, projects a signal, which is information of the real world, into two-dimensional data when imaging the real world.
- Each pixel of the image sensor has a predetermined area as a so-called light receiving surface (light receiving area). Light incident on a light receiving surface having a predetermined area is integrated in the spatial direction and the time direction for each pixel, and is converted into one pixel value for each pixel.
- the image sensor captures an image of an object in the real world, and outputs image data obtained as a result of the capture in units of one frame. That is, the image sensor acquires the signal of the real world 1, which is the light reflected by the object of the real world 1, and outputs the data 3.
- an image sensor outputs 30 frames of image data per second.
- the exposure time of the image sensor can be set to 1/30 seconds.
- the exposure time is a period from the time when the image sensor starts converting the incident light into electric charges to the time when the conversion of the incident light into electric charges ends.
- the exposure time is also referred to as a shutter time.
- FIG. 8 is a diagram illustrating an example of the arrangement of pixels on the image sensor.
- A to I indicate individual pixels. The pixels are arranged on a plane corresponding to the image displayed by the image data.
- One detection element corresponding to one pixel is arranged on the image sensor. When the image sensor captures an image of the real world 1, one detection element outputs one pixel value corresponding to one pixel constituting the image data.
- The position of the detection element in the spatial direction X (X coordinate) corresponds to the horizontal position on the image displayed by the image data, and the position of the detection element in the spatial direction Y (Y coordinate) corresponds to the vertical position on the image displayed by the image data.
- the distribution of the light intensity of the real world 1 has a spread in the three-dimensional spatial direction and the temporal direction, but the image sensor acquires the light of the real world 1 in the two-dimensional spatial direction and the temporal direction, Generates data 3 representing the distribution of light intensity in the two-dimensional spatial and temporal directions.
- The detection element, which is a CCD, converts light input to the light receiving surface (light receiving area) (detection area) into electric charge for a period corresponding to the shutter time, and accumulates the converted electric charge.
- Light is the information (signal) in the real world 1 whose intensity is determined by its position in three-dimensional space and time.
- the distribution of light intensity in the real world 1 is a function with variables x, y, and z in three-dimensional space, and time t.
- The amount of electric charge stored in the detection element, which is a CCD, is almost proportional to the intensity of the light incident on the entire light receiving surface having a two-dimensional spatial extent, and to the time during which the light is incident.
- the detection element adds the electric charge converted from the light incident on the entire light receiving surface to the already accumulated electric charge in a period corresponding to the shutter time.
- The detection element integrates light incident on the entire light receiving surface having a two-dimensional spatial spread over a period corresponding to the shutter time, and accumulates an amount of charge corresponding to the integrated light. It can be said that the detection element has an integrating effect on space (the light receiving surface) and time (the shutter time).
- The electric charge accumulated in the detection element is converted into a voltage value by a circuit (not shown), and the voltage value is further converted into a pixel value such as digital data and output as data 3. Therefore, each pixel value output from the image sensor is a value projected onto a one-dimensional space: the result of integrating a portion of the real world 1 information (signal) having a temporal and spatial spread, in the time direction over the shutter time and in the spatial direction over the light receiving surface of the detection element.
- the pixel value of one pixel is represented by integration of F (x, y, t).
- F (x, y, t) is a function representing the distribution of light intensity on the light receiving surface of the detection element.
- The pixel value P is represented by Expression (1):

P = \int_{t_1}^{t_2} \int_{y_1}^{y_2} \int_{x_1}^{x_2} F(x, y, t) \, dx \, dy \, dt \quad \cdots (1)
- In Expression (1), x_1 is the spatial coordinate (X coordinate) of the left boundary of the light receiving surface of the detection element.
- x_2 is the spatial coordinate (X coordinate) of the right boundary of the light receiving surface of the detection element.
- y_1 is the spatial coordinate (Y coordinate) of the upper boundary of the light receiving surface of the detection element.
- y_2 is the spatial coordinate (Y coordinate) of the lower boundary of the light receiving surface of the detection element. t_1 is the time at which the conversion of incident light into charge starts. t_2 is the time at which the conversion of incident light into charge ends.
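The spatiotemporal integration of Expression (1) can be sketched numerically. The light distribution F and all bounds below are illustrative assumptions, not from the patent:

```python
# Numeric sketch of Expression (1) with an assumed light distribution
# F(x, y, t): the pixel value is the integral of F over the light
# receiving surface [x1, x2] x [y1, y2] and the shutter time [t1, t2].
def pixel_value(F, x1, x2, y1, y2, t1, t2, n=20):
    hx, hy, ht = (x2 - x1) / n, (y2 - y1) / n, (t2 - t1) / n
    total = 0.0
    for i in range(n):          # spatial direction X
        for j in range(n):      # spatial direction Y
            for k in range(n):  # time direction
                total += F(x1 + (i + 0.5) * hx,
                           y1 + (j + 0.5) * hy,
                           t1 + (k + 0.5) * ht)
    return total * hx * hy * ht  # midpoint-rule triple integral

# Uniform light of intensity 2 over a unit light receiving surface and a
# shutter time of 0.5 integrates to 2 * 1 * 1 * 0.5 = 1.0.
F = lambda x, y, t: 2.0
print(round(pixel_value(F, 0, 1, 0, 1, 0, 0.5), 9))  # 1.0
```

The single number returned is the projection onto a one-dimensional space described in the text: all spatial and temporal structure of F inside the pixel is collapsed into one value.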
- the gain of the pixel value of the image data output from the image sensor is corrected, for example, for the entire frame.
- Each pixel value of the image data is the integrated value of the light incident on the light receiving surface of each detection element of the image sensor; of the light incident on the image sensor, the waveform of the light of the real world 1 that is finer than the light receiving surface of the detection element is hidden by the pixel value, which is an integrated value.
- the waveform of a signal expressed with reference to a predetermined dimension is also simply referred to as a waveform.
- In the image data, since the image of the real world 1 is integrated in the spatial direction and the time direction in units of pixels, a part of the continuity of the image of the real world 1 is missing, and only another part of the continuity of the image of the real world 1 is included in the image data. Alternatively, the image data may include continuity that has changed from the continuity of the image of the real world 1.
- FIG. 10 is a diagram for explaining the relationship between the light incident on the detection elements corresponding to the pixels D to F and the pixel value.
- F (x) in FIG. 10 is an example of a function representing the distribution of light intensity in the real world 1 with the coordinate X in the spatial direction X in space (on the detection element) as a variable.
- C In other words, F (x) is an example of a function representing the distribution of light intensity in the real world 1 when it is constant in the spatial direction Y and the time direction.
- L indicates the length in the spatial direction X of the light receiving surface of the detection element corresponding to pixel D to pixel F.
- the pixel value of one pixel is represented by the integral of F (x).
- The pixel value P of the pixel E is represented by Expression (2):

P = \int_{x_1}^{x_2} F(x) \, dx \quad \cdots (2)
- In Expression (2), x_1 is the spatial coordinate in the spatial direction X of the left boundary of the light receiving surface of the detection element corresponding to the pixel E.
- x_2 is the spatial coordinate in the spatial direction X of the right boundary of the light receiving surface of the detection element corresponding to the pixel E.
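The spatial integration effect of Expression (2) can be sketched numerically. The light distribution F(x), the pixel width, and the line position below are illustrative assumptions, not from the patent: a bright line narrower than the light receiving surface is smeared into a smaller, pixel-wide value.

```python
# Sketch of Expression (2) with an assumed F(x): a pixel value is the
# integral of F over the width L of the light receiving surface of its
# detection element.
def pixel_value(F, left, L, n=1000):
    h = L / n  # midpoint-rule numeric integration
    return sum(F(left + (i + 0.5) * h) for i in range(n)) * h

# Thin line of width 0.2 and intensity 1.0 on a dark background, lying
# inside pixel E, whose light receiving surface spans [1.0, 2.0].
F = lambda x: 1.0 if 1.4 <= x <= 1.6 else 0.0

for name, left in (("D", 0.0), ("E", 1.0), ("F", 2.0)):
    print(name, round(pixel_value(F, left, 1.0), 3))
# Pixel E's value is 0.2, not 1.0: the light waveform finer than the
# light receiving surface is hidden in the integrated value.
```

This is the mechanism by which the waveform of a linear object narrower than a pixel is hidden, as discussed for FIG. 12 through FIG. 14 below.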
- FIG. 11 is a diagram illustrating the relationship between the passage of time, the light incident on the detection element corresponding to one pixel, and the pixel value.
- F (t) in FIG. 11 is a function representing the distribution of light intensity in the real world 1 with time t as a variable.
- F (t) is an example of a function that represents the distribution of light intensity in the real world 1 when it is constant in the spatial direction Y and the spatial direction X.
- t_s indicates the shutter time.
- Frame #n-1 is a frame temporally preceding frame #n, and frame #n+1 is a frame temporally following frame #n. That is, frame #n-1, frame #n, and frame #n+1 are displayed in the order of frame #n-1, frame #n, and frame #n+1.
- The shutter time t_s and the frame interval are the same.
- the pixel value of one pixel is represented by the integral of F (t).
- For example, the pixel value P of the pixel in frame #n is represented by Expression (3):

P = \int_{t_1}^{t_2} F(t) \, dt \quad \cdots (3)
- In Expression (3), t_1 is the time at which the conversion of incident light into electric charge starts. t_2 is the time at which the conversion of incident light into electric charge ends.
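The time integration effect of Expression (3) can be sketched in the same way. The light history F(t) below is an illustrative assumption, not from the patent; the result is divided by the shutter time so it reads as an average intensity.

```python
# Sketch of Expression (3) with an assumed F(t): the pixel value
# integrates the incident light over the shutter time [t1, t2]. Light
# present for only part of the shutter time is averaged down, which is
# the time integration effect (the mechanism behind motion blur).
def average_intensity(F, t1, t2, n=1000):
    h = (t2 - t1) / n  # midpoint-rule numeric integration
    return sum(F(t1 + (i + 0.5) * h) for i in range(n)) * h / (t2 - t1)

shutter = 1.0 / 30.0  # shutter time equal to the frame interval
# Light of level 1.0 incident only during the first third of the shutter,
# e.g. a bright object that leaves the pixel partway through the frame.
F = lambda t: 1.0 if t < shutter / 3.0 else 0.0

print(round(average_intensity(F, 0.0, shutter), 3))  # 0.333
```

A full-intensity light present for one third of the shutter time is indistinguishable, in the pixel value, from a one-third-intensity light present for the whole shutter time: temporal structure finer than the shutter time is lost.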
- the integration effect in the spatial direction by the sensor 2 is simply referred to as the spatial integration effect
- the integration effect in the time direction by the sensor 2 is simply referred to as the time integration effect
- the spatial integration effect or the time integration effect is also simply referred to as an integration effect.
- FIG. 12 is a diagram illustrating an image of a linear object (for example, a thin line) in the real world 1, that is, an example of a light intensity distribution.
- the upper position in the figure indicates the light intensity (level)
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image.
- the position on the right in the middle indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the image of the linear object in the real world 1 has a certain continuity.
- that is, the image shown in Fig. 12 has the continuity that the cross-sectional shape (the change in level with respect to a change in position in the direction orthogonal to the length direction) is the same at an arbitrary position in the length direction.
- FIG. 13 is a diagram illustrating an example of pixel values of an image corresponding to the image illustrated in FIG. 12 and obtained by actual imaging.
- FIG. 14 is a schematic diagram of the image data shown in FIG.
- the schematic diagram shown in Fig. 14 is a schematic diagram of image data obtained by capturing, with an image sensor, an image of a linear object whose diameter is smaller than the length L of the light-receiving surface of each pixel and which extends in a direction deviating from the pixel array (the vertical or horizontal arrangement of pixels) of the image sensor. The image incident on the image sensor when the image data shown in Fig. 14 was acquired is the image of the linear object of the real world 1 shown in Fig. 12.
- the upper position in the figure indicates the pixel value
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image
- the position on the right in the figure indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the directions indicating the pixel values in FIG. 14 correspond to the level directions in FIG. 12, and the spatial direction X and the spatial direction Y in FIG. 14 are the same as the directions in FIG.
- when an image of a linear object whose diameter is shorter than the length of the light-receiving surface of each pixel is captured by the image sensor, the linear object is represented in the image data obtained as a result of the imaging, schematically, as a plurality of arc shapes (kamaboko shapes) of a predetermined length arranged, for example, diagonally.
- Each arc shape is almost the same.
- one arc shape is formed on one column of pixels vertically or on one row of pixels horizontally.
- one arc shape in FIG. 14 is formed on one column of pixels vertically.
- in this way, in the image data, the continuity that the image of the linear object in the real world 1 has, namely that the cross-sectional shape in the spatial direction Y is the same at an arbitrary position in the length direction, is lost.
- it can also be said that the continuity which the image of the linear object in the real world 1 had has changed into a continuity in which arc shapes of the same shape, formed on one column of pixels vertically or on one row of pixels horizontally, are arranged at regular intervals.
- FIG. 15 is a diagram showing an example of an image of the real world 1 of an object having a single color and a straight edge, which is a color different from the background, that is, an example of the distribution of light intensity.
- the upper position in the figure indicates the light intensity (level)
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image.
- the position on the right in the figure indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the image of the real world 1 of an object that has a single color different from the background and a linear edge has a predetermined continuity. That is, the image shown in FIG. 15 has the continuity that the cross-sectional shape (the change in level with respect to a change in position in the direction orthogonal to the edge) is the same at an arbitrary position in the length direction of the edge.
- FIG. 16 is a diagram showing an example of pixel values of image data obtained by actual imaging corresponding to the image shown in FIG. As shown in FIG. 16, the image data is composed of pixel values in units of pixels, and thus has a step-like shape.
- FIG. 17 is a schematic diagram of the image data shown in FIG.
- FIG. 17 is a schematic diagram of image data obtained by capturing, with an image sensor, an image of the real world 1 of an object that has a single color different from the background and a linear edge extending in a direction deviating from the pixel array (the vertical or horizontal arrangement of pixels) of the image sensor. The image incident on the image sensor when the image data shown in FIG. 17 was acquired is the image, shown in FIG. 15, of the real world 1 of the object having a single color different from the background and a linear edge.
- the upper position in the figure indicates the pixel value
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image
- the position on the right in the figure indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the direction indicating the pixel value in FIG. 17 corresponds to the direction of the level in FIG. 15, and the spatial direction X and the spatial direction Y in FIG. 17 are the same as the directions in FIG.
- when an image of the object having the linear edge is captured by the image sensor, the linear edge is represented in the image data obtained as a result of the imaging, schematically, as, for example, a plurality of claw shapes of a predetermined length arranged diagonally.
- Each claw shape is almost the same shape.
- one claw shape is formed on one column of pixels vertically or on one row of pixels horizontally. For example, in FIG. 17, one claw shape is formed vertically on one column of pixels.
- in this way, in the image data, the continuity that the image of the real world 1 of the object having a single color different from the background and a linear edge has, namely that the cross-sectional shape is the same at an arbitrary position in the length direction of the edge, is lost.
- it can also be said that this continuity has changed into a continuity in which claw shapes of the same shape, formed on one column of pixels vertically or on one row of pixels horizontally, are arranged at regular intervals.
- the data continuity detecting unit 101 detects such continuity of data included in, for example, data 3 which is an input image. For example, the data continuity detecting unit 101 detects data continuity by detecting a region having a certain feature in a predetermined dimension direction. For example, the data continuity detecting unit 101 detects a region shown in FIG. 14 in which the same arc shapes are arranged at regular intervals. Further, for example, the data continuity detecting unit 101 detects a region shown in FIG. 17 in which the same claw shapes are arranged at regular intervals.
- the data continuity detecting unit 101 detects data continuity by detecting an angle (inclination) in the spatial direction indicating a similar shape arrangement.
- further, for example, the data continuity detecting unit 101 detects the continuity of data by detecting angles (motions) in the spatial direction and the temporal direction that indicate the arrangement of similar shapes in the spatial direction and the temporal direction.
- the data continuity detecting unit 101 detects data continuity by detecting a length of an area having a certain characteristic in a direction of a predetermined dimension.
- the portion of the data 3 in which the image of the real world 1 of the object having a single color and having a linear edge and different from the background is projected by the sensor 2 is also referred to as a binary edge.
- desired high-resolution data 181 is generated from the data 3.
- the real world 1 is estimated from the data 3, and the high-resolution data 181 is generated based on the estimation result. That is, as shown in Figure 19, the real world 1 is estimated from the data 3, and the high-resolution data 181 is generated from the estimated real world 1 in consideration of the data 3.
- the sensor 2, which is a CCD, has an integration characteristic as described above. That is, one unit of the data 3 (for example, a pixel value) is a value obtained by integrating a signal of the real world 1 over the detection region (in the spatiotemporal direction) of a detection element of the sensor 2.
- by applying, to the estimated real world 1, the process by which a virtual high-resolution sensor projects the signal of the real world 1 onto data, the high-resolution data 181 can be obtained.
- in other words, if the signal of the real world 1 can be estimated from the data 3, then by integrating the signal of the real world 1 (in the spatiotemporal direction) for each detection region of the detection elements of the virtual high-resolution sensor, one value included in the high-resolution data 181 can be obtained.
- the data 3 cannot represent small changes of the signal of the real world 1. Therefore, by integrating the signal of the real world 1 estimated from the data 3 over regions that are small (in the spatiotemporal direction) relative to the changes of the signal of the real world 1, high-resolution data 181 indicating small changes of the signal of the real world 1 can be obtained.
- that is, high-resolution data 181 can be obtained by integrating the estimated signal of the real world 1 over each detection region of the virtual high-resolution sensor.
- accordingly, the image generation unit 103 generates the high-resolution data 181 by integrating the estimated signal of the real world 1 over the spatiotemporal region of each detection element of the virtual high-resolution sensor.
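As a one-dimensional illustration of this step (a sketch under assumed inputs, not the patent's implementation), an estimated signal f is integrated over narrower sub-regions of one original pixel span to produce the higher-resolution values:

```python
import numpy as np

def upsample_by_integration(f, x0, x1, factor, n=101):
    """Split one original pixel span [x0, x1] into `factor` narrower
    sub-regions (the detection regions of a virtual high-resolution
    sensor) and integrate the estimated signal f over each one."""
    edges = np.linspace(x0, x1, factor + 1)
    out = []
    for a, b in zip(edges[:-1], edges[1:]):
        xs = np.linspace(a, b, n)
        # average level times width approximates the integral
        out.append(float(np.mean(f(xs)) * (b - a)))
    return out

# For f(x) = x on [0, 2], the two sub-region integrals are 0.5 and 1.5,
# and they sum to the original pixel's integral, 2.0.
hi = upsample_by_integration(lambda x: x, 0.0, 2.0, 2)
```

Because each sub-region integral is taken over a narrower area than the original detection region, the output values resolve changes of the estimated signal that a single pixel value cannot.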
- the relation between the data 3 and the real world 1, the stationarity, and the spatial mixing in the data 3 are used.
- mixing means that in data 3, signals for two objects in the real world 1 are mixed into one value.
- Spatial mixing refers to spatial mixing of signals for two objects due to the spatial integration effect of the sensor 2.
- Real world 1 itself consists of an infinite number of phenomena, so in order to express real world 1 itself, for example, by mathematical formulas, an infinite number of variables are needed. From Data 3, it is not possible to predict all events in the real world 1.
- a part of the signal of the real world 1, which has continuity and can be represented by f(x, y, z, t), is approximated by a model 161 represented by N variables. Then, as shown in FIG. 22, the model 161 is predicted from M pieces of data 162 in the data 3.
- to allow the model 161 to be predicted from the M pieces of data 162, it is necessary, first, to represent the model 161 by N variables based on the continuity, and second, to formulate, based on the integration characteristics of the sensor 2, an expression using the N variables that shows the relationship between the model 161 represented by the N variables and the M pieces of data 162. Since the model 161 is represented by the N variables based on the continuity, it can be said that the expression using the N variables, which shows the relationship between the model 161 represented by the N variables and the M pieces of data 162, describes the relationship between the part of the signal of the real world 1 having continuity and the part of the data 3 having data continuity.
- in other words, the data continuity detecting unit 101 detects, in the data 3, the data continuity arising from the part of the signal of the real world 1 having continuity, together with the features of that data continuity.
- the edge has a slope.
- the arrow B in FIG. 23 indicates the edge inclination.
- the inclination of the predetermined edge can be represented by an angle with respect to a reference axis or a direction with respect to a reference position.
- the inclination of the predetermined edge can be represented by an angle between the coordinate axis in the spatial direction X and the edge.
- for example, the inclination of a predetermined edge can be represented by a direction indicated by a length in the spatial direction X and a length in the spatial direction Y.
- when an image of the real world 1 of an object that has a single color different from the background and a linear edge is acquired by the sensor 2, the image of the real world 1 is projected onto the data 3.
- in the data 3, claw shapes corresponding to the edge are arranged at the positions indicated by A' with respect to the position of interest (A) in FIG. 23, and, corresponding to the inclination B of the edge of the image of the real world 1, are arranged in the direction of the inclination indicated by B' in FIG. 23.
- the model 161 represented by N variables approximates the part of the signal of the real world 1 that causes the data continuity in the data 3.
- in setting up an expression, attention is paid to the values in the data 3 shown in Fig. 24 where data continuity occurs and which belong to the mixed region, and an equation is set up stating that such a value equals the value obtained by integrating the signal of the real world 1, which is output from a detection element of the sensor 2. A plurality of such equations can be set up for the plurality of values in the data 3 where data continuity occurs.
- in Fig. 24, A indicates the position of interest on the edge, and A' indicates the pixels (positions) with respect to the position of interest (A) in the image of the real world 1.
- here, the mixed region refers to a region of the data 3 in which the signals for two objects in the real world 1 are mixed into one value.
- for example, in the data 3 for the image of the real world 1 of the object having the linear edge, pixel values in which the image of the object having the linear edge and the image of the background are integrated belong to the mixed region.
- FIG. 25 is a diagram illustrating signals for two objects in the real world 1 and values belonging to a mixed area when an equation is formed.
- the left side of Fig. 25 shows the signals of the real world 1 for two objects in the real world 1, which have a predetermined spread in the spatial direction X and the spatial direction Y and are acquired in the detection region of one detection element of the sensor 2.
- the right side of FIG. 25 shows the pixel value P of one pixel of the data 3 onto which the signals of the real world 1 shown on the left side of FIG. 25 are projected by one detection element of the sensor 2. That is, it shows the pixel value P of one pixel of the data 3 onto which the signals of the real world 1 for the two objects in the real world 1, having a predetermined spread in the spatial direction X and the spatial direction Y and acquired by one detection element of the sensor 2, are projected.
- L in FIG. 25 indicates the signal level of the real world 1 in the white part of FIG. 25 for one object in the real world 1.
- R in FIG. 25 indicates the level of the signal of the real world 1 in the shaded portion of FIG. 25 with respect to another object in the real world 1.
- the mixing ratio α indicates the ratio of the (areas of the) signals for the two objects that are incident on the detection region, having a predetermined spread in the spatial direction X and the spatial direction Y, of one detection element of the sensor 2.
- for example, the mixing ratio α indicates the ratio, with respect to the area of the detection region of one detection element of the sensor 2, of the area of the level-L signal incident on the detection region of that detection element.
- the relationship between the level L, the level R, and the pixel value P can be expressed by Expression (4).
- here, the level R may be set to the pixel value of the data 3 located on the right side of the pixel of interest.
- the level L may be the pixel value of data 3 located on the left side of the pixel of interest.
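Assuming the usual linear form for the mixing relationship of Expression (4) (the formula itself is not reproduced in this passage, so this exact form is an assumption), the pixel value P and the recovery of the mixing ratio can be sketched as:

```python
def mixed_pixel(L, R, alpha):
    """Pixel value of a pixel straddling two objects: the levels L and R
    are mixed in proportion to the detector area each object covers
    (alpha is the fraction covered by the level-L object)."""
    return alpha * L + (1.0 - alpha) * R

def estimate_alpha(P, L, R):
    """Invert the mixing equation to recover alpha from a mixed pixel
    value and the two pure levels (requires L != R)."""
    return (P - R) / (L - R)

P = mixed_pixel(L=100.0, R=20.0, alpha=0.25)
a = estimate_alpha(P, 100.0, 20.0)
```

The inversion in `estimate_alpha` is why the neighboring pure pixel values (the levels L and R) make the mixing ratio of the pixel of interest recoverable.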
- the mixing ratio a and the mixing region can be considered in the time direction as in the spatial direction.
- for example, when an object in the real world 1 is moving, the ratio of the signals for the two objects incident on the detection region of one detection element of the sensor 2 changes in the time direction.
- the signals for the two objects, which are incident on the detection region of one detection element of the sensor 2 and whose ratio changes in the time direction, are projected onto one value of the data 3 by the detection element of the sensor 2.
- the mixing in the time direction of the signals for the two objects due to the time integration effect of the sensor 2 is called time mixing.
- the data continuity detecting unit 101 detects, for example, a pixel area in the data 3 on which the signals of the real world 1 for the two objects in the real world 1 are projected.
- the data continuity detecting unit 101 detects, for example, a tilt in the data 3 corresponding to the tilt of the edge of the image of the real world 1.
- the real world estimating unit 102 estimates the signal of the real world 1 by, for example, formulating, based on the region of pixels having a predetermined mixing ratio α detected by the data continuity detecting unit 101 and the inclination of that region, an expression using N variables that shows the relationship between the model 161 represented by the N variables and the M pieces of data 162. A more specific estimation of the real world 1 will now be described.
- for the real-world signal represented by the function F(x, y, z, t), consider its cross section in the spatial direction Z (the position of the sensor 2); on this cross section, the signal is determined by the position x in the spatial direction X, the position y in the spatial direction Y, and the time t.
- the detection area of the sensor 2 has a spread in the spatial direction X and the spatial direction Y.
- the approximation function f(x, y, t) is assumed to be a function that approximates the signal of the real world 1, which has a spread in the spatial direction and the temporal direction and is acquired by the sensor 2.
- the value P (x, y, t) of the data 3 is, for example, a pixel value output by the sensor 2 which is an image sensor.
- the value obtained by projecting the approximate function f (x, y, t) can be expressed as a projection function S (x, y, t).
- the function F(x, y, z, t) representing the signal of the real world 1 can be a function of infinite order.
- the function Si(x, y, t) can be described from the description of the function fi(x, y, t).
- the relationship between the data 3 and the real-world signal can be formulated as Expression (7).
- j is the data index.
- N is the number of variables representing the model 161 approximating the real world 1.
- M is the number of pieces of data 162 included in the data 3.
- by representing the variables as wi, the variables can be handled independently.
- at this time, i indicates the index of the variable.
- the form of the function represented by fi can also be made independent, and a desired function can be used as fi.
- accordingly, the number N of variables can be defined without depending on the form of the function, and the variables wi can be obtained from the relationship between the number N of variables and the number M of data.
- the real world 1 can be estimated from the data 3.
- defining the N variables, that is, defining Expression (5), is made possible by describing the real world 1 using continuity.
- a signal of the real world 1 can be described by a model 161, in which a cross section is represented by a polynomial and the same cross-sectional shape continues in a certain direction.
- the projection by the sensor 2 is formulated, and the equation (7) is described.
- that is, the result of integrating the signal of the real world 1 is formulated as the data 3.
- the data 162 are collected from a region having the data continuity detected by the data continuity detecting unit 101; that is, the data 162 are collected based on the continuity.
- for example, when N = M, since the number N of variables equals the number M of expressions, the variables wi can be obtained by setting up simultaneous equations.
- further, when N < M, the variables wi can be obtained by the least squares method.
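The N < M case can be sketched as an overdetermined linear system P = S w solved by least squares. The projection values S below are synthetic placeholders, not the patent's functions Si:

```python
import numpy as np

# Sketch of solving Expression (13), S_MAT W_MAT = P_MAT, by least
# squares: rows are the M collected data values Pj, columns are the N
# basis projections Si(j).  The S values here are synthetic.
rng = np.random.default_rng(0)
N, M = 3, 27                      # N variables, M > N data
S = rng.normal(size=(M, N))       # Si(j): projection of each basis term
w_true = np.array([2.0, -1.0, 0.5])
P = S @ w_true                    # data 3 predicted by the model
w, *_ = np.linalg.lstsq(S, P, rcond=None)
```

With noise-free synthetic data the least-squares solution recovers the variables exactly; with real pixel values it returns the w minimizing the squared prediction error over the M equations.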
- Expression (9) for predicting the data 3 from the real world 1 according to Expression (7) is shown.
- Expression (12) is derived from Expression (11).
- Si(xj, yj, tj) is described as Si(j).
- Expression (13) can be expressed in matrix form as S_MAT W_MAT = P_MAT.
- in Expression (13), Si represents the projection of the real world 1.
- in Expression (13), Pj represents the data 3.
- in Expression (13), wi are the variables to be obtained, which describe the characteristics of the signal of the real world 1.
- accordingly, the real world estimating unit 102 estimates the real world 1 by, for example, substituting the data 3 into Expression (13) and obtaining W_MAT by a matrix solution method or the like.
- for example, the cross-sectional shape of the signal of the real world 1, that is, the change in level with respect to a change in position, is described by a polynomial. Assume that the cross-sectional shape of the signal of the real world 1 is constant and that it moves at a constant speed. Then, the projection of the signal of the real world 1 onto the data 3 by the sensor 2 is formulated by three-dimensional integration of the signal in the spatiotemporal direction.
- Expressions (18) and (19) are obtained from the assumption that the cross-sectional shape of the signal of the real world 1 moves at a constant speed.
- the cross-sectional shape of the signal in the real world 1 is expressed by Expression (20) by using Expressions (18) and (19).
- in Expression (21), S(x, y, t) denotes the integrated value over the region from position xs to position xe in the spatial direction X, from position ys to position ye in the spatial direction Y, and from time ts to time te in the time direction t, that is, over the region represented by a rectangular parallelepiped in space-time.
- by solving Expression (13) using a desired function f(x', y') for which Expression (21) can be determined, the signal of the real world 1 can be estimated.
- equation (23) is obtained.
- Figure 27 is a diagram showing an example of the M pieces of data 162 extracted from the data 3. In this example, 27 pixel values are extracted as the data 162.
- in this case, j ranges from 0 to 26.
- in the example shown in Fig. 27, the pixel value of the pixel corresponding to the position of interest at time t = n is P13(x, y, t), and the direction in which the pixel values of pixels having data continuity are arranged (for example, the direction, detected by the data continuity detecting unit 101, in which the same claw shapes are arranged) is the direction connecting P4(x, y, t), P13(x, y, t), and P22(x, y, t). The pixel values P9(x, y, t) through P17(x, y, t) at time t = n, the corresponding pixel values at time t = n-1, which is temporally before n, and those at time t = n+1, which is temporally after n, are extracted.
- the region in which a pixel value of the data 3 output from the image sensor that is the sensor 2 is acquired has a spread in the time direction and the two-dimensional spatial direction, as shown in FIG. 28. Therefore, for example, as shown in FIG. 29, the center of gravity of the rectangular parallelepiped (the region in which the pixel value is acquired) corresponding to a pixel can be used as the position of the pixel in the spatiotemporal direction.
- the circle in Fig. 29 indicates the center of gravity.
- the real world estimating unit 102 generates Expression (13) from, for example, the 27 pixel values P0(x, y, t) through P26(x, y, t) and Expression (23), and estimates W to estimate the signal of the real world 1.
- a Gaussian function or a sigmoid function can be used as the function fi (x, y, t).
- the data 3 has a value obtained by integrating the signal of the real world 1 in the time direction and the two-dimensional spatial direction.
- for example, a pixel value of the data 3 output from the image sensor that is the sensor 2 has a value obtained by integrating the light incident on the detection element, that is, the signal of the real world 1, over the detection time (the shutter time) in the time direction and over the light-receiving area of the detection element in the spatial direction.
- high-resolution data 181 having a higher resolution in the spatial direction is generated by integrating the estimated signal of the real world 1, in the time direction, over the same time as the detection time of the sensor 2 that output the data 3, and, in the spatial direction, over regions narrower than the light-receiving area of the detection element of the sensor 2 that output the data 3.
- when the high-resolution data 181 having a higher resolution in the spatial direction is generated, the region over which the estimated signal of the real world 1 is integrated can be set completely independently of the light-receiving area of the detection element of the sensor 2 that output the data 3.
- for example, the high-resolution data 181 can be given not only a resolution that is an integer multiple of that of the data 3 in the spatial direction, but also a resolution that is a rational multiple of it, such as 5/3 times.
- similarly, by integrating the estimated signal of the real world 1 over times shorter than the detection time of the detection element of the sensor 2 that output the data 3, high-resolution data 181 having a higher resolution in the time direction can be generated.
- as in the spatial direction, the high-resolution data 181 can be given a resolution that is an integral multiple of that of the data 3 in the time direction.
- further, high-resolution data 181 is generated by integrating the estimated signal of the real world 1 only in the spatial direction, without integrating it in the time direction.
- high-resolution data 181 having a higher resolution in both the temporal and spatial directions is generated by integrating the estimated signal of the real world 1, in the spatial direction, over regions narrower than the light-receiving area of the detection element of the sensor 2 that output the data 3, and, in the time direction, over times shorter than the detection time of the sensor 2 that output the data 3.
- the region and time in which the estimated signal of the real world 1 is integrated can be set completely independently of the light receiving region of the detection element of the sensor 2 that outputs the data 3 and the shutter time.
- in this manner, the image generation unit 103 generates data with a higher resolution in the time direction or the spatial direction by, for example, integrating the estimated signal of the real world 1 over a desired spatiotemporal region.
- FIG. 35 shows the original image of the input image.
- FIG. 36 is a diagram illustrating an example of the input image.
- the input image shown in FIG. 36 is an image generated by using the average of the pixel values of the pixels belonging to each 2 × 2-pixel block of the image shown in FIG. 35 as the pixel value of one pixel. That is, the input image is an image obtained by applying, to the image shown in FIG. 35, spatial integration that imitates the integration characteristics of the sensor.
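The described generation of the input image can be sketched as a block-averaging operation that stands in for the sensor's spatial integration (the function below is illustrative, not the code used to produce FIG. 36):

```python
import numpy as np

def block_average(img, k=2):
    """Simulate the sensor's spatial integration effect by replacing
    each k x k block of pixels with its average value."""
    h, w = img.shape
    cropped = img[:h - h % k, :w - w % k]  # drop edge rows/cols not filling a block
    return cropped.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
low = block_average(img)          # 2x2 output, each value a 2x2 block mean
```

Each output pixel thus carries one value for a 2 × 2 neighborhood, which is exactly the loss of spatial detail the integration effect of the sensor introduces.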
- FIG. 37 is a diagram showing an image obtained by applying the conventional classification adaptive processing to the input image shown in FIG.
- the class classification adaptation process includes a class classification process and an adaptation process.
- the class classification process classifies data into classes based on their properties, and performs an adaptation process for each class.
- in the adaptation processing, for example, a low-quality or standard-quality image is converted into a high-quality image by mapping using predetermined tap coefficients.
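A minimal sketch of the two steps: a 1-bit ADRC code is a common way to realize the class classification, and the per-class linear mapping stands in for the learned tap coefficients (the coefficient values here are hypothetical, not learned ones):

```python
import numpy as np

def adrc_class(taps):
    """Classify a tap vector with 1-bit ADRC: each tap is quantized to
    0/1 against the midrange of the taps, and the bits form the class
    code."""
    lo, hi = taps.min(), taps.max()
    if hi == lo:
        return 0
    bits = (taps >= (lo + hi) / 2.0).astype(int)
    return int("".join(map(str, bits)), 2)

def adapt(taps, coeffs_by_class):
    """Adaptation step: map the taps to an output value using the tap
    coefficients associated with the tap pattern's class."""
    c = coeffs_by_class[adrc_class(taps)]
    return float(np.dot(c, taps))

# Hypothetical coefficients: the class of taps [1, 9] is 0b01 = 1.
taps = np.array([1.0, 9.0])
coeffs = {1: np.array([0.5, 0.5])}
y = adapt(taps, coeffs)
```

In an actual system the coefficient table would be learned in advance from pairs of low- and high-quality images, one coefficient set per class.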
- FIG. 38 is a diagram illustrating a result of detecting a thin line region from the input image illustrated in the example of FIG. 36 by the data continuity detecting unit 101.
- a white region indicates a thin line region, that is, a region where the arc shapes shown in FIG. 14 are arranged.
- FIG. 39 is a diagram showing an example of an output image output from the signal processing device 4 according to the present invention, using the image shown in FIG. 36 as an input image. As shown in FIG. 39, according to the signal processing device 4 of the present invention, it is possible to obtain an image closer to the thin line image of the original image shown in FIG.
- FIG. 40 is a flowchart for explaining signal processing by the signal processing device 4 according to the present invention.
- step S101 the data continuity detecting unit 101 executes a process of detecting continuity.
- that is, the data continuity detecting unit 101 detects the continuity of the data included in the input image, which is the data 3, and outputs data continuity information indicating the detected data continuity to the real world estimating unit 102 and to the image generation unit 103.
- the data continuity detecting unit 101 detects the continuity of data corresponding to the continuity of a signal in the real world.
- the continuity of the data detected by the data continuity detecting unit 101 is either part of the continuity of the image of the real world 1 included in the data 3, or a continuity that has changed from the continuity of the signal of the real world 1.
- the data continuity detecting unit 101 detects data continuity by detecting an area having a certain feature in a direction of a predetermined dimension. In addition, for example, the data continuity detecting unit 101 detects data continuity by detecting an angle (inclination) in the spatial direction indicating a similar shape arrangement.
- the details of the processing for detecting continuity in step S101 will be described later.
- the data continuity information can be used as a feature quantity indicating the feature of data 3.
- in step S102, the real world estimating unit 102 executes processing for estimating the real world. That is, the real world estimating unit 102 estimates the signal of the real world 1 based on the input image and the data continuity information supplied from the data continuity detecting unit 101. For example, in the processing of step S102, the real world estimating unit 102 estimates the signal of the real world 1 by predicting a model 161 that approximates (describes) the real world 1. The real world estimating unit 102 supplies real world estimation information indicating the estimated signal of the real world 1 to the image generation unit 103.
- for example, the real world estimating unit 102 estimates the signal of the real world 1 by predicting the width of a linear object. Also, for example, the real world estimating unit 102 estimates the signal of the real world 1 by predicting a level indicating the color of a linear object.
- the details of the processing for estimating the real world in step S102 will be described later.
- the real world estimation information can be used as a feature amount indicating the feature of the data 3.
- step S103 the image generation unit 103 executes a process of generating an image, and the process ends. That is, the image generation unit 103 generates an image based on the real world estimation information, and outputs the generated image. Alternatively, the image generation unit 103 generates an image based on the data continuity information and the real world estimation information, and outputs the generated image.
- for example, based on the real world estimation information, the image generation unit 103 integrates, in the spatial direction, a function approximating the generated real-world optical signal, thereby generating an image with a higher resolution in the spatial direction than the input image, and outputs the generated image. For example, based on the real world estimation information, the image generation unit 103 integrates, in the spatiotemporal direction, a function approximating the generated real-world optical signal, thereby generating an image with a higher resolution in the time direction or the spatial direction than the input image, and outputs the generated image. The details of the image generation processing in step S103 will be described later.
- the signal processing device 4 detects the data continuity from the data 3 and estimates the real world 1 based on the detected data continuity. Then, the signal processing device 4 generates a signal that is closer to the real world 1 based on the estimated real world 1.
- that is, the signal processing device 4 detects the continuity of data in a second signal, of a second dimension fewer than a first dimension, obtained by projecting a first signal which is a real-world signal having the first dimension, the second signal lacking a part of the continuity of the real-world signal.
- FIG. 41 is a block diagram showing the configuration of the data continuity detecting unit 101. As shown in FIG.
- the data continuity detecting unit 101 shown in FIG. 41 detects the continuity of data included in the data 3 that arises, when an image of a thin object is captured, from the continuity that the cross-sectional shape of the object is the same.
- That is, the data continuity detecting unit 101 shown in FIG. 41 detects the continuity of data contained in the data 3, which arises from the continuity that, at an arbitrary position along the length direction of the image of the thin line in the real world 1, the change in light level with respect to a change in position in the direction orthogonal to the length direction is the same.
- More specifically, the data continuity detecting unit 101 having the configuration shown in FIG. 41 detects, in the data 3 obtained by capturing an image of a thin line with the sensor 2 having a spatial integration effect, a region in which a plurality of arc shapes (kamaboko shapes) of a predetermined length are arranged diagonally adjacent to each other.
- The data continuity detecting unit 101 extracts, from the input image which is data 3, the part of the image data other than the part where the thin line image having data continuity is projected (hereinafter referred to as the non-stationary component, the projected part being the stationary component), detects, from the extracted non-stationary component and the input image, the pixels onto which the image of the thin line of the real world 1 is projected, and detects the region of the input image consisting of the pixels onto which the image of the thin line of the real world 1 is projected.
- The non-stationary component extraction unit 201 extracts the non-stationary component from the input image, and supplies non-stationary component information indicating the extracted non-stationary component, together with the input image, to the vertex detection unit 202 and the monotone increase/decrease detection unit 203.
- For example, the non-stationary component extraction unit 201 extracts the non-stationary component, which is the background, by approximating the background in the input image, which is data 3, with a plane.
- In the figure, the solid line indicates the pixel values of data 3, and the dotted line indicates the approximate values given by the plane approximating the background. A indicates the pixel value of a pixel onto which the thin line image is projected, and PL indicates the plane approximating the background.
- the pixel values of a plurality of pixels in the image data portion having data continuity are discontinuous with respect to the non-stationary component.
- In other words, the non-stationary component extraction unit 201 detects the discontinuous portions of the pixel values of the plurality of pixels of the image data which is data 3, onto which the image that is the optical signal of the real world 1 is projected and in which part of the continuity of the real-world image is missing.
- the vertex detection unit 202 and the monotone increase / decrease detection unit 203 remove non-stationary components from the input image based on the non-stationary component information supplied from the non-stationary component extraction unit 201.
- For example, the vertex detection unit 202 and the monotone increase/decrease detection unit 203 remove the non-stationary component from the input image by setting to 0 the pixel value of each pixel onto which only the background image is projected.
- Further, the vertex detection unit 202 and the monotone increase/decrease detection unit 203 remove the non-stationary component from the input image by subtracting the values approximated by the plane PL from the pixel value of each pixel of the input image.
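The plane-based background removal described above can be sketched as follows. This is a hedged simplification, assuming an ordinary least-squares fit of a plane z = ax + by + c to (x, y, pixel value) samples; the function names are illustrative, not the patent's:

```python
# Hedged sketch of background-plane fitting and subtraction.
# pixels: list of (x, y, value) samples from the input image.

def fit_plane(pixels):
    """Least-squares fit of z = a*x + b*y + c via 3x3 normal equations."""
    Sxx = Sxy = Sx = Syy = Sy = S1 = Sxz = Syz = Sz = 0.0
    for x, y, z in pixels:
        Sxx += x * x; Sxy += x * y; Sx += x
        Syy += y * y; Sy += y; S1 += 1.0
        Sxz += x * z; Syz += y * z; Sz += z
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, S1]]
    v = [Sxz, Syz, Sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    coeffs = []
    for i in range(3):  # Cramer's rule, column i replaced by v
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        coeffs.append(det3(Mi) / d)
    return tuple(coeffs)  # (a, b, c)

def remove_background(pixels):
    """Subtract the approximated plane, leaving the stationary component."""
    a, b, c = fit_plane(pixels)
    return [(x, y, z - (a * x + b * y + c)) for x, y, z in pixels]
```

After subtraction, pixels belonging to the background are driven toward zero, while pixels onto which the thin line is projected retain a significant residual, which is what lets the later units process only the thin-line portion.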
- Since the vertex detection unit 202 to the continuity detection unit 204 can thus process only the portion of the image data onto which the thin line is projected, the processing in the vertex detection unit 202 to the continuity detection unit 204 becomes easier.
- Note that the non-stationary component extraction unit 201 may supply the image data obtained by removing the non-stationary component from the input image to the vertex detection unit 202 and the monotone increase/decrease detection unit 203.
- In the processing examples described below, the vertex detection unit 202 to the continuity detection unit 204 operate on image data in which the non-stationary component has been removed from the input image, that is, image data consisting only of pixels containing the stationary component, onto which the thin line image is projected.
- If there were no optical LPF, the cross-sectional shape in the spatial direction Y (the change in pixel value with respect to the change in position in the spatial direction) of the image data onto which the thin line image shown in FIG. 42 is projected could, from the spatial integration effect of the image sensor, be the trapezoid shown in FIG. 44 or the triangle shown in FIG. 45. However, a normal image sensor has an optical LPF: the image sensor acquires the image that has passed through the optical LPF and projects the acquired image onto data 3, so that, in reality, the cross-sectional shape of the thin-line image data in the spatial direction Y resembles a Gaussian distribution, as shown in the figure.
- The vertex detection unit 202 to the continuity detection unit 204 detect regions consisting of pixels onto which the thin line image is projected, in which the same cross-sectional shape (the same change in pixel value with respect to the change in position in the spatial direction) is arranged at regular intervals in the vertical direction of the screen, and detect a region having data continuity by detecting the connection of the detected regions corresponding to the length direction of the thin line of the real world 1.
- That is, the vertex detection unit 202 to the continuity detection unit 204 detect regions in the input image in which an arc shape (kamaboko shape) is formed on a single vertical column of pixels, determine whether the detected regions are adjacent in the horizontal direction, and thereby detect the connection of the regions in which the arc shape is formed, corresponding to the length direction of the thin line image which is the signal of the real world 1.
- Further, the vertex detection unit 202 to the continuity detection unit 204 detect regions consisting of pixels onto which the thin line image is projected, in which the same cross-sectional shape is arranged at regular intervals in the horizontal direction of the screen, and detect a region having data continuity by detecting the connection of the detected regions corresponding to the length direction of the thin line of the real world 1. That is, the vertex detection unit 202 to the continuity detection unit 204 detect regions in the input image in which an arc shape is formed on a single horizontal row of pixels, determine whether the detected regions are adjacent in the vertical direction, and thereby detect the connection of the regions in which the arc shape is formed, corresponding to the length direction of the thin line image which is the signal of the real world 1.
- the vertex detecting unit 202 detects a pixel having a larger pixel value than the surrounding pixels, that is, the vertex, and supplies vertex information indicating the position of the vertex to the monotone increase / decrease detecting unit 203.
- For example, the vertex detector 202 compares, for pixels arranged in a single vertical column of the screen, the pixel value of each pixel with the pixel values of the pixels located above and below it, and detects the pixel having the larger pixel value as a vertex.
- the vertex detection unit 202 detects one or a plurality of vertices from one image, for example, an image of one frame.
- Here, one screen means a frame or a field. The same applies to the following description.
- For example, the vertex detection unit 202 selects a pixel of interest from the pixels of one frame image that have not yet been taken as the pixel of interest, compares the pixel value of the pixel of interest with the pixel value of the pixel above it, and compares the pixel value of the pixel of interest with the pixel value of the pixel below it, thereby detecting a pixel of interest whose pixel value is larger than that of the pixel above and larger than that of the pixel below, and takes the detected pixel of interest as a vertex.
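A minimal sketch of this vertical vertex test follows. The names are illustrative, and the unit 202 operates on full 2-D frames; here one column of pixel values stands in for a vertical line of the screen:

```python
# Hedged sketch: a vertex is an interior pixel whose value strictly
# exceeds both of its vertical neighbours.

def detect_vertices(column):
    """Return the indices of vertices in a single column of pixel values.
    Border pixels and flat plateaus are never vertices in this sketch."""
    return [i for i in range(1, len(column) - 1)
            if column[i] > column[i - 1] and column[i] > column[i + 1]]
```

For an all-equal column or a monotonically decreasing one, the list is empty, matching the note below that no vertex is detected in those cases.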
- the vertex detection unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detection unit 203.
- Note that the vertex detector 202 may detect no vertex in some cases. For example, when the pixel values of all the pixels of one image are the same, or when the pixel values decrease monotonically in one or two directions, no vertex is detected; in these cases, no thin line image is projected onto the image data. The monotone increase/decrease detection unit 203, based on the vertex information indicating the position of the vertex supplied from the vertex detection unit 202, detects a candidate for a region consisting of pixels onto which the thin line image is projected, the pixels being arranged in a single vertical column with respect to the vertex detected by the vertex detection unit 202, and supplies region information indicating the detected region to the continuity detection unit 204 together with the vertex information.
- More specifically, the monotone increase/decrease detection unit 203 detects a region consisting of pixels whose pixel values monotonically decrease with respect to the pixel value of the vertex, as a candidate for a region consisting of pixels onto which the thin line image is projected.
- Monotonic decrease means that the pixel value of a pixel at a longer distance from the vertex is smaller than the pixel value of a pixel at a shorter distance from the vertex.
- the monotonous increase / decrease detection unit 203 detects a region composed of pixels having a monotonically increasing pixel value as a candidate for a region composed of pixels onto which a thin line image is projected, based on the pixel value of the vertex.
- Monotonically increasing means that the pixel value of the pixel at a longer distance from the vertex is larger than the pixel value of the pixel at a shorter distance from the vertex.
- the processing for the region composed of pixels having monotonically increasing pixel values is the same as the processing for the region composed of pixels having monotonically decreasing pixel values, and a description thereof will be omitted.
- The monotone increase/decrease detection unit 203 calculates, for each pixel in a single vertical column with respect to the vertex, the difference between the pixel value of the pixel and the pixel value of the pixel above it, and the difference between the pixel value of the pixel and the pixel value of the pixel below it. The monotone increase/decrease detection unit 203 then detects the region in which the pixel value monotonically decreases by detecting the pixels at which the sign of the difference changes.
- Further, the monotone increase/decrease detection unit 203 detects, from the region in which the pixel value monotonically decreases, a region consisting of pixels whose pixel values have the same sign as that of the vertex, using the sign of the pixel value of the vertex as a reference, as a candidate for a region consisting of pixels onto which the thin line image is projected.
- For example, the monotone increase/decrease detection unit 203 compares the sign of the pixel value of each pixel with the signs of the pixel values of the pixels above and below it, and detects the pixels at which the sign of the pixel value changes, thereby detecting, within the monotonically decreasing region, the region consisting of pixels whose pixel values have the same sign as the vertex.
- In this way, the monotone increase/decrease detection unit 203 detects a region consisting of pixels arranged in the vertical direction whose pixel values monotonically decrease with respect to the vertex and have the same sign as the vertex.
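The monotone-decrease growth around a vertex described in the last few paragraphs can be sketched as follows, assuming a single column of pixel values and a vertex index; the function name and interface are illustrative:

```python
# Hedged sketch: grow the region [lo, hi] outward from the vertex while
# pixel values strictly decrease away from the vertex (the sign-of-difference
# test) and keep the same sign as the vertex's pixel value.

def monotone_region(column, p):
    """Return (lo, hi), the index range of the monotone decrease region
    around vertex index p in a single column of pixel values."""
    positive = column[p] >= 0  # sign of the vertex's pixel value
    lo = p
    while (lo > 0 and column[lo - 1] < column[lo]
           and (column[lo - 1] >= 0) == positive):
        lo -= 1
    hi = p
    while (hi < len(column) - 1 and column[hi + 1] < column[hi]
           and (column[hi + 1] >= 0) == positive):
        hi += 1
    return lo, hi
```

The region stops either where the decrease reverses (the sign of the difference changes) or where the pixel value's sign differs from the vertex's, mirroring the two boundary tests C and D in the text.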
- FIG. 47 is a diagram illustrating a process of detecting a vertex and a monotonically increasing / decreasing region, which detects a pixel region on which a thin line image is projected from a pixel value with respect to a position in the spatial direction Y.
- P indicates a vertex.
- The vertex detection unit 202 detects the vertex P by comparing the pixel value of each pixel with the pixel values of the pixels adjacent to it in the spatial direction Y, and detecting the pixel whose pixel value is larger than those of the two pixels adjacent in the spatial direction Y.
- the region consisting of the vertex P and the pixels on both sides of the vertex P in the spatial direction Y is a monotonically decreasing region in which the pixel values of the pixels on both sides in the spatial direction Y monotonically decrease with respect to the pixel value of the vertex P.
- the arrow indicated by A and the arrow indicated by B indicate monotonically decreasing regions existing on both sides of the vertex P.
- the monotone increase / decrease detection unit 203 finds a difference between the pixel value of each pixel and the pixel value of a pixel adjacent to the pixel in the spatial direction Y, and detects a pixel whose sign of the difference changes.
- The monotone increase/decrease detection unit 203 takes the boundary between the detected pixel at which the sign of the difference changes and the pixel on the near side (the vertex P side) as the boundary C of the thin line region consisting of pixels onto which the thin line image is projected.
- Further, the monotone increase/decrease detection unit 203 compares, in the monotonically decreasing region, the sign of the pixel value of each pixel with the sign of the pixel value of the pixel adjacent to it in the spatial direction Y, and detects the pixel at which the sign of the pixel value changes.
- The monotone increase/decrease detection unit 203 takes the boundary between the detected pixel at which the sign of the pixel value changes and the pixel on the near side (the vertex P side) as the boundary of the thin line region.
- The boundary of the thin line region, that is, the boundary between the pixel at which the sign of the pixel value changes and the pixel on the near side (the vertex P side), is indicated by D.
- a thin line region F composed of pixels onto which a thin line image is projected is a region sandwiched between a thin line region boundary C and a thin line region boundary D.
- The monotone increase/decrease detection unit 203 finds, from among the thin line regions F composed of such monotone increase/decrease regions, a thin line region F longer than a predetermined threshold, that is, a thin line region F containing more pixels than the threshold. For example, when the threshold is 3, the monotone increase/decrease detection unit 203 detects a thin line region F containing four or more pixels.
- Further, the monotone increase/decrease detection unit 203 compares the pixel value of the vertex P, the pixel value of the pixel on the right side of the vertex P, and the pixel value of the pixel on the left side of the vertex P with a threshold, detects the thin line region F to which a vertex P belongs such that the pixel value of the vertex P exceeds the threshold while the pixel values of the pixels on the right and left of the vertex P are equal to or less than the threshold, and takes the detected thin line region F as a candidate for a region consisting of pixels containing the thin line image component.
- Conversely, a thin line region F to which a vertex P belongs such that the pixel value of the vertex P is equal to or less than the threshold, or the pixel value of the pixel on the right of the vertex P exceeds the threshold, or the pixel value of the pixel on the left of the vertex P exceeds the threshold, is determined not to contain the thin line image component and is removed from the candidates for the region consisting of pixels containing the thin line image component.
- For example, the monotone increase/decrease detection unit 203 compares the pixel value of the vertex P with the threshold, compares with the threshold the pixel values of the pixels adjacent to the vertex P in the spatial direction X (the direction indicated by the dotted line AA'), and detects the thin line region F to which the vertex P belongs, where the pixel value of the vertex P exceeds the threshold and the pixel values of the pixels adjacent in the spatial direction X are equal to or less than the threshold.
- FIG. 49 is a diagram illustrating pixel values of pixels arranged in the spatial direction X indicated by a dotted line AA ′ in FIG.
- The thin line region F to which the vertex P belongs, in which the pixel value of the vertex P exceeds the threshold Th_s and the pixel values of the pixels adjacent to the vertex P in the spatial direction X are equal to or less than the threshold Th_s, contains the thin line component.
- Note that the monotone increase/decrease detection unit 203 may instead, using the background pixel value as a reference, compare the difference between the pixel value of the vertex P and the pixel value of the background with the threshold, compare with the threshold the difference between the pixel values of the pixels adjacent to the vertex P in the spatial direction X and the pixel value of the background, and detect the thin line region F to which the vertex P belongs, where the difference between the pixel value of the vertex P and the background exceeds the threshold and the difference between the pixel values of the pixels adjacent in the spatial direction X and the background is equal to or less than the threshold.
- The monotone increase/decrease detection unit 203 supplies to the continuity detection unit 204 monotone increase/decrease region information indicating a region consisting of pixels whose pixel values monotonically decrease with respect to the vertex P and have the same sign as the vertex P, where the pixel value of the vertex P exceeds the threshold and the pixel values of the pixels on the right and left of the vertex P are equal to or less than the threshold.
- In the case of detecting a region consisting of pixels arranged in a single vertical column of the screen onto which the thin line image is projected, the region indicated by the monotone increase/decrease region information contains pixels arranged in a single vertical column of the screen and includes the region formed by projecting the thin line image.
- In this way, the vertex detection unit 202 and the monotone increase/decrease detection unit 203 detect the stationary region consisting of pixels onto which the thin line image is projected, using the property that, in the pixels onto which the thin line image is projected, the change in pixel value in the spatial direction Y resembles a Gaussian distribution.
- The continuity detection unit 204 detects, as continuous regions, those regions consisting of vertically arranged pixels indicated by the monotone increase/decrease region information supplied from the monotone increase/decrease detection unit 203 that contain horizontally adjacent pixels, that is, regions that have similar pixel value changes and overlap in the vertical direction, and outputs vertex information and data continuity information indicating the detected continuous regions.
- the data continuity information includes monotonically increasing / decreasing area information, information indicating the connection of areas, and the like.
- The detected continuous region contains the pixels onto which the thin line is projected. Since the detected continuous region contains the pixels onto which the thin line is projected, arranged at regular intervals so that the arc shapes are adjacent, the detected continuous region is taken as the stationary region, and the continuity detection unit 204 outputs data continuity information indicating the detected continuous region.
- That is, the continuity detection unit 204 uses the continuity of the data 3 obtained by imaging the thin line, in which the arc shapes are arranged adjacent to each other at regular intervals, arising from the continuity of the image of the thin line in the real world 1 of being continuous in the length direction, to further narrow down the candidates for the regions detected by the vertex detection unit 202 and the monotone increase/decrease detection unit 203.
- FIG. 50 is a diagram illustrating a process of detecting the continuity of the monotone increase / decrease region.
- For example, when two thin line regions F, each consisting of pixels arranged in a single vertical column of the screen, contain pixels that are adjacent in the horizontal direction, the continuity detector 204 takes continuity to exist between the two monotone increase/decrease regions, and takes no continuity to exist between the two thin line regions F when they contain no horizontally adjacent pixels.
- For example, a thin line region consisting of pixels arranged in a single vertical column of the screen is taken to be continuous with the thin line region F0, which likewise consists of pixels arranged in a single vertical column of the screen, when it contains a pixel horizontally adjacent to a pixel of the thin line region F0. Similarly, the thin line region F0 is taken to be continuous with the thin line region F1 when a pixel of the thin line region F0 is horizontally adjacent to a pixel of the thin line region F1.
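A hedged sketch of this adjacency test follows, representing each thin line region as a set of (x, y) pixel coordinates; the set representation is an assumption for illustration, as the patent does not specify a data structure:

```python
# Hedged sketch: two one-column thin line regions are continuous when
# some pixel of one is horizontally adjacent to a pixel of the other.

def regions_continuous(f0, f1):
    """f0, f1: sets of (x, y) pixel coordinates.
    True when a pixel of f0 has a horizontal neighbour in f1."""
    return any((x + 1, y) in f1 or (x - 1, y) in f1 for x, y in f0)
```

Chaining this test across regions F0, F1, F2, ... links the adjacent arc shapes along the length direction of the thin line, which is exactly the connection the continuity detection unit 204 outputs.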
- In this way, the vertex detection unit 202 to the continuity detection unit 204 detect a region consisting of pixels arranged in a single vertical column of the screen onto which the thin line image is projected.
- The vertex detection unit 202 to the continuity detection unit 204 also detect a region consisting of pixels arranged in a single horizontal row of the screen onto which the thin line image is projected.
- In this case, the vertex detection unit 202 compares, for pixels arranged in a single horizontal row of the screen, the pixel value of each pixel with the pixel values of the pixels located on its left and right, detects the pixel having the larger pixel value as a vertex, and supplies vertex information indicating the position of the detected vertex to the monotone increase/decrease detector 203.
- the vertex detection unit 202 detects one or a plurality of vertices from one image, for example, one frame image.
- For example, the vertex detection unit 202 selects a pixel of interest from the pixels of one frame image that have not yet been taken as the pixel of interest, compares the pixel value of the pixel of interest with the pixel value of the pixel to its left, and compares the pixel value of the pixel of interest with the pixel value of the pixel to its right, thereby detecting a pixel of interest whose pixel value is larger than that of the pixel on the left and larger than that of the pixel on the right, and takes the detected pixel of interest as the vertex.
- the vertex detection unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detection unit 203.
- the vertex detector 202 may not detect the vertex in some cases.
- The monotone increase/decrease detection unit 203 detects a candidate for a region consisting of pixels onto which the thin line image is projected, the pixels being arranged in a single horizontal row with respect to the vertex detected by the vertex detection unit 202, and supplies monotone increase/decrease region information indicating the detected region to the continuity detection unit 204 together with the vertex information.
- More specifically, the monotone increase/decrease detection unit 203 detects a region consisting of pixels whose pixel values monotonically decrease with respect to the pixel value of the vertex, as a candidate for a region consisting of pixels onto which the thin line image is projected.
- The monotone increase/decrease detection unit 203 calculates, for each pixel in a single horizontal row with respect to the vertex, the difference between the pixel value of the pixel and the pixel value of the pixel on its left, and the difference between the pixel value of the pixel and the pixel value of the pixel on its right. The monotone increase/decrease detection unit 203 then detects the region in which the pixel value monotonically decreases by detecting the pixels at which the sign of the difference changes.
- Further, the monotone increase/decrease detection unit 203 detects, from the region in which the pixel value monotonically decreases, a region consisting of pixels whose pixel values have the same sign as that of the vertex, using the sign of the pixel value of the vertex as a reference, as a candidate for a region consisting of pixels onto which the thin line image is projected.
- For example, the monotone increase/decrease detection unit 203 compares the sign of the pixel value of each pixel with the signs of the pixel values of the pixels on its left and right, and detects the pixels at which the sign of the pixel value changes, thereby detecting, within the monotonically decreasing region, the region consisting of pixels whose pixel values have the same sign as the vertex.
- In this way, the monotone increase/decrease detection unit 203 detects a region consisting of pixels arranged in the horizontal direction whose pixel values monotonically decrease with respect to the vertex and have the same sign as the vertex.
- the monotone increase / decrease detection unit 203 obtains a thin line region longer than a predetermined threshold, that is, a thin line region including a number of pixels larger than the threshold, from the thin line region composed of such a monotone increase / decrease region.
- Further, the monotone increase/decrease detection unit 203 compares the pixel value of the vertex, the pixel value of the pixel above the vertex, and the pixel value of the pixel below the vertex with a threshold, detects the thin line region to which a vertex belongs such that the pixel value of the vertex exceeds the threshold while the pixel values of the pixels above and below the vertex are equal to or less than the threshold, and takes the detected thin line region as a candidate for a region consisting of pixels containing the thin line image component.
- Conversely, a thin line region to which a vertex belongs such that the pixel value of the vertex is equal to or less than the threshold, or the pixel value of the pixel above the vertex exceeds the threshold, or the pixel value of the pixel below the vertex exceeds the threshold, is determined not to contain the thin line image component and is removed from the candidates for the region consisting of pixels containing the thin line image component.
- Note that the monotone increase/decrease detection unit 203 may instead, using the background pixel value as a reference, compare the difference between the pixel value of the vertex and the pixel value of the background with the threshold, compare with the threshold the difference between the pixel values of the pixels vertically adjacent to the vertex and the pixel value of the background, and take as a region candidate the detected thin line region in which the difference between the pixel value of the vertex and the background exceeds the threshold and the difference between the pixel values of the vertically adjacent pixels and the background is equal to or less than the threshold.
- The monotone increase/decrease detection unit 203 supplies to the continuity detection unit 204 monotone increase/decrease region information indicating a region consisting of pixels whose pixel values monotonically decrease with respect to the vertex and have the same sign as the vertex, where the pixel value of the vertex exceeds the threshold and the pixel values of the pixels above and below the vertex are equal to or less than the threshold.
- In the case of detecting a region consisting of pixels arranged in a single horizontal row of the screen onto which the thin line image is projected, the region indicated by the monotone increase/decrease region information contains pixels arranged in a single horizontal row of the screen and includes the region formed by projecting the thin line image.
- The continuity detection unit 204 detects, as continuous regions, those regions consisting of horizontally arranged pixels indicated by the monotone increase/decrease region information supplied from the monotone increase/decrease detection unit 203 that contain vertically adjacent pixels, that is, regions that have similar pixel value changes and overlap in the horizontal direction, and outputs vertex information and data continuity information indicating the detected continuous regions.
- the data continuity information includes information indicating the connection between the areas.
- The detected continuous region contains the pixels onto which the thin line is projected. Since the detected continuous region contains the pixels onto which the thin line is projected, arranged at regular intervals so that the arc shapes are adjacent, the detected continuous region is taken as the stationary region, and the continuity detection unit 204 outputs data continuity information indicating the detected continuous region.
- That is, the continuity detection unit 204 uses the continuity of the data 3 obtained by imaging the thin line, in which the arc shapes are arranged adjacent to each other at regular intervals, arising from the continuity of the thin line image of being continuous in the length direction, to further narrow down the candidates for the regions detected by the vertex detection unit 202 and the monotone increase/decrease detection unit 203.
- FIG. 51 is a diagram illustrating an example of an image in which a stationary component is extracted by approximation on a plane.
- FIG. 52 is a diagram showing a result of detecting a vertex from the image shown in FIG. 51 and detecting a monotonically decreasing region. In FIG. 52, the part shown in white is the detected area.
- FIG. 53 is a diagram illustrating a region in which continuity is detected by detecting continuity of an adjacent region from the image illustrated in FIG. 52.
- the portion shown in white is the region where continuity is detected.
- the continuity detection shows that the region is further specified.
- FIG. 54 is a diagram showing the pixel values of the region shown in FIG. 53, that is, the pixel values of the region where continuity is detected.
- the data continuity detecting unit 101 can detect the continuity included in the data 3 as the input image. That is, the data continuity detecting unit 101 can detect the continuity of the data included in the data 3 that is generated by projecting the image of the real world 1 as a thin line onto the data 3. The data continuity detecting unit 101 detects, from the data 3, an area composed of pixels onto which the image of the real world 1 as a thin line is projected.
- FIG. 55 is a diagram illustrating an example of another process of detecting an area having stationarity, on which a thin line image is projected, in the stationarity detection unit 101.
- When adjacent difference values are the same among the absolute values of the differences arranged corresponding to the pixels, the continuity detecting unit 101 determines that the pixel corresponding to the two difference absolute values (the pixel sandwiched between the two difference absolute values) contains the thin line component. Note that when the absolute value of the difference is smaller than a predetermined threshold, the continuity detection unit 101 determines that the pixel corresponding to the two difference absolute values (the pixel sandwiched between the two difference absolute values) does not contain the thin line component.
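This simplified method can be sketched for a single row of pixel values as follows; the function name is illustrative, and the threshold handling follows the paragraph above:

```python
# Hedged sketch of the simplified thin-line test: a pixel is a thin-line
# pixel when the absolute differences on its two sides are equal and not
# smaller than a predetermined threshold.

def thin_line_pixels(row, threshold):
    """Return indices of pixels in a 1-D row judged to contain a
    thin line component by the adjacent-difference test."""
    diffs = [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
    out = []
    for i in range(1, len(row) - 1):
        if diffs[i - 1] == diffs[i] and diffs[i] >= threshold:
            out.append(i)
    return out
```

A one-pixel-wide line on a flat background produces two equal, large jumps on either side of the line pixel, which is exactly what this test picks out, while a flat region's equal-but-tiny differences are rejected by the threshold.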
- the continuity detector 101 can also detect a thin line by such a simple method.
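The simple method above can be sketched as follows. This is an illustrative reading of the description, not the patent's implementation; the function name and the exact threshold handling are assumptions:

```python
def detect_thin_line_pixels(row, threshold):
    """Mark pixels sandwiched between two equal, sufficiently large
    adjacent absolute differences along one scan line.

    `row` is a list of pixel values; pixel i is flagged when
    |row[i] - row[i-1]| == |row[i+1] - row[i]| and that common
    absolute difference is not smaller than `threshold`.
    """
    diffs = [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
    flags = [False] * len(row)
    for i in range(1, len(row) - 1):
        left, right = diffs[i - 1], diffs[i]
        if left == right and left >= threshold:
            flags[i] = True
    return flags

# A flat background of 10 with a one-pixel-wide line of value 50:
row = [10, 10, 50, 10, 10]
print(detect_thin_line_pixels(row, threshold=5))
# -> [False, False, True, False, False]
```

The thin line pixel produces two equal jumps (up by 40, down by 40), so only the sandwiched pixel is flagged; a small bump below the threshold is not.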
- FIG. 56 is a flowchart for explaining the processing of the continuity detection.
- In step S201, the non-stationary component extracting unit 201 extracts the non-stationary component, which is the portion other than the portion onto which the thin line is projected, from the input image.
- The non-stationary component extraction unit 201 supplies, together with the input image, non-stationary component information indicating the extracted non-stationary component to the vertex detection unit 202 and the monotone increase / decrease detection unit 203. Details of the process of extracting the non-stationary component will be described later.
- In step S202, the vertex detection unit 202 removes the non-stationary component from the input image based on the non-stationary component information supplied from the non-stationary component extraction unit 201, leaving only the pixels of the input image that contain the stationary component. Further, in step S202, the vertex detection unit 202 detects a vertex.
- For example, when executing the processing based on the vertical direction of the screen, the vertex detection unit 202 compares, for the pixels including the stationary component, the pixel value of each pixel with the pixel values of the pixels above and below it, and detects a vertex by finding a pixel whose pixel value is larger than both the pixel value of the upper pixel and the pixel value of the lower pixel. Also, in step S202, when executing the processing based on the horizontal direction of the screen, the vertex detection unit 202 compares, for the pixels including the stationary component, the pixel value of each pixel with the pixel values of the pixels to its left and right, and detects a vertex by finding a pixel whose pixel value is larger than both the pixel value of the left pixel and the pixel value of the right pixel.
- the vertex detection unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detection unit 203.
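The vertex detection just described can be sketched minimally as follows, assuming processing along one vertical column of pixels (all names are illustrative, not from the patent):

```python
def detect_vertices_vertical(column):
    """Return indices of pixels whose value exceeds both the pixel
    above and the pixel below -- the vertex condition described in
    the text, applied along one vertical column."""
    return [i for i in range(1, len(column) - 1)
            if column[i] > column[i - 1] and column[i] > column[i + 1]]

col = [3, 7, 20, 6, 2, 9, 4]
print(detect_vertices_vertical(col))  # -> [2, 5]
```

The horizontal-direction case is identical with rows substituted for columns.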
- In step S203, the monotone increase / decrease detection unit 203 removes the non-stationary component from the input image based on the non-stationary component information supplied from the non-stationary component extraction unit 201, leaving only the pixels of the input image that contain the stationary component. Further, in step S203, the monotone increase / decrease detection unit 203 detects, based on the vertex information indicating the position of the vertex supplied from the vertex detection unit 202, a region consisting of pixels having data continuity by detecting a monotone increase / decrease with respect to the vertex.
- When executing processing based on the vertical direction of the screen, the monotone increase / decrease detection unit 203 detects, based on the pixel value of the vertex and the pixel values of the pixels arranged vertically in one column with respect to the vertex, the monotone increase / decrease of that column of pixels onto which a single thin line image is projected, and thereby detects a region composed of pixels having data continuity. That is, in step S203, when executing the processing with the vertical direction of the screen as a reference, the monotone increase / decrease detection unit 203 obtains, for the vertex and for each of the pixels arranged vertically in one column with respect to the vertex,
- the difference between the pixel value of the pixel and the pixel value of the pixel above or below it, and detects the pixel at which the sign of the difference changes.
- Further, the monotone increase / decrease detection unit 203 compares, for the vertex and the pixels arranged vertically in one column with respect to the vertex, the sign of the pixel value of each pixel with the sign of the pixel value of the pixel above or below it, and detects the pixel at which the sign of the pixel value changes.
- Furthermore, the monotone increase / decrease detection unit 203 compares the pixel value of the vertex and the pixel values of the pixels to the right and left of the vertex with a threshold, and detects a region consisting of pixels in which the pixel value of the vertex exceeds the threshold and the pixel values of the right and left pixels are equal to or smaller than the threshold.
- the monotone increase / decrease detection unit 203 supplies the continuity detection unit 204 with monotone increase / decrease region information indicating the monotone increase / decrease region, using the region thus detected as a monotone increase / decrease region.
- When executing processing based on the horizontal direction of the screen, the monotone increase / decrease detection unit 203 detects, based on the pixel value of the vertex and the pixel values of the pixels arranged horizontally in one row with respect to the vertex, the monotone increase / decrease of that row of pixels onto which a single thin line image is projected, and thereby detects a region composed of pixels having data continuity. That is, in step S203, when executing the processing with the horizontal direction of the screen as a reference, the monotone increase / decrease detection unit 203 obtains, for the vertex and for each of the pixels arranged horizontally in one row with respect to the vertex,
- the difference between the pixel value of the pixel and the pixel value of the pixel to its left or right, and detects the pixel at which the sign of the difference changes.
- Further, the monotone increase / decrease detection unit 203 compares, for the pixels arranged horizontally in one row with respect to the vertex, the sign of the pixel value of each pixel with the sign of the pixel value of the pixel to its left or right, and detects the pixel at which the sign of the pixel value changes. Furthermore, the monotone increase / decrease detection unit 203 compares the pixel value of the vertex and the pixel values of the pixels above and below the vertex with a threshold, and detects a region consisting of pixels in which the pixel value of the vertex exceeds the threshold and the pixel values of the upper and lower pixels are equal to or smaller than the threshold.
- the monotone increase / decrease detection unit 203 supplies the continuity detection unit 204 with monotone increase / decrease region information indicating the monotone increase / decrease region, using the region thus detected as a monotone increase / decrease region.
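The monotone increase/decrease detection with respect to a vertex can be sketched as follows. This simplified version only extends the region while pixel values decrease monotonically away from the vertex, which is where the sign of the neighbouring difference changes; the pixel-value sign check and threshold comparison described above are omitted, and all names are assumptions:

```python
def monotone_region(column, vertex):
    """Extend from `vertex` upward and downward while the pixel values
    decrease monotonically, stopping at the pixel where the sign of the
    difference between neighbouring pixels changes.  Returns the
    (top, bottom) index range of the monotone increase/decrease region."""
    top = vertex
    while top > 0 and column[top - 1] < column[top]:
        top -= 1
    bottom = vertex
    while bottom < len(column) - 1 and column[bottom + 1] < column[bottom]:
        bottom += 1
    return top, bottom

col = [0, 1, 3, 9, 4, 2, 5, 1]
print(monotone_region(col, 3))  # -> (0, 5); growth at index 6 ends the region
```

The horizontal case is the same with a row of pixels substituted for the column.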
- step S204 the monotone increase / decrease detection unit 203 determines whether or not the processing of all pixels has been completed.
- For example, the monotone increase / decrease detection unit 203 determines whether vertices have been detected and monotone increase / decrease regions have been detected for all the pixels of one screen (for example, a frame or a field) of the input image.
- If it is determined in step S204 that the processing of all the pixels has not been completed, that is, that there remain pixels that have not yet been subjected to the vertex detection and monotone increase / decrease region detection processing, the process returns to step S202, a pixel to be processed is selected from the pixels not yet subjected to the vertex detection and monotone increase / decrease region detection processing, and the vertex detection and monotone increase / decrease region detection processing is repeated.
- If it is determined in step S204 that the processing of all pixels has been completed, that is, that a vertex and a monotone increase / decrease region have been detected for all pixels, the process proceeds to step S205, and the continuity detecting unit 204 detects the continuity of the regions based on the monotone increase / decrease region information. For example, when monotone increase / decrease regions indicated by the monotone increase / decrease region information, each composed of pixels arranged in one column in the vertical direction of the screen, include pixels that are horizontally adjacent to each other, the continuity detecting unit 204 determines that there is continuity between the two monotone increase / decrease regions, and when they include no horizontally adjacent pixels, determines that there is no continuity between the two monotone increase / decrease regions.
- For example, when monotone increase / decrease regions indicated by the monotone increase / decrease region information, each composed of pixels arranged in one row in the horizontal direction, include pixels that are vertically adjacent to each other, the continuity detecting unit 204 determines that there is continuity between the two monotone increase / decrease regions, and when they include no vertically adjacent pixels, determines that there is no continuity between the two monotone increase / decrease regions.
- the continuity detecting unit 204 sets the detected continuous area as a steady area having data continuity, and outputs data continuity information indicating the position of the vertex and the steady area.
- the data continuity information includes information indicating the connection between the areas.
- the data continuity information output from the continuity detection unit 204 indicates a thin line region that is a steady region and includes pixels onto which a thin line image of the real world 1 is projected.
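The adjacency test between two monotone increase/decrease regions described above can be sketched as follows, representing each vertically aligned region by its column index and row range. This is an illustrative simplification; the representation is an assumption:

```python
def regions_connected(region_a, region_b):
    """Each region is (column, top_row, bottom_row) of vertically
    aligned pixels.  Two regions are taken to have continuity when
    they sit in horizontally adjacent columns and their row ranges
    overlap, i.e. they contain horizontally adjacent pixels."""
    col_a, top_a, bot_a = region_a
    col_b, top_b, bot_b = region_b
    if abs(col_a - col_b) != 1:
        return False
    return max(top_a, top_b) <= min(bot_a, bot_b)

print(regions_connected((4, 2, 6), (5, 5, 9)))  # rows 5-6 touch -> True
print(regions_connected((4, 2, 6), (6, 5, 9)))  # columns not adjacent -> False
```

For regions composed of horizontal rows, the same test is applied with rows and columns exchanged.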
- step S206 the continuity direction detection unit 205 determines whether or not processing of all pixels has been completed. That is, the continuity direction detecting unit 205 determines whether or not the continuity of the area has been detected for all pixels of a predetermined frame of the input image.
- step S206 If it is determined in step S206 that the processing of all the pixels has not been completed, that is, it is determined that there are still pixels that have not been subjected to the processing for detecting the continuity of the region, the process returns to step S205. Then, the pixel to be processed is selected from the pixels not to be subjected to the processing for detecting the continuity of the area, and the processing for detecting the continuity of the area is repeated. If it is determined in step S206 that the processing of all the pixels has been completed, that is, it is determined that the continuity of the area has been detected for all the pixels, the processing ends. In this way, the continuity contained in the input image data 3 is detected.
- The data continuity detector 101 shown in FIG. 41 can also detect the continuity of data in the time direction, based on the regions of data continuity detected from the frames of data 3.
- For example, the continuity detection unit 204 detects the continuity of data in the time direction by connecting the ends of the region having data continuity detected in frame #n, the region having data continuity detected in frame #n-1, and the region having data continuity detected in frame #n+1, based on the detected continuity of the data.
- Frame #n-1 is the frame temporally preceding frame #n, and frame #n+1 is the frame temporally following frame #n. That is, frame #n-1, frame #n, and frame #n+1 are displayed in the order frame #n-1, frame #n, frame #n+1.
- In the figure, G denotes the motion vector obtained by connecting one end of each of the regions having data continuity detected in frame #n, frame #n-1, and frame #n+1, and G' denotes the motion vector obtained by connecting the other end of each of those regions.
- The motion vector G and the motion vector G' are examples of the continuity of data in the time direction.
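A minimal sketch of deriving such a time-direction motion vector by connecting corresponding region ends across frames #n-1, #n, and #n+1: the endpoint representation and the averaging of the two per-frame displacements are assumptions made for illustration, not the patent's procedure:

```python
def temporal_motion_vector(end_prev, end_cur, end_next):
    """Connect corresponding ends of the stationary regions detected
    in frames #n-1, #n and #n+1; the average per-frame displacement
    approximates the motion vector G described in the text.
    Each region end is an (x, y) position."""
    dx1 = end_cur[0] - end_prev[0]
    dy1 = end_cur[1] - end_prev[1]
    dx2 = end_next[0] - end_cur[0]
    dy2 = end_next[1] - end_cur[1]
    # Average displacement per frame over the three frames.
    return ((dx1 + dx2) / 2.0, (dy1 + dy2) / 2.0)

print(temporal_motion_vector((10, 20), (12, 21), (14, 22)))  # -> (2.0, 1.0)
```

Applying the same computation to the other end of each region yields the second vector G'.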
- the data continuity detecting unit 101 having the configuration shown in FIG. 41 can output information indicating the length of the region having data continuity as data continuity information.
- FIG. 58 is a view showing a configuration of a non-stationary component extraction unit 201 which extracts a non-stationary component by approximating a non-stationary component, which is a part of image data having no stationarity, in a plane.
- The non-stationary component extraction unit 201 shown in FIG. 58 extracts a block composed of a predetermined number of pixels from the input image, approximates the block by a plane so that the error between the block and the value indicated by the plane becomes less than a predetermined threshold, and thereby extracts the non-stationary component.
- the input image is supplied to the block extraction unit 221 and output as it is.
- The block extracting unit 221 extracts a block composed of a predetermined number of pixels from the input image. For example, the block extracting unit 221 extracts a block composed of 7 × 7 pixels and supplies the extracted block to the plane approximating unit 222. For example, the block extracting unit 221 moves the pixel serving as the center of the extracted block in the raster scan order, and sequentially extracts the blocks from the input image.
- The plane approximating unit 222 approximates the pixel values of the pixels included in the block with a predetermined plane. For example, the plane approximating unit 222 approximates the pixel values of the pixels included in the block with the plane represented by equation (24): z = ax + by + c.
- X indicates the position of the pixel in one direction (spatial direction X) on the screen
- y indicates the position of the pixel in the other direction (spatial direction Y) on the screen.
- z indicates an approximate value represented by a plane.
- a indicates the inclination of the plane in the spatial direction X
- b indicates the inclination of the plane in the spatial direction Y.
- c indicates a plane offset (intercept).
- For example, the plane approximating unit 222 obtains the slope a, the slope b, and the offset c by regression processing, and approximates the pixel values of the pixels included in the block with the plane represented by equation (24).
- The plane approximating unit 222 obtains the slope a, the slope b, and the offset c by regression processing with rejection, and approximates the pixel values of the pixels included in the block with the plane represented by equation (24).
- For example, the plane approximating unit 222 finds, using the least squares method, the plane represented by equation (24) that minimizes the error with respect to the pixel values of the pixels of the block, and thereby approximates the pixel values of the pixels included in the block with the plane.
- Note that although the plane approximating unit 222 has been described as approximating the block with the plane represented by equation (24), the approximation is not limited to the plane represented by equation (24); the block may be approximated by a function having a higher degree of freedom, for example, a surface represented by a polynomial of degree n (where n is an arbitrary integer).
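A least-squares fit of the plane of equation (24) to a block can be sketched as follows. This is a simplified version without rejection; `fit_plane` and the use of NumPy's `lstsq` are illustrative choices, not the patent's implementation:

```python
import numpy as np

def fit_plane(block):
    """Least-squares fit of z = a*x + b*y + c (equation (24)) to the
    pixel values of a 2-D block.  Returns the coefficients (a, b, c)."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]           # pixel coordinates of the block
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return coeffs                          # a, b, c

# A block that is exactly planar is recovered exactly:
block = np.fromfunction(lambda y, x: 2 * x + 3 * y + 5, (7, 7))
a, b, c = fit_plane(block)
print(round(a, 6), round(b, 6), round(c, 6))
```

Solving the normal equations directly would give the same result; `lstsq` is used here only for brevity.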
- the repetition determination unit 223 calculates an error between the approximate value indicated by the plane approximating the pixel value of the block and the pixel value of the corresponding pixel of the block.
- Equation (25) expresses the error e i as the difference between the approximate value indicated by the plane approximating the pixel values of the block and the pixel value z i of the corresponding pixel of the block: e i = z i − ẑ i.
- Here, z-hat (a letter z with a caret attached is hereinafter written z-hat; the same notation is used below) indicates the approximate value given by the plane approximating the pixel values of the block.
- a-hat indicates the inclination in the spatial direction X of the plane approximating the pixel values of the block,
- and b-hat indicates the inclination in the spatial direction Y of the plane approximating the pixel values of the block.
- c-hat indicates the offset (intercept) of the plane approximating the pixel values of the block.
- The repetition determination unit 223 rejects the pixel having the largest error e i, expressed by equation (25), between the approximate value and the pixel value of the corresponding pixel of the block. In this way, the pixels onto which the thin line is projected, that is, the pixels having continuity, are rejected.
- The repetition determination unit 223 supplies rejection information indicating the rejected pixel to the plane approximating unit 222.
- Further, the repetition determination unit 223 calculates a standard error, and when the standard error is equal to or more than a predetermined threshold for approximation end determination and half or more of the pixels of the block have not been rejected, the repetition determination unit 223 causes the plane approximating unit 222 to repeat the plane approximation processing on the pixels included in the block, excluding the rejected pixels.
- Since the pixels having continuity are rejected, approximating the pixels excluding the rejected pixels with a plane means that the plane approximates the non-stationary component.
- When the standard error falls below the threshold for approximation end determination, or when half or more of the pixels of the block have been rejected, the repetition determination unit 223 ends the approximation using the plane.
- The standard error es is calculated by, for example, expression (26).
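Expression (26) itself does not survive in this text. For reference, the usual standard-error form for a least-squares plane fit with the three parameters a-hat, b-hat, and c-hat is shown below; treating it as the exact content of expression (26) is an assumption:

```latex
e_s = \sqrt{\frac{\sum_{i}\left(z_i - \hat{z}_i\right)^2}{n - 3}},
\qquad \hat{z}_i = \hat{a}\,x_i + \hat{b}\,y_i + \hat{c}
```

where n is the number of pixels in the block that have not been rejected, and the denominator n − 3 reflects the three estimated parameters of the plane.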
- Note that the repetition determination unit 223 may calculate, instead of the standard error, the sum of the squares of the errors of all the pixels included in the block, and execute the subsequent processing based on that sum.
- When blocks shifted by one pixel in the raster scan direction are extracted, a pixel having continuity, that is, a pixel including a thin line component (indicated by a black circle in the figure), is rejected a plurality of times.
- When the approximation by the plane is finished, the repetition determination unit 223 outputs information indicating the plane approximating the pixel values of the block (the slope and intercept of the plane of equation (24)) as non-stationary component information.
- Note that the repetition determination unit 223 may compare the number of rejections of each pixel with a predetermined threshold, determine that a pixel whose number of rejections is equal to or greater than the threshold is a pixel including a stationary component, and output information indicating the pixels including the stationary component as stationary component information.
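The regression-with-rejection loop described above can be sketched as follows. The termination conditions follow the text (standard error below a threshold, or half of the pixels rejected), while details such as the iteration cap and the use of NumPy are assumptions for illustration:

```python
import numpy as np

def approximate_with_rejection(block, threshold, max_iter=20):
    """Fit a plane to the block, reject the pixel with the largest
    error, and repeat until the standard error falls below `threshold`
    or half of the pixels have been rejected.  Returns the plane
    coefficients (a, b, c) and a per-pixel rejection-count map (each
    pixel is rejected at most once within a single block here; repeated
    rejections arise when overlapping blocks are processed)."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = xs.ravel().astype(float)
    y = ys.ravel().astype(float)
    z = block.ravel().astype(float)
    keep = np.ones(z.size, dtype=bool)
    rejections = np.zeros(z.size, dtype=int)
    for _ in range(max_iter):
        A = np.column_stack([x[keep], y[keep], np.ones(keep.sum())])
        coeffs, *_ = np.linalg.lstsq(A, z[keep], rcond=None)
        err = z - (x * coeffs[0] + y * coeffs[1] + coeffs[2])
        dof = keep.sum() - 3
        std_err = np.sqrt(np.sum(err[keep] ** 2) / dof) if dof > 0 else 0.0
        if std_err < threshold or keep.sum() <= z.size // 2:
            return coeffs, rejections.reshape(h, w)
        worst = np.argmax(np.abs(err) * keep)  # largest error among kept pixels
        keep[worst] = False
        rejections[worst] += 1
    return coeffs, rejections.reshape(h, w)

# Planar background z = x + 2 with a thin bright line down column 3:
block = np.fromfunction(lambda yy, xx: 1.0 * xx + 2.0, (7, 7))
block[:, 3] += 100.0
coeffs, rej = approximate_with_rejection(block, threshold=1.0)
print(rej[:, 3].sum() > 0)  # the thin line pixels get rejected
```

After the seven line pixels are rejected, the fit recovers the background plane exactly, so the plane ends up approximating only the non-stationary component.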
- the vertex detection unit 202 to the continuity direction detection unit 205 execute the respective processes on pixels including the stationary component indicated by the stationary component information.
- FIG. 60 is a diagram illustrating an example of an input image generated by using, as pixel values, the averages of the pixel values of 2 × 2 pixel blocks of an original image containing a thin line.
- FIG. 61 is a diagram showing an image in which a standard error obtained as a result of approximating the image shown in FIG. 60 by a plane without rejection is used as a pixel value.
- a block composed of 5 ⁇ 5 pixels for one pixel of interest is approximated by a plane.
- a white pixel is a pixel having a larger pixel value, that is, a pixel having a larger standard error
- a black pixel is a pixel having a smaller pixel value, that is, a pixel having a smaller standard error.
- FIG. 62 is an image in which the standard error obtained when the image shown in FIG. 60 is approximated by a plane with rejection is used as the pixel value.
- white pixels are pixels having larger pixel values, that is, pixels having a larger standard error
- black pixels are pixels having smaller pixel values, that is, pixels having a smaller standard error. It can be seen that the standard error as a whole is smaller when rejection is performed than when no rejection is performed.
- FIG. 63 is a diagram showing an image in which the number of rejections obtained when the image shown in FIG. 60 is approximated by a plane with rejection is set as the pixel value.
- white pixels are larger pixel values, that is, pixels having more rejections
- black pixels are lower pixel values, that is, pixels having less rejection times.
- FIG. 64 is a diagram illustrating an image in which the inclination in the spatial direction X of the plane approximating the pixel values of the block is set as the pixel value.
- FIG. 65 is a diagram illustrating an image in which the inclination in the spatial direction Y of the plane approximating the pixel values of the block is set as the pixel value.
- FIG. 66 is a diagram illustrating an image including approximate values indicated by a plane approximating pixel values of a block. From the image shown in Fig. 66, it can be seen that the thin line has disappeared.
- FIG. 67 is a diagram showing an image composed of the difference between the image shown in FIG. 60, generated by using the average values of 2 × 2 pixel blocks of the original image as pixel values, and the image composed of the approximate values indicated by the plane shown in FIG. 66. Since the non-stationary component has been removed from the pixel values of the image in FIG. 67, the pixel values include only the values onto which the thin line image is projected.
- FIG. 68 is a flowchart, corresponding to step S201, illustrating the process of extracting the non-stationary component by the non-stationary component extraction unit 201 having the configuration shown in FIG. 58.
- In step S221, the block extraction unit 221 extracts a block consisting of a predetermined number of pixels from the input image, and supplies the extracted block to the plane approximating unit 222.
- For example, the block extraction unit 221 selects one pixel that has not yet been selected from the pixels of the input image, and extracts a block composed of 7 × 7 pixels centered on the selected pixel.
- For example, the block extraction unit 221 can select the pixels in raster scan order.
- In step S222, the plane approximating unit 222 approximates the extracted block with a plane.
- For example, the plane approximating unit 222 approximates the pixel values of the pixels of the extracted block with a plane by regression processing.
- Further, the plane approximating unit 222 approximates with a plane, by regression processing, the pixel values of the pixels of the extracted block excluding the rejected pixels.
- In step S223, the repetition determination unit 223 performs the repetition determination. For example, the repetition determination unit 223 calculates the standard error from the pixel values of the pixels of the block and the approximate values of the approximating plane, and counts the number of rejected pixels, thereby performing the repetition determination.
- step S224 the repetition determination unit 223 determines whether or not the standard error is equal to or larger than the threshold. When it is determined that the standard error is equal to or larger than the threshold, the process proceeds to step S225.
- step S224 the repetition determination unit 223 determines whether or not more than half of the pixels in the block have been rejected, and whether or not the standard error is equal to or greater than a threshold. If it is determined that half or more of the pixels have not been rejected and the standard error is equal to or greater than the threshold, the process may proceed to step S225.
- In step S225, the repetition determination unit 223 calculates, for each pixel of the block, the error between the pixel value of the pixel and the approximate value of the approximating plane, rejects the pixel with the largest error, and notifies the plane approximating unit 222.
- The procedure then returns to step S222, and the approximation process using a plane and the repetition determination process are repeated for the pixels of the block excluding the rejected pixels.
- When a block shifted by one pixel in the raster scan direction is extracted by the process of step S221, as shown in FIG. 59, the pixel including the thin line component (the black circle in the figure) will be rejected multiple times.
- If it is determined in step S224 that the standard error is not equal to or greater than the threshold, the block has been approximated by the plane, and the process proceeds to step S226.
- In step S224, the repetition determination unit 223 may instead determine whether or not half or more of the pixels of the block have been rejected and whether or not the standard error is equal to or greater than the threshold; if half or more of the pixels have been rejected, or if it is determined that the standard error is not equal to or greater than the threshold, the process may proceed to step S226.
- In step S226, the repetition determination unit 223 outputs the slope and intercept of the plane approximating the pixel values of the block as non-stationary component information.
- In step S227, the block extraction unit 221 determines whether or not the processing has been completed for all the pixels of one screen of the input image; if it is determined that there is a pixel that has not yet been processed, the process returns to step S221, a block is extracted for the pixels that have not yet been processed, and the above processing is repeated.
- step S227 If it is determined in step S227 that the processing has been completed for all the pixels of one screen of the input image, the processing ends.
- As described above, the non-stationary component extraction unit 201 having the configuration shown in FIG. 58 can extract the non-stationary component from the input image. Since the non-stationary component extraction unit 201 extracts the non-stationary component of the input image, the vertex detection unit 202 and the monotone increase / decrease detection unit 203 can calculate the difference between the input image and the non-stationary component extracted by the non-stationary component extraction unit 201, and perform their processing on that difference, which contains the stationary component.
- The standard error when rejection is performed, the standard error when rejection is not performed, the number of rejected pixels, the slope of the plane in the spatial direction X (a-hat in equation (24)), the slope of the plane in the spatial direction Y (b-hat in equation (24)), the level when replaced by the plane (c-hat in equation (24)), and the difference between the pixel values of the input image and the approximate values indicated by the plane, all of which are calculated in the approximation process using the plane, can be used as feature values.
- FIG. 69 is a flowchart for explaining the processing of extracting the stationary component by the non-stationary component extraction unit 201 shown in FIG. 58, executed in place of the processing of extracting the non-stationary component corresponding to step S201.
- The processing in steps S241 to S245 is the same as the processing in steps S221 to S225, and a description thereof will be omitted.
- In step S246, the repetition determination unit 223 outputs the difference between the approximate value indicated by the plane and the pixel value of the input image as the stationary component of the input image. That is, the repetition determination unit 223 outputs the difference between the approximate value based on the plane and the true pixel value. Note that the repetition determination unit 223 may instead output, as the stationary component of the input image, the pixel values of the pixels for which the difference between the approximate value indicated by the plane and the pixel value of the input image is equal to or greater than a predetermined threshold.
- The processing in step S247 is the same as the processing in step S227, and a description thereof will be omitted.
- the non-stationary component extraction unit 201 subtracts the approximate value indicated by the plane approximating the pixel value from the pixel value of each pixel of the input image, Non-stationary components can be removed from the input image.
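Removing the non-stationary component by subtracting, at each pixel, the plane fitted to its surrounding block can be sketched as follows. This is a simplified version without rejection and without border handling; the names and block size are illustrative:

```python
import numpy as np

def stationary_component(image, block=7):
    """Subtract, pixel by pixel, the plane fitted to each pixel's
    neighbourhood; what remains is a sketch of the stationary
    component.  Border pixels are left at 0 for brevity."""
    h, w = image.shape
    r = block // 2
    out = np.zeros_like(image, dtype=float)
    ys, xs = np.mgrid[0:block, 0:block]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(block * block)])
    for cy in range(r, h - r):
        for cx in range(r, w - r):
            patch = image[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
            coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
            approx = coeffs[0] * r + coeffs[1] * r + coeffs[2]  # plane value at the centre
            out[cy, cx] = image[cy, cx] - approx
    return out

img = np.fromfunction(lambda y, x: 3.0 * x + 1.0, (11, 11))
res = stationary_component(img)
print(np.allclose(res[3:8, 3:8], 0.0))  # a planar input leaves ~zero residual
```

Only values that depart from the local plane, such as a projected thin line, survive the subtraction.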
- In this case, the vertex detection unit 202 through the continuity detection unit 204 can process only the stationary component of the input image, that is, the values onto which the thin line image is projected, so the processing in the vertex detection unit 202 through the continuity detection unit 204 becomes easier.
- FIG. 70 is a flowchart illustrating another process of extracting the stationary component by the non-stationary component extraction unit 201 shown in FIG. 58, executed in place of the process of extracting the non-stationary component corresponding to step S201. The processing of steps S261 to S265 is the same as the processing of steps S221 to S225, and a description thereof will be omitted.
- In step S266, the repetition determination unit 223 stores the number of rejections for each pixel, returns to step S262, and repeats the processing.
- If it is determined in step S264 that the standard error is not equal to or greater than the threshold, the block has been approximated by the plane, so the process proceeds to step S267, in which it is determined whether or not the processing has been completed for all the pixels of one screen of the input image. If it is determined that there is a pixel that has not yet been processed, the process returns to step S261, a block is extracted for a pixel that has not yet been processed, and the above processing is repeated.
- If it is determined in step S267 that the processing has been completed for all the pixels of one screen of the input image, the process proceeds to step S268, and the repetition determination unit 223 selects one pixel from the pixels that have not yet been selected and determines, for the selected pixel, whether or not the number of rejections is equal to or greater than the threshold. For example, in step S268, the repetition determination unit 223 determines whether or not the number of rejections for the selected pixel is equal to or greater than a threshold stored in advance.
- If it is determined in step S268 that the number of rejections for the selected pixel is equal to or greater than the threshold, the selected pixel includes a stationary component, so in step S269 the repetition determination unit 223 outputs the pixel value of the selected pixel (the pixel value in the input image) as the stationary component of the input image, and the process proceeds to step S270. If it is determined in step S268 that the number of rejections for the selected pixel is not equal to or greater than the threshold, the selected pixel does not include a stationary component, so the processing in step S269 is skipped and the procedure proceeds to step S270. That is, no pixel value is output for a pixel for which it is determined that the number of rejections is not equal to or greater than the threshold. Note that the repetition determination unit 223 may output a pixel value set to 0 for a pixel for which the number of rejections is determined not to be equal to or greater than the threshold.
- In step S270, the repetition determination unit 223 determines whether or not the process of determining whether the number of rejections is equal to or greater than the threshold has been completed for all the pixels of one screen of the input image. If it is determined that the processing has not been completed for all the pixels, there is a pixel that has not yet been processed, so the process returns to step S268, one pixel is selected from the pixels that have not yet been processed, and the above processing is repeated. If it is determined in step S270 that the processing has been completed for all the pixels of one screen of the input image, the processing ends.
- the non-stationary component extraction unit 201 can output the pixel value of the pixel including the stationary component among the pixels of the input image as the stationary component information. That is, the non-stationary component extracting unit 201 can output the pixel value of the pixel including the component of the thin line image among the pixels of the input image.
- FIG. 71 is a flowchart illustrating another process of extracting the stationary component by the non-stationary component extraction unit 201 shown in FIG. 58, executed in place of the process of extracting the non-stationary component corresponding to step S201.
- The processing in steps S281 through S288 is the same as the processing in steps S261 through S268, and a description thereof will be omitted.
- In step S289, the repetition determination unit 223 outputs the difference between the approximate value indicated by the plane and the pixel value of the selected pixel as the stationary component of the input image. That is, the repetition determination unit 223 outputs an image obtained by removing the non-stationary component from the input image as the stationarity information.
- The processing in step S290 is the same as the processing in step S270, and a description thereof is omitted.
- the non-stationary component extraction unit 201 can output an image obtained by removing the non-stationary component from the input image as the stationarity information.
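The iterative plane-fit-and-reject procedure described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the thresholds, and the fixed iteration count are all assumptions, and the plane z = a*x + b*y + c stands in for the plane that approximates the non-stationary component (e.g. the background).

```python
import numpy as np

def extract_stationary_mask(block, error_threshold, rejection_threshold=3, iterations=5):
    """Sketch (assumed parameters): iteratively fit a plane z = a*x + b*y + c to a
    block of pixel values; pixels whose fitting error exceeds error_threshold are
    rejected from the next fit. Pixels rejected at least rejection_threshold
    times are taken to carry the stationary (thin-line) component, mirroring the
    rejection-count test of step S268."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    z = block.ravel().astype(float)

    weights = np.ones(h * w)                # set to 0 once a pixel is rejected
    rejections = np.zeros(h * w, dtype=int)

    for _ in range(iterations):
        # least-squares plane fit over the pixels that are still accepted
        coeffs, *_ = np.linalg.lstsq(A * weights[:, None], z * weights, rcond=None)
        error = np.abs(z - A @ coeffs)
        rejected = error > error_threshold
        rejections += rejected              # count how often each pixel is rejected
        weights = np.where(rejected, 0.0, 1.0)

    return (rejections >= rejection_threshold).reshape(h, w)
```

Applied to a flat background with one bright vertical line, the mask flags only the line pixels, while the fitted plane approximates the non-stationary background.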
- As described above, when the continuity of data is detected, from the discontinuity of the pixel values, in the first image data obtained by projecting the real-world optical signal, in which a part of the continuity of the real-world optical signal is lost, the continuity of the real-world optical signal is estimated based on the detected continuity of the data, a model (function) that approximates the optical signal is generated, and the second image data is generated based on the generated function, a processing result that is more accurate and has higher precision can be obtained for the event in the real world.
- FIG. 72 is a block diagram illustrating another configuration of the data continuity detecting unit 101.
- the same parts as those shown in FIG. 41 are denoted by the same reference numerals, and description thereof will be omitted.
- The data continuity detecting unit 101 shown in FIG. 72 further detects the direction of data continuity based on the pixel values of the pixels arranged in the monotone increase/decrease regions detected by the continuity detecting unit 204.
- the input image is supplied to the non-stationary component extracting unit 201 and the data continuity direction detecting unit 301.
- the vertex detection unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detection unit 203 and the data continuity direction detection unit 301.
- the continuity detector 204 supplies the vertex information and the data continuity information indicating the continuous area (thin line area (monotonous increase / decrease area)) to the data continuity direction detector 301.
- The data continuity direction detecting unit 301 detects the direction of continuity of the steady region, that is, the direction in which the steady region having data continuity continues, based on the input image, the vertex information indicating the vertices supplied from the vertex detecting unit 202, and the pixel values of the pixels belonging to the continuous regions detected by the continuity detecting unit 204.
- the data continuity direction detection unit 301 outputs vertex information, detected continuous regions, and data continuity information indicating the direction of continuity of the steady regions.
- FIG. 73 is a block diagram illustrating a configuration of the data continuity direction detection unit 301. The data continuity direction detection unit 301 includes a pixel value change detection unit 321 and a direction detection unit 322.
- The pixel value change detection unit 321 detects the change in the pixel values in the regions, based on the input image, the vertex information indicating the vertices supplied from the vertex detection unit 202, and the pixel values of the pixels belonging to the continuous regions (thin line regions (monotone increase/decrease regions)) detected by the continuity detection unit 204, and supplies information indicating the change in the pixel values to the direction detection unit 322.
- For each continuous region detected by the continuity detection unit 204, the pixel value change detection unit 321 calculates the difference between the pixel value of the vertex and the pixel value of each pixel belonging to the region.
- The pixel value change detection unit 321 also calculates, for each pixel belonging to another region adjacent to the region, the difference between the pixel value of that pixel and the pixel value of the pixel adjacent to the vertex in the adjacent region.
- The pixel value change detection unit 321 supplies information indicating the difference between the pixel value of the vertex and the pixel value of each pixel belonging to the region, and information indicating the difference calculated for each pixel belonging to the adjacent region, to the direction detection unit 322.
- The direction detecting unit 322 detects the direction of continuity of the steady region, that is, the direction in which the steady region having data continuity continues, based on the information indicating the change in pixel value supplied from the pixel value change detecting unit 321.
- When the decrement from the vertex, indicated by the difference between the pixel value of the vertex belonging to a region and the pixel values of the pixels belonging to that region, coincides with the increment in an adjacent region, indicated by the difference between the pixel values of the pixels belonging to the adjacent region and the pixel value of the pixel adjacent to the vertex, the direction detection unit 322 detects the direction determined from the region and the adjacent region as the direction of continuity of the steady region.
- FIG. 74 is a diagram illustrating an example of an input image containing so-called moiré, which appears when a fine repeating pattern is captured by the sensor 2, which is an image sensor.
- the correct direction of the continuity of the constant region in the data 3 is obtained by utilizing the property of the data 3 on which the fine line image is projected.
- FIG. 76 is a diagram showing pixels of data 3 on which a thin line image is projected.
- the horizontal direction indicates the spatial direction X
- the vertical direction indicates the spatial direction Y.
- the area between the two dotted lines indicates the area where the single thin line image is projected.
- P indicates a vertex.
- FIG. 77 is a diagram illustrating the pixel values of the pixels in three columns in the data 3 onto which the thin line image in FIG. 76 is projected.
- the upper direction of the figure indicates the pixel value
- the upper right direction of the figure indicates the spatial direction Y
- the lower right direction of the figure indicates the spatial direction X.
- P indicates a vertex.
- the waveform represented by the pixel value of the pixel of the data 3 onto which the thin line image is projected has a typical arc shape.
- Since the thin line in the image has almost the same diameter and the same level regardless of the part, the total sum of the pixel values obtained by projecting a thin line image of a certain length is always constant.
- In other words, the sum of the pixel values of the plurality of pixels onto which a thin line image of a certain length is projected is constant.
- The data continuity detector 101 uses the property that the sum of the pixel values onto which a thin line image of a certain length is projected is always constant, to find the correct direction of the continuity of the steady region corresponding to the direction of the thin line.
- the data continuity detecting unit 101 whose configuration is shown in FIG. 72, is configured such that when the pixel value of a pixel belonging to the fine line region decreases, the pixel of the fine line region at a position corresponding to the correct direction of the fine line Using the fact that the pixel value increases in accordance with the decrease, the correct direction of the continuity of the steady region corresponding to the direction of the thin line is obtained.
- For example, the absolute value of the decrement A from the pixel value of the vertex P to the pixel value of the pixel e1 is equal to the absolute value of the increment B from the pixel value of the pixel adjacent to the vertex P to the pixel value of the pixel e2, which belongs to the thin line region adjacent to the thin line region to which the vertex P belongs.
- When two or more thin line regions are continuous with one thin line region, the data continuity direction detection unit 301 detects, as the direction of continuity of the steady region, the thin line region containing the pixels whose pixel values change in response to the change in the pixel values of the one thin line region. That is, when two or more thin line regions are continuous with one thin line region, the data continuity direction detection unit 301 detects, as the direction of continuity, the thin line region containing the pixel values that decrease in response to an increase in the pixel values of the one thin line region, or detects, as the direction of continuity of the steady region, the thin line region containing the pixel values that increase in response to a decrease in the pixel values of the one thin line region.
- FIG. 82 to FIG. 84 are diagrams showing examples of processing results.
- FIG. 82 is a diagram illustrating an example of an input image.
- a thin line image is included in the upper right corner in the figure.
- the thin line image disappears as shown in FIG. 83.
- In FIG. 83, it is determined that a thin line image is included in the upper left part of the figure.
- In step S306, the data continuity direction detection unit 301 executes the process of detecting the direction of data continuity. The details of the process of detecting the direction of data continuity will be described later.
- Since the processing in step S307 is the same as the processing in step S206 described above, the description is omitted.
- In step S331, the pixel value change detection unit 321 of the data continuity direction detection unit 301 determines, based on the data continuity information supplied from the continuity detection unit 204, whether there are two or more monotone increase/decrease regions continuous with the monotone increase/decrease region, which is the thin line region, to which the vertex P belongs. When it is determined that there are two or more monotone increase/decrease regions continuous with the monotone increase/decrease region to which the vertex P belongs, the correct direction of continuity must be detected, so the process proceeds to step S332, and the pixel value change detection unit 321 acquires the pixel values of the monotone increase/decrease regions.
- In step S333, the pixel value change detection unit 321 calculates the change in the pixel value of the pixels in the monotone increase/decrease region to which the vertex P belongs, and supplies the calculated change in pixel value to the direction detection unit 322.
- For example, the pixel value change detection unit 321 calculates, with reference to the pixel value of the vertex P, the decrement in the pixel value of each pixel in the monotone increase/decrease region to which the vertex P belongs.
- In step S334, the pixel value change detection unit 321 calculates the change in the pixel value of the pixels in the monotone increase/decrease regions adjacent to the monotone increase/decrease region to which the vertex P belongs, and supplies the calculated change in pixel value to the direction detection unit 322.
- For example, for the pixels in the monotone increase/decrease regions adjacent to the monotone increase/decrease region to which the vertex P belongs, the pixel value change detection unit 321 calculates the increment in the pixel value with reference to the pixel value of the pixel that belongs to the adjacent monotone increase/decrease region and is adjacent to the vertex P.
- In step S335, the direction detection unit 322 compares the absolute value of the change in the pixel value of the pixels in the monotone increase/decrease region to which the vertex P belongs with the absolute value of the change in the pixel value of the pixels in the monotone increase/decrease regions adjacent to the monotone increase/decrease region to which the vertex P belongs. For example, the direction detection unit 322 compares, for each pixel in the monotone increase/decrease region to which the vertex P belongs, the decrement from the vertex P with the corresponding increment in the adjacent monotone increase/decrease region.
- In step S336, the direction detection unit 322 sets the direction determined from the monotone increase/decrease region to which the vertex P belongs and the adjacent monotone increase/decrease region having the smaller difference in absolute value compared in the processing of step S335 as the direction of data continuity, and the process ends.
- The direction detection unit 322 outputs data continuity information including information indicating the region having data continuity and information indicating the direction of data continuity.
- For example, the direction detection unit 322 sets, as a vector indicating the direction of continuity, a vector whose start point is the vertex P and whose end point is the vertex of the adjacent monotone increase/decrease region having the smaller difference in absolute value.
- If it is determined in step S331 that there are not two or more thin line regions continuous with the thin line region to which the vertex P belongs, it is not necessary to detect the correct direction of data continuity, so the processing from step S332 to step S336 is skipped and the processing ends.
- the data continuity detecting unit 101 having the configuration shown in FIG. 72 can detect an area having data continuity and also detect a direction of data continuity.
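The selection made in steps S331 through S336 can be sketched as follows. This is an illustrative reading of the procedure, not the patent's implementation: the function name, the dictionary layout for the adjacent regions, and the use of a summed mismatch score are all assumptions.

```python
def detect_continuity_direction(vertex_value, region_values, adjacent_regions):
    """Sketch of steps S331-S336 under assumed data layouts: when two or more
    monotone increase/decrease regions continue from the region containing the
    vertex P, choose the adjacent region whose pixel-value increments best
    mirror the decrements inside P's region. All names are illustrative.

    vertex_value     -- pixel value of the vertex P
    region_values    -- pixel values of P's region, ordered away from P
    adjacent_regions -- {name: (value_of_pixel_adjacent_to_P,
                                values ordered away from P)}
    """
    if len(adjacent_regions) < 2:
        return None  # S331: only one candidate, no ambiguity to resolve

    # S333: decrements inside the monotone increase/decrease region of P
    decrements = [vertex_value - v for v in region_values]

    best_name, best_score = None, float("inf")
    for name, (base_value, values) in adjacent_regions.items():
        # S334: increments in the adjacent region, measured from the pixel
        # adjacent to the vertex P
        increments = [v - base_value for v in values]
        # S335: compare the absolute values of decrement and increment pairwise
        n = min(len(decrements), len(increments))
        score = sum(abs(abs(d) - abs(i))
                    for d, i in zip(decrements[:n], increments[:n]))
        if score < best_score:
            best_name, best_score = name, score

    # S336: the best-matching adjacent region fixes the continuity direction
    return best_name
```

The region whose increments match the decrements of P's region (smallest absolute-value difference) is returned as the direction of continuity.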
- the direction of the data continuity is determined based on the value (difference value) obtained by subtracting the approximate value approximated by the fitted plane from the pixel value of the pixel of the input image. It can also be detected.
- FIG. 87 is a block diagram showing the configuration of the data continuity detecting unit 101 that detects the direction of data continuity based on the difference value obtained by subtracting the approximate value approximated by the fitted plane from the pixel value of the pixel of the input image.
- In this case, the data continuity direction detection unit 301 detects the change in the pixel value in the monotone increase/decrease region from the difference value obtained by subtracting the approximate value approximated by the fitted plane from the pixel value of the pixel of the input image. The data continuity direction detecting unit 301 then detects the direction of continuity of the steady region based on the change in the pixel value of the monotone increase/decrease region detected from the difference value.
- As described above, when the discontinuity of the pixel values of a plurality of pixels is detected in the image data, obtained by projecting the real-world optical signal, in which a part of the continuity of the real-world optical signal is missing, the vertex of the change in pixel value is detected, the monotone increase/decrease region in which the pixel value monotonically increases or decreases from the vertex is detected, the detected monotone increase/decrease region that exists at a position where another monotone increase/decrease region exists adjacently on the screen is detected as the steady region having the continuity of the image data, and the direction of continuity of the steady region is detected, the correct direction of continuity of the steady region can be detected.
- Also, when the discontinuity of the pixel values of a plurality of pixels is detected in the image data, obtained by projecting the real-world optical signal, in which a part of the continuity of the real-world optical signal is missing, the vertex of the change in pixel value is detected, the monotone increase/decrease region in which the pixel value monotonically increases or decreases from the vertex is detected, the detected monotone increase/decrease region that exists at a position adjacent on the screen to another monotone increase/decrease region is detected as the steady region having the stationarity of the image data, and the direction of continuity is detected based on the pixel values of a plurality of first pixels arranged in the first monotone increase/decrease region among the detected monotone increase/decrease regions, the correct direction of continuity can be detected.
- FIG. 88 is a block diagram showing the configuration of the real world estimating unit 102.
- The real world estimating unit 102 detects the width of the thin line in the image that is the signal of the real world 1, based on the input image and the data continuity information supplied from the data continuity detecting unit 101, and estimates the level of the thin line (the light intensity of the signal of the real world 1).
- The line width detecting unit 2101 detects the width of the thin line based on the data continuity information, supplied from the data continuity detecting unit 101, indicating the steady region that is the thin line region composed of the pixels onto which the thin line image is projected.
- The line width detecting unit 2101 supplies thin line width information indicating the width of the detected thin line, together with the data continuity information, to the signal level estimating unit 2102.
- The signal level estimating unit 2102 estimates the level of the thin line image, that is, the level of light intensity, of the signal of the real world 1, based on the input image, the thin line width information indicating the width of the thin line supplied from the line width detecting unit 2101, and the data continuity information, and outputs real world estimation information indicating the width of the thin line and the level of the thin line image.
- FIG. 89 and FIG. 90 are diagrams for explaining the process of detecting the width of the thin line in the signal of the real world 1.
- the area surrounded by a thick line indicates one pixel
- the area surrounded by a dotted line indicates a thin line area formed by pixels onto which a thin line image is projected.
- Circles indicate the center of gravity of the thin line area.
- hatched lines indicate images of thin lines incident on the sensor 2. It can be said that the oblique line indicates the area where the fine line image of the real world 1 is projected on the sensor 2.
- S indicates the inclination calculated from the position of the center of gravity of the thin line region
- D indicates the overlap of the thin line region.
- Since the thin line regions are adjacent to each other, the slope S is the distance, in units of pixels, between the centers of gravity of the thin line regions.
- the overlap D of the thin line regions is the number of adjacent pixels in the two thin line regions.
- W indicates the width of the thin line.
- the width W of the thin line is 1.
- the width W is 1/3.
- the line width detecting unit 2101 detects the width of the thin line from the inclination calculated from the position of the center of gravity of the thin line region and the overlap of the thin line region.
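This relation (dividing the overlap by the slope, as also stated later in the text) can be sketched as follows. The function name, the (x, y) centroid layout, and the averaging over region pairs are illustrative assumptions, not the patent's implementation.

```python
def estimate_line_width(centroids, overlaps):
    """Sketch under stated assumptions: each thin-line region is one vertical
    column of pixels, `centroids` holds the (x, y) centers of gravity of
    consecutive regions, and `overlaps` the number of adjacent pixels D between
    consecutive regions. The slope S is taken as the gravity-center offset
    between adjacent columns, and the width is W = D / S."""
    widths = []
    for (x0, y0), (x1, y1), d in zip(centroids, centroids[1:], overlaps):
        s = abs(y1 - y0)          # slope S: gravity-center distance in pixels
        if s > 0:
            widths.append(d / s)  # W = D / S
    # average over all adjacent pairs of regions
    return sum(widths) / len(widths) if widths else None
```

For instance, an overlap of 1 pixel with a gravity-center offset of 3 gives a width of 1/3, consistent with the examples in the text.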
- FIG. 91 is a diagram for explaining a process of estimating the level of the signal of the thin line in the signal of the real world 1.
- a region surrounded by a thick line indicates one pixel
- a region surrounded by a dotted line indicates a thin line region formed of pixels onto which a thin line image is projected.
- E indicates the length, in units of pixels, of the thin line region
- D indicates the overlap of the thin line regions (the number of pixels adjacent to other thin line regions).
- The signal level of the thin line is approximated as being constant within the processing unit (the thin line region), and the level of the image other than the thin line, projected onto the pixel values of the pixels onto which the thin line is projected, is approximated as being equal to the level corresponding to the pixel values of the adjacent pixels.
- the level of the thin line signal is c
- the level of the left part of the portion where the thin line signal is projected in the signal (image) projected in the thin line region is A
- B indicates the level of the right part of the portion where the thin line signal is projected.
- the area on the left side of the thin line in the thin line region is (E-D)/2.
- the area on the right side of the thin line in the thin line region is (E-D)/2.
- The first term on the right side of Equation (27) represents the portion of the pixel value onto which a signal of the same level as the signal projected onto the pixel adjacent on the left is projected.
- In Equation (28), Ai represents the pixel value of the pixel adjacent on the left.
- In Equation (28), αi denotes the proportion of the pixel value of the pixel in the thin line region that is occupied by a signal of the same level as the signal projected onto the pixel adjacent on the left. That is, αi indicates the proportion, contained in the pixel value of the pixel in the thin line region, of the same pixel value as the pixel value of the pixel adjacent on the left.
- i indicates the position of the pixel adjacent to the left side of the thin line region.
- In FIG. 91, the proportion, contained in the pixel value of the pixel in the thin line region, of the same pixel value as the pixel value A1 of the pixel adjacent to the left side of the thin line region is α1. In FIG. 91, the proportion, contained in the pixel value of the pixel in the thin line region, of the same pixel value as the pixel value A2 of the pixel adjacent to the left side of the thin line region is α2.
- The second term on the right side of Equation (27) represents the portion of the pixel value onto which a signal of the same level as the signal projected onto the pixel adjacent on the right is projected.
- In Equation (29), Bj indicates the pixel value of the pixel adjacent on the right.
- j indicates the position of the pixel adjacent to the right side of the thin line area.
- The proportion, contained in the pixel value of the pixel in the thin line region, of the same pixel value as the pixel value B1 of the pixel adjacent to the right side of the thin line region is β1. In FIG. 91, the proportion, contained in the pixel value of the pixel in the thin line region, of the same pixel value as the pixel value B2 of the pixel adjacent to the right side of the thin line region is β2.
- In this manner, the signal level estimating unit 2102 calculates, based on Equations (28) and (29), the pixel values of the image other than the thin line among the pixel values included in the thin line region, and obtains the pixel values of the image of only the thin line among the pixel values included in the thin line region by removing the pixel values of the image other than the thin line from the pixel values of the thin line region based on Equation (27). Then, the signal level estimating unit 2102 obtains the signal level of the thin line from the pixel values of the image of only the thin line and the area of the thin line.
- More specifically, the signal level estimating unit 2102 calculates the signal level of the thin line from the pixel values of the image of only the thin line among the pixel values included in the thin line region and the area of the thin line in the thin line region, that is, the overlap D of the thin line regions.
- The signal level estimating unit 2102 outputs real world estimation information indicating the width of the thin line and the level of the signal of the thin line in the signal of the real world 1.
- In this method, the waveform of the thin line is described geometrically, instead of in units of pixels, so that any resolution can be used.
- The line width detecting unit 2101 detects the width of the thin line based on the data continuity information. For example, the line width detecting unit 2101 estimates the width of the thin line in the signal of the real world 1 by dividing the overlap of the thin line regions by the slope calculated from the positions of the centers of gravity of the thin line regions.
- The signal level estimating unit 2102 estimates the level of the signal of the thin line based on the width of the thin line and the pixel values of the pixels adjacent to the thin line region, outputs real world estimation information indicating the width of the thin line and the level of the signal of the thin line, and the processing ends.
- For example, the signal level estimating unit 2102 calculates the pixel values onto which the image other than the thin line included in the thin line region is projected, obtains the pixel values onto which only the image of the thin line is projected by removing the calculated pixel values of the image other than the thin line from the thin line region, and estimates the level of the thin line in the signal of the real world 1 by calculating the signal level of the thin line from the obtained pixel values and the area of the thin line.
- the real world estimating unit 102 can estimate the width and level of the thin line of the signal of the real world 1.
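The level estimation just described can be sketched as a short worked computation. This follows the approximations stated around Equations (27) through (29), but the function name and argument names are illustrative assumptions, not the patent's notation.

```python
def estimate_line_level(region_sum, E, D, left_level, right_level):
    """Sketch following the approximations around Equations (27)-(29): within
    one thin-line region the line level C is constant, the non-line part of
    each pixel carries the level of the adjacent pixels, the area on each side
    of the line is (E - D) / 2, and the area of the line inside the region is
    taken as the overlap D.

    region_sum  -- sum of the pixel values in the thin-line region
    E           -- length of the region in pixels
    D           -- overlap with the neighbouring thin-line regions
    left_level  -- level A projected onto the pixels left of the line
    right_level -- level B projected onto the pixels right of the line
    """
    side_area = (E - D) / 2.0
    # remove the contribution of the image other than the thin line
    line_only = region_sum - left_level * side_area - right_level * side_area
    # divide by the area of the thin line to get its level C
    return line_only / D
```

For example, a region of length E = 5 with overlap D = 1, side levels A = 10 and B = 20, and a line level of 100 would sum to 100·1 + 10·2 + 20·2 = 160, from which the function recovers the level 100.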
- As described above, the continuity of data is detected in the first image data, obtained by projecting the real-world optical signal, in which a part of the continuity of the real-world optical signal is missing, and the real-world optical signal is estimated based on the detected continuity of the data.
- FIG. 93 is a block diagram illustrating another configuration of the real world estimating unit 102.
- The real world estimating unit 102 shown in FIG. 93 detects the region again based on the input image and the data continuity information supplied from the data continuity detecting unit 101, detects the width of the thin line in the image that is the signal of the real world 1 based on the region thus detected, and estimates the light intensity (level) of the signal of the real world 1.
- That is, in the real world estimating unit 102 having the configuration shown in FIG. 93, the steady region composed of the pixels onto which the thin line image is projected is detected again, the width of the thin line in the image that is the signal of the real world 1 is detected based on the region detected again, and the light intensity of the signal of the real world 1 is estimated.
- The data continuity information, supplied from the data continuity detecting unit 101 and input to the real world estimating unit 102 shown in FIG. 93, includes non-stationary component information indicating the non-stationary component other than the steady component onto which the thin line image of the input image is projected as the data 3, monotone increase/decrease region information indicating the monotone increase/decrease regions in the steady region, and information indicating the steady region.
- the non-stationary component information included in the data continuity information includes a slope and an intercept of a plane approximating a non-stationary component such as a background in the input image.
- The data continuity information input to the real world estimating unit 102 is supplied to the boundary detecting unit 2121.
- The input image input to the real world estimating unit 102 is supplied to the boundary detecting unit 2121 and the signal level estimating unit 2102.
- The boundary detection unit 2121 generates, from the non-stationary component information included in the data continuity information and the input image, an image consisting only of the steady component onto which the thin line image is projected, calculates, based on the generated image, the distribution ratio indicating the ratio at which the thin line image that is the signal of the real world 1 is distributed to the pixels, and detects the thin line region that is the steady region again by calculating a regression line indicating the boundary of the thin line region from the calculated distribution ratio.
- FIG. 94 is a block diagram illustrating a configuration of the boundary detection unit 2121.
- The distribution ratio calculation unit 2131 generates an image consisting only of the steady component onto which the thin line image is projected, from the data continuity information, the non-stationary component information included in the data continuity information, and the input image. More specifically, the distribution ratio calculation unit 2131 detects the adjacent monotone increase/decrease regions in the steady region from the input image based on the monotone increase/decrease region information included in the data continuity information, and generates the image consisting only of the steady component onto which the thin line image is projected by subtracting the approximate values, approximated by the plane indicated by the slope and intercept included in the non-stationary component information, from the pixel values of the pixels belonging to the detected monotone increase/decrease regions.
- The distribution ratio calculation unit 2131 can also generate the image consisting only of the steady component onto which the thin line image is projected by subtracting, from the pixel values of the pixels of the input image, the approximate values approximated by the plane indicated by the slope and intercept included in the non-stationary component information.
- The distribution ratio calculation unit 2131 calculates the distribution ratio indicating the ratio at which the thin line image that is the signal of the real world 1 is distributed to the pixels belonging to the two adjacent monotone increase/decrease regions in the steady region.
- The distribution ratio calculation unit 2131 supplies the calculated distribution ratio to the regression line calculation unit 2132.
- The numerical values in the two columns on the left side of FIG. 95 show, arranged vertically, the pixel values of the pixels in two columns of the image calculated by subtracting the approximate values, approximated by the plane indicated by the slope and intercept included in the non-stationary component information, from the pixel values of the input image.
- The two regions surrounded by squares on the left side of FIG. 95 indicate two adjacent monotone increase/decrease regions, a monotone increase/decrease region 2141-1 and a monotone increase/decrease region 2141-2.
- The numerical values shown in the monotone increase/decrease region 2141-1 and the monotone increase/decrease region 2141-2 indicate the pixel values of the pixels belonging to the monotone increase/decrease regions, which are the steady regions detected by the data continuity detection unit 101.
- The values in the one column on the right side of FIG. 95 indicate the values obtained by adding the pixel values of the horizontally arranged pixels among the pixel values of the two columns on the left side of FIG. 95. That is, each numerical value in the one column on the right side of FIG. 95 is the value obtained by adding, for each pair of two horizontally adjacent pixels each belonging to one of the two adjacent monotone increase/decrease regions composed of one vertical column of pixels, the pixel values onto which the thin line image is projected.
- For example, when the pixel values of horizontally adjacent pixels, each belonging to one of the adjacent monotone increase/decrease regions 2141-1 and 2141-2 composed of one vertical column of pixels, are 2 and 58, the added value is 60.
- When the pixel values of horizontally adjacent pixels, each belonging to one of the adjacent monotone increase/decrease regions 2141-1 and 2141-2, are 1 and 65, the added value is 66.
- In this way, for horizontally adjacent pixels belonging to the two adjacent monotone increase/decrease regions each composed of one vertical column of pixels, the value obtained by adding the pixel values onto which the thin line image is projected is substantially constant.
- the distribution ratio calculation unit 2 1 3 1 uses the property that the value obtained by adding the pixel values of the projected image of the thin line to the adjacent pixels of the two adjacent monotone increasing / decreasing areas becomes almost constant. How the image is distributed to the pixel values of the pixels in one column You.
- the distribution ratio calculation unit 2131 calculates the distribution ratio for each pixel belonging to the two adjacent monotone increase/decrease regions, each composed of one vertical column of pixels, by dividing the pixel value of each such pixel by the value obtained by adding the pixel values onto which the thin line image is projected. However, when the calculated distribution ratio exceeds 100, it is set to 100.
- for example, when the pixel values of horizontally adjacent pixels belonging to the two adjacent monotone increase/decrease regions, each composed of one vertical column of pixels, are 2 and 58 and the added value is 60, distribution ratios of 3.3 and 96.7 are calculated for the respective pixels.
- similarly, when the pixel values of the horizontally adjacent pixels are 1 and 65 and the added value is 66, distribution ratios of 1.5 and 98.5 are calculated.
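The distribution-ratio rule described above can be sketched as follows. This is an illustrative reading of the text, not the patent's implementation; the function name `distribution_ratios` and the per-pixel cap at 100 are assumptions.

```python
# Hypothetical sketch of the distribution-ratio calculation described
# above: each pixel value from the two adjacent monotone increase/
# decrease regions is divided by the sum of the horizontally adjacent
# pair, expressed as a percentage and capped at 100.

def distribution_ratios(left_value, right_value):
    """Return the percentage of the summed thin-line projection carried
    by each of the two horizontally adjacent pixels, capped at 100."""
    total = left_value + right_value
    left = min(100.0, 100.0 * left_value / total)
    right = min(100.0, 100.0 * right_value / total)
    return left, right

# The pairs used in the examples above:
print(distribution_ratios(2, 58))   # roughly (3.3, 96.7)
print(distribution_ratios(1, 65))   # roughly (1.5, 98.5)
```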
- in this way, the pixel values onto which the thin line image is projected are determined for each pair of horizontally adjacent pixels.
- the distribution ratio is calculated based on whichever of the two added values is closer to the pixel value of the vertex P.
- for example, when the pixel value of the vertex P is 81, the pixel value of the pixel of interest belonging to the monotone increase/decrease region of interest is 79, the pixel value of the pixel adjacent on the left is 3, and the pixel value of the pixel adjacent on the right is −1, the value obtained by adding the pixel values of the left-adjacent pixels is 82 and the value obtained by adding the pixel values of the right-adjacent pixels is 78. Accordingly, the added value 82, which is closer to the pixel value 81 of the vertex P, is selected, and the distribution ratio is calculated based on the pixel adjacent on the left side.
- likewise, when the pixel value of the vertex P is 81, the pixel value of the pixel belonging to the monotone increase/decrease region of interest is 75, the pixel value of the pixel adjacent on the left is 0, and the pixel value of the pixel adjacent on the right is 3, the value obtained by adding the pixel values of the left-adjacent pixels is 75 and the value obtained by adding the pixel values of the right-adjacent pixels is 78. Accordingly, the added value 78, which is closer to the pixel value 81 of the vertex P, is selected, and the distribution ratio is calculated based on the pixel adjacent on the right side.
- in this manner, the distribution ratio calculation unit 2131 calculates the distribution ratio for each monotone increase/decrease region composed of one vertical column of pixels.
- the distribution ratio calculation unit 2131 calculates the distribution ratio for each monotone increase/decrease region composed of one horizontal row of pixels in the same manner.
- the regression line calculation unit 2132 detects the monotone increase/decrease regions in the stationary region again by assuming that the boundaries of the monotone increase/decrease regions are straight lines and calculating the regression lines indicating those boundaries based on the distribution ratio calculated by the distribution ratio calculation unit 2131. With reference to FIG. 98 and FIG. 99, the process of calculating the regression lines indicating the boundaries of the monotone increase/decrease regions in the regression line calculation unit 2132 will be described.
- the regression line calculation unit 2132 calculates, by regression processing, a regression line for the upper boundary of the monotone increase/decrease regions 2141-1 to 2141-5.
- for example, the regression line calculation unit 2132 calculates the straight line A that minimizes the sum of the squares of the distances from the pixels located on the upper boundary of the monotone increase/decrease regions 2141-1 to 2141-5. In FIG. 98, the black circles indicate the pixels located on the upper boundary of the monotone increase/decrease regions 2141-1 to 2141-5.
- similarly, the regression line calculation unit 2132 calculates, by regression processing, a regression line for the lower boundary of the monotone increase/decrease regions 2141-1 to 2141-5. For example, the regression line calculation unit 2132 calculates the straight line B that minimizes the sum of the squares of the distances from the pixels located on the lower boundary of the monotone increase/decrease regions 2141-1 to 2141-5.
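The regression step above can be sketched as a plain least-squares line fit. The boundary-pixel coordinates below are hypothetical stand-ins; in the actual processing they come from the boundaries of the regions 2141-1 to 2141-5 in the image.

```python
# Illustrative least-squares fit of a boundary line such as the straight
# line A above: minimize the sum of squared vertical distances from the
# (x, y) positions of the pixels on the boundary.

def regression_line(points):
    """Return (a, b) of the line y = a*x + b fitted by least squares."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical upper-boundary pixel positions:
upper_boundary = [(0, 10), (1, 9), (2, 9), (3, 8), (4, 7)]
a, b = regression_line(upper_boundary)
print(a, b)
```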
- the regression line calculation unit 2132 detects the monotone increase/decrease regions in the stationary region again by determining the boundaries of the monotone increase/decrease regions based on the calculated regression lines.
- for example, the regression line calculation unit 2132 determines the upper boundary of each of the monotone increase/decrease regions 2141-1 to 2141-5 based on the calculated straight line A.
- that is, the regression line calculation unit 2132 determines the upper boundary of each of the monotone increase/decrease regions 2141-1 to 2141-5 so that the pixels closest to the calculated straight line A are included in the respective regions.
- similarly, the regression line calculation unit 2132 determines the lower boundary of each of the monotone increase/decrease regions 2141-1 to 2141-5 based on the calculated straight line B. That is, the regression line calculation unit 2132 determines the lower boundary of each of the monotone increase/decrease regions 2141-1 to 2141-5 so that the pixels closest to the calculated straight line B are included in the respective regions.
- in this way, the regression line calculation unit 2132 detects again, based on the regression lines that regress the boundaries of the stationary region detected by the data continuity detection unit 101, the regions in which the pixel value monotonously increases or decreases from the vertex. That is, the regression line calculation unit 2132 determines the boundaries of the monotone increase/decrease regions based on the calculated regression lines, thereby detecting again the regions that are the monotone increase/decrease regions in the stationary region, and supplies region information indicating the detected regions to the line width detection unit 2101.
- as described above, the boundary detection unit 2121 calculates the distribution ratio indicating the rate at which the thin line image, which is a signal of the real world 1, is projected onto each pixel, and detects the monotone increase/decrease regions in the stationary region again by calculating, from the calculated distribution ratio, the regression lines indicating the boundaries of the monotone increase/decrease regions.
- the line width detection unit 2101 shown in FIG. 93 detects the width of the thin line by the same processing as in the case shown in FIG. 88, based on the region information indicating the regions detected again, supplied from the boundary detection unit 2121.
- the line width detection unit 2101 supplies thin line width information indicating the width of the detected thin line to the signal level estimation unit 2102, together with the data continuity information.
- the processing of the signal level estimation unit 2102 shown in FIG. 93 is the same as the processing shown in FIG. 88, and a description thereof is therefore omitted.
- FIG. 100 is a flowchart, corresponding to the process of step S102, illustrating the process of estimating the real world by the real world estimating unit 102 having the configuration shown in FIG. 93.
- in step S2121, the boundary detection unit 2121 executes boundary detection processing of detecting the region again based on the pixel values of the pixels belonging to the stationary region detected by the data continuity detection unit 101. The details of the boundary detection processing will be described later.
- the processing in step S2122 and step S2123 is the same as the processing in step S2101 and step S2102, respectively, and a description thereof will be omitted.
- FIG. 101 is a flowchart for explaining the boundary detection processing corresponding to the processing in step S2121.
- in step S2131, the distribution ratio calculation unit 2131 calculates the distribution ratio indicating the rate at which the thin line image is projected, based on the data continuity information indicating the monotone increase/decrease regions and the input image.
- for example, the distribution ratio calculation unit 2131 detects adjacent monotone increase/decrease regions in the stationary region from the input image based on the monotone increase/decrease region information included in the data continuity information, and generates an image consisting of only the stationary component onto which the thin line image is projected, by subtracting, from the pixel values of the pixels belonging to the monotone increase/decrease regions, the approximate values approximated by the plane indicated by the slope and intercept included in the stationary component information.
- the distribution ratio calculation unit 2131 then calculates the distribution ratio for each pixel belonging to the two adjacent monotone increase/decrease regions, each composed of one vertical column of pixels, by dividing the pixel value of each such pixel by the value obtained by adding the pixel values of the horizontally adjacent pixels.
- the distribution ratio calculation unit 2131 supplies the calculated distribution ratio to the regression line calculation unit 2132.
- in step S2132, the regression line calculation unit 2132 detects the regions in the stationary region again by calculating the regression lines indicating the boundaries of the monotone increase/decrease regions based on the distribution ratio indicating the rate at which the thin line image is projected. That is, the regression line calculation unit 2132 assumes that the boundaries of the monotone increase/decrease regions are straight lines, and detects the monotone increase/decrease regions in the stationary region again by calculating the regression line indicating the boundary of one end of the monotone increase/decrease regions and calculating the regression line indicating the boundary of the other end of the monotone increase/decrease regions.
- the regression line calculation unit 2132 supplies the region information indicating the detected regions in the stationary region to the line width detection unit 2101, and the process ends.
- as described above, the real world estimating unit 102 shown in FIG. 93 detects again the regions composed of the pixels onto which the thin line image is projected, detects the width of the thin line in the image of the signal of the real world 1 based on the regions detected again, and estimates the light intensity (level) of the signal of the real world 1. By doing so, it is possible to detect the width of the thin line more accurately and to estimate the light intensity of the signal of the real world 1 more accurately.
- as described above, in the first image data obtained by projecting the real-world optical signal, in which a part of the continuity of the real-world optical signal is missing, the discontinuity of the pixel values of a plurality of pixels is detected; a stationary region having data continuity is detected from the detected discontinuity; the region is detected again based on the pixel values of the pixels belonging to the detected stationary region; and the real world is estimated based on the region detected again. In this case, it is possible to obtain more accurate processing results for the events of the real world.
- next, a description will be given of the real world estimating unit 102 that outputs, as real world estimation information, the differential value of the approximation function in the spatial direction for each pixel in a region having stationarity.
- the reference pixel extraction unit 2201 determines, based on the data continuity information (the angle of the stationarity or the region information) input from the data continuity detection unit 101, whether or not each pixel of the input image belongs to the processing region. If it belongs to the processing region, the reference pixel extraction unit 2201 extracts from the input image the information on the reference pixels necessary to obtain the approximation function that approximates the pixel values of the pixels of the input image (the positions and pixel values of a plurality of pixels around the pixel of interest required for the calculation), and outputs it to the approximate function estimating unit 2202.
- the approximate function estimating unit 2202 estimates, by the least squares method, the approximation function that approximately describes the pixel values of the pixels around the pixel of interest based on the reference pixel information input from the reference pixel extraction unit 2201, and outputs the estimated approximation function to the differential processing unit 2203.
- the differential processing unit 2203 calculates, based on the approximation function input from the approximate function estimating unit 2202 and the angle of the data continuity information (for example, the angle of a thin line or a binary edge with respect to a predetermined axis), the shift amount of the position of the pixel to be generated from the pixel of interest, calculates the differential value at the position on the approximation function corresponding to the shift amount (the differential value of the function approximating the pixel value of each pixel corresponding to the distance along the one-dimensional direction from the line corresponding to the stationarity), adds information on the position of the pixel of interest, its pixel value, and the inclination of the direction of the stationarity, and outputs this to the image generation unit 103 as real world estimation information.
- in step S2201, the reference pixel extraction unit 2201 acquires the angle as the data continuity information and the region information from the data continuity detection unit 101, together with the input image.
- in step S2202, the reference pixel extraction unit 2201 sets a pixel of interest from unprocessed pixels of the input image.
- in step S2203, the reference pixel extraction unit 2201 determines whether or not the pixel of interest belongs to the processing region based on the information on the region of the data continuity information. If it is determined that the pixel of interest does not belong to the processing region, the process proceeds to step S2210; the differential processing unit 2203 is notified, via the approximate function estimating unit 2202, that the pixel of interest is outside the processing region; in response, the differential processing unit 2203 sets the differential value of the corresponding pixel of interest to 0, further adds the pixel value of the pixel of interest, outputs this to the image generation unit 103 as real world estimation information, and the process proceeds to step S2211. If it is determined that the pixel of interest belongs to the processing region, the process proceeds to step S2204.
- in step S2204, the reference pixel extraction unit 2201 determines, from the angle information included in the data continuity information, whether the direction having data continuity is an angle close to the horizontal direction or an angle close to the vertical direction. That is, when the angle θ having data continuity satisfies 0 degrees ≤ θ < 45 degrees or 135 degrees < θ ≤ 180 degrees, the direction of the continuity of the pixel of interest is determined to be close to the horizontal direction; when the angle θ having data continuity satisfies 45 degrees ≤ θ ≤ 135 degrees, the direction of the continuity of the pixel of interest is determined to be close to the vertical direction.
- the reference pixel extraction unit 2201 extracts the position information and the pixel values of the reference pixels corresponding to the determined direction from the input image, and outputs them to the approximate function estimating unit 2202. That is, since the reference pixels are the data used when calculating the approximation function described later, it is desirable that they be extracted according to the slope. Accordingly, reference pixels in a range long in either the horizontal direction or the vertical direction are extracted, corresponding to the determined direction. More specifically, for example, as shown in FIG. 104, when the gradient Gf is close to the vertical direction, the direction is determined to be the vertical direction.
- in this case, as shown in FIG. 104, the reference pixel extraction unit 2201 extracts, as reference pixels, pixels in a range long in the vertical direction, two pixels each in the vertical (up and down) direction and one pixel each in the horizontal (left and right) direction around the pixel of interest, for a total of 15 pixels.
- conversely, when the direction is determined to be the horizontal direction, pixels in a range long in the horizontal direction, one pixel each in the vertical (up and down) direction and two pixels each in the horizontal (left and right) direction around the pixel of interest, for a total of 15 pixels, are extracted as reference pixels and output to the approximate function estimating unit 2202.
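As a minimal sketch of this selection rule, the 15 reference-pixel offsets can be enumerated as follows; the angle ranges follow the determination in step S2204, and the function name is an assumption.

```python
# Hypothetical sketch: (dx, dy) offsets of the 15 reference pixels
# around the pixel of interest, long in the vertical direction when the
# continuity angle is close to vertical, long in the horizontal
# direction otherwise.

def reference_offsets(angle_deg):
    """Return 15 offsets: a 3x5 block for near-vertical continuity
    (45 <= angle <= 135 degrees), a 5x3 block otherwise."""
    if 45 <= angle_deg <= 135:          # close to the vertical direction
        return [(dx, dy) for dy in range(-2, 3) for dx in range(-1, 2)]
    return [(dx, dy) for dy in range(-1, 2) for dx in range(-2, 3)]

print(len(reference_offsets(90)))   # 15
print(len(reference_offsets(10)))   # 15
```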
- the number of reference pixels is not limited to 15 pixels as described above, but may be any other number.
- in step S2206, the approximate function estimating unit 2202 estimates the approximation function f(x) by the least squares method based on the information of the reference pixels input from the reference pixel extraction unit 2201, and outputs it to the differential processing unit 2203.
- the approximation function f (x) is a polynomial as shown in the following equation (30).
- f(x) = w0 x^n + w1 x^(n-1) + ... + w(n-1) x + wn (30)
- that is, by obtaining each of the coefficients w0 through wn of the polynomial in equation (30), the approximation function f(x) that approximates the pixel value (reference pixel value) of each reference pixel is obtained. However, more reference pixel values than the number of coefficients are required; therefore, for example, in the case where the reference pixels are as shown in FIG. 104, the approximate function estimating unit 2202 estimates the coefficients by solving the following equation (31) using the least squares method.
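A sketch of this least-squares estimation, using NumPy's polynomial fit in place of solving equation (31) directly; the reference-pixel positions and values below are hypothetical.

```python
import numpy as np

# Hypothetical sketch: estimate the coefficients w0..wn of the
# polynomial approximation function f(x) of equation (30) by least
# squares from reference-pixel positions and values.

positions = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # x of reference pixels
values = np.array([20.0, 35.0, 80.0, 40.0, 25.0])  # their pixel values

coeffs = np.polyfit(positions, values, deg=4)      # w0..w4, highest first
f = np.poly1d(coeffs)

# With 5 samples and 5 coefficients the fit interpolates the data; with
# more reference pixels than coefficients (as the text requires) it
# becomes a genuine least-squares fit.
print(f(0.0))
```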
- in step S2207, the differential processing unit 2203 obtains the shift amount of the position of the pixel to be generated based on the approximation function f(x) input from the approximate function estimating unit 2202.
- first, the differential value at the center Pin (Xin, Yin) of the pixel of interest is obtained. Since the shift amount at the center (Xin, Yin) of the pixel of interest is Cx(0), it is substantially zero. In FIG. 105, the pixel Pin is a square whose approximate center of gravity is (Xin, Yin), and the pixels Pa and Pb are rectangles, long in the horizontal direction in the figure, whose approximate centers of gravity are (Xin, Yin + 0.25) and (Xin, Yin − 0.25), respectively.
- in step S2208, the differential processing unit 2203 differentiates the approximation function f(x) to obtain the first-order differential function f(x)', obtains the differential value at the position corresponding to the shift amount, and outputs this to the image generation unit 103 as real world estimation information. In this case, the differential processing unit 2203 obtains the differential value f(Xin)', adds the position (in this case, the pixel of interest (Xin, Yin)), its pixel value, and the information on the inclination of the direction of the stationarity, and outputs them.
- in step S2209, the differential processing unit 2203 determines whether or not the differential values necessary to generate pixels of the required density have been obtained. For example, in this case, only the differential value for double density has been obtained (only the differential value for double density in the spatial direction Y has been obtained), so it is determined that the differential values necessary to generate pixels of the required density have not been obtained, and the process returns to step S2207.
- in step S2207, the differential processing unit 2203 again obtains the shift amounts of the positions of the pixels to be generated based on the approximation function f(x) input from the approximate function estimating unit 2202. That is, in this case, the differential processing unit 2203 obtains the differential values required to further divide each of the two divided pixels Pa and Pb into two. Since the positions of the pixels Pa and Pb are the positions indicated by the black circles in FIG. 105, the differential processing unit 2203 obtains the shift amount corresponding to each position. The shift amounts of the pixels Pa and Pb are Cx(0.25) and Cx(−0.25), respectively.
- in step S2208, the differential processing unit 2203 differentiates the approximation function f(x) to the first order, obtains the differential value at the position corresponding to the shift amount of each of the pixels Pa and Pb, and outputs this to the image generation unit 103 as real world estimation information.
- that is, the differential processing unit 2203 obtains, for the obtained approximation function f(x), the differential values f(Xin − Cx(0.25))' and f(Xin − Cx(−0.25))' at the positions (Xin − Cx(0.25)) and (Xin − Cx(−0.25)) shifted by the shift amounts Cx(0.25) and Cx(−0.25), adds the position information corresponding to each differential value together with the pixel value information, and outputs this as real world estimation information.
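A sketch of steps S2207 and S2208 under stated assumptions: the approximation function below is a hypothetical cubic, and the shift amount is taken as Cx(dy) = dy/Gf, following the form of the shift amounts used elsewhere in this description.

```python
import numpy as np

# Hypothetical sketch: differentiate the estimated approximation
# function f(x) and evaluate the derivative at the positions shifted by
# Cx(0.25) and Cx(-0.25) from the pixel of interest.

f = np.poly1d([1.0, -2.0, 0.5, 80.0])  # stand-in approximation function
fprime = f.deriv()                      # first-order derivative f'(x)

Gf = 4.0                                # hypothetical continuity gradient
Xin = 0.0                               # x of the pixel of interest
for dy in (0.25, -0.25):
    shift = dy / Gf                     # assumed form Cx(dy) = dy / Gf
    print(fprime(Xin - shift))          # differential value used for the
                                        # higher-density pixel
```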
- in step S2209, the differential processing unit 2203 again determines whether or not the differential values necessary to generate pixels of the required density have been obtained. For example, in this case, the differential values for quadruple density have been obtained, so it is determined that the differential values necessary to generate pixels of the required density have been obtained, and the process proceeds to step S2211.
- in step S2211, the reference pixel extraction unit 2201 determines whether or not all pixels have been processed. If it is determined that not all pixels have been processed, the process returns to step S2202. If it is determined in step S2211 that all pixels have been processed, the processing ends.
- the pixels P01, P02, P03, and P04 are squares whose centers of gravity are the four marked positions in the figure. Since the length of each side of the pixel Pin is 1, the length of each side of the pixels P01, P02, P03, and P04 is approximately 0.5.
- in this manner, the differential values necessary to generate the pixels are output as real world estimation information; each differential value is equivalent to the slope of the approximation function f(x) at the corresponding position.
- the reference pixel extraction unit 2211 determines, based on the data continuity information (the angle of the stationarity or the region information) input from the data continuity detection unit 101, whether or not each pixel of the input image belongs to the processing region. If it belongs to the processing region, the reference pixel extraction unit 2211 extracts from the input image the information on the reference pixels necessary to calculate the inclination (the positions and pixel values of a plurality of pixels arranged in the vertical direction including the pixel of interest, or of a plurality of pixels arranged in the horizontal direction including the pixel of interest, required for the calculation), and outputs it to the inclination estimating unit 2212.
- the inclination estimating unit 2212 generates, based on the information of the reference pixels input from the reference pixel extraction unit 2211, the information on the inclination at the pixel position necessary for pixel generation, and outputs it to the image generation unit 103. More specifically, the inclination estimating unit 2212 obtains, using the difference information of the pixel values between pixels, the inclination at the position of the pixel of interest on the approximation function f(x) that approximately expresses the real world, adds to this the position information of the pixel of interest, its pixel value, and the information on the inclination of the direction of the stationarity, and outputs the result as real world estimation information.
- in step S2221, the reference pixel extraction unit 2211 acquires the angle as the data continuity information and the region information from the data continuity detection unit 101, together with the input image.
- in step S2222, the reference pixel extraction unit 2211 sets a pixel of interest from unprocessed pixels of the input image.
- in step S2223, the reference pixel extraction unit 2211 determines whether or not the pixel of interest belongs to the processing region based on the information on the region of the data continuity information. If it is determined that the pixel of interest does not belong to the processing region, the process proceeds to step S2228, where the inclination estimating unit 2212 is informed that the pixel of interest is outside the processing region; in response, the inclination estimating unit 2212 sets the inclination of the corresponding pixel of interest to 0, adds the pixel value of the pixel of interest, outputs this to the image generation unit 103 as real world estimation information, and the process proceeds to step S2229. If it is determined that the pixel of interest belongs to the processing region, the process proceeds to step S2224.
- in step S2224, the reference pixel extraction unit 2211 determines, based on the angle information included in the data continuity information, whether the direction having data continuity is an angle close to the horizontal direction or an angle close to the vertical direction. That is, when the angle θ having data continuity satisfies 0 degrees ≤ θ < 45 degrees or 135 degrees < θ ≤ 180 degrees, the direction of the continuity of the pixel of interest is determined to be close to the horizontal direction; when the angle θ having data continuity satisfies 45 degrees ≤ θ ≤ 135 degrees, the direction of the continuity of the pixel of interest is determined to be close to the vertical direction.
- in step S2225, the reference pixel extraction unit 2211 extracts the position information and the pixel values of the reference pixels corresponding to the determined direction from the input image, and outputs them to the inclination estimating unit 2212. That is, since the reference pixels are the data used when calculating the inclination described later, it is desirable that they be extracted according to the gradient indicating the direction of the stationarity. Accordingly, reference pixels in a range long in the determined direction, either the horizontal direction or the vertical direction, are extracted. More specifically, for example, when it is determined that the direction is close to the vertical direction, the reference pixel extraction unit 2211 extracts, as reference pixels, pixels in a range long in the vertical direction, two pixels each in the vertical (up and down) direction around the pixel of interest, for a total of five pixels.
- conversely, when it is determined that the direction is close to the horizontal direction, pixels in a range long in the horizontal direction, two pixels each in the horizontal (left and right) direction around the pixel of interest, for a total of five pixels, are extracted as reference pixels and output to the inclination estimating unit 2212.
- the number of reference pixels is not limited to five as described above, and may be any other number.
- in step S2226, the inclination estimating unit 2212 calculates the shift amount of each pixel value based on the information of the reference pixels input from the reference pixel extraction unit 2211 and the gradient Gf in the direction of the stationarity. That is, the inclination estimating unit 2212 obtains the shift amounts Cx(−2) through Cx(2).
- for example, the shift amount of the reference pixel (0, −1) is Cx(−1) = −1/Gf, and the shift amount of the reference pixel (0, −2) is Cx(−2) = −2/Gf.
- in step S2227, the inclination estimating unit 2212 calculates (estimates) the inclination on the approximation function f(x) at the position of the pixel of interest. For example, as shown in FIG. 109, when the direction of the continuity of the pixel of interest is at an angle close to the vertical direction, the pixel values differ greatly between pixels adjacent in the horizontal direction, while the change between pixels in the vertical direction is small and the changes are similar. Therefore, the inclination estimating unit 2212 captures the change between pixels in the vertical direction as a change in the spatial direction X by means of the shift amount, thereby replacing the difference between pixels in the vertical direction with the difference between pixels in the horizontal direction, and obtains the inclination on the approximation function f(x) at the position of the pixel of interest.
- that is, the pixel value P, the shift amount Cx, and the slope Kx (the slope on the approximation function f(x)) are assumed to satisfy the relationship of the following equation (32): P = Kx × Cx (32).
- then, the inclination estimating unit 2212 obtains the slope Kx by the one-variable least squares method with respect to the variable Kx (slope).
- that is, the inclination estimating unit 2212 obtains the slope of the pixel of interest by solving the normal equation shown in the following equation (33), adds the pixel value of the pixel of interest and the information on the inclination of the direction of the stationarity, and outputs this to the image generation unit 103 as real world estimation information.
- Kx = (Σ(Cx_i × P_i)) / (Σ(Cx_i × Cx_i)) (33)
- here, i is a number identifying the pair of the pixel value P and the shift amount Cx of the above-described reference pixels, and takes the values 1 through m, where m is the number of reference pixels including the pixel of interest.
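The one-variable least squares of equation (33) has a simple closed form; the sketch below illustrates it with hypothetical shift amounts and pixel-value differences, not values from the patent.

```python
# Hypothetical sketch of the one-variable least squares of equation
# (33): the slope Kx minimizing sum_i (P_i - Kx * Cx_i)^2 over the
# reference pixels, giving Kx = sum(Cx_i * P_i) / sum(Cx_i^2).

def estimate_slope(shifts, values):
    """Closed-form least-squares slope Kx = sum(C*P) / sum(C*C)."""
    num = sum(c * p for c, p in zip(shifts, values))
    den = sum(c * c for c in shifts)
    return num / den

Gf = 2.0                                    # hypothetical gradient
shifts = [k / Gf for k in (-2, -1, 1, 2)]   # Cx(-2)..Cx(2), omitting Cx(0)
values = [-2.1, -0.9, 1.1, 1.9]             # P_i relative to the pixel of interest
print(estimate_slope(shifts, values))       # 2.0
```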
- in step S2229, the reference pixel extraction unit 2211 determines whether or not all pixels have been processed. If it is determined that not all pixels have been processed, the process returns to step S2222. If it is determined in step S2229 that all pixels have been processed, the processing ends.
- the inclination output as the real world estimation information by the above-described processing is used when the pixel values to be finally obtained are calculated by extrapolation or interpolation. In the above example, the inclination for calculating double-density pixels has been described as an example; however, when calculating pixels of a still higher density, the inclinations at the many positions necessary for calculating the pixel values may be obtained.
- in the above, the example of obtaining double-density pixel values has been described; however, since the approximation function f(x) is a continuous function, the necessary inclination can be obtained also for the pixel values of pixels at positions other than double density.
- next, a description will be given of the real world estimating unit 102 that outputs, as real world estimation information, the differential value on the approximation function in the frame direction (time direction) for each pixel in a region having stationarity.
- the reference pixel extraction unit 2231 determines, based on the data continuity information (the motion as the stationarity (motion vector) and the region information) input from the data continuity detection unit 101, whether or not each pixel of the input image belongs to the processing region. If it belongs to the processing region, the reference pixel extraction unit 2231 extracts from the input image the information on the reference pixels necessary to obtain the approximation function that approximates the pixel values of the pixels of the input image (the positions and pixel values of a plurality of pixels around the pixel of interest required for the calculation), and outputs it to the approximate function estimating unit 2232.
- The approximation function estimating unit 2232 estimates, by the least squares method, an approximation function that approximately describes the pixel values of the pixels around the pixel of interest, based on the frame-direction reference pixel information input from the reference pixel extraction unit 2231, and outputs the estimated function to the differential processing unit 2233.
- The differential processing unit 2233 obtains, according to the motion of the data continuity information, the frame-direction shift amount of the position of the pixel to be generated from the pixel of interest, computes the differential value at the position on the frame-direction approximation function corresponding to that shift amount (the differential value of the function approximating the pixel values at the position corresponding to the distance along the one-dimensional direction from the line corresponding to the continuity), further adds the position and pixel value of the pixel of interest and the information on the motion of the continuity, and outputs the result to the image generation unit 103 as real world estimation information.
- In step S2241, the reference pixel extraction unit 2231 acquires the motion as the data continuity information and the region information from the data continuity detecting unit 101 together with the input image.
- In step S2242, the reference pixel extraction unit 2231 sets a pixel of interest from among the unprocessed pixels of the input image.
- In step S2243, the reference pixel extraction unit 2231 determines whether or not the pixel of interest belongs to a processing region based on the region information of the data continuity information. If it is determined that the pixel of interest does not belong to a processing region, the process proceeds to step S2250, where the differential processing unit 2233 is notified, via the approximation function estimating unit 2232, that the pixel of interest is outside the processing region. In response, the differential processing unit 2233 sets the differential value of that pixel of interest to 0, adds the pixel value of the pixel of interest, outputs the result to the image generation unit 103 as real world estimation information, and the process proceeds to step S2251. If it is determined that the pixel of interest belongs to a processing region, the process proceeds to step S2244.
- In step S2244, the reference pixel extraction unit 2231 determines, based on the motion information included in the data continuity information, whether the direction having data continuity corresponds to a motion close to the frame direction or a motion close to the spatial direction.
- That is, in a plane composed of the frame direction T and the spatial direction Y, let θv be the angle indicating the time-and-space in-plane direction with the frame direction as the reference axis. If the angle θv of the data continuity satisfies 0° ≤ θv < 45° or 135° ≤ θv < 180°, the reference pixel extraction unit 2231 determines that the motion of the continuity of the pixel of interest is close to the frame direction (time direction); if 45° ≤ θv < 135°, it determines that the continuity direction of the pixel of interest is close to the spatial direction.
- In step S2245, the reference pixel extraction unit 2231 extracts from the input image the position information and pixel values of the reference pixels corresponding to the determined direction, and outputs them to the approximation function estimating unit 2232. That is, since the reference pixels are the data used when calculating the approximation function described later, it is desirable that they be extracted according to the angle. Therefore, corresponding to whichever of the frame direction or the spatial direction was determined, reference pixels over a long range in that direction are extracted. More specifically, when the motion Vf is close to the spatial direction as shown in FIG. 113, it is determined that the direction is the spatial direction; in that case, the reference pixel extraction unit 2231, with the central pixel (t, y) = (0, 0) of FIG. 114 as the pixel of interest, extracts the pixel values of the pixels (t, y) = (−1, 2), (−1, 1), (−1, 0), (−1, −1), (−1, −2), (0, 2), (0, 1), (0, 0), (0, −1), (0, −2), (1, 2), (1, 1), (1, 0), (1, −1), (1, −2).
- It is assumed that the size of each pixel is 1 in both the frame direction and the spatial direction.
- That is, the reference pixel extraction unit 2231 extracts as reference pixels the pixels in a range that is longer in the spatial direction than in the frame direction: a total of 15 pixels, two pixels each above and below in the spatial (vertical in the figure) direction and one frame on each side in the frame (horizontal in the figure) direction, centered on the pixel of interest.
- Conversely, when it is determined that the direction is the frame direction, a total of 15 pixels in a range longer in the frame direction (one pixel each above and below in the spatial (vertical in the figure) direction and two frames on each side in the frame (horizontal in the figure) direction, centered on the pixel of interest) are extracted as reference pixels and output to the approximation function estimating unit 2232.
- Needless to say, the number of reference pixels is not limited to 15 as described above and may be any other number.
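- The reference pixel selection above can be sketched as follows (illustrative only; the angle thresholds follow the description, while the helper name and return form are assumptions):

```python
def reference_offsets(theta_deg):
    """Offsets (t, y) of 15 reference pixels around the pixel of interest,
    elongated along whichever of the frame (t) or spatial (y) direction the
    continuity angle theta is closer to (illustrative sketch)."""
    if 45.0 <= theta_deg % 180.0 < 135.0:          # continuity close to spatial direction
        t_range, y_range = range(-1, 2), range(-2, 3)   # 3 frames x 5 rows
    else:                                          # continuity close to frame direction
        t_range, y_range = range(-2, 3), range(-1, 2)   # 5 frames x 3 rows
    return [(t, y) for t in t_range for y in y_range]
```

For an angle near the spatial direction this yields the 3 x 5 block listed above; for an angle near the frame direction it yields the 5 x 3 block.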
- In step S2246, the approximation function estimating unit 2232 estimates the approximation function f(t) by the least squares method based on the reference pixel information input from the reference pixel extraction unit 2231, and outputs it to the differential processing unit 2233.
- That is, the approximation function f(t) is a polynomial as shown in the following equation (34).
- f(t) = w0 t^n + w1 t^(n-1) + … + wn ··· (34)
- Thus, if the polynomial of equation (34) is assumed to be of the 14th order, the approximation function is estimated by obtaining the 15 coefficients w0 through w14. That is, it suffices to set up simultaneous equations using an approximation function f(t) composed of a 14th-order polynomial.
- In this case, the approximation function estimating unit 2232 estimates the approximation function f(t) by solving, with the least squares method, simultaneous equations of the form shown in the following equation (35), set up from the reference pixels.
- P(t, y) = f(t − Ct(y)) ··· (35)
- Needless to say, the number of reference pixels may be changed according to the degree of the polynomial.
- Here, Ct(ty) is the shift amount, defined in the same way as the Cx(y) described above; when the slope of the continuity is denoted Vf, it is defined by Ct(ty) = ty / Vf. This shift amount Ct(ty) indicates, on the premise that the approximation function f(t) defined at the spatial-direction position Y = 0 is continuous (has continuity) along the slope Vf, the width by which the approximation function f(t) shifts in the frame direction (time direction) at the spatial-direction position Y = ty.
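- The least-squares setup above can be sketched as follows (illustrative; it assumes the shift Ct(y) = y/Vf is removed from each reference pixel's frame position before fitting, and uses a low-order polynomial for brevity):

```python
import numpy as np

def fit_approximation(ts, ys, values, vf, degree):
    """Fit f(t) = w0 + w1*t + ... + w_degree*t^degree by least squares after
    removing the continuity shift Ct(y) = y / vf from each reference pixel."""
    t_shifted = np.asarray(ts, float) - np.asarray(ys, float) / vf
    design = np.vander(t_shifted, degree + 1, increasing=True)
    w, *_ = np.linalg.lstsq(design, np.asarray(values, float), rcond=None)
    return w

# sanity check: samples drawn from a known quadratic along the continuity
ts = [-2, -1, 0, 1, 2] * 2
ys = [0] * 5 + [1] * 5
vf = 2.0
vals = [1.0 + 0.5 * (t - y / vf) + 0.25 * (t - y / vf) ** 2 for t, y in zip(ts, ys)]
w = fit_approximation(ts, ys, vals, vf, 2)   # recovers [1.0, 0.5, 0.25]
```

Removing the shift first means all reference pixels constrain a single one-dimensional function along the continuity, which is the point of equation (35).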
- In step S2247, the differential processing unit 2233 obtains the shift amount of the position of the pixel to be generated, based on the approximation function f(t) input from the approximation function estimating unit 2232.
- That is, to divide the pixel of interest into the two pixels Pat and Pbt of twice the density as shown in FIG. 114, the differential processing unit 2233 first obtains the shift amount at the center position Pin(Tin, Yin) of the pixel of interest, in order to compute the differential value there. Since this shift amount is Ct(0), it is substantially zero.
- Note that the pixel Pin is a square whose approximate center of gravity is (Tin, Yin), and the pixels Pat and Pbt are rectangles, long in the horizontal direction in the figure, whose approximate centers of gravity are (Tin, Yin + 0.25) and (Tin, Yin − 0.25), respectively.
- The length of 1 of the pixel of interest Pin in the frame direction T corresponds to the shutter time of one frame.
- In step S2248, the differential processing unit 2233 differentiates the approximation function f(t) to obtain its first-order differential function f(t)′, computes the differential value at the position corresponding to the obtained shift amount, and outputs it to the image generation unit 103 as real world estimation information. That is, in this case, the differential processing unit 2233 obtains the differential value f(Tin)′, adds the position (here, the pixel of interest (Tin, Yin)), its pixel value, and the information on the motion in the direction of the continuity, and outputs them.
- In step S2249, the differential processing unit 2233 determines whether or not the differential values necessary to generate pixels of the required density have been obtained. For example, in this case, only the differential value for double density in the spatial direction has been obtained (the differential values for double density in the frame direction have not), so it is determined that the differential values necessary to generate pixels of the required density have not yet been obtained, and the process returns to step S2247.
- In step S2247, the differential processing unit 2233 again obtains the shift amounts of the positions of the pixels to be generated, based on the approximation function f(t) input from the approximation function estimating unit 2232. That is, in this case, the differential processing unit 2233 obtains the differential values needed to further divide each of the pixels Pat and Pbt into two. Since the positions of the pixels Pat and Pbt are the positions indicated by the black circles in FIG. 114, the differential processing unit 2233 obtains the shift amount corresponding to each position. The shift amounts of the pixels Pat and Pbt are Ct(0.25) and Ct(−0.25), respectively.
- In step S2248, the differential processing unit 2233 differentiates the approximation function f(t), obtains the differential value at the position corresponding to the shift amount for each of the pixels Pat and Pbt, and outputs it to the image generation unit 103 as real world estimation information.
- That is, the differential processing unit 2233 obtains the differential function f(t)′ of the obtained approximation function f(t) as shown in the figure, obtains the differential values at the positions (Tin − Ct(0.25)) and (Tin − Ct(−0.25)), shifted in the frame direction T by the shift amounts Ct(0.25) and Ct(−0.25), as f(Tin − Ct(0.25))′ and f(Tin − Ct(−0.25))′ respectively, adds the position information corresponding to each differential value, and outputs the result as real world estimation information. Note that since the pixel value information was output in the first pass of processing, it is not added here.
- In step S2249, the differential processing unit 2233 again determines whether or not the differential values necessary to generate pixels of the required density have been obtained. In this case, the differential values for double density in both the spatial direction Y and the frame direction T (quadruple density in total) have been obtained, so it is determined that the differential values necessary to generate pixels of the required density have been obtained, and the process proceeds to step S2251.
- In step S2251, the reference pixel extraction unit 2231 determines whether or not all pixels have been processed. If it is determined that not all pixels have been processed, the process returns to step S2242. If it is determined in step S2251 that all pixels have been processed, the processing ends.
- As described above, since pixels are generated by extrapolation using the differential value of the approximation function at the center position of each pixel to be divided, information on a total of three differential values is required to generate quadruple-density pixels.
- That is, the pixels P01t, P02t, P03t, and P04t are finally obtained for one pixel (in FIG. 114, the pixels P01t, P02t, P03t, and P04t are squares whose centers of gravity are the positions of the four × marks in the figure; since the length of each side of the pixel of interest is 1, the length of each side of the pixels P01t through P04t is approximately 0.5).
- That is, to generate quadruple-density pixels, double-density pixels are first generated in the frame direction or the spatial direction (the first pass of steps S2247 and S2248 described above), and then each of the two divided pixels is further divided in the direction perpendicular to the first division (here, the frame direction) (the second pass of steps S2247 and S2248 described above).
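- The two-pass division above can be sketched as follows (illustrative; the approximation function f, its derivative df, and the shift function ct are supplied by the caller, and the mean-preserving extrapolation is an assumption of this sketch):

```python
def quad_density(f, df, t_in, ct):
    """Split one pixel into four quarter-density pixels: first a half split
    using the derivative at the centre, then each half is split again using
    the derivatives at the shifted positions t_in - ct(+/-0.25)."""
    centre = f(t_in)
    halves = [(centre + df(t_in) * 0.25, 0.25),    # first pass (steps S2247/S2248)
              (centre - df(t_in) * 0.25, -0.25)]
    quarters = []
    for value, dy in halves:                        # second pass (steps S2247/S2248)
        slope = df(t_in - ct(dy))
        quarters += [value - slope * 0.25, value + slope * 0.25]
    return quarters

# example with a linear function: the four values average back to the centre
vals = quad_density(lambda t: 2.0 * t, lambda t: 2.0, 1.0, lambda dy: dy / 2.0)
```

Three derivative evaluations are used in total (one at the centre, one per half), matching the count of three differential values noted above.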
- In the above example, the differential values used when calculating quadruple-density pixels have been described as an example; when calculating pixels of still higher density, more of the differential values required for the pixel value calculation may be obtained by repeating the processing of steps S2247 through S2249.
- In the above, an example of obtaining quadruple-density pixel values has been described; however, since the approximation function f(t) is a continuous function, the necessary differential values can also be obtained for the pixel values of pixels at positions other than quadruple density.
- According to the above, only the differential value of the approximation function needed to generate an image is obtained and output as real world estimation information; this differential value is equivalent to the slope of the approximation function f(t) at the required position.
- The reference pixel extraction unit 2251 determines, based on the data continuity information (the motion of the continuity and the region information) input from the data continuity detecting unit 101, whether or not each pixel of the input image belongs to a processing region. If it does, the reference pixel extraction unit 2251 extracts from the input image the reference pixel information necessary for obtaining the slope (the positions and pixel values of a plurality of peripheral pixels lined up in the spatial direction including the pixel of interest, or of a plurality of peripheral pixels lined up in the frame direction including the pixel of interest, needed for the calculation), and outputs it to the slope estimating unit 2252.
- The slope estimating unit 2252 generates the slope information of the pixel position required for pixel generation based on the reference pixel information input from the reference pixel extraction unit 2251, and outputs it to the image generation unit 103 as real world estimation information. More specifically, the slope estimating unit 2252 obtains, using the difference information of the pixel values between pixels, the frame-direction slope at the position of the pixel of interest on an approximation function that approximately expresses the pixel values of the reference pixels, adds the position information and pixel value of the pixel of interest and the information on the motion in the direction of the continuity, and outputs the result as real world estimation information.
- In step S2261, the reference pixel extraction unit 2251 acquires the motion as the data continuity information and the region information from the data continuity detecting unit 101 together with the input image.
- In step S2262, the reference pixel extraction unit 2251 sets a pixel of interest from among the unprocessed pixels of the input image.
- In step S2263, the reference pixel extraction unit 2251 determines whether or not the pixel of interest belongs to a processing region based on the region information of the data continuity information. If it is determined that the pixel of interest does not belong to a processing region, the process proceeds to step S2268, where the slope estimating unit 2252 is notified that the pixel of interest is outside the processing region; in response, the slope estimating unit 2252 sets the slope of that pixel of interest to 0, further adds the pixel value of the pixel of interest, outputs the result to the image generation unit 103 as real world estimation information, and the process proceeds to step S2269. If it is determined that the pixel of interest belongs to a processing region, the process proceeds to step S2264.
- In step S2264, the reference pixel extraction unit 2251 determines, from the motion information included in the data continuity information, whether the motion of the data continuity is close to the frame direction or close to the spatial direction. That is, in a plane composed of the frame direction T and the spatial direction Y, let θv be the angle indicating the time-and-space in-plane direction with the frame direction as the reference axis. If the angle θv of the motion of the data continuity satisfies 0° ≤ θv < 45° or 135° ≤ θv < 180°, the reference pixel extraction unit 2251 determines that the motion of the continuity of the pixel of interest is close to the frame direction; if 45° ≤ θv < 135°, it determines that the motion of the continuity of the pixel of interest is close to the spatial direction.
- In step S2265, the reference pixel extraction unit 2251 extracts from the input image the position information and pixel values of the reference pixels corresponding to the determined direction, and outputs them to the slope estimating unit 2252. That is, since the reference pixels are the data used when calculating the slope described later, it is desirable that they be extracted according to the motion of the continuity. Therefore, corresponding to whichever of the frame direction or the spatial direction was determined, reference pixels over a long range in that direction are extracted. More specifically, when it is determined that the motion is close to the spatial direction, the reference pixel extraction unit 2251, with the central pixel (t, y) of FIG. 118 as the pixel of interest, extracts as reference pixels the pixels in a long range in the spatial direction: a total of 5 pixels, two pixels each above and below in the spatial (vertical in the figure) direction, centered on the pixel of interest.
- Conversely, when it is determined that the motion is close to the frame direction, pixels in a long horizontal range, a total of 5 pixels with two pixels on each side in the frame (horizontal in the figure) direction centered on the pixel of interest, are extracted as reference pixels and output to the slope estimating unit 2252.
- Needless to say, the number of reference pixels is not limited to five as described above and may be any other number.
- In step S2266, the slope estimating unit 2252 calculates the shift amount of each pixel based on the reference pixel information input from the reference pixel extraction unit 2251 and the direction Vf of the motion of the continuity.
- The slope estimating unit 2252 obtains these shift amounts Ct(−2) through Ct(2).
- In step S2267, the slope estimating unit 2252 calculates (estimates) the frame-direction slope at the pixel of interest. For example, as shown in FIG. 118, when the direction of the continuity of the pixel of interest is at an angle close to the spatial direction, the pixel values of pixels adjacent in the frame direction differ greatly, while between pixels lined up in the spatial direction the change is small and similar. Therefore, the slope estimating unit 2252 captures the change between pixels in the spatial direction, via the shift amounts, as a change in the frame direction T, and obtains the slope at the pixel of interest by substituting the difference between pixels in the spatial direction for the difference between pixels in the frame direction.
- That is, each pixel in FIG. 119 is expressed as P(0, 2), P(0, 1), P(0, 0), P(0, −1), and P(0, −2).
- In this case, the pixel value P, the shift amount Ct, and the slope Kt (the slope on the approximation function f(t)) satisfy the relationship of the following equation (36).
- P = Kt × Ct ··· (36)
- The slope estimating unit 2252 obtains the slope Kt of the pixel of interest by the one-variable least squares method, using the normal equation shown in the following equation (37).
- Kt = Σ(Ct_i × P_i) / Σ(Ct_i)^2 ··· (37)
- The slope of the pixel of interest obtained in this way, together with the pixel value of the pixel of interest and the slope information in the direction of the continuity, is output to the image generation unit 103 as real world estimation information.
- In equation (37), i is a number identifying each pair of a reference pixel's pixel value P and shift amount Ct, ranging from 1 to m, and m is the number of reference pixels including the pixel of interest.
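- The one-variable least-squares solution above can be sketched as follows (illustrative; it assumes the form Kt = Σ(Ct_i × P_i) / Σ(Ct_i²), with the pixel values taken relative to the pixel of interest):

```python
def estimate_slope(pixel_values, shifts):
    """One-variable least squares through the origin: minimising
    sum_i (P_i - Kt*Ct_i)^2 gives Kt = sum(Ct_i*P_i) / sum(Ct_i^2)."""
    numerator = sum(c * p for c, p in zip(shifts, pixel_values))
    denominator = sum(c * c for c in shifts)
    return numerator / denominator

shifts = [-1.0, -0.5, 0.5, 1.0]
values = [3.0 * c for c in shifts]        # values varying linearly with the shift
k = estimate_slope(values, shifts)        # -> 3.0
```

Note that the pixel of interest itself, whose shift amount is 0, contributes nothing to either sum, so only the surrounding reference pixels determine the slope.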
- In step S2269, the reference pixel extraction unit 2251 determines whether or not all pixels have been processed. If it is determined that not all pixels have been processed, the process returns to step S2262. If it is determined in step S2269 that all pixels have been processed, the processing ends.
- The frame-direction slope output as real world estimation information by the above-described processing is used when the pixel values to be finally obtained are calculated by extrapolation. Also, in the above example, the slope used when calculating double-density pixels has been described as an example; when calculating pixels of still higher density, the slopes at the additional positions required for the pixel value calculation may be obtained.
- Above, the example of obtaining double-density pixel values has been described; however, since the approximation function f(t) is a continuous function, the required slope can also be obtained for the pixel values of pixels at positions other than double density.
- Needless to say, the order of the processing for obtaining the slopes or differential values on the approximation function with respect to the frame direction and the spatial direction does not matter.
- Furthermore, although the above example has been described using the relationship between the spatial direction Y and the frame direction T, the relationship between the spatial direction X and the frame direction T may be used instead.
- Moreover, the slope or differential value in any one-dimensional direction may be selectively obtained from any two-dimensional relationship in the space-time directions.
- According to the above, the slope on the approximation function in the frame direction (time direction) can be generated and output as real world estimation information without obtaining the approximation function itself.
- Next, another example of the embodiment of the real world estimating unit 102 (FIG. 3) will be described with reference to the drawings.
- FIG. 120 is a diagram for explaining the principle of the embodiment of this example.
- As shown in FIG. 120, the signal of the real world 1 (the distribution of light intensity), which is the image incident on the sensor 2, is represented by a predetermined function F. Hereinafter, in the description of the embodiment of this example, the signal of the real world 1 that is an image is particularly referred to as an optical signal, and the function F is particularly referred to as the optical signal function F.
- In the embodiment of this example, when the optical signal of the real world 1 represented by the optical signal function F has predetermined continuity, the real world estimating unit 102 estimates the optical signal function F by approximating it with a predetermined function f, using the input image from the sensor 2 (image data including data continuity corresponding to the continuity) and the data continuity information from the data continuity detecting unit 101 (data continuity information corresponding to the continuity of the data of the input image).
- Hereinafter, in the description of the embodiment of this example, the function f is particularly referred to as the approximation function f.
- That is, in the embodiment of this example, the real world estimating unit 102 estimates the image (the optical signal of the real world 1) represented by the optical signal function F using the model 161 (FIG. 7) represented by the approximation function f. Therefore, the embodiment of this example is hereinafter referred to as the function approximation method.
- FIG. 121 is a view for explaining the integration effect when the sensor 2 is a CCD. As shown in FIG. 121, a plurality of detection elements 2-1 are arranged on the plane of the sensor 2.
- In FIG. 121, the direction parallel to a predetermined side of the detecting elements 2-1 is taken as the X direction, one of the spatial directions; the direction perpendicular to the X direction is taken as the Y direction, the other spatial direction; and the direction perpendicular to the X-Y plane is taken as the t direction, the time direction.
- In FIG. 121, the spatial shape of each detecting element 2-1 of the sensor 2 is a square whose side length is 1, and the shutter time (exposure time) of the sensor 2 is 1.
- Further, consider the case where the center of one detecting element 2-1 of the sensor 2 is located at the origin of the spatial directions (x = 0, y = 0). In this case, the pixel value P output from the detecting element 2-1 whose center is at the spatial-direction origin is expressed by the following equation (38).
- P = ∫∫∫ F(x, y, t) dx dy dt, with x, y, and t each integrated from −0.5 to +0.5 ··· (38)
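- The integration effect of equation (38) can be checked numerically as follows (illustrative; a midpoint Riemann sum stands in for the triple integral over the unit pixel and unit shutter time):

```python
def pixel_value(F, n=64):
    """Approximate P = triple integral of F(x, y, t) over x, y, t in
    [-0.5, +0.5] (a 1x1 pixel and a shutter time of 1) by a midpoint sum."""
    h = 1.0 / n
    pts = [-0.5 + (i + 0.5) * h for i in range(n)]
    total = sum(F(x, y, t) for x in pts for y in pts for t in pts)
    return total * h ** 3

# a spatially uniform intensity is reproduced exactly
uniform = pixel_value(lambda x, y, t: 2.0)
# a varying intensity is averaged (distorted) by the integration
ramp_sq = pixel_value(lambda x, y, t: x * x)   # close to 1/12
```

This makes the distortion concrete: a whole distribution of light intensity within the pixel collapses to one value P.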
- FIG. 122 is a view for explaining a specific example of the integration effect of the sensor 2.
- In FIG. 122, the X direction and the Y direction represent the X direction and the Y direction of the sensor 2 (FIG. 121).
- One portion 2301 of the optical signal of the real world 1 (hereinafter, such a portion is referred to as a region) is an example of a region having predetermined continuity. Note that the region 2301 is actually one portion (a continuous region) of the continuous optical signal.
- In FIG. 122, the region 2301 is shown divided into 20 small regions (square regions). This is to represent that the size of the region 2301 is equivalent to the size of 4 detecting elements (pixels) of the sensor 2 lined up in the X direction and 5 lined up in the Y direction. That is, each of the 20 small regions (virtual regions) in the region 2301 corresponds to one pixel.
- Further, the white portion in the figure within the region 2301 represents the optical signal corresponding to a fine line. Accordingly, the region 2301 has continuity in the direction in which the fine line continues. Hereinafter, therefore, the region 2301 is referred to as the fine-line-containing real world region 2301.
- In this case, when the fine-line-containing real world region 2301 (one portion of the optical signal of the real world 1) is detected by the sensor 2, a region 2302 of the input image (pixel values) (hereinafter referred to as the fine-line-containing data region 2302) is output from the sensor 2 by the integration effect.
- Each pixel of the fine-line-containing data region 2302 is shown as an image in the figure, but is actually data representing one predetermined value. That is, by the integration effect of the sensor 2, the fine-line-containing real world region 2301 changes into (is distorted into) the fine-line-containing data region 2302, which is divided into 20 pixels each having one predetermined pixel value (4 pixels in the X direction and 5 pixels in the Y direction, 20 pixels in total).
- FIG. 123 is a view for explaining another specific example of the integration effect of the sensor 2 (an example different from FIG. 122).
- In FIG. 123, the X direction and the Y direction represent the X direction and the Y direction of the sensor 2 (FIG. 122).
- One portion (region) 2303 of the optical signal of the real world 1 is another example (different from the fine-line-containing real world region 2301 of FIG. 122) of a region having predetermined continuity.
- The region 2303 is a region of the same size as the fine-line-containing real world region 2301. That is, like the fine-line-containing real world region 2301, the region 2303 is actually one portion (a continuous region) of the continuous optical signal of the real world 1, but in FIG. 123 it is shown divided into 20 small regions (square regions) each corresponding to one pixel of the sensor 2.
- Further, the region 2303 includes an edge between a first portion having a predetermined first light intensity (value) and a second portion having a predetermined second light intensity (value). Accordingly, the region 2303 has continuity in the direction in which the edge continues. Hereinafter, therefore, the region 2303 is referred to as the binary-edge-containing real world region 2303.
- In this case, when the binary-edge-containing real world region 2303 (one portion of the optical signal of the real world 1) is detected by the sensor 2, a region 2304 of the input image (pixel values) (hereinafter referred to as the binary-edge-containing data region 2304) is output from the sensor 2 by the integration effect.
- Each pixel value of the binary-edge-containing data region 2304 is represented as an image in the figure, like the fine-line-containing data region 2302, but is actually data representing a predetermined value. That is, by the integration effect of the sensor 2, the binary-edge-containing real world region 2303 changes into (is distorted into) the binary-edge-containing data region 2304, which is divided into 20 pixels each having one predetermined pixel value (4 pixels in the X direction and 5 pixels in the Y direction, 20 pixels in total).
- Conventional image processing apparatuses have treated the image data output from the sensor 2, such as the fine-line-containing data region 2302 and the binary-edge-containing data region 2304, as the origin (reference), and have performed subsequent image processing on that image data. That is, although the image data output from the sensor 2 is different from (distorted relative to) the optical signal of the real world 1, conventional image processing apparatuses performed image processing treating data different from the optical signal of the real world 1 as correct.
- In contrast, the real world estimating unit 102 estimates the optical signal function F by approximating the optical signal function F (the optical signal of the real world 1) with the approximation function f, based on the image data (input image) output from the sensor 2, such as the fine-line-containing data region 2302 and the binary-edge-containing data region 2304.
- FIG. 124 is a diagram again showing the fine-line-containing real world region 2301 shown in FIG. 122 described above.
- In FIG. 124, the X direction and the Y direction represent the X direction and the Y direction of the sensor 2 (FIG. 122).
- The first function approximation method is a method of approximating the one-dimensional waveform obtained by projecting, for example, the optical signal function F(x, y, t) corresponding to the fine-line-containing real world region 2301 shown in FIG. 124 in the X direction (the direction of the arrow 2311 in the figure) (hereinafter, such a waveform is referred to as the X cross-sectional waveform F(x)) with an approximation function f(x) that is an n-th order polynomial (n is an arbitrary integer). Accordingly, the first function approximation method is hereinafter referred to as the one-dimensional polynomial approximation method.
- In the one-dimensional polynomial approximation method, the X cross-sectional waveform F(x) to be approximated is of course not limited to the one corresponding to the fine-line-containing real world region 2301 of FIG. 124. That is, as described later, the one-dimensional polynomial approximation method can approximate any X cross-sectional waveform F(x) corresponding to an optical signal of the real world 1 having continuity.
- Further, the direction of projection of the optical signal function F(x, y, t) is not limited to the X direction; it may be the Y direction or the t direction. That is, in the one-dimensional polynomial approximation method, the function F(y) obtained by projecting the optical signal function F(x, y, t) in the Y direction can be approximated by a predetermined approximation function f(y), and the function F(t) obtained by projecting the optical signal function F(x, y, t) in the t direction can likewise be approximated by a predetermined approximation function f(t).
- Accordingly, the one-dimensional polynomial approximation method is, for example, a method of approximating the X cross-sectional waveform F(x) with an approximation function f(x) that is an n-th order polynomial, as shown in the following equation (39).
- f(x) = w0 + w1 x + w2 x^2 + … + wn x^n ··· (39)
- That is, in the one-dimensional polynomial approximation method, the real world estimating unit 102 estimates the X cross-sectional waveform F(x) by computing the coefficients (features) wi of x^i in equation (39).
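- As an illustration of computing the features wi (a plain least-squares fit to sampled values, shown for orientation only; it does not correspond to any particular one of the three methods described next):

```python
import numpy as np

def features(xs, values, n):
    """Fit f(x) = w0 + w1*x + ... + wn*x^n (equation (39)) to sampled
    values by least squares; the coefficients w_i are the features."""
    design = np.vander(np.asarray(xs, float), n + 1, increasing=True)
    w, *_ = np.linalg.lstsq(design, np.asarray(values, float), rcond=None)
    return w

xs = np.linspace(-2.0, 2.0, 9)
w = features(xs, 1.0 - 0.5 * xs + 2.0 * xs ** 2, 2)   # recovers [1.0, -0.5, 2.0]
```

Once the features wi are known, substituting them into equation (39) yields the estimated waveform, which is the sense in which the coefficients characterize F(x).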
- The method of calculating the features wi is not particularly limited; for example, the following first through third methods can be used.
- That is, the first method is the method that has been used conventionally.
- On the other hand, the second method is a method newly invented by the applicant of the present invention, in which continuity in the spatial direction is additionally taken into consideration relative to the first method.
- However, as described later, the integration effect of the sensor 2 is not considered in the first and second methods. Therefore, the approximation function f(x) obtained by substituting the features calculated by the first or second method into the above equation (39) is an approximation function of the input image, but, strictly speaking, is not an approximation function of the X cross-sectional waveform F(x).
- In view of this, the present applicant has invented a third method of calculating the features w_i, which further considers the integration effect of the sensor 2 with respect to the second method.
- That is, the approximation function f_3(x) obtained by substituting the features w_i calculated by the third method into equation (39) above is an approximation function of the X-sectional waveform F(x), since the integration effect of the sensor 2 is taken into consideration.
- In this sense, the first method and the second method cannot strictly be called one-dimensional polynomial approximation methods; only the third method is a one-dimensional polynomial approximation method.
- In other words, the second method is an embodiment of the real-world estimator 102 of the present invention that is distinct from the one-dimensional polynomial approximation method. That is, FIG. 125 is a diagram for explaining the principle of the embodiment corresponding to the second method.
- As shown in FIG. 125, in the embodiment corresponding to the second method, the real-world estimator 102 does not approximate the X-sectional waveform F(x); instead, it approximates the input image from the sensor 2 (image data containing data continuity corresponding to the continuity), using the data continuity information from the data continuity detector 101 (data continuity information corresponding to the continuity of the input image data).
- Indeed, the second method does not consider the integration effect of the sensor 2 and only approximates the input image, so it cannot be said to be the same method as the third method. However, the second method is superior to the conventional first method in that it takes continuity in the spatial direction into consideration.
- Specifically, in the first method, the following prediction equation (40) is defined on the assumption that it holds within the thin-line-containing data region 2302 of FIG. 126: P(x, y) = f_1(x) + e ... (40)
- In equation (40), x represents the pixel position in the X direction relative to the pixel of interest.
- y represents the pixel position in the Y direction relative to the pixel of interest.
- e represents the error.
- Here, the pixel of interest is the pixel of the thin-line-containing data region 2302 (the data obtained when the fine-line-containing real-world region 2301 (FIG. 124) is detected by the sensor 2) that is the second pixel in the X direction from the left and the third pixel in the Y direction from the bottom.
- A coordinate system (hereinafter referred to as the target-pixel coordinate system) is set with the center of the pixel of interest as the origin (0, 0) and with x and y axes parallel to the X and Y directions of the sensor 2 (FIG. 126). The coordinate values (x, y) of the target-pixel coordinate system indicate the relative pixel position.
- P(x, y) represents the pixel value at the relative pixel position (x, y). Specifically, in this case, the pixel values P(x, y) within the thin-line-containing data region 2302 are as shown in the graphs of FIG. 127.
- In FIG. 127, each graph represents the pixel values P(x, y), and the horizontal axis represents the relative position x in the X direction from the pixel of interest. The dotted line in the first graph from the top represents the input pixel values P(x, -2), the three-dot chain line in the second graph represents P(x, -1), the solid line in the third graph represents P(x, 0), the dashed line in the fourth graph represents P(x, 1), and the two-dot chain line in the fifth graph from the top (the first from the bottom) represents P(x, 2).
- Since equation (41) consists of 20 equations, the features w_i can be calculated by, for example, the least squares method, provided that the number of features w_i of the approximation function f_1(x) is less than 20, that is, provided that the approximation function f_1(x) is a polynomial of degree less than 19. The specific solution by the least squares method will be described later.
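As a minimal sketch of this least-squares step (assuming a standard linear least-squares solve; the sample values, region size, and polynomial degree are illustrative, not taken from the patent), each of the 20 equations of (41) contributes one row [1, x, x^2, …] to a design matrix:

```python
import numpy as np

def fit_features_first_method(samples, degree):
    """First method: fit f1(x) = sum_i w_i * x**i by least squares to
    samples (x, y, P(x, y)), ignoring y entirely, as in prediction
    equation (40): P(x, y) = f1(x) + e."""
    xs = np.array([x for x, _, _ in samples], dtype=float)
    ps = np.array([p for _, _, p in samples], dtype=float)
    # One row [1, x, x^2, ...] per equation of the system (41).
    A = np.vander(xs, degree + 1, increasing=True)
    w, *_ = np.linalg.lstsq(A, ps, rcond=None)
    return w

# Illustrative 20 samples: 4 pixels wide (x = -1..2) by 5 tall (y = -2..2),
# generated here from a known quadratic so the recovered features are exact.
samples = [(x, y, 1.0 + 0.5 * x + 0.25 * x ** 2)
           for y in range(-2, 3) for x in range(-1, 3)]
w = fit_features_first_method(samples, degree=2)
print(np.round(w, 6))  # features close to [1.0, 0.5, 0.25]
```

Because y is ignored, all five rows of each column of pixels are fitted by the same polynomial of x alone, which is exactly the assumption of equation (40).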
- For example, when the approximation function f_1(x) is calculated by the least squares method using equation (41) (when f_1(x) is generated from the calculated features w_i), the resulting approximation function f_1(x) becomes the curve shown in FIG. 128.
- In FIG. 128, the vertical axis represents the pixel value, and the horizontal axis represents the relative position x from the pixel of interest. The dotted line represents the input pixel values P(x, -2), the three-dot chain line represents P(x, -1), the solid line represents P(x, 0), the dashed-dotted line represents P(x, 1), and the two-dot chain line represents P(x, 2).
- Note that, although in reality two or more of these lines overlap, in FIG. 128 they are drawn so as not to overlap, so that each line can be distinguished. For the 20 input pixel values distributed in this way, the approximation function f_1(x) simply represents a curve connecting, in the X direction, the averages of the pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), P(x, 2) in the Y direction (the pixel values having the same relative position x in the X direction from the pixel of interest). That is, the approximation function f_1(x) is generated without considering the continuity of the optical signal in the spatial direction.
- In this case, the object of approximation is the fine-line-containing real-world region 2301 (FIG. 124), which has continuity in the spatial direction represented by the gradient G_F. Here, the X direction and the Y direction represent the X direction and the Y direction of the sensor 2 (FIG. 121).
- Accordingly, the data continuity detector 101 (FIG. 125) can output, as data continuity information, the angle θ (the angle formed between the X direction and the direction of the data continuity represented by the gradient G_f corresponding to the gradient G_F).
- However, in the first method, the data continuity information output from the data continuity detector 101 is not used at all.
- In other words, the direction of the continuity in the spatial direction of the fine-line-containing real-world region 2301 is substantially the direction of the angle θ.
- Nevertheless, the first method calculates the features w_i of the approximation function f_1(x) on the assumption that the direction of the spatial continuity of the fine-line-containing real-world region 2301 is the Y direction (that is, on the assumption that the angle θ is 90 degrees).
- For this reason, the approximation function f_1(x) becomes a function whose waveform is dulled and whose detail is reduced relative to the original pixel values. In other words, the approximation function f_1(x) generated by the first method has a waveform significantly different from the actual X-sectional waveform F(x).
- The present applicant has therefore invented a second method of calculating the features w_i, which additionally takes the continuity in the spatial direction into consideration (which utilizes the angle θ) with respect to the first method.
- That is, the second method is a method of calculating the features w_i of the approximation function f_2(x) on the assumption that the direction of the continuity of the fine-line-containing real-world region 2301 is substantially the direction of the angle θ.
- Specifically, the gradient G_f representing the data continuity corresponding to the continuity in the spatial direction is expressed by the following equation (42): G_f = tan θ = dy / dx ... (42)
- In equation (42), dx represents a small movement amount in the X direction, as shown in FIG. 129, and dy represents the corresponding small movement amount in the Y direction with respect to dx.
- In this case, the shift amount C_x(y) is defined by the following equation (43): C_x(y) = y / G_f ... (43)
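A small numeric sketch of equations (42) and (43) (assuming G_f = tan θ with θ measured from the X direction, and C_x(y) = y / G_f; the function names are illustrative):

```python
import math

def gradient_from_angle(theta_deg):
    """Equation (42): the gradient G_f of the data continuity,
    G_f = tan(theta) = dy / dx, with theta the angle from the X direction."""
    return math.tan(math.radians(theta_deg))

def shift_amount(y, g_f):
    """Equation (43): the shift C_x(y) = y / G_f, i.e. how far the
    continuity direction has moved in X after moving y in Y."""
    return y / g_f

g_f = gradient_from_angle(45.0)   # 45 degrees -> G_f is about 1
print(shift_amount(2.0, g_f))     # about 2.0: two pixels of X-shift at y = 2
```

Steeper continuity (θ near 90 degrees) gives a large G_f and hence a small shift per row, degenerating toward the first method's assumption of continuity in the Y direction.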
- When the shift amount C_x(y) is defined in this way, the equation corresponding to equation (40) used in the first method becomes, in the second method, the following equation (44): P(x, y) = f_2(x − C_x(y)) + e ... (44)
- Equation (40) used in the first method means that the pixel value P(x, y) of any pixel whose center position is (x, y) takes the same value as that of any other pixel located at the same position x in the X direction, regardless of y. In other words, equation (40) indicates that pixels having the same pixel value continue in the Y direction (that there is continuity in the Y direction).
- In contrast, equation (44) used in the second method means that the pixel value P(x, y) of the pixel whose center position is (x, y) does not match the pixel value (≈ f_2(x)) of the pixel located x away in the X direction from the pixel of interest (whose center position is the origin (0, 0)); rather, it matches the pixel value (≈ f_2(x − C_x(y))) of the pixel located x − C_x(y) away in the X direction from the pixel of interest, that is, of a pixel shifted from that pixel by the shift amount C_x(y) in the X direction. In other words, equation (44) indicates that pixels having the same pixel value continue in the direction of the angle θ corresponding to the shift amount C_x(y) (that there is continuity in the substantially angle-θ direction).
- In other words, the shift amount C_x(y) is a correction amount that takes into account the continuity in the spatial direction (here, the continuity represented by the gradient G_F (strictly speaking, the data continuity represented by the gradient G_f)), and equation (44) is equation (40) corrected by the shift amount C_x(y).
- P(1, -2) = f_2(1 − C_x(-2)) + e
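The correction of equation (40) by the shift amount C_x(y) can be sketched as follows (an illustrative least-squares fit under equation (44), with a synthetic 45-degree continuity direction, i.e. G_f = 1, and an arbitrarily chosen linear profile; not the patent's implementation):

```python
import numpy as np

def fit_features_second_method(samples, degree, g_f):
    """Second method: fit f2 by least squares using prediction equation
    (44), P(x, y) = f2(x - C_x(y)) + e, where C_x(y) = y / g_f
    (equation (43)) corrects each sample's X position along the
    continuity direction of angle theta."""
    xs = np.array([x - y / g_f for x, y, _ in samples], dtype=float)
    ps = np.array([p for _, _, p in samples], dtype=float)
    A = np.vander(xs, degree + 1, increasing=True)
    w, *_ = np.linalg.lstsq(A, ps, rcond=None)
    return w

def true_profile(x):
    # A linear X-sectional profile, chosen arbitrarily for the example.
    return 10.0 - 3.0 * x

# Each row y is the same profile shifted by C_x(y) = y (since g_f = 1),
# i.e. pixels of equal value continue along the 45-degree direction.
samples = [(x, y, true_profile(x - y / 1.0))
           for y in range(-2, 3) for x in range(-1, 3)]
w = fit_features_second_method(samples, degree=1, g_f=1.0)
print(np.round(w, 6))  # features close to [10.0, -3.0]
```

On the same data, the first method (which ignores y) would average the shifted rows together and dull the profile, while the shift-corrected fit recovers it, which is the point of equation (44).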
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/544,873 US7593601B2 (en) | 2003-02-25 | 2004-02-13 | Image processing device, method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-046861 | 2003-02-25 | ||
JP2003046861A JP4143916B2 (ja) | 2003-02-25 | 2003-02-25 | 画像処理装置および方法、記録媒体、並びにプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004077354A1 true WO2004077354A1 (ja) | 2004-09-10 |
Family
ID=32923245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/001585 WO2004077354A1 (ja) | 2003-02-25 | 2004-02-13 | 画像処理装置および方法、並びにプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US7593601B2 (ja) |
JP (1) | JP4143916B2 (ja) |
KR (1) | KR101041060B1 (ja) |
CN (1) | CN1332355C (ja) |
WO (1) | WO2004077354A1 (ja) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4214459B2 (ja) * | 2003-02-13 | 2009-01-28 | ソニー株式会社 | 信号処理装置および方法、記録媒体、並びにプログラム |
JP4144377B2 (ja) * | 2003-02-28 | 2008-09-03 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP4144378B2 (ja) * | 2003-02-28 | 2008-09-03 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
US20080170767A1 (en) * | 2007-01-12 | 2008-07-17 | Yfantis Spyros A | Method and system for gleason scale pattern recognition |
JP4882999B2 (ja) * | 2007-12-21 | 2012-02-22 | ソニー株式会社 | 画像処理装置、画像処理方法、プログラム、および学習装置 |
US8600213B2 (en) * | 2011-10-26 | 2013-12-03 | Xerox Corporation | Filtering source video data via independent component selection |
KR101838342B1 (ko) * | 2011-10-26 | 2018-03-13 | 아이큐브드 연구소 주식회사 | 화상 처리 장치, 화상 처리 방법, 및 기록매체 |
US9709990B2 (en) * | 2012-12-21 | 2017-07-18 | Toyota Jidosha Kabushiki Kaisha | Autonomous navigation through obstacles |
JP6609098B2 (ja) * | 2014-10-30 | 2019-11-20 | キヤノン株式会社 | 表示制御装置、表示制御方法、及びコンピュータプログラム |
US10095944B2 (en) * | 2015-08-28 | 2018-10-09 | Tata Consultancy Services Limited | Methods and systems for shape based image analysis for detecting linear objects |
JP6800938B2 (ja) * | 2018-10-30 | 2020-12-16 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02285475A (ja) * | 1989-03-27 | 1990-11-22 | Hughes Aircraft Co | エッジとラインとの抽出方法とその装置 |
JPH05342352A (ja) * | 1992-06-08 | 1993-12-24 | Dainippon Screen Mfg Co Ltd | 多階調画像のエッジ抽出装置 |
JPH0896145A (ja) * | 1994-09-21 | 1996-04-12 | Nec Corp | 曲線検出装置 |
JPH11239363A (ja) * | 1998-02-23 | 1999-08-31 | Nippon Telegr & Teleph Corp <Ntt> | 映像中文字領域抽出装置および方法およびその方法を記録した記録媒体 |
JP2000201283A (ja) * | 1999-01-07 | 2000-07-18 | Sony Corp | 画像処理装置および方法、並びに提供媒体 |
JP2001084368A (ja) * | 1999-09-16 | 2001-03-30 | Sony Corp | データ処理装置およびデータ処理方法、並びに媒体 |
WO2001097510A1 (en) * | 2000-06-15 | 2001-12-20 | Sony Corporation | Image processing system, image processing method, program, and recording medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4648120A (en) | 1982-07-02 | 1987-03-03 | Conoco Inc. | Edge and line detection in multidimensional noisy, imagery data |
US5052045A (en) * | 1988-08-29 | 1991-09-24 | Raytheon Company | Confirmed boundary pattern matching |
US5134495A (en) * | 1990-11-07 | 1992-07-28 | Dp-Tek, Inc. | Resolution transforming raster-based imaging system |
JP3073599B2 (ja) * | 1992-04-22 | 2000-08-07 | 本田技研工業株式会社 | 画像のエッジ検出装置 |
US6621924B1 (en) * | 1999-02-26 | 2003-09-16 | Sony Corporation | Contour extraction apparatus, a method thereof, and a program recording medium |
JP2000293696A (ja) * | 1999-04-07 | 2000-10-20 | Matsushita Electric Ind Co Ltd | 画像認識装置 |
JP2002135801A (ja) * | 2000-10-25 | 2002-05-10 | Sony Corp | 画像処理装置 |
-
2003
- 2003-02-25 JP JP2003046861A patent/JP4143916B2/ja not_active Expired - Fee Related
-
2004
- 2004-02-13 WO PCT/JP2004/001585 patent/WO2004077354A1/ja active Application Filing
- 2004-02-13 KR KR1020057015860A patent/KR101041060B1/ko not_active IP Right Cessation
- 2004-02-13 CN CNB2004800051050A patent/CN1332355C/zh not_active Expired - Fee Related
- 2004-02-13 US US10/544,873 patent/US7593601B2/en not_active Expired - Fee Related
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02285475A (ja) * | 1989-03-27 | 1990-11-22 | Hughes Aircraft Co | エッジとラインとの抽出方法とその装置 |
JPH05342352A (ja) * | 1992-06-08 | 1993-12-24 | Dainippon Screen Mfg Co Ltd | 多階調画像のエッジ抽出装置 |
JPH0896145A (ja) * | 1994-09-21 | 1996-04-12 | Nec Corp | 曲線検出装置 |
JPH11239363A (ja) * | 1998-02-23 | 1999-08-31 | Nippon Telegr & Teleph Corp <Ntt> | 映像中文字領域抽出装置および方法およびその方法を記録した記録媒体 |
JP2000201283A (ja) * | 1999-01-07 | 2000-07-18 | Sony Corp | 画像処理装置および方法、並びに提供媒体 |
JP2001084368A (ja) * | 1999-09-16 | 2001-03-30 | Sony Corp | データ処理装置およびデータ処理方法、並びに媒体 |
WO2001097510A1 (en) * | 2000-06-15 | 2001-12-20 | Sony Corporation | Image processing system, image processing method, program, and recording medium |
Also Published As
Publication number | Publication date |
---|---|
JP4143916B2 (ja) | 2008-09-03 |
US20060159368A1 (en) | 2006-07-20 |
JP2004264917A (ja) | 2004-09-24 |
CN1332355C (zh) | 2007-08-15 |
US7593601B2 (en) | 2009-09-22 |
KR101041060B1 (ko) | 2011-06-13 |
CN1754185A (zh) | 2006-03-29 |
KR20050101225A (ko) | 2005-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4148041B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
WO2004077353A1 (ja) | 画像処理装置および方法、並びにプログラム | |
WO2004072898A1 (ja) | 信号処理装置および方法、並びにプログラム | |
WO2004077351A1 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP2004264918A (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
WO2004077354A1 (ja) | 画像処理装置および方法、並びにプログラム | |
JP4214462B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161729B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP2004259232A (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4182827B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4161734B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4325296B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4161727B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161733B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161732B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161731B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161735B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4419453B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4182826B2 (ja) | 信号処理装置および信号処理方法、並びにプログラムおよび記録媒体 | |
JP4161730B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4175131B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161728B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4155046B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4178983B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP2004246590A (ja) | 画像処理装置および方法、記録媒体、並びにプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
ENP | Entry into the national phase |
Ref document number: 2006159368 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10544873 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020057015860 Country of ref document: KR Ref document number: 20048051050 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057015860 Country of ref document: KR |
|
122 | Ep: pct application non-entry in european phase | ||
WWP | Wipo information: published in national office |
Ref document number: 10544873 Country of ref document: US |