WO2004077353A1 - Image processing apparatus and method, and program - Google Patents
Image processing apparatus and method, and program
- Publication number
- WO2004077353A1 (PCT/JP2004/001584; JP2004001584W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- data
- pixels
- image
- real world
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4023—Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Definitions
- the present invention relates to an image processing apparatus, method, and program, and more particularly to an image processing apparatus, method, and program that take into account the real world from which data is acquired.
- a second signal is obtained by detecting, with a sensor, a first signal that is a real-world signal having a first dimension.
- a third signal (image signal) is generated in which distortion is reduced compared with the second signal.
- signal processing that estimates the first signal (image signal) from a second signal (image signal) that has fewer dimensions than the first dimension, is obtained by projecting the first real-world signal, and therefore lacks part of the continuity of the real-world signal, by exploiting the fact that the data has continuity corresponding to the missing continuity of the real-world signal, has not been considered so far. Disclosure of the invention. The present invention has been made in view of such a situation, and its purpose is to make it possible, by taking into account the real world from which the data was acquired, to obtain processing results that are more accurate and more precise with respect to real-world events.
- the image processing apparatus according to the present invention includes data continuity detecting means for detecting the continuity of data in image data that is composed of a plurality of pixels and is obtained by projecting a real-world optical signal onto a plurality of detection elements, each having a spatio-temporal integration effect, so that part of the continuity of the real-world optical signal is missing; and
- real world estimating means for generating a second function that approximates a first function representing the real-world optical signal, by weighting each pixel in the image data according to its position in at least a one-dimensional direction of the spatio-temporal directions of the image data, based on the continuity detected by the data continuity detecting means, and by approximating the image data on the assumption that the pixel value of each pixel is a value obtained by at least the one-dimensional integration effect.
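As an illustrative sketch of the estimation described above (not the patent's actual implementation), the second function can be modeled as a low-degree polynomial whose integral over each pixel's one-dimensional extent must reproduce that pixel's value, with the claimed per-pixel weights entering a least-squares fit. All names and the unit-pixel assumption are hypothetical.

```python
import numpy as np

def estimate_second_function(pixel_values, weights, degree=2):
    """Fit polynomial coefficients c_k so that the integral of
    f(x) = sum_k c_k * x**k over each unit pixel [i, i+1) reproduces
    that pixel's value, using per-pixel weights (weighted least squares)."""
    n = len(pixel_values)
    # Integral of x**k over [i, i+1) is ((i+1)**(k+1) - i**(k+1)) / (k+1)
    A = np.array([[((i + 1) ** (k + 1) - i ** (k + 1)) / (k + 1)
                   for k in range(degree + 1)] for i in range(n)])
    w = np.sqrt(np.asarray(weights, dtype=float))
    coeffs, *_ = np.linalg.lstsq(A * w[:, None],
                                 np.asarray(pixel_values, dtype=float) * w,
                                 rcond=None)
    return coeffs

# Pixel values produced by integrating f(x) = x over unit pixels:
vals = [0.5, 1.5, 2.5, 3.5]
c = estimate_second_function(vals, [1, 1, 1, 1], degree=1)  # c ≈ (0, 1)
```

Because each observed pixel value is treated as an integral rather than a point sample, the recovered function carries sub-pixel information that point-wise interpolation would discard.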
- the real-world estimating means can generate the second function approximating the first function that represents the real-world optical signal by weighting each pixel of the image data according to its distance, in at least a one-dimensional direction of the spatio-temporal directions, from the pixel of interest in the image data, based on the data continuity, and by approximating the image data on the assumption that the pixel value of each pixel is a value obtained by at least the one-dimensional integration effect.
- the real world estimating means may set to zero the weight of a pixel whose distance, in the at least one-dimensional direction, from a line corresponding to the data continuity is greater than a predetermined distance.
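One way the zero-weight rule above could look in code (a hypothetical helper; the line parameters and cutoff distance are illustrative assumptions, not values from the claims):

```python
import numpy as np

def continuity_weights(xs, ys, angle_deg, x0=0.0, y0=0.0, max_dist=1.5):
    """Weight each pixel by its perpendicular distance from the line of
    data continuity through (x0, y0) at angle_deg; pixels farther than
    max_dist get weight zero, as in the claim."""
    theta = np.deg2rad(angle_deg)
    d = np.abs(-np.sin(theta) * (np.asarray(xs, dtype=float) - x0)
               + np.cos(theta) * (np.asarray(ys, dtype=float) - y0))
    # Linear falloff inside the cutoff, exactly zero outside it
    return np.where(d > max_dist, 0.0, 1.0 - d / max_dist)

w = continuity_weights([1.0, 2.0], [1.0, -2.0], 45.0)  # on-line pixel vs. far pixel
```

Excluding far-off pixels keeps pixels that belong to other image structures from contaminating the fit along the detected continuity.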
- The apparatus may further include pixel value generating means for generating a pixel value corresponding to a pixel of a desired size by integrating the first function estimated by the real world estimating means in at least the one-dimensional direction, in desired units.
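The pixel value generation above can be sketched as re-integration: once the estimated function's coefficients are available (for example from a fit of the kind the claims describe), integrating it over sub-pixel units of the desired size yields pixel values at a higher resolution. The function names and uniform pixel grid are assumptions for illustration.

```python
import numpy as np

def reintegrate(coeffs, n_pixels, scale=2):
    """Integrate the estimated polynomial over sub-pixel units of width
    1/scale; multiplying by scale makes each output the mean of the
    function over its extent, so overall brightness is preserved."""
    def F(x):  # antiderivative of f(x) = sum_k c_k * x**k
        return sum(c * x ** (k + 1) / (k + 1) for k, c in enumerate(coeffs))
    edges = np.linspace(0.0, float(n_pixels), n_pixels * scale + 1)
    return np.array([(F(b) - F(a)) * scale
                     for a, b in zip(edges[:-1], edges[1:])])

# Doubling the resolution of two pixels under f(x) = x:
hi = reintegrate([0.0, 1.0], n_pixels=2, scale=2)  # → [0.25, 0.75, 1.25, 1.75]
```

The same mechanism supports the other integration units discussed later (time direction, motion-blur removal), by changing the interval over which the estimated function is integrated.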
- the real world estimating means can generate a second function approximating the first function representing the real-world optical signal by weighting each pixel according to the features of each pixel in the image data and, based on the continuity of the data, approximating the image data on the assumption that the pixel value of each pixel corresponding to a position in at least a one-dimensional direction of the spatio-temporal directions from the pixel of interest is a value obtained by the integration effect in that one-dimensional direction.
- the real world estimating means can set a value corresponding to a first derivative of a waveform of an optical signal corresponding to each pixel as a feature of each pixel.
- the real world estimating means can set a value corresponding to the first derivative based on a change in the pixel value between each pixel and a pixel surrounding the pixel as a feature of each pixel.
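A minimal sketch of such a per-pixel feature, assuming finite differences of neighboring pixel values stand in for the derivative of the waveform (the weighting rule at the end is an assumption, not a formula from the claims):

```python
import numpy as np

def first_derivative_features(pixels):
    """Approximate the first derivative of the underlying waveform at each
    pixel from pixel-value changes against the surrounding pixels:
    central differences inside, one-sided differences at the ends."""
    return np.gradient(np.asarray(pixels, dtype=float))

def weights_from_features(features, eps=1e-6):
    """Turn |f'| into weights in (0, 1]: pixels where the waveform changes
    steeply (e.g. on a thin line or edge) are emphasized."""
    f = np.abs(features)
    return (f + eps) / (f.max() + eps)

feats = first_derivative_features([10, 10, 40, 90, 100, 100])
# feats → [0, 15, 40, 30, 5, 0]
```

A second-derivative feature, as in the following paragraphs, could be sketched the same way by applying the difference operator twice.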
- the real world estimating means can set a value corresponding to a second derivative of a waveform of an optical signal corresponding to each pixel as a feature of each pixel.
- the real world estimating means can set, as a feature of each pixel, a value corresponding to the second derivative based on a change in pixel value between each pixel and its peripheral pixels.
- the image processing method according to the present invention detects the continuity of data in image data that is composed of a plurality of pixels and is obtained by projecting a real-world optical signal onto a plurality of detection elements, each having a spatio-temporal integration effect, so that part of the continuity of the real-world optical signal is missing; and
- based on that continuity, weights each pixel in the image data according to its position in at least the one-dimensional direction, and approximates the image data on the assumption that the pixel value of each pixel is a value obtained by at least the one-dimensional integration effect.
- the program according to the present invention causes a computer to execute a data continuity detection step of detecting data continuity in image data that is composed of a plurality of pixels and is obtained by projecting a real-world optical signal onto a plurality of detection elements, each having a spatio-temporal integration effect, so that part of the continuity of the real-world optical signal is missing; and
- a real world estimation step of generating, based on the detected data continuity and on positions in at least a one-dimensional direction of the spatio-temporal directions of the image data, a second function that approximates the first function representing the real-world optical signal.
- according to the present invention, the continuity of data is detected in image data that consists of a plurality of pixels and is obtained by projecting a real-world optical signal onto a plurality of detection elements, each having a spatio-temporal integration effect, so that part of the continuity of the real-world optical signal is missing; and
- based on that continuity, for positions in at least a one-dimensional direction of the spatio-temporal directions of the image data, a second function is generated that approximates the first function representing the optical signal.
- FIG. 1 is a diagram illustrating the principle of the present invention.
- FIG. 2 is a block diagram illustrating an example of a configuration of the signal processing device.
- FIG. 3 is a block diagram showing a signal processing device.
- FIG. 4 is a diagram illustrating the principle of processing of a conventional signal processing device.
- FIG. 5 is a diagram illustrating the principle of processing of the signal processing device.
- FIG. 6 is a diagram for more specifically explaining the principle of the present invention.
- FIG. 7 is a diagram for more specifically explaining the principle of the present invention.
- FIG. 8 is a diagram illustrating an example of the arrangement of pixels on the image sensor.
- FIG. 9 is a diagram for explaining the operation of the detection element which is a CCD.
- FIG. 10 is a diagram for explaining the relationship between the light incident on the detection elements corresponding to the pixels D to F and the pixel value.
- FIG. 11 is a diagram illustrating the relationship between the passage of time, the light incident on the detection element corresponding to one pixel, and the pixel value.
- FIG. 12 is a diagram illustrating an example of an image of a linear object in the real world.
- FIG. 13 is a diagram illustrating an example of pixel values of image data obtained by actual imaging.
- FIG. 14 is a schematic diagram of image data.
- FIG. 15 is a diagram showing an example of an image, in the real world 1, of an object that has a single color different from the background and a straight edge.
- FIG. 16 is a diagram illustrating an example of pixel values of image data obtained by actual imaging.
- FIG. 17 is a schematic diagram of image data.
- FIG. 18 is a diagram illustrating the principle of the present invention.
- FIG. 19 is a diagram illustrating the principle of the present invention.
- FIG. 20 is a diagram illustrating an example of generation of high-resolution data.
- FIG. 21 is a diagram illustrating approximation by a model.
- FIG. 22 is a diagram illustrating model estimation based on M pieces of data.
- FIG. 23 is a diagram illustrating the relationship between real-world signals and data.
- FIG. 24 is a diagram showing an example of data of interest when formulating an equation.
- FIG. 25 is a diagram illustrating signals for two objects in the real world and values belonging to a mixed region when an equation is formed.
- FIG. 26 is a diagram for explaining the stationarity expressed by the equations (18), (19), and (22).
- FIG. 27 is a diagram illustrating an example of M pieces of data extracted from the data.
- FIG. 28 is a diagram illustrating an area where a pixel value that is data is obtained.
- FIG. 29 is a diagram illustrating approximation of the position of a pixel in the spatiotemporal direction.
- FIG. 30 is a diagram for explaining integration of real-world signals in the time direction, that is, the two-dimensional spatial direction in the data.
- FIG. 31 is a diagram illustrating an integration area when generating high-resolution data having a higher resolution in the spatial direction.
- FIG. 32 is a diagram illustrating an integration area when generating high-resolution data having a higher resolution in the time direction.
- FIG. 33 is a diagram for explaining an integration area when generating high-resolution data from which motion blur has been removed.
- FIG. 34 is a diagram illustrating an integration area when generating high-resolution data having a higher resolution in the time-space direction.
- FIG. 35 shows the original image of the input image.
- FIG. 36 is a diagram illustrating an example of the input image.
- FIG. 37 is a diagram showing an image obtained by applying the conventional classification adaptive processing.
- FIG. 38 is a diagram showing a result of detecting a thin line region.
- FIG. 39 is a diagram illustrating an example of an output image output from the signal processing device.
- FIG. 40 is a flowchart illustrating signal processing by the signal processing device.
- FIG. 41 is a block diagram illustrating a configuration of the data continuity detection unit.
- FIG. 42 is a diagram showing an image of the real world with a thin line in front of the background.
- FIG. 43 is a view for explaining the approximation of the background by a plane.
- FIG. 44 is a diagram showing a cross-sectional shape of image data on which a thin line image is projected.
- FIG. 45 is a diagram showing a cross-sectional shape of image data on which a thin line image is projected.
- FIG. 46 is a diagram illustrating a cross-sectional shape of image data on which a thin line image is projected.
- FIG. 47 is a diagram for describing processing of detecting a vertex and detecting a monotonous increase / decrease region.
- FIG. 48 is a diagram illustrating a process of detecting a thin line region in which the pixel value of the vertex exceeds the threshold value and the pixel value of an adjacent pixel is equal to or less than the threshold value.
- FIG. 49 is a diagram illustrating the pixel values of the pixels arranged in the direction indicated by the dotted line AA ′ in FIG.
- FIG. 50 is a diagram illustrating a process of detecting the continuity of the monotone increase / decrease region.
- FIG. 51 is a diagram illustrating an example of an image in which a stationary component is extracted by approximation on a plane.
- FIG. 52 is a diagram showing a result of detecting a monotonically decreasing region.
- FIG. 53 is a diagram showing an area where continuity is detected.
- FIG. 54 is a diagram illustrating pixel values of an area where continuity is detected.
- FIG. 55 is a diagram illustrating an example of another process of detecting a region where a thin line image is projected.
- FIG. 56 is a flowchart for explaining the processing of the continuity detection.
- FIG. 57 is a diagram illustrating a process of detecting the continuity of data in the time direction.
- FIG. 58 is a block diagram illustrating a configuration of the non-stationary component extraction unit.
- FIG. 59 is a diagram illustrating the number of rejections.
- FIG. 60 is a diagram illustrating an example of an input image.
- FIG. 61 is a diagram showing an image in which a standard error obtained as a result of approximation by a plane without rejection is used as a pixel value.
- FIG. 62 is a diagram illustrating an image in which the standard error obtained as a result of rejection and approximation by a plane is used as a pixel value.
- FIG. 63 is a diagram illustrating an image in which the number of rejections is set as a pixel value.
- FIG. 64 is a diagram illustrating an image in which the inclination of the plane in the spatial direction X is a pixel value.
- FIG. 65 is a diagram illustrating an image in which the inclination of the plane in the spatial direction Y is a pixel value.
- FIG. 66 is a diagram showing an image composed of approximate values indicated by a plane.
- FIG. 67 is a diagram illustrating an image including a difference between an approximate value indicated by a plane and a pixel value.
- FIG. 68 is a flowchart illustrating the process of extracting the non-stationary component.
- FIG. 69 is a flowchart for explaining the process of extracting the stationary component.
- FIG. 70 is a flowchart illustrating another process of extracting the stationary component.
- FIG. 71 is a flowchart illustrating still another process of extracting the stationary component.
- FIG. 72 is a block diagram showing another configuration of the data continuity detecting unit.
- FIG. 73 is a view for explaining activity in an input image having data continuity.
- FIG. 74 is a diagram illustrating a block for detecting an activity.
- FIG. 75 is a diagram for explaining an angle of data continuity with respect to activity.
- FIG. 76 is a block diagram showing a more detailed configuration of the data continuity detector.
- FIG. 77 is a diagram illustrating a set of pixels.
- FIG. 78 is a view for explaining the relationship between the position of a set of pixels and the angle of data continuity.
- FIG. 79 is a flowchart for describing processing for detecting data continuity.
- FIG. 80 is a diagram showing a set of pixels extracted when detecting the continuity angle of data in the time direction and the space direction.
- FIG. 81 is a block diagram showing another, more detailed configuration of the data continuity detection unit.
- FIG. 82 is a diagram illustrating sets of pixels, each consisting of a number of pixels that depends on the range of the angle of the set straight line.
- FIG. 83 is a view for explaining the range of the angle of the set straight line.
- FIG. 84 is a diagram illustrating the range of the angle of the set straight line, the number of pixel sets, and the number of pixels for each pixel set.
- FIG. 85 is a diagram illustrating the number of pixel sets and the number of pixels for each pixel set.
- FIG. 86 is a diagram illustrating the number of pixels in each pixel set.
- FIG. 87 is a diagram illustrating the number of pixel sets and the number of pixels for each pixel set.
- FIG. 88 is a diagram illustrating the number of pixel sets and the number of pixels for each pixel set.
- FIG. 89 is a diagram illustrating the number of pixel sets and the number of pixels for each pixel set.
- FIG. 90 is a diagram illustrating the number of pixel sets and the number of pixels for each pixel set.
- FIG. 91 is a diagram illustrating the number of pixel sets and the number of pixels for each pixel set.
- FIG. 92 is a diagram illustrating the number of pixel sets and the number of pixels for each pixel set.
- FIG. 93 is a flowchart illustrating a process of detecting data continuity.
- FIG. 94 is a block diagram showing still another configuration of the data continuity detecting unit.
- FIG. 95 is a block diagram showing a more detailed configuration of the data continuity detector.
- FIG. 96 is a diagram illustrating an example of
- FIG. 97 is a diagram illustrating a process of calculating the absolute value of the pixel value difference between the target block and the reference block.
- FIG. 98 is a diagram illustrating the distance in the spatial direction X between the position of a pixel around the target pixel and a straight line having an angle θ.
- FIG. 99 is a diagram illustrating the relationship between the shift amount and the angle θ.
- FIG. 100 is a diagram showing the distance in the spatial direction X, with respect to the shift amount, between the position of a pixel around the target pixel and a straight line that passes through the target pixel at an angle θ.
- FIG. 101 is a diagram showing the reference block whose distance is minimum from a straight line that passes through the pixel of interest at an angle θ with respect to the axis in the spatial direction X.
- FIG. 102 is a diagram for explaining a process of reducing the range of the detected data continuity angle to 1/2.
- FIG. 103 is a flowchart for explaining processing for detecting data continuity.
- FIG. 104 is a diagram showing blocks extracted when detecting the continuity angle of the data in the time direction and the spatial direction.
- FIG. 105 is a block diagram illustrating a configuration of a data continuity detecting unit that executes a process of detecting data continuity based on a component signal of an input image.
- FIG. 106 is a block diagram illustrating a configuration of a data continuity detection unit that executes a process of detecting data continuity based on a component signal of an input image.
- FIG. 107 is a block diagram showing still another configuration of the data continuity detecting unit.
- FIG. 108 is a view for explaining the continuity angle of data with respect to a reference axis in an input image.
- FIG. 109 is a diagram illustrating an angle of data continuity with respect to a reference axis in an input image.
- FIG. 110 is a diagram illustrating an angle of data continuity with respect to a reference axis in an input image.
- FIG. 111 is a diagram illustrating the relationship between a change in pixel value with respect to the spatial position of a pixel in an input image and a regression line.
- FIG. 112 is a diagram illustrating the angle between the regression line and, for example, the axis indicating the spatial direction X, which is the reference axis.
- FIG. 113 is a diagram showing an example of the area.
- FIG. 114 is a flowchart illustrating a process of detecting data continuity performed by the data continuity detection unit having the configuration illustrated in FIG. 107.
- FIG. 115 is a block diagram showing still another configuration of the data continuity detecting unit.
- FIG. 116 is a diagram illustrating a relationship between a change in a pixel value and a regression line with respect to a position of a pixel in a spatial direction in an input image.
- FIG. 117 is a diagram for explaining the relationship between the standard deviation and a region having data continuity.
- FIG. 118 is a diagram illustrating an example of a region.
- FIG. 119 is a flowchart illustrating a process of detecting data continuity by the data continuity detecting unit configured as illustrated in FIG. 115.
- FIG. 120 is a flowchart illustrating another process of detecting data continuity by the data continuity detecting unit configured as illustrated in FIG. 115.
- FIG. 121 is a block diagram showing a configuration of a data continuity detecting unit, to which the present invention is applied, that detects the angle of a thin line or a binary edge as data continuity information.
- FIG. 122 is a diagram for explaining a method of detecting data continuity information.
- FIG. 123 is a diagram for explaining a method of detecting data continuity information.
- FIG. 124 is a diagram showing a more detailed configuration of the data continuity detector of FIG. 121.
- FIG. 125 is a diagram for explaining the horizontal / vertical determination processing.
- FIG. 126 illustrates the horizontal / vertical determination process.
- FIG. 127A is a diagram illustrating the relationship between a thin line in the real world and a thin line imaged by a sensor.
- FIG. 127B is a diagram illustrating the relationship between a thin line in the real world and a thin line imaged by a sensor.
- FIG. 127C is a diagram for explaining the relationship between a thin line in the real world and a thin line imaged by a sensor.
- FIG. 128A is a diagram for explaining the relationship between the thin lines and the background of the real world image.
- FIG. 128B is a diagram for explaining the relationship between the thin lines and the background of the real world image.
- FIG. 129A is a diagram illustrating the relationship between a thin line of an image captured by a sensor and a background.
- FIG. 129B is a diagram for explaining the relationship between a thin line of an image captured by a sensor and a background.
- FIG. 130A is a diagram illustrating an example of a relationship between a thin line of an image captured by a sensor and a background.
- FIG. 130B is a view for explaining an example of the relationship between a thin line of an image captured by a sensor and a background.
- FIG. 131A is a diagram illustrating the relationship between the thin lines and the background of the real world image.
- FIG. 131B is a diagram illustrating the relationship between the thin lines and the background of the real world image.
- FIG. 132A is a diagram for explaining a relationship between a thin line of an image captured by a sensor and a background.
- FIG. 132B is a diagram for explaining a relationship between a thin line of an image captured by a sensor and a background.
- FIG. 133A is a diagram illustrating an example of a relationship between a thin line of an image captured by a sensor and a background.
- FIG. 133B is a diagram for explaining an example of the relationship between a thin line of an image captured by a sensor and a background.
- FIG. 134 is a diagram showing a model for obtaining the angle of a thin line.
- FIG. 135 is a diagram showing a model for obtaining the angle of a thin line.
- FIG. 136A is a diagram for explaining the maximum value and the minimum value of the pixel values of the dynamic range block corresponding to the target pixel.
- FIG. 136B is a diagram for explaining the maximum value and the minimum value of the pixel values of the dynamic range block corresponding to the target pixel.
- FIG. 137A is a diagram for explaining how to obtain the angle of the thin line.
- FIG. 137B is a diagram for explaining how to obtain the angle of the thin line.
- FIG. 137C is a diagram for explaining how to obtain the angle of the thin line.
- FIG. 138 is a view for explaining how to obtain the angle of a thin line.
- FIG. 139 is a diagram for explaining the extraction block and the dynamic range block.
- FIG. 140 is a diagram for explaining the solution of the least squares method.
- FIG. 141 is a diagram for explaining the solution of the least squares method.
- FIG. 142A is a diagram for explaining binary edges.
- FIG. 142B is a diagram illustrating a binary edge.
- FIG. 142C is a diagram illustrating a binary edge.
- FIG. 143A is a diagram illustrating a binary edge of an image captured by a sensor.
- FIG. 143B is a diagram for explaining binary edges of an image captured by a sensor.
- FIG. 144A is a diagram illustrating an example of a binary edge of an image captured by a sensor.
- FIG. 144B is a diagram illustrating an example of a binary edge of an image captured by a sensor.
- FIG. 145A is a diagram illustrating binary edges of an image captured by a sensor.
- FIG. 145B is a diagram for explaining binary edges of an image captured by a sensor.
- FIG. 146 is a diagram showing a model for determining the angle of a binary edge.
- FIG. 147A is a view for explaining a method of obtaining the angle of a binary edge.
- FIG. 147B is a diagram for explaining a method of obtaining the angle of the binary edge.
- FIG. 147C is a diagram for explaining a method of obtaining the angle of the binary edge.
- FIG. 148 is a view for explaining a method of obtaining the angle of a binary edge.
- FIG. 149 is a flowchart illustrating a process of detecting the angle of a thin line or a binary edge as data continuity.
- FIG. 150 is a flowchart illustrating the data extraction process.
- FIG. 151 is a flowchart for explaining the addition processing to the normal equation.
- FIG. 152A is a diagram comparing the inclination of the thin line obtained by applying the present invention with the angle of the thin line obtained by using the correlation.
- FIG. 152B is a diagram comparing the inclination of the thin line obtained by applying the present invention with the angle of the thin line obtained by using the correlation.
- FIG. 153A is a diagram comparing the slope of a binary edge obtained by applying the present invention with the angle of a thin line obtained using correlation.
- FIG. 153B is a diagram comparing the slope of a binary edge obtained by applying the present invention with the angle of a thin line obtained using correlation.
- FIG. 154 is a block diagram illustrating a configuration of a data continuity detection unit that detects a mixture ratio as data continuity information to which the present invention is applied.
- FIG. 155A is a diagram for explaining how to determine the mixture ratio.
- FIG. 155B is a diagram for explaining how to obtain the mixture ratio.
- FIG. 155C is a diagram for explaining how to determine the mixture ratio.
- FIG. 156 is a flowchart explaining the process of detecting the mixture ratio as data continuity.
- FIG. 157 is a flowchart for explaining the process of adding to the normal equation.
- FIG. 158A is a diagram showing an example of the distribution of the mixing ratio of the fine lines.
- FIG. 158B is a diagram illustrating an example of the distribution of the mixing ratio of the thin lines.
- FIG. 159A is a diagram showing an example of the distribution of the mixture ratio of binary edges.
- FIG. 159B is a diagram showing an example of the distribution of the mixture ratio of binary edges.
- FIG. 160 is a diagram illustrating linear approximation of the mixture ratio.
- FIG. 161A is a diagram for explaining a method of obtaining the motion of an object as data continuity information.
- FIG. 161B is a diagram for explaining a method of obtaining the motion of an object as data continuity information.
- FIG. 162A is a diagram for explaining a method of obtaining the motion of an object as data continuity information.
- FIG. 162B is a diagram for explaining a method of obtaining the motion of an object as data continuity information.
- FIG. 163A is a diagram for explaining a method of obtaining a mixture ratio due to the motion of an object as data continuity information.
- FIG. 163B is a diagram for explaining a method of obtaining a mixture ratio due to the motion of an object as data continuity information.
- FIG. 163C is a diagram for explaining a method of obtaining a mixture ratio due to the motion of an object as data continuity information.
- FIG. 164 is a diagram illustrating linear approximation of the mixture ratio when the mixture ratio due to the motion of the object is obtained as data continuity information.
- FIG. 165 is a block diagram illustrating a configuration of a data continuity detection unit that detects a processing area to which the present invention is applied as data continuity information.
- FIG. 166 is a flowchart illustrating processing for detecting continuity by the data continuity detection unit in FIG. 165.
- FIG. 167 is a diagram for explaining the integration range of the processing for detecting continuity by the data continuity detection unit in FIG. 165.
- FIG. 168 is a diagram for explaining the integration range of the processing of continuity detection by the data continuity detection unit in FIG. 165.
- FIG. 169 is a block diagram illustrating another configuration of the data continuity detection unit that detects a processing area to which the present invention is applied as data continuity information.
- FIG. 170 is a flowchart illustrating processing for detecting continuity by the data continuity detecting unit in FIG. 169.
- FIG. 171 is a diagram for explaining the integration range of the continuity detection processing by the data continuity detection unit in FIG. 169.
- FIG. 172 is a view for explaining the integration range of the processing of the continuity detection by the data continuity detection unit in FIG. 169.
- FIG. 173 is a block diagram illustrating a configuration of the real world estimating unit 102.
- FIG. 174 is a diagram illustrating a process of detecting the width of a thin line in a signal in the real world.
- FIG. 175 is a diagram illustrating a process of detecting the width of a thin line in a signal in the real world.
- FIG. 176 is a diagram illustrating the process of estimating the level of the thin-line signal in the real-world signal.
- FIG. 177 is a flowchart illustrating the process of estimating the real world.
- FIG. 178 is a block diagram illustrating another configuration of the real world estimation unit.
- FIG. 179 is a block diagram illustrating the configuration of the boundary detection unit.
- FIG. 180 is a diagram for explaining the process of calculating the distribution ratio.
- FIG. 181 is a diagram for explaining the process of calculating the distribution ratio.
- FIG. 182 is a diagram for explaining the process of calculating the distribution ratio.
- FIG. 183 is a diagram for explaining a process of calculating a regression line indicating a boundary of a monotone increase / decrease region.
- FIG. 184 is a diagram for explaining a process of calculating a regression line indicating a boundary of a monotone increase / decrease region.
- FIG. 185 is a flowchart illustrating the process of estimating the real world.
- FIG. 186 is a flowchart illustrating the process of boundary detection.
- FIG. 187 is a block diagram illustrating a configuration of a real world estimating unit that estimates a differential value in a spatial direction as real world estimation information.
- FIG. 188 is a flowchart for explaining the processing of the real world estimation by the real world estimation unit in FIG.
- FIG. 189 is a diagram illustrating a reference pixel.
- FIG. 190 is a view for explaining positions where differential values in the spatial direction are obtained.
- FIG. 191 is a diagram for explaining the relationship between the differential value in the spatial direction and the shift amount.
- FIG. 192 is a block diagram illustrating a configuration of a real world estimating unit that estimates the inclination in the spatial direction as real world estimation information.
- FIG. 193 is a flowchart illustrating a process of real world estimation by the real world estimation unit in FIG. Four
- FIG. 194 is a view for explaining the processing for obtaining the inclination in the spatial direction.
- FIG. 195 is a diagram for explaining the processing for obtaining the inclination in the spatial direction.
- FIG. 196 is a block diagram illustrating a configuration of a real world estimating unit that estimates a differential value in the frame direction as real world estimation information.
- FIG. 197 is a flowchart describing the processing of the real world estimation by the real world estimation unit in FIG. 196.
- FIG. 198 is a view for explaining reference pixels.
- FIG. 199 is a diagram for explaining a position for obtaining a differential value in the frame direction.
- FIG. 200 is a diagram for explaining the relationship between the differential value in the frame direction and the shift amount.
- FIG. 201 is a block diagram illustrating a configuration of a real world estimating unit that estimates a tilt in a frame direction as real world estimation information.
- FIG. 202 is a flowchart illustrating the process of real world estimation by the real world estimating unit of FIG. 201.
- FIG. 203 is a view for explaining the processing for obtaining the inclination in the frame direction.
- FIG. 204 is a view for explaining the processing for obtaining the inclination in the frame direction.
- FIG. 205 is a diagram for explaining the principle of the function approximation method, which is an example of the embodiment of the real world estimation unit in FIG. 3.
- FIG. 206 is a view for explaining the integration effect when the sensor is CCD.
- FIG. 207 is a view for explaining a specific example of the integration effect of the sensor of FIG. 206.
- FIG. 208 is a view for explaining another specific example of the integration effect of the sensor of FIG. 206.
- FIG. 209 is a diagram showing the real world region containing fine lines shown in FIG.
- FIG. 210 illustrates the principle of an example of the embodiment of the real world estimating unit in FIG. 3 in comparison with the example in FIG.
- FIG. 211 is a diagram illustrating the thin-line-containing data area shown in FIG.
- FIG. 212 is a graph in which each of the pixel values included in the thin line containing data area of FIG. 211 is graphed.
- FIG. 213 is a graph of an approximation function that approximates each pixel value included in the thin line containing data area of FIG.
- FIG. 214 is a diagram for explaining the stationarity in the spatial direction of the real world region containing fine lines shown in FIG.
- FIG. 215 is a graph in which each of the pixel values included in the thin line containing data area of FIG. 211 is graphed.
- FIG. 216 is a diagram illustrating a state in which each of the input pixel values shown in FIG. 215 is shifted by a predetermined shift amount.
- FIG. 217 is a graph showing an approximation function that approximates each pixel value included in the thin-line-containing data area of FIG. 212 in consideration of the spatial continuity.
- FIG. 218 is a diagram illustrating a spatial mixing region.
- FIG. 219 is a diagram illustrating an approximation function that approximates a real-world signal in the spatial mixing region.
- FIG. 220 is a graph of an approximation function that approximates the real-world signal corresponding to the thin-line-containing data area in FIG. 212, taking into account both the integration characteristics of the sensor and the stationarity in the spatial direction.
- FIG. 221 is a block diagram illustrating a configuration example of a real-world estimator that uses a first-order polynomial approximation method among the function approximation methods having the principle shown in FIG. 205.
- FIG. 222 is a flowchart illustrating a real world estimation process performed by the real world estimation unit having the configuration of FIG. 221.
- FIG. 223 is a diagram illustrating the tap range.
- FIG. 224 is a view for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 225 is a view for explaining the integration effect when the sensor is CCD.
- FIG. 226 is a view for explaining the distance in the cross-sectional direction.
- FIG. 227 is a block diagram illustrating a configuration example of a real-world estimator that uses a quadratic polynomial approximation method among the function approximation methods having the principle shown in FIG. 205.
- FIG. 228 is a flowchart illustrating the estimation processing of the real world executed by the real world estimation unit having the configuration of FIG. 227.
- FIG. 229 is a diagram illustrating the tap range.
- FIG. 230 illustrates the direction of continuity in the spatiotemporal direction.
- FIG. 231 is a view for explaining the integration effect when the sensor is CCD.
- FIG. 232 is a diagram for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 233 is a diagram for explaining signals in the real world having continuity in the spatiotemporal direction.
- FIG. 234 is a block diagram illustrating a configuration example of a real-world estimator that uses a three-dimensional function approximation method among the function approximation methods having the principle shown in FIG. 205.
- FIG. 235 is a flowchart illustrating the real world estimation processing executed by the real world estimation unit having the configuration of FIG. 234.
- FIG. 236 is a diagram for explaining the principle of the reintegration method, which is an example of the embodiment of the image generation unit in FIG. 3.
- FIG. 237 is a diagram illustrating an example of an input pixel and an approximation function that approximates a real-world signal corresponding to the input pixel.
- FIG. 238 is a view for explaining an example of creating four high-resolution pixels in the one input pixel shown in FIG. 237 from the approximation function shown in FIG. 237.
- FIG. 239 is a block diagram illustrating a configuration example of an image generation unit that uses a one-dimensional reintegration method among the reintegration methods having the principle shown in FIG. 236.
- FIG. 240 is a flowchart illustrating the image generation processing executed by the image generation unit having the configuration of FIG. 239.
- FIG. 241 is a diagram illustrating an example of an original image of the input image.
- FIG. 242 is a diagram illustrating an example of image data corresponding to the image of FIG. 241.
- FIG. 243 is a diagram illustrating an example of an input image.
- FIG. 244 is a diagram illustrating an example of image data corresponding to the image of FIG. 243.
- FIG. 246 is a diagram illustrating an example of image data corresponding to the image of FIG. 245.
- FIG. 247 is a diagram illustrating an example of an image obtained by performing the processing of the one-dimensional reintegration method of the present invention on an input image.
- FIG. 248 is a diagram illustrating an example of image data corresponding to the image in FIG. 247.
- FIG. 249 is a diagram for explaining signals in the real world having stationarity in the spatial direction.
- FIG. 250 is a block diagram illustrating a configuration example of an image generation unit that uses a two-dimensional reintegration method among the reintegration methods having the principle shown in FIG. 236.
- FIG. 251 is a diagram for explaining the distance in the cross-sectional direction.
- FIG. 252 is a flowchart illustrating an image generation process performed by the image generation unit having the configuration of FIG. 250.
- FIG. 253 is a diagram illustrating an example of an input pixel.
- FIG. 254 is a diagram for explaining an example of creating four high-resolution pixels in the one input pixel shown in FIG. 253 by the two-dimensional reintegration method.
- FIG. 255 is a diagram illustrating the direction of continuity in the spatiotemporal direction.
- FIG. 256 is a block diagram illustrating a configuration example of an image generation unit that uses a three-dimensional reintegration method among the reintegration methods having the principle shown in FIG. 236.
- FIG. 257 is a flowchart illustrating an image generation process performed by the image generation unit having the configuration of FIG. 256.
- FIG. 258 is a block diagram showing another configuration of the image generation unit to which the present invention is applied.
- FIG. 259 is a flowchart illustrating the process of generating an image by the image generating unit in FIG. 258.
- FIG. 260 is a diagram illustrating a process of generating a quadruple-density pixel from an input pixel.
- FIG. 261 is a diagram showing a relationship between an approximate function indicating a pixel value and a shift amount.
- FIG. 262 is a block diagram showing another configuration of the image generation unit to which the present invention is applied.
- FIG. 263 is a flowchart illustrating processing of generating an image by the image generating unit in FIG. 262.
- FIG. 264 is a diagram illustrating a process of generating a quadruple-density pixel from an input pixel.
- FIG. 265 is a diagram illustrating a relationship between an approximate function indicating a pixel value and a shift amount.
- FIG. 266 is a block diagram illustrating a configuration example of an image generation unit that uses the one-dimensional reintegration method of the class classification adaptive processing correction method, which is an example of the embodiment of the image generation unit in FIG. 3.
- FIG. 267 is a block diagram illustrating a configuration example of the class classification adaptive processing unit of the image generation unit in FIG. 266.
- FIG. 268 is a block diagram illustrating a configuration example of a learning device that determines, by learning, coefficients used by the class classification adaptive processing unit and the class classification adaptive processing correction unit in FIG. 266.
- FIG. 269 is a block diagram illustrating a detailed configuration example of the learning unit for class classification adaptive processing in FIG. 268.
- FIG. 270 is a diagram illustrating an example of a processing result of the classification adaptive processing unit in FIG. 267.
- FIG. 271 is a diagram illustrating a difference image between the predicted image in FIG. 270 and the HD image.
- FIG. 272 is a diagram plotting the pixel values of the HD image in FIG. 270 corresponding to the four HD pixels from the left, among the six HD pixels consecutive in the X direction included in the region shown in FIG. 271, together with specific pixel values of the SD image and the actual waveform (real-world signal).
- FIG. 273 is a diagram illustrating a difference image between the predicted image in FIG. 270 and the HD image.
- FIG. 274 is a diagram plotting the pixel values of the HD image in FIG. 270 corresponding to the four HD pixels from the left, among the six HD pixels consecutive in the X direction included in the region shown in FIG. 273, together with specific pixel values of the SD image and the actual waveform (real-world signal).
- FIG. 275 is a view for explaining the knowledge obtained based on the contents shown in FIG. 272 to FIG. 274.
- FIG. 276 is a block diagram illustrating a configuration example of the class classification adaptive processing correction unit of the image generation unit in FIG. 266.
- FIG. 277 is a block diagram illustrating a detailed configuration example of the class classification adaptive processing correction learning unit in FIG. 268.
- FIG. 278 is a view for explaining the tilt in the pixel.
- FIG. 279 is a diagram illustrating the SD image of FIG. 270 and a feature amount image in which the in-pixel inclination of each pixel of the SD image is used as a pixel value.
- FIG. 280 is a view for explaining a method of calculating an in-pixel inclination.
- FIG. 281 is a view for explaining a method of calculating an in-pixel inclination.
- FIG. 282 is a flowchart illustrating an image generation process performed by the image generation unit having the configuration of FIG. 266.
- FIG. 283 is a flowchart illustrating details of the input image class classification adaptation process of the image generation process of FIG. 282.
- FIG. 284 is a flowchart illustrating the details of the correction processing of the class classification adaptive processing in the image generation processing of FIG. 282.
- FIG. 285 is a diagram for explaining an example of the arrangement of class taps.
- FIG. 286 is a diagram illustrating an example of class classification.
- FIG. 287 is a diagram illustrating an example of a prediction tap arrangement.
- FIG. 288 is a flowchart illustrating the learning processing of the learning device in FIG. 268.
- FIG. 289 is a flowchart for explaining the details of the learning processing for class classification adaptive processing in the learning processing in FIG. 288.
- FIG. 290 is a flowchart illustrating details of the learning process for correcting the class classification adaptive process in the learning process of FIG. 288.
- FIG. 291 is a diagram illustrating the predicted image of FIG. 270 and an image obtained by adding the corrected image to the predicted image (image generated by the image generating unit of FIG. 266).
- FIG. 292 is a block diagram illustrating a first configuration example of a signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG.
- FIG. 293 is a block diagram illustrating a configuration example of an image generation unit that performs the classification adaptive process in the signal processing device in FIG. 292.
- FIG. 294 is a block diagram illustrating a configuration example of a learning device for the image generation unit in FIG. 293.
- FIG. 295 is a flowchart illustrating signal processing executed by the signal processing device having the configuration of FIG. 292.
- FIG. 296 is a flowchart illustrating details of execution processing of the class classification adaptive processing of the signal processing of FIG.
- FIG. 297 is a flowchart illustrating the learning processing of the learning device in FIG. 294.
- FIG. 298 is a block diagram illustrating another example of the embodiment of the signal processing device in FIG. 1, which illustrates a second configuration example of the signal processing device using the combined method.
- FIG. 299 is a flowchart illustrating signal processing executed by the signal processing device having the configuration of FIG. 298.
- FIG. 300 is a block diagram illustrating a third configuration example of the signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG. 1.
- FIG. 301 is a flowchart illustrating signal processing executed by the signal processing device having the configuration shown in FIG. 300.
- FIG. 302 is a block diagram illustrating a fourth configuration example of the signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG.
- FIG. 303 is a flowchart illustrating signal processing executed by the signal processing device having the configuration shown in FIG. 302.
- FIG. 304 is a block diagram illustrating a fifth configuration example of the signal processing device using the combined method, which is another example of the embodiment of the signal processing device in FIG.
- FIG. 305 is a flowchart illustrating signal processing executed by the signal processing device having the configuration in FIG. 304.
- FIG. 306 is a block diagram showing a configuration of another embodiment of the data continuity detecting unit.
- FIG. 307 is a flowchart illustrating the data continuity detection processing by the data continuity detection unit in FIG. 306.
- FIG. 308 is a diagram illustrating an example of data extracted by the real world estimation unit in FIG. 3.
- FIG. 309 is a view for explaining another example of data extracted by the real world estimating unit in FIG. 3.
- FIG. 310 is a diagram illustrating a comparison between the case where the data of FIG. 308 is used as the data extracted by the real world estimator of FIG. 3 and the case where the data of FIG. 309 is used.
- FIG. 311 is a diagram showing an example of an input image from the sensor of FIG.
- FIG. 312 is a diagram illustrating an example of a weighting method for performing weighting according to the distance in the cross-sectional direction.
- FIG. 313 is a diagram for explaining the distance in the cross-sectional direction.
- FIG. 314 is another diagram for explaining the distance in the sectional direction.
- FIG. 315 is a diagram illustrating an example of a weighting method for performing weighting according to spatial correlation.
- FIG. 316 is a diagram illustrating an example of an image generated based on the estimated real world, in which the real world is estimated without using the weighting method.
- FIG. 317 is a diagram illustrating an example of an image in which the real world is estimated using a weighting method and is generated based on the estimated real world.
- FIG. 318 is a diagram showing another example of an image in which the real world is estimated without using the weighting method and which is generated based on the estimated real world.
- FIG. 319 is a diagram illustrating another example of an image in which the real world is estimated using the weighting method and is generated based on the estimated real world.
- FIG. 320 is a diagram illustrating an example of a signal of the real world 1 having stationarity in the space-time direction.
- FIG. 321 is a diagram illustrating an example of a t-section waveform F(t) at a predetermined position x in the spatial direction X and a function fi(t) serving as an index of an approximation function thereof.
- FIG. 322 is a diagram illustrating an example of an approximation function f(t) generated without performing weighting, using the function fi(t) of FIG. 321 as an index.
- FIG. 323A is a diagram showing the time transition of the same t-section waveform F(t) as in FIG. 321, and shows an example of the range including the data extracted by the real-world estimator in FIG. 3.
- FIG. 323B is a diagram showing the time transition of the same t-section waveform F(t) as in FIG. 321, and shows an example of the range including the data extracted by the real-world estimator in FIG. 3.
- FIG. 323C is a diagram showing the time transition of the same t-section waveform F(t) as in FIG. 321, and shows an example of the range including the data extracted by the real-world estimator in FIG. 3.
- FIG. 324 is a diagram for explaining the reason for using each of the first derivative and the second derivative of the waveform as weighting.
- FIG. 325 is a diagram for explaining the reason for using each of the first derivative and the second derivative of the waveform as weighting.
- FIG. 326 is a diagram illustrating an example of a case where a predetermined t-section waveform F (t) is approximated by a one-dimensional polynomial approximation method.
- FIG. 327 is a diagram for explaining the physical meaning of the feature quantity of the approximate function f (x, y) of the real-world signal, which is a two-dimensional polynomial.
- FIG. 328 is a diagram illustrating an example of an input image from the sensor 2.
- FIG. 329 is a diagram illustrating an example of a real-world signal corresponding to the input image of FIG. 328.
- FIG. 330 is a diagram illustrating an example of an image generated based on the estimated real world, in which the real world is estimated without using a method that considers the addition property.
- FIG. 331 is a diagram illustrating another example of an image generated based on the estimated real world, in which the real world is estimated using a method that takes into account the addition property.
- FIG. 332 is a block diagram illustrating a configuration example of a real-world estimator to which the first filtering technique is applied.
- FIG. 333 is a block diagram illustrating another configuration example of the real world estimator to which the first filtering method is applied.
- FIG. 334 is a flowchart illustrating an example of the real world estimation process of the real world estimation unit in FIG.
- FIG. 335 is a block diagram illustrating a detailed configuration example of the filter coefficient generation unit of the real world estimation unit in FIG.
- FIG. 336 is a flowchart illustrating an example of filter coefficient generation processing of the filter coefficient generation unit in FIG. 335.
- FIG. 337 is a block diagram illustrating a configuration example of an image processing device to which the second filtering method is applied.
- FIG. 338 is a block diagram illustrating a detailed configuration example of the image generation unit of the signal processing device in FIG. 337.
- FIG. 339 is a block diagram illustrating another detailed configuration example of the image generation unit of the signal processing device in FIG. 337.
- FIG. 340 is a flowchart illustrating an example of image processing by the image processing device in FIG. 337.
- FIG. 341 is a block diagram illustrating a detailed configuration example of the filter coefficient generation unit of the image generation unit in FIG.
- FIG. 342 is a flowchart illustrating an example of filter coefficient generation processing of the filter coefficient generation unit in FIG. 341.
- FIG. 343 is a block diagram illustrating a configuration example of an image processing apparatus to which the combined method and the second and third filtering methods are applied.
- FIG. 344 is a block diagram illustrating a detailed configuration example of an error estimating unit to which the third filtering method is applied, in the image processing apparatus in FIG. 343.
- FIG. 345 is a block diagram illustrating another detailed configuration example of the error estimation unit to which the third filtering method is applied, in the image processing apparatus in FIG. 343.
- FIG. 346 is a block diagram illustrating a detailed configuration example of the filter coefficient generation unit of the error estimation unit in FIG. 344.
- FIG. 347 is a flowchart illustrating an example of image processing of the image processing apparatus in FIG. 343.
- FIG. 348 is a flowchart illustrating an example of a calculation process of a mapping error of the error estimating unit in FIG. 344.
- FIG. 349 is a flowchart illustrating an example of a filter coefficient generation process of the filter coefficient generation unit in FIG. 346.
- FIG. 350 is a block diagram illustrating a configuration example of a data continuity detection unit to which the third filtering technique is applied.
- FIG. 351 is a flowchart illustrating an example of a process of detecting data continuity of the data continuity detection unit in FIG. 350.
- FIG. 352 is a block diagram illustrating a configuration example of a data continuity detection unit to which the full range search method and the third filtering method are applied.
- FIG. 353 is a flowchart illustrating a process of detecting data continuity of the data continuity detection unit in FIG. 352.
- FIG. 354 is a block diagram illustrating another configuration example of the data continuity detection unit to which the full range search method and the third filtering method are applied.
- FIG. 355 is a flowchart for describing processing for detecting data continuity of the data continuity detecting unit in FIG. 354.
- FIG. 356 is a block diagram illustrating yet another configuration example of the data continuity detection unit to which the full range search method is applied.
- FIG. 357 is a flowchart for describing an example of processing of data continuity detection by the data continuity detection unit in FIG. 356.
- FIG. 358 is a block diagram illustrating a configuration example of a signal processing device to which the full range search method is applied.
- FIG. 359 is a flowchart illustrating an example of signal processing of the signal processing device of FIG. 358.
- FIG. 360 is a flowchart illustrating an example of signal processing of the signal processing device of FIG. 358.
- FIG. 1 illustrates the principle of the present invention.
- Events (phenomena) in the real world 1 include light (an image), sound, pressure, temperature, mass, density, lightness / darkness, and smell.
- Events in the real world 1 are distributed in the spatiotemporal direction.
- the image of the real world 1 is the distribution of the light intensity of the real world 1 in the spatiotemporal direction.
- the events of real world 1 that can be acquired by sensor 2 are converted into data 3 by sensor 2. It can be said that the sensor 2 obtains information indicating an event in the real world 1.
- the sensor 2 converts information indicating an event of the real world 1 into data 3.
- a signal that is information indicating an event (phenomenon) in the real world 1 having dimensions such as space, time, and mass is acquired by the sensor 2 and converted into data.
- a signal that is information indicating an event in the real world 1 is also simply referred to as a signal in the real world 1.
- note that a signal here includes phenomena and events, and also includes signals that the transmitting side does not intend.
- the data 3 (detection signal) output from the sensor 2 is information obtained by projecting information indicating an event of the real world 1 to a lower-dimensional space-time than the real world 1.
- for example, data 3, which is image data of a moving image, is obtained by projecting an image of the real world 1, which has three spatial dimensions and a time dimension, onto space-time with two spatial dimensions and a time dimension.
- also, for example, when data 3 is digital data, data 3 is rounded according to the sampling unit.
- when data 3 is analog data, the information in data 3 is compressed according to the dynamic range, or part of the information is deleted by a limiter.
- data 3 contains significant information for estimating signals that are information indicating events (phenomena) in the real world 1.
- information having stationarity included in data 3 is used as significant information for estimating a signal which is information of the real world 1.
- Stationarity is a newly defined concept.
- the event of the real world 1 includes a certain feature in a direction of a predetermined dimension.
- a certain feature is, for example, that in an object (a tangible object) in the real world 1, a shape, pattern, or color is continuous in the space direction or the time direction, or that a pattern of shapes, patterns, or colors repeats.
- the information indicating the event of the real world 1 includes a certain feature in the direction of the predetermined dimension.
- a linear object such as a thread, a string, or a rope has the constant feature in the length direction that the cross-sectional shape is the same at an arbitrary position in the length direction.
- the constant feature in the spatial direction that the cross-sectional shape is the same at an arbitrary position in the length direction arises from the feature that the linear object is long. Therefore, the image of the linear object has a certain feature in the longitudinal direction, that is, in the spatial direction, that the cross-sectional shape is the same at an arbitrary position in the longitudinal direction.
- similarly, a single-color object, which is a tangible object extending in the spatial direction, has the constant feature in the spatial direction of having the same color at an arbitrary position in the spatial direction.
- likewise, an image of a single-color object, which is a tangible object extending in the spatial direction, has the constant feature in the spatial direction of having the same color at an arbitrary position in the spatial direction.
- the signal of the real world 1 has a certain characteristic in the direction of the predetermined dimension.
- such a feature that is constant in the direction of a predetermined dimension is called continuity.
- the continuity of a signal in the real world 1 (real world) refers to a characteristic of a signal indicating an event in the real world 1 (real world), which is constant in a predetermined dimension.
- data 3 is obtained by projecting, with the sensor 2, a signal indicating information of an event of the real world 1 having a predetermined dimension, and thus includes the continuity of the real-world signal.
- Data 3 can also be said to include the stationarity of the real-world signal projected.
- data 3 includes, as data continuity, a part of the continuity of the signal of the real world 1 (real world).
- the data continuity is a feature of data 3 that is constant in a predetermined dimension direction.
- the continuity of data included in data 3 is used as significant information for estimating a signal that is information indicating an event in the real world 1.
- missing information indicating an event of the real world 1 is generated by performing signal processing on the data 3 using the stationarity of the data.
- the stationarity in the space direction or the time direction is used.
- the sensor 2 is composed of, for example, a digital still camera or a video camera, captures an image of the real world 1, and outputs the obtained image data, which is data 3, to the signal processing device 4.
- the sensor 2 can be, for example, a thermography device or a pressure sensor using photoelasticity.
- the signal processing device 4 is composed of, for example, a personal computer.
- the signal processing device 4 is configured, for example, as shown in FIG. The CPU (Central Processing Unit) 21 executes various processes according to a program stored in the ROM (Read Only Memory) 22 or the storage unit 28.
- in the RAM (Random Access Memory) 23, programs executed by the CPU 21 and data are stored as appropriate.
- An input / output interface 25 is also connected to the CPU 21 via a bus 24.
- the input / output interface 25 is connected to an input unit 26 including a keyboard, a mouse, and a microphone, and an output unit 27 including a display, a speaker, and the like.
- the CPU 21 executes various processes in response to a command input from the input unit 26. Then, the CPU 21 outputs an image, a sound, or the like obtained as a result of the processing to the output unit 27.
- the storage unit 28 connected to the input / output interface 25 is composed of, for example, a hard disk and stores programs executed by the CPU 21 and various data.
- the communication unit 29 communicates with external devices via the Internet or other networks. In the case of this example, the communication unit 29 functions as an acquisition unit that takes in the data 3 output from the sensor 2.
- a program may be acquired via the communication unit 29 and stored in the storage unit 28.
- the drive 30 connected to the input / output interface 25 drives the magnetic disk 51, optical disk 52, magneto-optical disk 53, or semiconductor memory 54 when they are mounted, and acquires the programs and data recorded on them. The acquired programs and data are transferred to and stored in the storage unit 28 as necessary.
- FIG. 3 is a block diagram showing the signal processing device 4. Note that it does not matter whether each function of the signal processing device 4 is realized by hardware or software. That is, each block diagram in this specification may be considered as a hardware block diagram or a function block diagram by software.
- FIG. 3 is a diagram showing a configuration of the signal processing device 4 which is an image processing device.
- the input image (image data as an example of the data 3) input to the signal processing device 4 is supplied to the data continuity detecting unit 101 and the real world estimating unit 102.
- the data continuity detection unit 101 detects data continuity from the input image and supplies data continuity information indicating the detected continuity to the real world estimation unit 102 and the image generation unit 103.
- the data continuity information includes, for example, the position of a region of pixels having data continuity in the input image, the direction of the region of pixels having data continuity (the angle or inclination in the time direction and the spatial direction), or the length of the region of pixels having data continuity. Details of the configuration of the data continuity detecting unit 101 will be described later.
- the real world estimating unit 102 estimates the signal of the real world 1 based on the input image and the data continuity information supplied from the data continuity detecting unit 101.
- the real-world estimating unit 102 estimates an image, which is a real-world signal, incident on the sensor 2 when the input image is acquired.
- the real world estimation unit 102 supplies real world estimation information indicating the result of estimation of the signal of the real world 1 to the image generation unit 103. Details of the configuration of the real world estimation unit 102 will be described later.
- the image generation unit 103 generates a signal closer to the signal of the real world 1 based on the real world estimation information, which indicates the estimated signal of the real world 1 and is supplied from the real world estimation unit 102, and outputs the generated signal.
- alternatively, the image generation unit 103 generates a signal closer to the signal of the real world 1 based on the data continuity information supplied from the data continuity detection unit 101 and the real world estimation information, indicating the estimated signal of the real world 1, supplied from the real world estimation unit 102, and outputs the generated signal.
- the image generation unit 103 generates an image that is closer to the image of the real world 1 based on the real world estimation information, and outputs the generated image as an output image.
- alternatively, based on the data continuity information and the real world estimation information, the image generation unit 103 generates an image that is closer to the image of the real world 1, and outputs the generated image as an output image.
- for example, the image generation unit 103 integrates the estimated image of the real world 1 over a desired range in the space direction or the time direction based on the real world estimation information, thereby generating an image with a higher resolution in the spatial direction or the time direction than the input image, and outputs the generated image as an output image.
- the image generation unit 103 generates an image by extrapolation, and outputs the generated image as an output image.
- FIG. 4 is a diagram for explaining the principle of processing in the conventional signal processing device 121.
- the conventional signal processing device 121 uses data 3 as the reference for processing and performs processing such as increasing the resolution on data 3 as the processing target.
- in the conventional signal processing device 121, the real world 1 is never considered; data 3 is the final criterion, and it is not possible to obtain as output more information than is contained in data 3.
- since data 3 includes distortion caused by the sensor 2 (the difference between the signal which is the information of the real world 1 and data 3), the conventional signal processing device 121 outputs a signal still containing that distortion. Further, depending on the content of the processing of the signal processing device 121, the distortion caused by the sensor 2 existing in data 3 is further amplified, and data including the amplified distortion is output.
- the processing is executed in consideration of (the signal of) the real world 1 itself.
- FIG. 5 is a diagram illustrating the principle of processing in the signal processing device 4 according to the present invention. It is the same as the conventional one in that the sensor 2 acquires a signal that is information indicating an event in the real world 1 and the sensor 2 outputs data 3 obtained by projecting the signal that is the information of the real world 1.
- a signal that is acquired by the sensor 2 and that is information indicating an event of the real world 1 is explicitly considered.
- the signal processing is performed while being aware that the data 3 includes the distortion caused by the sensor 2 (the difference between the signal which is the information of the real world 1 and the data 3).
- as a result, the result of the processing is not limited by the information and distortion contained in data 3, and more accurate and higher-precision processing results can be obtained for events. That is, according to the present invention, a more accurate and higher-precision processing result can be obtained for the signal that is input to the sensor 2 and that is information indicating an event of the real world 1.
- FIG. 6 and FIG. 7 are diagrams for explaining the principle of the present invention more specifically.
- a signal of the real world 1, which is an image, is formed into an image on the light receiving surface of a CCD (Charge Coupled Device), which is an example of the sensor 2, by an optical system 141 composed of a lens, an optical LPF (Low Pass Filter), and the like. Since the CCD, which is an example of the sensor 2, has an integration characteristic, the data 3 output from the CCD differs from the image of the real world 1. Details of the integration characteristic of the sensor 2 will be described later.
- the relationship between the image of the real world 1 acquired by the CCD and the data 3 captured and output by the CCD is explicitly considered. That is, the relationship between data 3 and the signal, which is the real-world information acquired by the sensor 2, is explicitly considered.
- the signal processing device 4 approximates (describes) the real world 1 using a model 161.
- the model 161 is represented by, for example, N variables. More precisely, the model 161 approximates (describes) the signal of the real world 1.
- the signal processor 4 extracts M pieces of data 162 from the data 3.
- the signal processing device 4 uses the continuity of the data included in the data 3.
- the signal processing device 4 extracts the data 162 for predicting the model 161, based on the stationarity of the data included in the data 3.
- the model 161 is bound by the stationarity of the data.
- the model 161 represented by the N variables is predicted from the M pieces of data 162.
- the signal processing device 4 can consider the signal that is the information of the real world 1.
- An image sensor such as a CCD or a complementary metal-oxide semiconductor (CMOS) sensor, which captures an image, projects a signal, which is information of the real world, into two-dimensional data when imaging the real world.
- Each pixel of the image sensor has a predetermined area as a so-called light receiving surface (light receiving area). Light incident on a light receiving surface having a predetermined area is integrated in the spatial direction and the time direction for each pixel, and is converted into one pixel value for each pixel.
- the spatial and temporal integration of an image will be described with reference to FIGS.
- the image sensor captures an image of an object in the real world, and outputs image data obtained as a result of the capture in units of one frame. That is, the image sensor acquires the signal of the real world 1, which is the light reflected by the object of the real world 1, and outputs the data 3.
- an image sensor outputs 30 frames of image data per second.
- the exposure time of the image sensor can be set to 1/30 seconds.
- the exposure time is a period from the time when the image sensor starts converting the incident light into electric charges to the time when the conversion of the incident light into electric charges ends.
- the exposure time is also referred to as a shutter time.
- FIG. 8 is a diagram illustrating an example of the arrangement of pixels on the image sensor.
- A to I indicate individual pixels.
- the pixels are arranged on a plane corresponding to the image displayed by the image data.
- One detection element corresponding to one pixel is arranged on the image sensor.
- one detection element outputs one pixel value corresponding to one pixel constituting the image data.
- the position of the detector element in the spatial direction X corresponds to the position in the horizontal direction on the image displayed by the image data
- the position of the detector element in the spatial direction Y (Y coordinate) corresponds to the position in the vertical direction on the image displayed by the image data.
- the distribution of the light intensity of the real world 1 has a spread in the three-dimensional spatial direction and the temporal direction, but the image sensor acquires the light of the real world 1 in the two-dimensional spatial direction and the temporal direction, Generates data 3 representing the distribution of light intensity in the two-dimensional spatial and temporal directions.
- the detection element, which is a CCD, converts light input to the light receiving surface (light receiving area) (detection area) into electric charge for a period corresponding to the shutter time, and accumulates the converted charge.
- Light is the information (signal) in the real world 1 whose intensity is determined by its position in three-dimensional space and time.
- the distribution of light intensity in the real world 1 has a spread in the three-dimensional spatial direction and the time direction.
- the amount of electric charge accumulated in the detection element, which is a CCD, is almost proportional to the intensity of the light incident on the entire light receiving surface, which has a two-dimensional spatial extent, and to the time during which the light is incident.
- the detection element adds the electric charge converted from the light incident on the entire light receiving surface to the already accumulated electric charge in a period corresponding to the shutter time.
- the detection element integrates light incident on the entire light receiving surface, which has a two-dimensional spatial spread, for a period corresponding to the shutter time, and accumulates an amount of charge corresponding to the integrated light. It can be said that the detection element has an integrating effect with respect to space (the light receiving surface) and time (the shutter time).
- the electric charge accumulated in the detection element is converted into a voltage value by a circuit (not shown), and the voltage value is further converted into a pixel value such as digital data and output as data 3. Therefore, each pixel value output from the image sensor has a value projected onto one-dimensional space, which is the result of integrating a portion of the information (signal) of the real world 1 having a temporal and spatial spread over the time direction of the shutter time and the spatial direction of the light receiving surface of the detection element.
- the pixel value of one pixel is represented by integration of F (x, y, t).
- F (x, y, t) is a function representing the distribution of light intensity on the light receiving surface of the detection element.
- the pixel value P is represented by Expression (1):

  P = ∫_{t1}^{t2} ∫_{y1}^{y2} ∫_{x1}^{x2} F(x, y, t) dx dy dt ... (1)

- in Expression (1), x1 is the spatial coordinate (X coordinate) of the left boundary of the light receiving surface of the detection element, and x2 is the spatial coordinate (X coordinate) of the right boundary of the light receiving surface of the detection element.
- y1 is the spatial coordinate (Y coordinate) of the upper boundary of the light receiving surface of the detection element, and y2 is the spatial coordinate (Y coordinate) of the lower boundary of the light receiving surface of the detection element.
- t1 is the time at which the conversion of incident light into charge started, and t2 is the time at which the conversion of incident light into charge ended.
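As a numerical illustration of Expression (1), the sketch below approximates the triple integral with a midpoint Riemann sum. The light distribution F and the pixel bounds are made-up assumptions: in practice, F is precisely the real-world signal the sensor cannot output directly.

```python
import numpy as np

# Hypothetical light-intensity distribution F(x, y, t); chosen only so the
# example runs -- the real-world F is unknown and must be estimated.
def F(x, y, t):
    return 1.0 + 0.5 * np.sin(x) * np.cos(y) * np.exp(-t)

def pixel_value(F, x1, x2, y1, y2, t1, t2, n=50):
    """Approximate P = ∫t1..t2 ∫y1..y2 ∫x1..x2 F(x,y,t) dx dy dt
    with a midpoint Riemann sum on an n×n×n grid (Expression (1))."""
    xs = np.linspace(x1, x2, n, endpoint=False) + (x2 - x1) / (2 * n)
    ys = np.linspace(y1, y2, n, endpoint=False) + (y2 - y1) / (2 * n)
    ts = np.linspace(t1, t2, n, endpoint=False) + (t2 - t1) / (2 * n)
    X, Y, T = np.meshgrid(xs, ys, ts, indexing="ij")
    cell = ((x2 - x1) / n) * ((y2 - y1) / n) * ((t2 - t1) / n)
    return F(X, Y, T).sum() * cell

# One pixel whose light receiving surface spans [0,1]×[0,1] and whose
# shutter is open from t = 0 to t = 1/30 (an assumed shutter time).
P = pixel_value(F, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0 / 30.0)
```

For a constant F the sum reduces exactly to the volume of the integration region, which is a convenient sanity check on the discretization.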
- the gain of the pixel value of the image data output from the image sensor is corrected, for example, for the entire frame.
- each pixel value of the image data is the integral value of the light incident on the light receiving surface of each detection element of the image sensor, and, of the light incident on the image sensor, the waveform of the light of the real world 1 that is finer than the light receiving surface of the detection element is hidden behind the pixel value, which is an integral value.
- the waveform of a signal expressed with reference to a predetermined dimension is also simply referred to as a waveform.
- in the image data, since the image of the real world 1 is integrated in the spatial direction and the temporal direction in units of pixels, part of the continuity of the image of the real world 1 is missing from the image data, and only another part of the continuity of the image of the real world 1 is included in the image data.
- the image data may include stationarity that has changed from the stationarity of the real world 1 image.
- FIG. 10 is a diagram for explaining the relationship between the light incident on the detection elements corresponding to the pixels D to F and the pixel value.
- F (x) in FIG. 10 is an example of a function that represents the distribution of light intensity in the real world 1 with the coordinate X in the spatial direction X in space (on the detection element) as a variable.
- F (x) is an example of a function representing the distribution of light intensity in the real world 1 when it is constant in the spatial direction Y and the time direction.
- L indicates the length in the spatial direction X of the light receiving surface of the detection element corresponding to pixel D to pixel F.
- the pixel value of one pixel is represented by the integral of F (x).
- the pixel value P of the pixel E is represented by Expression (2):

  P = ∫_{x1}^{x2} F(x) dx ... (2)

- in Expression (2), x1 is the spatial coordinate in the spatial direction X of the left boundary of the light receiving surface of the detection element corresponding to the pixel E, and x2 is the spatial coordinate in the spatial direction X of the right boundary of the light receiving surface of the detection element corresponding to the pixel E.
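The one-dimensional case of Expression (2) can be sketched the same way. The step-edge F(x) below is an assumed example; it also previews the spatial mixing discussed later: the pixel straddling the edge outputs a value strictly between the two light levels.

```python
import numpy as np

# Hypothetical 1-D light distribution F(x): a step edge at x = 2.5
# (background level 0.2, object level 1.0) -- made-up values.
def F(x):
    return np.where(x < 2.5, 0.2, 1.0)

def pixel_value_1d(F, x1, x2, n=1000):
    """P = ∫x1..x2 F(x) dx by the midpoint rule (Expression (2))."""
    xs = np.linspace(x1, x2, n, endpoint=False) + (x2 - x1) / (2 * n)
    return F(xs).sum() * (x2 - x1) / n

# Pixels of width L = 1: the pixel covering [2, 3) straddles the edge and
# receives a mixed value (about 0.6), between the two levels.
values = [pixel_value_1d(F, x, x + 1.0) for x in range(5)]
```

The intermediate value of the straddling pixel is exactly the area-weighted mix of the two levels, which is how the integration effect hides the sub-pixel waveform of the edge.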
- FIG. 11 is a diagram illustrating the relationship between the passage of time, the light incident on the detection element corresponding to one pixel, and the pixel value.
- F (t) in FIG. 11 is a function representing the distribution of light intensity in the real world 1 with time t as a variable.
- F (t) is an example of a function that represents the distribution of light intensity in the real world 1 when it is constant in the spatial direction Y and the spatial direction X.
- t_s indicates the shutter time.
- frame #n-1 is a frame temporally preceding frame #n, and frame #n+1 is a frame temporally following frame #n. That is, frame #n-1, frame #n, and frame #n+1 are displayed in the order of frame #n-1, frame #n, and frame #n+1.
- the shutter time t_s and the frame interval are the same.
- the pixel value of one pixel is represented by the integral of F (t).
- the pixel value P of a pixel in frame #n is represented by Expression (3):

  P = ∫_{t1}^{t2} F(t) dt ... (3)

- in Expression (3), t1 is the time at which the conversion of incident light into charge started, and t2 is the time at which the conversion of incident light into charge ended.
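Expression (3) can likewise be sketched in the time direction. In the assumed example below, the light at a pixel switches on partway through the shutter interval, so the resulting pixel value mixes the dark and bright portions of the interval (the temporal counterpart of spatial mixing).

```python
import numpy as np

# Hypothetical time-varying intensity F(t) at one pixel: the light turns on
# at t = 0.01 within an assumed shutter interval [0, 1/30].
def F(t):
    return np.where(t < 0.01, 0.0, 0.9)

def pixel_value_time(F, t1, t2, n=3000):
    """P = ∫t1..t2 F(t) dt by the midpoint rule (Expression (3))."""
    ts = np.linspace(t1, t2, n, endpoint=False) + (t2 - t1) / (2 * n)
    return F(ts).sum() * (t2 - t1) / n

P = pixel_value_time(F, 0.0, 1.0 / 30.0)
# P = 0.9 * (1/30 - 0.01): only the bright part of the shutter contributes.
```

The single value P cannot distinguish "dim light for the whole shutter" from "bright light for part of it", which is exactly the loss caused by the time integration effect.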
- the integration effect in the spatial direction by the sensor 2 is simply referred to as the spatial integration effect
- the integration effect in the time direction by the sensor 2 is simply referred to as the time integration effect
- the spatial integration effect or the time integration effect is also simply referred to as an integration effect.
- FIG. 12 is a diagram illustrating an image of a linear object (for example, a thin line) in the real world 1, that is, an example of a distribution of light intensity.
- the upper position in the figure indicates the light intensity (level)
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image.
- the position on the right side in the figure indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the image of the linear object in the real world 1 has a certain continuity. That is, the image shown in FIG. 12 has the continuity that the cross-sectional shape (the change in level with respect to the change in position in the direction orthogonal to the length direction) is the same at an arbitrary position in the length direction.
- FIG. 13 is a diagram showing an example of pixel values of image data obtained by actual imaging corresponding to the image shown in FIG.
- FIG. 14 is a schematic diagram of the image data shown in FIG.
- FIG. 14 is a schematic diagram of image data obtained by capturing, with an image sensor, an image of a linear object having a diameter smaller than the length L of the light receiving surface of each pixel and extending in a direction deviating from the pixel array (the vertical or horizontal array of pixels) of the image sensor. The image incident on the image sensor when the image data shown in FIG. 14 was acquired is the image of the linear object of the real world 1 in FIG. 12.
- the upper position in the figure indicates the pixel value
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image
- the position on the right side in the figure indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the directions indicating the pixel values in FIG. 14 correspond to the level directions in FIG. 12, and the spatial direction X and the spatial direction Y in FIG. 14 are the same as the directions in FIG.
- when an image of a linear object having a diameter shorter than the length L of the light receiving surface of each pixel is captured by an image sensor, the linear object is represented in the image data obtained as a result of the capture, schematically, as, for example, a plurality of arc shapes (kamaboko shapes) of a predetermined length arranged diagonally.
- Each arc shape is almost the same.
- One arc shape is formed vertically on one row of pixels or horizontally on one row of pixels.
- one arc shape in FIG. 14 is formed on one column of pixels vertically.
- in the image data shown in FIG. 14, the continuity that the image of the linear object in the real world 1 has, namely that the cross-sectional shape in the spatial direction Y is the same at an arbitrary position in the length direction, is lost.
- it can also be said that the continuity that the image of the linear object in the real world 1 has changed into the continuity that arc shapes of the same shape, formed on one column of pixels vertically or on one row of pixels horizontally, are arranged at regular intervals.
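The change of the thin line's continuity into diagonally arranged shapes can be reproduced with a toy simulation; the grid sizes, line slope, and line width below are arbitrary assumptions. A fine-grained line is rendered and then averaged over blocks to mimic the per-pixel spatial integration of the sensor.

```python
import numpy as np

# Render a thin diagonal line in the "real world" on a fine grid, then
# integrate over 8×8 blocks to simulate the sensor's pixels.
H = W = 64          # fine-grid size (assumed)
BLOCK = 8           # one sensor pixel = 8×8 fine cells (assumed)
fine = np.zeros((H, W))
for x in range(W):
    y = 0.3 * x + 10.0       # line slightly off the pixel axes (assumed slope)
    fine[int(y), x] = 1.0    # thin (one-cell-wide) bright line

# Spatial integration per pixel = mean over each 8×8 block.
pixels = fine.reshape(H // BLOCK, BLOCK, W // BLOCK, BLOCK).mean(axis=(1, 3))

# In each pixel column the line now occupies a pixel or two with fractional
# values, and the bright pixels step diagonally down the image.
col_peaks = pixels.argmax(axis=0)   # row of the brightest pixel per column
```

Every pixel value is a mixture (strictly between 0 and 1), and the brightest pixel per column descends monotonically across the image: the sub-pixel line has been replaced by regularly spaced same-shaped fragments along the diagonal.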
- FIG. 15 is a diagram showing an example of an image of the real world 1 of an object having a single color and a straight edge, which is a color different from the background, that is, an example of the distribution of light intensity.
- the upper position in the figure indicates the light intensity (level)
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image.
- the position on the right side in the figure indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the image of the real world 1 of an object having a straight edge in a color different from the background has a predetermined constancy. That is, the image shown in FIG. 15 has stationarity in which the cross-sectional shape (change in level with respect to change in position in the direction perpendicular to the edge) is the same at an arbitrary position in the length direction of the edge.
- FIG. 16 is a diagram showing an example of pixel values of image data obtained by actual imaging corresponding to the image shown in FIG. As shown in FIG. 16, the image data is composed of pixel values in units of pixels, and thus has a step-like shape.
- FIG. 17 is a schematic diagram of the image data shown in FIG.
- FIG. 17 is a schematic diagram of image data obtained by capturing, with an image sensor, an image of the real world 1 of an object which has a single color and a straight edge, is a color different from the background, and whose edge extends in a direction deviating from the pixel array (the vertical or horizontal array of pixels) of the image sensor.
- the image incident on the image sensor when the image data shown in FIG. 17 was acquired is the image of the real world 1 of an object having a single color and a straight edge, which is a color different from the background, shown in FIG. 15.
- the upper position in the figure indicates the pixel value
- the upper right position in the figure indicates the position in the spatial direction X which is one direction in the spatial direction of the image
- the position on the right side in the figure indicates the position in the spatial direction Y, which is another direction in the spatial direction of the image.
- the direction indicating the pixel value in FIG. 17 corresponds to the direction of the level in FIG. 15, and the spatial direction X and the spatial direction Y in FIG. 17 are the same as the directions in FIG.
- when an image of the real world 1 of an object which has a single color and a straight edge and is a color different from the background is captured by an image sensor, the straight edge is represented in the image data obtained as a result of the capture, schematically, as, for example, a plurality of claw shapes of a predetermined length arranged diagonally.
- Each claw shape is almost the same shape.
- One claw shape is formed vertically on one row of pixels or horizontally on one row of pixels.
- one claw shape is formed vertically on one column of pixels.
- in the image data obtained by capturing with the image sensor the image of the real world 1 of an object which has a single color and a straight edge and is a color different from the background, the continuity that the cross-sectional shape is the same at an arbitrary position in the length direction of the edge is lost.
- it can also be said that the continuity that the image of the real world 1 of an object which has a single color and a straight edge and is a color different from the background has changed into the continuity that claw shapes of the same shape, formed on one column of pixels vertically or on one row of pixels horizontally, are arranged at regular intervals.
- the data continuity detecting unit 101 detects such continuity of data included in, for example, data 3 which is an input image.
- the data continuity detecting unit 101 detects continuity of data by detecting a region having a certain feature in the direction of a predetermined dimension.
- the data continuity detecting unit 101 detects a region shown in FIG. 14 in which the same arc shapes are arranged at regular intervals.
- the data continuity detecting unit 101 detects a region shown in FIG. 17 in which the same claw shapes are arranged at regular intervals.
- the data continuity detecting unit 101 detects data continuity by detecting an angle (inclination) in the spatial direction indicating a similar shape arrangement.
- the data continuity detecting unit 101 also detects data continuity by detecting the angle (movement) in the spatial direction and the temporal direction that indicates the arrangement of the same shapes in the spatial direction and the temporal direction.
- the data continuity detecting unit 101 detects data continuity by detecting a length of an area having a certain characteristic in a direction of a predetermined dimension.
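As a rough sketch of what detecting the direction of data continuity can look like, one can score candidate directions by how little pixel values vary along them. The detection unit 101 described in the patent is far more elaborate; the image, candidate set, and scoring below are all made-up assumptions.

```python
import numpy as np

# Synthetic image that is constant along the direction x - 2y = const,
# i.e. along the step (dy, dx) = (1, 2) -- an assumed test pattern.
img = np.fromfunction(lambda y, x: np.sin(0.5 * (x - 2.0 * y)), (32, 32))

def variation_along(img, dy, dx):
    """Mean squared difference between each pixel and its neighbor offset
    by (dy, dx): small when (dy, dx) follows the data continuity."""
    h, w = img.shape
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    return float(((a - b) ** 2).mean())

# Candidate (dy, dx) steps, each corresponding to an angle in the plane.
candidates = [(1, 0), (1, 1), (1, 2), (1, 3), (0, 1)]
best = min(candidates, key=lambda d: variation_along(img, *d))
# best is (1, 2): the step along which the pattern does not change.
```

The winning step encodes the angle (inclination) of the continuity; real detection must additionally localize the region and cope with noise, which this sketch ignores.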
- the portion of the data 3 in which the image of the real world 1 of the object having a single color and having a linear edge and different from the background is projected by the sensor 2 is also referred to as a binary edge.
- desired high-resolution data 181 is generated from the data 3.
- to generate the high-resolution data 181, the real world 1 is estimated from the data 3, and the high-resolution data 181 is generated based on the estimation result. That is, as shown in FIG. 19, the real world 1 is estimated from the data 3, and the high-resolution data 181 is generated from the estimated real world 1.
- the sensor 2, which is a CCD, has the integration characteristic described above. That is, one unit of data 3 (for example, a pixel value) is calculated by integrating a signal of the real world 1 over the detection region (for example, the light receiving surface) of a detection element.
- by applying to the estimated real world 1 the process by which a virtual high-resolution sensor projects a signal of the real world 1 to data 3, the high-resolution data 181 can be obtained.
- in other words, if the signal of the real world 1 can be estimated from the data 3, one value included in the high-resolution data 181 can be obtained by integrating the signal of the real world 1 (in the spatio-temporal direction) over each detection region of a detection element of the virtual high-resolution sensor.
- when the change of the signal of the real world 1 is smaller than the detection region of a detection element, the data 3 cannot represent that small change. Therefore, by integrating the signal of the real world 1 estimated from the data 3 over regions that are small (in the spatio-temporal direction) relative to the change of the signal, high-resolution data 181 indicating small changes of the signal of the real world 1 can be obtained.
- high-resolution data 181 can be obtained by integrating the estimated real world 1 signal in the detection area.
- the image generation unit 103 integrates the estimated real-world 1 signal in a space-time direction region of each detection element of a virtual high-resolution sensor, for example, to obtain a high-resolution image.
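The idea of the virtual high-resolution sensor can be sketched in one spatial dimension: estimate a continuous signal, then average it over detection areas narrower than the original pixels. The signal f and all widths below are hypothetical illustrations, not taken from the patent.

```python
import numpy as np

def integrate_signal(f, start, end, samples=1000):
    """Average f over [start, end] with midpoint sampling, imitating a
    detection element that integrates incident light over its area."""
    width = (end - start) / samples
    xs = start + (np.arange(samples) + 0.5) * width
    return float(f(xs).mean())

# Hypothetical 1-D real-world signal: a soft edge.
f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 2.0)))

# Low-resolution data 3: one value per unit-width detection area.
low_res = [integrate_signal(f, i, i + 1) for i in range(4)]

# High-resolution data 181: the same (estimated) signal averaged over
# half-width areas of a virtual high-resolution sensor.
high_res = [integrate_signal(f, i * 0.5, (i + 1) * 0.5) for i in range(8)]

# Averaging two half-width pixels reproduces the full-width pixel value.
print(abs((high_res[0] + high_res[1]) / 2 - low_res[0]) < 1e-5)
```

The integration region of the virtual sensor is chosen freely here (half the original pixel width), mirroring the statement that it can be set independently of the real detection areas.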
- the relation between the data 3 and the real world 1, the stationarity, and the spatial mixing in the data 3 are used.
- mixing means that in data 3, signals for two objects in the real world 1 are mixed into one value.
- Spatial mixing refers to spatial mixing of signals for two objects due to the spatial integration effect of the sensor 2.
- Real world 1 itself consists of an infinite number of phenomena, so in order to express real world 1 itself, for example, by mathematical formulas, an infinite number of variables are needed. From Data 3, it is not possible to predict all events in the real world 1.
- the stationary part of the real-world signal, which can be represented by f(x, y, z, t), is approximated by a model 161 represented by N variables. Then, as shown in FIG. 22, the model 161 is predicted from the M pieces of data 162 in the data 3.
- in order to predict the model 161 from the M pieces of data 162, it is necessary, first, to represent the model 161 by N variables based on the stationarity, and second, to formulate, based on the integration characteristics of the sensor 2, an equation using the N variables that shows the relationship between the model 161 represented by the N variables and the M pieces of data 162.
- since the model 161 is represented by the N variables based on the stationarity, the equation using the N variables, which shows the relationship between the model 161 represented by the N variables and the M pieces of data 162, can be said to describe the relationship between the stationary signal part of the real world 1 and the part of the data 3 having data continuity.
- in other words, the data continuity detecting unit 101 detects the region of the data 3 in which data continuity arising from the stationary signal portion of the real world 1 appears, and the features of the data in that region.
- the edge has a slope.
- the arrow B in FIG. 23 indicates the edge inclination.
- the inclination of the predetermined edge can be represented by an angle with respect to a reference axis or a direction with respect to a reference position.
- the inclination of the predetermined edge can be represented by an angle between the coordinate axis in the spatial direction X and the edge.
- the inclination of the predetermined edge can be represented by a direction indicated by the length in the spatial direction X and the length in the spatial direction Y.
- FIG. 23 shows that, with respect to the position of interest (A) of the edge in the image of the real world 1, the claw shapes corresponding to the edge appear at the position indicated by A' in the data 3, and that, corresponding to the inclination of the edge of the image of the real world 1, the claw shapes corresponding to the edge are arranged in the direction of the inclination indicated by B'.
- the model 16 1 represented by N variables approximates a real-world signal portion that causes data continuity in data 3.
- focusing on the values belonging to the mixed region in the data 3 shown in FIG. 24, in which data continuity occurs, an equation is formulated such that the value obtained by integrating the signal of the real world 1 is equal to the value output from the detection element of the sensor 2. For example, multiple such equations can be formulated for the multiple values in the data 3 where data continuity occurs.
- A indicates the position of interest of the edge
- A' indicates (the position of) the pixel in the data 3 corresponding to the position of interest (A) of the edge in the image of the real world 1.
- the mixed area refers to an area of data in which the signals for two objects in the real world 1 are mixed into one value in data 3.
- in the data 3 for an image of the real world 1 of an object having a single color and a straight edge, of a color different from the background, pixel values in which the image of the object having the straight edge and the image of the background are integrated belong to the mixed region.
- FIG. 25 is a diagram illustrating signals for two objects in the real world 1 and values belonging to a mixed region when an equation is formed.
- the left side of FIG. 25 shows the signals of the real world 1 for two objects in the real world 1, which have a predetermined spread in the spatial direction X and the spatial direction Y and are acquired in the detection region of one detection element of the sensor 2.
- the right side of FIG. 25 shows the pixel value P of one pixel of the data 3 onto which the signals of the real world 1 shown on the left side of FIG. 25 are projected by one detection element of the sensor 2. That is, it shows the pixel value P of one pixel of the data 3 onto which the signals of the real world 1 for two objects in the real world 1, acquired by one detection element of the sensor 2 and having a predetermined spread in the spatial direction X and the spatial direction Y, are projected.
- L in FIG. 25 indicates the signal level of the real world 1 in the white part of FIG. 25 for one object in the real world 1.
- R in FIG. 25 indicates the level of the signal of the real world 1 in the shaded portion of FIG. 25 with respect to another object in the real world 1.
- the mixing ratio α indicates the ratio of the signals (areas) for the two objects incident on the detection region, which has a predetermined spread in the spatial direction X and the spatial direction Y, of one detection element of the sensor 2.
- for example, the mixing ratio α indicates the ratio of the area of the level-L signal incident on the detection region of one detection element of the sensor 2, which has a predetermined spread in the spatial direction X and the spatial direction Y, to the area of that detection region.
- the relationship between the level L, the level R, and the pixel value P can be expressed by Expression (4).
- level R may be the pixel value of the pixel of data 3 located on the right side of the pixel of interest.
- level L may be the pixel value of data 3 located to the left of the pixel of interest.
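As a minimal sketch of spatial mixing, assuming expression (4) has the usual area-weighted linear form P = α·L + (1 − α)·R, the mixed pixel value can be generated and the mixing ratio recovered from it; the levels used below are hypothetical.

```python
def mixed_pixel_value(alpha, L, R):
    """Spatial mixing: the detection element integrates light from two
    objects, so the pixel value is an area-weighted blend of the two
    levels (assumed form of expression (4): P = alpha*L + (1-alpha)*R)."""
    return alpha * L + (1.0 - alpha) * R

def estimate_mixing_ratio(P, L, R):
    """Invert the blend to recover alpha from a mixed pixel value and the
    two neighboring levels (e.g. the pixels left and right of it)."""
    return (P - R) / (L - R)

L_level, R_level = 200.0, 40.0           # hypothetical signal levels
P = mixed_pixel_value(0.3, L_level, R_level)
print(P)                                            # 88.0
print(estimate_mixing_ratio(P, L_level, R_level))   # 0.3
```

Using the left and right neighbor pixels as stand-ins for L and R, as the text suggests, makes α computable from the data 3 alone.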
- the mixing ratio α and the mixed region can be considered in the time direction in the same way as in the spatial direction.
- the ratio of the signals for the two objects incident on the detection region of one detection element of the sensor 2 changes in the time direction.
- the signals for the two objects, which are incident on the detection area of one detection element of the sensor 2 and change in proportion in the time direction, are projected to one value of the data 3 by the detection element of the sensor 2.
- the mixing in the time direction of the signals for the two objects due to the time integration effect of the sensor 2 is called time mixing.
- the data continuity detecting unit 101 detects, for example, a pixel area in the data 3 on which the signals of the real world 1 for the two objects in the real world 1 are projected.
- the data continuity detecting unit 101 detects, for example, a tilt in the data 3 corresponding to the tilt of the edge of the image of the real world 1.
- the real world estimating unit 102, for example, based on the region of pixels having the predetermined mixing ratio α detected by the data continuity detecting unit 101 and the inclination of that region, formulates an equation using the N variables that shows the relationship between the model 161 represented by the N variables and the M pieces of data 162, and thereby estimates the signal of the real world 1. Hereinafter, a specific estimation of the real world 1 will be described.
- of the real-world signal represented by the function F(x, y, z, t), consider the signal of the real world 1 on the cross section in the spatial direction Z (the position of the sensor 2), which is determined by the position x in the spatial direction X, the position y in the spatial direction Y, and the time t.
- the detection region of the sensor 2 has a spread in the spatial direction X and the spatial direction Y.
- the approximation function f (x, y, t) is a function that approximates the signal of the real world 1 acquired by the sensor 2 and having a spatial and temporal spread.
- the value P (x, y, t) of the data 3 is obtained by the projection of the signal of the real world 1 by the sensor 2.
- the value P (x, y, t) of the data 3 is, for example, a pixel value output from the sensor 2 which is an image sensor.
- the value obtained by projecting the approximation function f(x, y, t) can be expressed as a projection function S(x, y, t).
- the function F(x, y, z, t) representing the signal of the real world 1 can be a function of infinite order.
- the function Si(x, y, t) can be described from the description of the function fi(x, y, t).
- by formulating the projection by the sensor 2 as equation (6), the relationship between the data 3 and the real-world signal can be formulated from equation (5) as equation (7).
- j is the data index.
- N is the number of variables representing the model 161 approximating the real world 1.
- M is the number of pieces of data 162 included in the data 3.
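The equations referenced here are not reproduced in this extraction; a plausible reconstruction, consistent with the surrounding description (a model linear in the N variables wi, projected by the sensor), is:

```latex
% Model 161: approximation function linear in the N variables w_i   (equation (5))
f(x, y, t) = \sum_{i=1}^{N} w_i \, f_i(x, y, t)

% Projection of each basis function by sensor 2                     (equation (6))
S_i(x, y, t) = \text{(integral of } f_i \text{ over the detection region)}

% Relationship between the data 3 and the model                     (equation (7))
P_j(x_j, y_j, t_j) = \sum_{i=1}^{N} w_i \, S_i(x_j, y_j, t_j), \qquad j = 1, \dots, M
```

This form matches the later statements that each Si is described from the corresponding fi, and that the wi are obtained from M such equations by least squares.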
- the variables can be made independent.
- i indicates the index of the variable.
- the form of the function fi can be chosen independently, and a desired function can be used as fi.
- the number N of variables can be defined irrespective of the form of the function, and the variables wi can be obtained from the relationship between the number N of variables and the number M of pieces of data.
- the real world 1 can be estimated from the data 3.
- N variables are defined, that is, equation (5) is defined. This is made possible by describing the real world 1 using stationarity.
- a signal of the real world 1 can be described by a model 161, in which a cross section is represented by a polynomial and the same cross-sectional shape continues in a certain direction.
- the projection by the sensor 2 is formulated, and the equation (7) is described.
- the result of integrating the signal of the real world 1 is formulated as the data 3.
- data 162 is collected from an area having data continuity detected by the data continuity detecting unit 101.
- data 162 of an area where a certain cross section continues which is an example of stationarity, is collected.
- the variables wi can be obtained by the least squares method.
- P'j(xj, yj, tj) is a predicted value.
- the sum of squares E of the differences between the predicted values P' and the measured values P is given by equation (10).
- equation (12) is derived from equation (11).
- the normal equation at this time is shown by equation (13).
- Si(xj, yj, tj) is described as Si(j).
- in equation (13), Si represents the projection of the real world 1.
- Pj represents data 3.
- wi is the variable to be obtained, which describes the characteristics of the signal of the real world 1. Therefore, the real world 1 can be estimated by inputting the data 3 into equation (13) and obtaining W_MAT by a matrix solution method or the like. That is, the real world 1 can be estimated by calculating equation (17).
- the real world estimating unit 102 estimates the real world 1 by, for example, inputting the data 3 into equation (13) and obtaining W_MAT by a matrix solution method or the like.
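Obtaining W_MAT "by a matrix solution method" can be illustrated with a least-squares solve of P = S·W. The matrix sizes (M = 27 data, N = 4 variables) and all entries below are made up for illustration; in the patent the entries of S come from integrating the basis functions over the detection regions.

```python
import numpy as np

# Hypothetical projection matrix S_MAT (M data x N variables): row j holds
# Si(j), i.e. basis function f_i integrated over the detection region that
# produced data value P_j.  M = 27 and N = 4 are illustrative only.
rng = np.random.default_rng(0)
S_MAT = rng.normal(size=(27, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])   # made-up "real world" variables
P_MAT = S_MAT @ w_true                      # data 3 produced by the projection

# Least-squares solve of the overdetermined system, equivalent to solving
# the normal equation (S^T S) W_MAT = S^T P.
W_MAT, *_ = np.linalg.lstsq(S_MAT, P_MAT, rcond=None)
print(np.allclose(W_MAT, w_true))  # True
```

With M ≥ N and noise-free data the variables are recovered exactly; with real data the same solve minimizes the sum of squares E of the prediction errors.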
- the cross-sectional shape of the signal in the real world 1 that is, the level change with respect to the position change, is described by a polynomial. It is assumed that the cross section of the signal of the real world 1 is constant and the cross section of the signal of the real world 1 moves at a constant speed. Then, the projection of the signal of the real world 1 by the sensor 2 onto the data 3 is formulated by integration of the signal of the real world 1 in three dimensions in the space-time direction.
- Equations (18) and (19) are obtained from the assumption that the cross-sectional shape of the signal in the real world 1 moves at a constant speed.
- the cross-sectional shape of the signal in the real world 1 is expressed by Expression (20) by using Expressions (18) and (19).
- S(x, y, t) indicates the value obtained by integrating f over the region from position xs to position xe in the spatial direction X, from position ys to position ye in the spatial direction Y, and from time ts to time te in the time direction t, that is, over the region represented by a space-time rectangular parallelepiped.
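The integral S(x, y, t) over a space-time rectangular parallelepiped can be approximated numerically. This is a generic midpoint-rule sketch, not the patent's formulation; the test function is chosen so the exact value (1/8 over the unit cube) is known.

```python
import numpy as np

def integrate_box(f, xs, xe, ys, ye, ts, te, n=64):
    """Midpoint-rule approximation of the integral of f over the
    space-time rectangular parallelepiped [xs,xe] x [ys,ye] x [ts,te]."""
    x = xs + (np.arange(n) + 0.5) * (xe - xs) / n
    y = ys + (np.arange(n) + 0.5) * (ye - ys) / n
    t = ts + (np.arange(n) + 0.5) * (te - ts) / n
    X, Y, T = np.meshgrid(x, y, t, indexing="ij")
    cell = ((xe - xs) / n) * ((ye - ys) / n) * ((te - ts) / n)
    return float(np.sum(f(X, Y, T)) * cell)

# Multilinear test function: the exact integral over the unit cube is 0.125.
S = integrate_box(lambda x, y, t: x * y * t, 0, 1, 0, 1, 0, 1)
print(S)
```

The midpoint rule is exact for functions that are linear in each coordinate, which makes the sketch easy to verify.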
- by solving equation (13) using a desired function f(x', y') for which equation (21) can be determined, the signal of the real world 1 can be estimated.
- for example, the function shown in equation (22) is used.
- that is, it is assumed that the signal of the real world 1 includes the stationarity represented by equations (18), (19), and (22). This indicates, as shown in FIG. 26, that a cross section of constant shape is moving in the space-time direction.
- equation (23) is thus obtained.
- FIG. 27 is a diagram illustrating an example of M pieces of data 162 extracted from the data 3. For example, 27 pixel values are extracted as the data 162, and the extracted pixel values are denoted Pj(x, y, t). In this case, j runs from 0 to 26.
- in the example shown in FIG. 27, the pixel value of the pixel corresponding to the position of interest at the time of interest t, which is n, is P13(x, y, t), and the pixel values of the pixels having data continuity are arranged around it.
- the region where the pixel value as data 3 output from the image sensor as sensor 2 is obtained has a spread in the time direction and the two-dimensional spatial direction as shown in FIG. Therefore, for example, as shown in FIG. 29, the center of gravity of the rectangular parallelepiped (the area where the pixel value is obtained) corresponding to the pixel can be used as the position of the pixel in the spatiotemporal direction.
- the circle in Fig. 29 indicates the center of gravity.
- the real world estimating unit 102 generates equation (13) from, for example, the 27 pixel values P0(x, y, t) to P26(x, y, t) and equation (23), and obtains wi to estimate the signal of the real world 1.
- a Gaussian function or a sigmoid function can be used as the function f(x, y, t).
- the data 3 has values obtained by integrating the signal of the real world 1 in the time direction and the two-dimensional spatial direction.
- for example, the pixel value of the data 3 output from the image sensor that is the sensor 2 is a value obtained by integrating the light incident on the detection element, which is the signal of the real world 1, over the detection time that is the shutter time in the time direction, and over the light receiving area of the detection element in the spatial direction.
- in contrast, the high-resolution data 181 having higher resolution in the spatial direction is generated by integrating the estimated signal of the real world 1 in the time direction over the same time as the detection time of the sensor 2 that output the data 3, and integrating it in the spatial direction over regions narrower than the light receiving area of the detection element of the sensor 2 that output the data 3.
- when the high-resolution data 181 is generated, the region over which the estimated signal of the real world 1 is integrated can be set completely independently of the light receiving area of the detection element of the sensor 2 that output the data 3.
- for example, the high-resolution data 181 can be given a resolution that is an integer multiple of that of the data 3 in the spatial direction, or a resolution that is a rational multiple of it, such as 5/3 times.
- similarly, the high-resolution data 181 having higher resolution in the time direction is generated by integrating the estimated signal of the real world 1 in the spatial direction over the same region as the light receiving area of the detection element of the sensor 2 that output the data 3, and integrating it in the time direction over times shorter than the detection time of the sensor 2 that output the data 3.
- when the high-resolution data 181 is generated, the time over which the estimated signal of the real world 1 is integrated can be set completely independently of the shutter time of the detection element of the sensor 2 that output the data 3.
- for example, the high-resolution data 181 can be given a resolution that is an integer multiple of that of the data 3 in the time direction, or a resolution that is a rational multiple of it, such as 7/4 times.
- for example, high-resolution data 181 is generated by integrating the estimated signal of the real world 1 only in the spatial direction, without integrating it in the time direction.
- further, the high-resolution data 181 having higher resolution in both the time direction and the spatial direction is generated by integrating the estimated signal of the real world 1 in the spatial direction over regions narrower than the light receiving area of the detection element of the sensor 2 that output the data 3, and integrating it in the time direction over times shorter than the detection time of the sensor 2 that output the data 3.
- the region and time in which the estimated signal of the real world 1 is integrated can be set completely independent of the light receiving region of the detecting element of the sensor 2 that has output the data 3 and the shutter time.
- the image generation unit 103 generates data with higher resolution in the time direction or the spatial direction by, for example, integrating the estimated signal of the real world 1 over a desired spatio-temporal region.
- FIG. 35 shows the original image of the input image.
- FIG. 36 is a diagram illustrating an example of the input image.
- the input image shown in FIG. 36 is an image generated by using the average of the pixel values of the pixels belonging to each 2×2-pixel block of the image shown in FIG. 35 as the pixel value of one pixel. That is, the input image is an image obtained by applying, to the image shown in FIG. 35, spatial integration that imitates the integration characteristics of the sensor.
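The 2×2 averaging that produces the input image from the original can be sketched as follows; the array values are hypothetical.

```python
import numpy as np

def block_average_2x2(image):
    """Imitate the sensor's spatial integration: each output pixel is the
    mean of a 2x2 block of input pixels."""
    h, w = image.shape
    return (image[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .mean(axis=(1, 3)))

original = np.arange(16, dtype=float).reshape(4, 4)
low_res = block_average_2x2(original)
print(low_res)  # [[ 2.5  4.5] [10.5 12.5]]
```

Each output value is the integral (here, the mean) of the light falling on a 2×2 detection area, which is exactly the degradation the signal processing then tries to invert.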
- FIG. 37 is a diagram showing an image obtained by applying the conventional classification adaptive processing to the input image shown in FIG.
- the class classification adaptation process includes a class classification process and an adaptation process.
- the class classification process classifies data into classes based on their properties, and performs an adaptation process for each class.
- in the adaptive processing, for example, a low-quality or standard-quality image is converted into a high-quality image by mapping using predetermined tap coefficients.
- FIG. 38 is a diagram illustrating a result of detecting a thin line region from the input image illustrated in the example of FIG. 36 by the data continuity detecting unit 101.
- a white region indicates a thin line region, that is, a region where the arc shapes shown in FIG. 14 are arranged.
- FIG. 39 is a diagram showing an example of an output image output from the signal processing device 4 according to the present invention, using the image shown in FIG. 36 as an input image. As shown in FIG. 39, according to the signal processing device 4 of the present invention, it is possible to obtain an image closer to the thin line image of the original image shown in FIG.
- FIG. 40 is a flowchart for explaining signal processing by the signal processing device 4 according to the present invention.
- in step S101, the data continuity detecting unit 101 executes a process of detecting continuity.
- that is, the data continuity detecting unit 101 detects the continuity of the data included in the input image, which is the data 3, and supplies data continuity information indicating the detected data continuity to the real world estimating unit 102 and the image generating unit 103.
- the data continuity detecting unit 101 detects the continuity of data corresponding to the continuity of a signal in the real world.
- the continuity of the data detected by the data continuity detecting unit 101 is either a part of the stationarity of the image of the real world 1 that is included in the data 3, or stationarity that has changed from the stationarity of the signal of the real world 1.
- the data continuity detecting unit 101 detects data continuity by detecting an area having a certain feature in a direction of a predetermined dimension. Also, for example, the data continuity detecting unit 101 detects data continuity by detecting an angle (inclination) in the spatial direction indicating a similar shape arrangement.
- the details of the processing for detecting the continuity in step S101 will be described later.
- the data continuity information can be used as a feature quantity indicating the feature of data 3.
- step S102 the real world estimating unit 102 executes a process of estimating the real world. That is, the real world estimating unit 102 estimates the signal of the real world 1 based on the input image and the data continuity information supplied from the data continuity detecting unit 101. For example, in the processing of step S102, the real world estimating unit 102 estimates the signal of the real world 1 by predicting a model 161 that approximates (describes) the real world 1. The real world estimating unit 102 supplies the real world estimation information indicating the estimated signal of the real world 1 to the image generating unit 103.
- the real world estimation unit 102 estimates the signal of the real world 1 by estimating the width of a linear object. Also, for example, the real world estimating unit 102 estimates the signal of the real world 1 by predicting a level indicating the color of a linear object.
- the details of the process of estimating the real world in step S102 will be described later.
- the real world estimation information can be used as a feature amount indicating the feature of the data 3.
- in step S103, the image generation unit 103 executes a process of generating an image, and the processing ends. That is, the image generation unit 103 generates an image based on the real world estimation information and outputs the generated image. Alternatively, the image generation unit 103 generates an image based on the data continuity information and the real world estimation information, and outputs the generated image.
- for example, based on the real world estimation information, the image generation unit 103 integrates a function approximating the estimated real-world optical signal in the spatial direction, thereby generating an image with higher resolution in the spatial direction than the input image, and outputs the generated image. For example, based on the real world estimation information, the image generation unit 103 integrates a function approximating the estimated real-world optical signal in the spatio-temporal direction, thereby generating an image with higher resolution in the time direction and the spatial direction than the input image, and outputs the generated image. Details of the image generation process in step S103 will be described later.
- the signal processing device 4 detects the data continuity from the data 3 and estimates the real world 1 based on the detected data continuity. Then, the signal processing device 4 generates a signal that is closer to the real world 1 based on the estimated real world 1.
- that is, the signal processing device 4 detects the data continuity of a second signal of a second dimension, obtained by projecting a first signal that is a real-world signal having a first dimension, the second dimension being lower than the first dimension and lacking a part of the stationarity of the real-world signal, and estimates the first signal based on the detected data continuity.
- FIG. 41 is a block diagram showing a configuration of the data continuity detecting unit 101. As shown in FIG.
- the data continuity detecting unit 101 shown in FIG. 41 detects the continuity of the data contained in the data 3 that arises from the stationarity, possessed by the image of a thin-line object, of having the same cross-sectional shape.
- that is, the data continuity detecting unit 101 shown in FIG. 41 detects the continuity of the data contained in the data 3 that arises from the stationarity that, at an arbitrary position in the length direction of the image of the real world 1 that is a thin line, the change in light level with respect to the change in position in the direction orthogonal to the length direction is the same.
- more specifically, the data continuity detecting unit 101 having the configuration shown in FIG. 41 detects, in the data 3 obtained by imaging a thin line with the sensor 2 having a spatial integration effect, a region in which a plurality of arc shapes (kamaboko shapes) of a predetermined length are arranged diagonally adjacent to each other.
- the data continuity detecting unit 101 extracts, from the input image that is the data 3, the portion of the image data other than the portion where the thin line image having data continuity is projected (the latter portion is hereinafter also referred to as the stationary component, and the extracted portion as the non-stationary component), detects the pixels onto which the image of the thin line of the real world 1 is projected from the extracted non-stationary component and the input image, and detects the region of the input image consisting of the pixels onto which the image of the thin line of the real world 1 is projected.
- the non-stationary component extracting unit 201 extracts the non-stationary component from the input image, and supplies non-stationary component information indicating the extracted non-stationary component, together with the input image, to the vertex detecting unit 202 and the monotonous increase/decrease detecting unit 203.
- for example, the non-stationary component extracting unit 201 extracts the non-stationary component, which is the background, by approximating the background in the input image that is the data 3 with a plane.
- a solid line indicates a pixel value of data 3
- a dotted line indicates an approximate value indicated by a plane approximating the background.
- A indicates the pixel value of the pixel on which the thin line image is projected
- PL indicates a plane approximating the background.
- in this way, the non-stationary component extracting unit 201 detects discontinuities in the pixel values of a plurality of pixels of the image data, which is the data 3 onto which an image that is an optical signal of the real world 1 is projected and in which a part of the stationarity of the image of the real world 1 is missing.
- the vertex detecting unit 202 and the monotonous increase/decrease detecting unit 203 remove the non-stationary component from the input image based on the non-stationary component information supplied from the non-stationary component extracting unit 201.
- for example, the vertex detecting unit 202 and the monotonous increase/decrease detecting unit 203 remove the non-stationary component from the input image by setting to 0 the pixel value of each pixel of the input image onto which only the background image is projected.
- also, for example, the vertex detecting unit 202 and the monotonous increase/decrease detecting unit 203 remove the non-stationary component from the input image by subtracting, from the pixel value of each pixel of the input image, the value approximated by the plane PL.
- since the background can be removed from the input image, the vertex detecting unit 202 to the continuity detecting unit 204 can process only the portion of the image data onto which the thin line is projected, and the processing in the vertex detecting unit 202 to the continuity detecting unit 204 becomes easier.
- non-stationary component extraction unit 201 may supply the image data obtained by removing the non-stationary component from the input image to the vertex detection unit 202 and the monotone increase / decrease detection unit 203.
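Approximating the background with a plane and removing it, as described for the non-stationary component extracting unit 201, can be sketched with a least-squares plane fit. The image values and the 5×5 size are made up, and fitting over all pixels (rather than background-only pixels) is a simplification of whatever fitting the unit actually performs.

```python
import numpy as np

def remove_plane_background(image):
    """Fit a plane z = a*x + b*y + c to the image by least squares and
    subtract it, leaving the stationary component standing out."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return image - plane

# Hypothetical input: a sloped background plus a bright thin vertical line.
ys, xs = np.mgrid[0:5, 0:5]
background = 2.0 * xs + 3.0 * ys + 10.0
image = background.copy()
image[:, 2] += 100.0                      # the thin line
residual = remove_plane_background(image)
print(residual[:, 2].mean() > residual[:, 0].mean())  # True
```

After subtraction, the background columns sit near a common level while the thin-line column remains strongly positive, which is what lets the later stages process only the thin-line portion.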
- in the following, the image data from which the non-stationary component has been removed from the input image, that is, the image data consisting only of pixels containing the stationary component, is the target of processing by the vertex detecting unit 202 to the continuity detecting unit 204. Here, the image data onto which the image of the thin line is projected will be described.
- considering the spatial integration effect of the image sensor that is the sensor 2, the cross-sectional shape in the spatial direction Y (the change of the pixel value with respect to the change in position in the spatial direction) of the image data onto which the thin line image shown in FIG. 42 is projected would, in the absence of an optical LPF, be the trapezoid shown in FIG. 44 or the triangle shown in FIG. 45. However, an ordinary image sensor has an optical LPF and acquires the image that has passed through the optical LPF, so the cross-sectional shape of the thin line image data in the spatial direction Y becomes similar to a Gaussian distribution, as shown in FIG. 46.
- the vertex detecting unit 202 to the continuity detecting unit 204 detect regions consisting of pixels onto which the thin line image is projected, in which the same cross-sectional shape (change in pixel value with respect to change in position in the spatial direction) is arranged at regular intervals in the vertical direction of the screen, and further detect the connection of the detected regions corresponding to the length direction of the thin line of the real world 1, thereby detecting a region having data continuity that consists of the pixels onto which the thin line image is projected.
- that is, the vertex detecting unit 202 to the continuity detecting unit 204 detect regions in the input image in which an arc shape (kamaboko shape) is formed on a single vertical column of pixels, determine whether the detected regions are arranged adjacent to each other in the horizontal direction, and detect the connection of the regions in which the arc shapes are formed, corresponding to the length direction of the thin line image that is the signal of the real world 1.
- also, the vertex detecting unit 202 to the continuity detecting unit 204 detect regions consisting of pixels onto which the thin line image is projected, in which the same cross-sectional shape is arranged at regular intervals in the horizontal direction of the screen, and detect the connection of the detected regions corresponding to the length direction of the thin line of the real world 1, thereby detecting the region having data continuity onto which the data of the thin line is projected. That is, the vertex detecting unit 202 to the continuity detecting unit 204 detect regions in the input image in which an arc shape is formed on a single horizontal row of pixels, determine whether the detected regions are arranged adjacent to each other in the vertical direction, and detect the connection of the regions in which the arc shapes are formed, corresponding to the length direction of the thin line image that is the signal of the real world 1.
- the vertex detecting unit 202 detects a pixel having a larger pixel value than the surrounding pixels, that is, the vertex, and supplies vertex information indicating the position of the vertex to the monotone increase / decrease detecting unit 203.
- for an image of one frame, the vertex detecting unit 202 detects as a vertex a pixel whose pixel value is larger than the pixel value of the pixel located above it on the screen and the pixel value of the pixel located below it on the screen.
- the vertex detection unit 202 detects one or a plurality of vertices from one image, for example, an image of one frame.
- one screen is a frame or a field. The same applies in the following description.
- for example, the vertex detecting unit 202 selects a pixel of interest from the pixels of a one-frame image that have not yet been taken as the pixel of interest, compares the pixel value of the pixel of interest with the pixel value of the pixel above it, compares the pixel value of the pixel of interest with the pixel value of the pixel below it, detects a pixel of interest whose pixel value is larger than the pixel value of the upper pixel and larger than the pixel value of the lower pixel, and takes the detected pixel of interest as a vertex.
- the vertex detection unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detection unit 203.
- the vertex detector 202 may not detect any vertex in some cases. For example, when the pixel values of all the pixels of one image are the same, or when the pixel values monotonically decrease in one or two directions, no vertex is detected. In these cases, no thin line image is projected on the image data.
- Based on the vertex information indicating the position of the vertex supplied from the vertex detecting unit 202, the monotonous increase/decrease detecting unit 203 detects a candidate for the region consisting of pixels arranged vertically in one column, onto which the thin line image is projected, with respect to the vertex detected by the vertex detecting unit 202, and supplies region information indicating the detected region to the continuity detecting unit 204 together with the vertex information. More specifically, the monotonous increase/decrease detection unit 203 detects, as a candidate for the region consisting of pixels onto which the thin line image is projected, a region consisting of pixels having pixel values that monotonically decrease with respect to the pixel value of the vertex.
- Monotonic decrease means that the pixel value of a pixel at a longer distance from the vertex is smaller than the pixel value of a pixel at a shorter distance from the vertex.
- Similarly, the monotonous increase/decrease detection unit 203 detects, based on the pixel value of the vertex, a region consisting of pixels having monotonically increasing pixel values as a candidate for the region consisting of pixels onto which the thin line image is projected.
- Monotonically increasing means that the pixel value of the pixel at a longer distance from the vertex is larger than the pixel value of the pixel at a shorter distance from the vertex.
- the processing for the region composed of pixels having monotonically increasing pixel values is the same as the processing for the region composed of pixels having monotonically decreasing pixel values, and a description thereof will be omitted.
- the monotonous increase/decrease detection unit 203 obtains, for each pixel in one vertical column with respect to the vertex, the difference between the pixel value of the pixel and the pixel value of the upper pixel and the difference between the pixel value of the pixel and the pixel value of the lower pixel. Then, the monotone increase/decrease detection unit 203 detects the region where the pixel value monotonically decreases by detecting the pixel at which the sign of the difference changes.
- the monotonous increase/decrease detection unit 203 detects, from the region where the pixel value monotonically decreases, a region consisting of pixels having pixel values with the same sign as the sign of the pixel value of the vertex, as a candidate for the region consisting of pixels onto which the thin line image is projected.
- the monotone increase/decrease detection unit 203 compares the sign of the pixel value of each pixel with the sign of the pixel value of the upper pixel and the sign of the pixel value of the lower pixel, and detects the pixel at which the sign of the pixel value changes, thereby detecting, from the region where the pixel value monotonically decreases, a region consisting of pixels having pixel values of the same sign as the vertex.
- In this way, the monotonous increase/decrease detection unit 203 detects a region consisting of pixels arranged in the up-down direction whose pixel values monotonically decrease with respect to the vertex and have the same sign as the vertex.
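The monotone-decrease detection described above, including the stop at a sign change, can be sketched for one vertical column of pixel values. This is a hypothetical sketch; the function name and return convention are assumptions.

```python
def monotone_decrease_region(column, vy):
    """Starting from the vertex at index vy in one vertical column of
    pixel values, extend upward and downward while the pixel value keeps
    monotonically decreasing away from the vertex and keeps the same sign
    as the vertex value. Returns the inclusive (top, bottom) indices of
    the candidate region."""
    sign = 1 if column[vy] >= 0 else -1
    top = vy
    while top > 0 and column[top - 1] < column[top] and sign * column[top - 1] > 0:
        top -= 1
    bottom = vy
    while (bottom < len(column) - 1 and column[bottom + 1] < column[bottom]
           and sign * column[bottom + 1] > 0):
        bottom += 1
    return top, bottom

# Values fall away from the vertex at index 3; the sign change at the last
# entry ends the region one pixel early.
col = [0.1, 0.5, 2.0, 5.0, 3.0, 1.0, -0.2]
print(monotone_decrease_region(col, 3))  # (0, 5)
```

The pixel where the sign changes marks the boundary of the thin line region, matching the boundary rule described for FIG. 47.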
- FIG. 47 is a diagram explaining the processing of detecting a vertex and detecting a monotone increase/decrease region, in which the region of pixels onto which the thin line image is projected is detected from the pixel values with respect to positions in the spatial direction Y.
- In FIG. 47, P indicates a vertex.
- the vertex detection unit 202 compares the pixel value of each pixel with the pixel values of the pixels adjacent to it in the spatial direction Y, and detects the vertex P by detecting the pixel having a pixel value larger than the pixel values of the two adjacent pixels.
- the region consisting of the vertex P and the pixels on both sides of the vertex P in the spatial direction Y is a monotonically decreasing region in which the pixel values of the pixels on both sides monotonically decrease with respect to the pixel value of the vertex P.
- the arrow indicated by A and the arrow indicated by B indicate the monotonically decreasing regions existing on both sides of the vertex P.
- the monotone increase / decrease detection unit 203 finds a difference between the pixel value of each pixel and the pixel value of a pixel adjacent to the pixel in the spatial direction Y, and detects a pixel whose sign of the difference changes.
- the monotonous increase/decrease detection unit 203 sets the boundary between the detected pixel at which the sign of the difference changes and the pixel immediately before it (on the vertex P side) as the boundary of the thin line region consisting of pixels onto which the thin line image is projected.
- the monotonous increase / decrease detection unit 203 compares the sign of the pixel value of each pixel with the sign of the pixel value of the pixel adjacent to the pixel in the spatial direction Y in the monotonically decreasing region, and determines the sign of the pixel value. A changing pixel is detected.
- the monotonous increase / decrease detection unit 203 sets the boundary between the detected pixel whose sign of the pixel value changes and the pixel on the near side (vertex P side) as the boundary of the thin line area.
- a thin line region F composed of pixels onto which a thin line image is projected is a region sandwiched between a thin line region boundary C and a thin line region boundary D.
- the monotone increase / decrease detection unit 203 finds a thin line region F longer than a predetermined threshold, that is, a thin line region F including a number of pixels larger than the threshold, from the thin line region F composed of such a monotone increase / decrease region. For example, when the threshold value is 3, the monotonous increase / decrease detection unit 203 detects a thin line region F including four or more pixels.
- the monotonous increase/decrease detection unit 203 compares the pixel value of the vertex P, the pixel value of the pixel on the right side of the vertex P, and the pixel value of the pixel on the left side of the vertex P with the threshold value, detects the thin line region F to which a vertex P belongs such that the pixel value of the vertex P exceeds the threshold value and the pixel values of the pixels on the right and left sides of the vertex P are equal to or less than the threshold value, and sets the detected thin line region F as a candidate for a region consisting of pixels containing the components of the thin line image.
- Conversely, a thin line region F to which a vertex P belongs such that the pixel value of the vertex P is equal to or less than the threshold value, the pixel value of the pixel on the right side of the vertex P exceeds the threshold value, or the pixel value of the pixel on the left side of the vertex P exceeds the threshold value, is determined not to include the components of the thin line image and is removed from the candidates for the region consisting of pixels containing the components of the thin line image.
- That is, the monotonous increase/decrease detection unit 203 compares the pixel value of the vertex P with the threshold value, compares the pixel values of the pixels adjacent to the vertex P in the spatial direction X (the direction indicated by the dotted line AA') with the threshold value, and detects the thin line region F to which the vertex P belongs when the pixel value of the vertex P exceeds the threshold value and the pixel values of the pixels adjacent in the spatial direction X are equal to or less than the threshold value.
- FIG. 49 is a diagram illustrating the pixel values of the pixels arranged in the spatial direction X indicated by the dotted line AA'. When the pixel value of the vertex P exceeds the threshold value Ths and the pixel values of the pixels adjacent to the vertex P in the spatial direction X are equal to or less than the threshold value Ths, the thin line region F to which the vertex P belongs includes the components of the thin line.
- Alternatively, based on the pixel value of the background, the monotonous increase/decrease detection unit 203 may compare the difference between the pixel value of the vertex P and the pixel value of the background with the threshold value, compare the differences between the pixel values of the pixels adjacent to the vertex P in the spatial direction X and the pixel value of the background with the threshold value, and detect the thin line region F to which the vertex P belongs, in which the difference between the pixel value of the vertex P and the pixel value of the background exceeds the threshold value and the differences between the pixel values of the pixels adjacent in the spatial direction X and the pixel value of the background are equal to or less than the threshold value.
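The threshold test on the vertex P and its neighbors in the spatial direction X can be written as a single predicate. This is a hypothetical sketch; the function name and parameters are assumptions.

```python
def is_thin_line_region(p_value, left_value, right_value, threshold):
    """Threshold test for a vertex P of a vertical thin line region: the
    vertex pixel value must exceed the threshold while the pixels adjacent
    in the spatial direction X stay at or below it."""
    return p_value > threshold and left_value <= threshold and right_value <= threshold

# Vertex value 200 against background neighbors of 30, threshold 100.
print(is_thin_line_region(200, 30, 30, 100))   # True
print(is_thin_line_region(80, 30, 30, 100))    # False (vertex too dim)
print(is_thin_line_region(200, 150, 30, 100))  # False (side pixel too bright)
```

The background-relative variant described above would simply replace each value with its difference from the background pixel value before applying the same test.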
- the monotone increase/decrease detection unit 203 supplies to the continuity detection unit 204 monotone increase/decrease region information indicating a region consisting of pixels whose pixel values monotonically decrease with respect to the vertex P and have the same sign as the pixel value of the vertex P, in which the pixel value of the vertex P exceeds the threshold value and the pixel values of the pixels on the right and left sides of the vertex P are equal to or less than the threshold value.
- Pixels arranged in one column in the vertical direction of the screen, onto which the thin line image is projected, belong to the region indicated by the monotone increase/decrease region information. That is, the region indicated by the monotone increase/decrease region information includes a region consisting of pixels arranged in one column in the vertical direction of the screen, formed by projecting the thin line image.
- the vertex detection unit 202 and the monotone increase / decrease detection unit 203 use the property that the change in the pixel value in the spatial direction Y is similar to the Gaussian distribution in the pixel on which the thin line image is projected. Then, a steady area composed of pixels onto which the thin line image is projected is detected.
- the continuity detection unit 204 detects, as continuous regions, regions that include horizontally adjacent pixels among the regions consisting of vertically arranged pixels indicated by the monotone increase/decrease region information supplied from the monotone increase/decrease detection unit 203, that is, regions that have similar changes in pixel values and overlap in the vertical direction, and outputs the vertex information and data continuity information indicating the detected continuous regions.
- the data continuity information includes the monotone increase/decrease region information and information indicating the connection of the regions.
- the detected continuous region includes the pixels onto which the fine lines are projected. Since the detected continuous region includes pixels onto which fine lines are projected, arranged at regular intervals so that arc shapes are adjacent to each other, the detected continuous region is regarded as a steady region, and the continuity detecting unit 204 outputs data continuity information indicating the detected continuous region.
- In other words, utilizing the stationarity that, in the data 3 obtained by imaging the thin line, the arc shapes are arranged at regular intervals so as to be adjacent to each other, which arises from the continuity of the image of the thin line in the real world 1 being continuous in the length direction, the continuity detecting unit 204 further narrows down the candidates for the regions detected by the vertex detection unit 202 and the monotone increase/decrease detection unit 203.
- FIG. 50 is a diagram illustrating the processing of detecting the continuity of the monotone increase/decrease regions.
- the continuity detector 204 assumes that there is continuity between two monotone increase/decrease regions when the thin line regions F, each composed of pixels arranged in one column in the vertical direction of the screen, include pixels that are horizontally adjacent to each other, and that there is no continuity between the two thin line regions F when they do not include horizontally adjacent pixels.
- For example, a thin line region F0 composed of pixels arranged in one column in the vertical direction of the screen is regarded as continuous with a thin line region F1, also composed of pixels arranged in one column in the vertical direction of the screen, when the thin line region F1 includes a pixel that is horizontally adjacent to a pixel in the thin line region F0.
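The horizontal-adjacency test between two thin line regions can be sketched with coordinate sets. This is a hypothetical illustration; the function name and the set-of-coordinates representation are assumptions.

```python
def regions_continuous(region_a, region_b):
    """Two thin line regions (each a set of (row, col) pixel coordinates
    arranged in one vertical column) are treated as continuous when some
    pixel of one region is horizontally adjacent to a pixel of the other."""
    return any((y, x - 1) in region_b or (y, x + 1) in region_b
               for (y, x) in region_a)

# F0 occupies column 4 on rows 2-4; F1 occupies column 5 on rows 4-5.
# Pixel (4, 4) is horizontally adjacent to (4, 5), so the regions connect.
f0 = {(2, 4), (3, 4), (4, 4)}
f1 = {(4, 5), (5, 5)}
print(regions_continuous(f0, f1))               # True
print(regions_continuous(f0, {(0, 5), (1, 5)})) # False
```

Chaining this test across neighboring columns links the arc-shaped regions along the length direction of the thin line, as FIG. 50 illustrates.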
- As described above, the vertex detection unit 202 to the continuity detection unit 204 detect a region consisting of pixels arranged in one column in the vertical direction of the screen, formed by projecting the thin line image.
- In addition, the vertex detection unit 202 to the continuity detection unit 204 detect a region consisting of pixels arranged in one row in the horizontal direction of the screen, formed by projecting the thin line image.
- the vertex detection unit 202 compares, for the pixels arranged in one row in the horizontal direction of the screen, the pixel value of each pixel with the pixel values of the pixels located on its left and right sides, detects a pixel having a larger pixel value as a vertex, and supplies vertex information indicating the position of the detected vertex to the monotone increase/decrease detector 203.
- the vertex detection unit 202 detects one or a plurality of vertices from one image, for example, one frame image.
- the vertex detection unit 202 selects, from the pixels of one frame of the image, a pixel of interest that has not yet been set as the pixel of interest, compares the pixel value of the pixel of interest with the pixel value of the pixel to the left of the pixel of interest, compares the pixel value of the pixel of interest with the pixel value of the pixel to the right of the pixel of interest, detects a pixel of interest having a pixel value larger than the pixel value of the pixel on the left side and larger than the pixel value of the pixel on the right side, and sets the detected pixel of interest as a vertex.
- the vertex detection unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detection unit 203.
- The vertex detector 202 may not detect a vertex in some cases.
- the monotone increase/decrease detection unit 203 detects candidates for regions consisting of pixels onto which the thin line image is projected, the pixels being arranged in one row in the left-right direction with respect to the vertex detected by the vertex detection unit 202, and supplies monotone increase/decrease region information indicating the detected regions to the continuity detecting unit 204 together with the vertex information.
- More specifically, the monotonous increase/decrease detection unit 203 detects, as a candidate for the region consisting of pixels onto which the thin line image is projected, a region consisting of pixels having pixel values that monotonically decrease with respect to the pixel value of the vertex.
- the monotonous increase/decrease detection unit 203 obtains, for each pixel in one horizontal row with respect to the vertex, the difference between the pixel value of the pixel and the pixel value of the pixel on the left side and the difference between the pixel value of the pixel and the pixel value of the pixel on the right side. Then, the monotone increase/decrease detection unit 203 detects the region where the pixel value monotonically decreases by detecting the pixel at which the sign of the difference changes.
- the monotonous increase/decrease detection unit 203 detects, from the region where the pixel value monotonically decreases, a region consisting of pixels having pixel values with the same sign as the sign of the pixel value of the vertex, as a candidate for the region consisting of pixels onto which the thin line image is projected.
- the monotone increase/decrease detection unit 203 compares the sign of the pixel value of each pixel with the sign of the pixel value of the pixel on its left side or the sign of the pixel value of the pixel on its right side, and detects the pixel at which the sign of the pixel value changes, thereby detecting, from the region where the pixel value monotonically decreases, a region consisting of pixels having pixel values of the same sign as the vertex.
- In this way, the monotone increase/decrease detection unit 203 detects a region consisting of pixels arranged in the left-right direction whose pixel values monotonically decrease with respect to the vertex and have the same sign as the vertex.
- the monotone increase / decrease detection unit 203 obtains a thin line region longer than a predetermined threshold, that is, a thin line region including a number of pixels larger than the threshold, from the thin line region composed of such a monotone increase / decrease region.
- the monotonous increase/decrease detection unit 203 compares the pixel value of the vertex, the pixel value of the pixel above the vertex, and the pixel value of the pixel below the vertex with the threshold value, detects the thin line region to which a vertex belongs such that the pixel value of the vertex exceeds the threshold value and the pixel values of the pixels above and below the vertex are equal to or less than the threshold value, and sets the detected thin line region as a candidate for a region consisting of pixels containing the components of the thin line image.
- Conversely, a thin line region to which a vertex belongs such that the pixel value of the vertex is equal to or less than the threshold value, the pixel value of the pixel above the vertex exceeds the threshold value, or the pixel value of the pixel below the vertex exceeds the threshold value, is determined not to include the components of the thin line image and is removed from the candidates for the region consisting of pixels containing the components of the thin line image.
- Alternatively, based on the pixel value of the background, the monotonous increase/decrease detection unit 203 may compare the difference between the pixel value of the vertex and the pixel value of the background with the threshold value, compare the differences between the pixel values of the pixels vertically adjacent to the vertex and the pixel value of the background with the threshold value, and set the detected thin line region, in which the difference between the pixel value of the vertex and the pixel value of the background exceeds the threshold value and the differences between the pixel values of the vertically adjacent pixels and the pixel value of the background are equal to or less than the threshold value, as a candidate for a region consisting of pixels containing the components of the thin line image.
- the monotone increase/decrease detection unit 203 supplies to the continuity detecting unit 204 monotone increase/decrease region information indicating a region consisting of pixels whose pixel values monotonically decrease with respect to the vertex and have the same sign as the vertex, in which the pixel value of the vertex exceeds the threshold value and the pixel values of the pixels above and below the vertex are equal to or less than the threshold value.
- the region indicated by the monotone increase/decrease region information consists of pixels arranged in one row in the horizontal direction of the screen and includes the region formed by projecting the thin line image.
- the continuity detection unit 204 detects, as continuous regions, regions that include vertically adjacent pixels among the regions consisting of horizontally arranged pixels indicated by the monotone increase/decrease region information supplied from the monotone increase/decrease detection unit 203, that is, regions that have similar changes in pixel values and overlap in the horizontal direction, and outputs the vertex information and data continuity information indicating the detected continuous regions.
- the data continuity information includes information indicating the connection between the areas.
- the detected continuous region includes the pixels onto which the fine lines are projected. Since the detected continuous region includes pixels onto which fine lines are projected, arranged at regular intervals so that arc shapes are adjacent to each other, the detected continuous region is regarded as a steady region, and the continuity detecting unit 204 outputs data continuity information indicating the detected continuous region.
- the continuity detecting unit 204 detects that the arc shape in the data 3 obtained by imaging the thin line, which is generated from the continuity of the image of the thin line in the real world 1 and is continuous in the length direction, is Utilizing stationarity arranged at regular intervals so as to be in contact with each other, candidates for regions detected by the vertex detection unit 202 and the monotone increase / decrease detection unit 203 are further narrowed down.
- FIG. 51 is a diagram illustrating an example of an image in which a stationary component is extracted by approximation on a plane.
- FIG. 52 is a diagram illustrating regions obtained by detecting the vertices in the image shown in FIG. 51 and detecting the monotonically decreasing regions. In FIG. 52, the parts shown in white are the detected regions.
- FIG. 53 is a diagram illustrating a region in which continuity is detected by detecting continuity of an adjacent region from the image illustrated in FIG. 52.
- In FIG. 53, the parts shown in white are the regions where continuity is detected. It can be seen that the continuity detection further narrows down the regions.
- FIG. 54 is a diagram showing the pixel values of the region shown in FIG. 53, that is, the pixel values of the region where continuity is detected.
- the data continuity detecting unit 101 can detect the continuity included in the data 3 as the input image. That is, the data continuity detecting unit 101 can detect the continuity of the data included in the data 3 that is generated by projecting the image of the real world 1 as a thin line onto the data 3. The data continuity detecting unit 101 detects, from the data 3, an area composed of pixels onto which the image of the real world 1 as a thin line is projected.
- FIG. 55 is a diagram illustrating an example of another process of detecting a region having continuity, on which a thin line image is projected, in the continuity detection unit 101.
- When, among the absolute values of the differences arranged corresponding to the pixels, adjacent difference values are the same, the continuity detecting unit 101 determines that the pixel corresponding to the absolute values of the two differences (the pixel sandwiched between the absolute values of the two differences) contains a thin line component. Note that when, among the absolute values of the differences arranged corresponding to the pixels, adjacent difference values are the same but the absolute value of the difference is smaller than a predetermined threshold value, the continuity detecting unit 101 determines that the pixel corresponding to the absolute values of the two differences (the pixel sandwiched between the absolute values of the two differences) does not contain a thin line component.
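This simpler difference-based test can be sketched for one row of pixels. This is a hypothetical illustration; the function name and the exact-equality test on the two adjacent differences are assumptions based on the description above.

```python
def thin_line_pixels_simple(row, threshold):
    """For one row of pixel values, take differences of adjacent pixels;
    where two adjacent differences have equal absolute values exceeding
    the threshold, the pixel sandwiched between them is judged to contain
    a thin line component."""
    diffs = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    hits = []
    for i in range(len(diffs) - 1):
        if abs(diffs[i]) == abs(diffs[i + 1]) and abs(diffs[i]) > threshold:
            hits.append(i + 1)  # index of the pixel between the two differences
    return hits

# A one-pixel-wide line of value 10 on a flat background of 0: the rise
# into and the fall out of the line pixel have equal absolute differences.
print(thin_line_pixels_simple([0, 0, 10, 0, 0], 3))  # [2]
```

A one-pixel-wide line on a uniform background produces a rise and a fall of the same magnitude on either side of the line pixel, which is exactly the pattern this test picks out.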
- the continuity detecting unit 101 can also detect a thin line by such a simple method.
- FIG. 56 is a flowchart for explaining the processing of the continuity detection.
- In step S201, the non-stationary component extracting unit 201 extracts a non-stationary component, which is the portion other than the portion where the thin line is projected, from the input image.
- the non-stationary component extraction unit 201 supplies, together with the input image, the non-stationary component information indicating the extracted non-stationary component to the vertex detection unit 202 and the monotone increase / decrease detection unit 203. Details of the process of extracting the unsteady component will be described later.
- In step S202, the vertex detection unit 202 removes the non-stationary components from the input image based on the non-stationary component information supplied from the non-stationary component extraction unit 201, leaving only the pixels containing the stationary component in the input image. Further, in step S202, the vertex detector 202 detects vertices.
- When executing the processing based on the vertical direction of the screen, the vertex detection unit 202 compares, for the pixels containing the stationary component, the pixel value of each pixel with the pixel values of the pixels above and below it, and detects the vertex by detecting a pixel having a pixel value larger than the pixel value of the upper pixel and larger than the pixel value of the lower pixel. Also, in step S202, when executing the processing based on the horizontal direction of the screen, the vertex detection unit 202 compares, for the pixels containing the stationary component, the pixel value of each pixel with the pixel values of the pixels on its right and left sides, and detects the vertex by detecting a pixel having a pixel value larger than the pixel value of the pixel on the right side and larger than the pixel value of the pixel on the left side.
- the vertex detection unit 202 supplies vertex information indicating the detected vertex to the monotonous increase / decrease detection unit 203.
- In step S203, the monotone increase/decrease detection unit 203 removes the non-stationary components from the input image based on the non-stationary component information supplied from the non-stationary component extraction unit 201, leaving only the pixels containing the stationary component in the input image. Further, in step S203, based on the vertex information indicating the position of the vertex supplied from the vertex detecting unit 202, the monotone increase/decrease detecting unit 203 detects a region consisting of pixels having data continuity by detecting the monotone increase/decrease with respect to the vertex.
- When executing the processing based on the vertical direction of the screen, the monotonous increase/decrease detection unit 203 detects a region consisting of pixels having data continuity by detecting, based on the pixel value of the vertex and the pixel values of the pixels arranged vertically in one column with respect to the vertex, the monotone increase/decrease of the pixels in one column onto which one thin line image is projected. That is, in step S203, when executing the processing with the vertical direction of the screen as a reference, the monotonous increase/decrease detection unit 203 obtains, for the vertex and the pixels arranged vertically in one column with respect to the vertex, the difference between the pixel value of each pixel and the pixel value of the pixel above or below it, and detects the pixel at which the sign of the difference changes.
- the monotone increase/decrease detection unit 203 compares, for the vertex and the pixels arranged in one column vertically with respect to the vertex, the sign of the pixel value of each pixel with the sign of the pixel value of the pixel above or below it, and detects the pixel at which the sign of the pixel value changes. Further, the monotonous increase/decrease detection unit 203 compares the pixel value of the vertex and the pixel values of the pixels on the right and left sides of the vertex with the threshold value, and detects a region consisting of pixels in which the pixel value of the vertex exceeds the threshold value and the pixel values of the pixels on the right and left sides are equal to or less than the threshold value.
- the monotone increase / decrease detection unit 203 supplies the continuity detection unit 204 with monotone increase / decrease region information indicating the monotone increase / decrease region, using the region thus detected as a monotone increase / decrease region.
- When executing the processing based on the horizontal direction of the screen, the monotonous increase/decrease detection unit 203 detects a region consisting of pixels having data continuity by detecting, based on the pixel value of the vertex and the pixel values of the pixels arranged in one row horizontally with respect to the vertex, the monotone increase/decrease of the pixels in one row onto which one thin line image is projected. That is, in step S203, when executing the processing with the horizontal direction of the screen as a reference, the monotonous increase/decrease detection unit 203 obtains, for the vertex and the pixels arranged in one row horizontally with respect to the vertex, the difference between the pixel value of each pixel and the pixel value of the pixel on its left or right side, and detects the pixel at which the sign of the difference changes.
- the monotone increase/decrease detection unit 203 compares, for the pixels arranged in one row laterally with respect to the vertex, the sign of the pixel value of each pixel with the sign of the pixel value of the pixel on its left or right side, and detects the pixel at which the sign of the pixel value changes.
- Further, the monotone increase/decrease detection unit 203 compares the pixel value of the vertex and the pixel values of the pixels above and below the vertex with the threshold value, and detects a region consisting of pixels in which the pixel value of the vertex exceeds the threshold value and the pixel values of the pixels above and below are equal to or less than the threshold value.
- the monotone increase / decrease detection unit 203 supplies the continuity detection unit 204 with monotone increase / decrease region information indicating the monotone increase / decrease region, using the region thus detected as a monotone increase / decrease region.
- In step S204, the monotone increase/decrease detection unit 203 determines whether or not the processing of all pixels has been completed.
- That is, the monotone increase/decrease detection unit 203 determines whether or not the vertices have been detected and the monotone increase/decrease regions have been detected for all the pixels of one screen (for example, a frame or a field) of the input image.
- If it is determined in step S204 that the processing of all the pixels has not been completed, that is, that there are still pixels that have not been subjected to the vertex detection and monotone increase/decrease region detection processing, the process returns to step S202, a pixel to be processed is selected from the pixels not yet subjected to the vertex detection and monotone increase/decrease region detection processing, and the vertex detection and monotone increase/decrease region detection processing is repeated.
- If it is determined in step S204 that the processing of all pixels has been completed, that is, that the vertices and the monotone increase/decrease regions have been detected for all the pixels, the process proceeds to step S205, and the continuity detecting unit 204 detects the continuity of the detected regions based on the monotone increase/decrease region information.
- When a monotone increase / decrease area indicated by the monotone increase / decrease area information, composed of pixels arranged in one column in the vertical direction of the screen, includes pixels that are horizontally adjacent to another such area, the continuity detecting unit 204 determines that there is continuity between the two monotone increase / decrease areas; when no horizontally adjacent pixels are included, it determines that there is no continuity between the two monotone increase / decrease areas.
- Similarly, when a monotone increase / decrease area indicated by the monotone increase / decrease area information, composed of pixels arranged in one row in the horizontal direction, includes pixels that are vertically adjacent to another such area, the continuity detecting unit 204 determines that there is continuity between the two monotone increase / decrease areas; when no vertically adjacent pixels are included, it determines that there is no continuity between the two monotone increase / decrease areas.
- the continuity detecting unit 204 sets the detected continuous area as a steady area having data continuity, and outputs data continuity information indicating the position of the vertex and the steady area.
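The adjacency check used to decide whether two monotone increase / decrease regions are continuous can be sketched as follows. This is a minimal illustration, assuming regions are represented as sets of (x, y) pixel coordinates; the representation and the function name are illustrative, not taken from the document.

```python
def regions_connected(region_a, region_b):
    """Return True when two vertical monotone increase/decrease regions
    (sets of (x, y) pixel coordinates) contain horizontally adjacent
    pixels, i.e. when there is continuity between them."""
    for (x, y) in region_a:
        # look for a pixel one column to the left or right on the same row
        if (x - 1, y) in region_b or (x + 1, y) in region_b:
            return True
    return False

# two one-column regions in adjacent columns sharing row y = 2
a = {(3, 0), (3, 1), (3, 2)}
b = {(4, 2), (4, 3), (4, 4)}
print(regions_connected(a, b))  # True
```

For regions arranged in one row in the horizontal direction, the same test would be applied with vertical neighbors (x, y - 1) and (x, y + 1) instead.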
- the data continuity information includes information indicating the connection between the areas.
- the data continuity information output from the continuity detection unit 204 indicates a thin line region that is a steady region and includes pixels onto which a thin line image of the real world 1 is projected.
- step S206 the continuity direction detection unit 205 determines whether or not processing of all pixels has been completed. That is, the continuity direction detecting unit 205 determines whether or not the continuity of the area has been detected for all pixels of a predetermined frame of the input image.
- If it is determined in step S206 that the processing of all the pixels has not been completed, that is, that there are still pixels that have not been subjected to the processing for detecting the continuity of the areas, the process returns to step S205, a pixel to be processed is selected from the pixels that have not yet been subjected to that processing, and the processing for detecting the continuity of the areas is repeated. If it is determined in step S206 that the processing of all the pixels has been completed, that is, that the continuity of the areas has been detected for all the pixels, the processing ends. In this way, the continuity contained in data 3, which is the input image, is detected.
- In addition, the data continuity detecting unit 101 whose configuration is shown in FIG. 41 can detect the continuity of data in the time direction based on the regions having data continuity detected from the frames of data 3.
- For example, the continuity detection unit 204 detects the continuity of data in the time direction by connecting the ends of the regions, based on the region having data continuity detected in frame #n, the region having data continuity detected in frame #n-1, and the region having data continuity detected in frame #n+1.
- Frame # n-1 is a frame temporally before frame #n
- frame #n+1 is a frame temporally subsequent to frame #n. That is, frame #n-1, frame #n, and frame #n+1 are displayed in the order of frame #n-1, frame #n, and frame #n+1.
- G indicates the motion vector obtained by connecting one end of each of the region having data continuity detected in frame #n, the region having data continuity detected in frame #n-1, and the region having data continuity detected in frame #n+1, and G' indicates the motion vector obtained by connecting the other end of each of those regions having data continuity.
- the motion vector G and the motion vector G ' are examples of the continuity of data in the time direction.
- the data continuity detecting unit 101 having the configuration shown in FIG. 41 can output information indicating the length of the region having data continuity as data continuity information.
- FIG. 58 is a view showing a configuration of a non-stationary component extraction unit 201 which extracts a non-stationary component by approximating a non-stationary component, which is a part of image data having no stationarity, in a plane.
- The non-stationary component extraction unit 201 shown in FIG. 58 extracts a block consisting of a predetermined number of pixels from the input image and approximates the block with a plane so that the error between the block and the values indicated by the plane becomes smaller than a predetermined threshold value, thereby extracting the non-stationary component.
- the input image is supplied to the block extraction unit 221 and output as it is.
- The block extracting unit 221 extracts a block including a predetermined number of pixels from the input image. For example, the block extracting unit 221 extracts a block composed of 7 × 7 pixels and supplies the extracted block to the plane approximating unit 222. For example, the block extracting unit 221 shifts the pixel at the center of the extracted block in the raster scan order, and sequentially extracts blocks from the input image.
- The plane approximating unit 222 approximates the pixel values of the pixels included in the block with a predetermined plane. For example, the plane approximating unit 222 approximates the pixel values of the pixels included in the block with the plane represented by equation (24).
- x indicates the position of the pixel in one direction on the screen (spatial direction X), and y indicates the position of the pixel on the screen in the other direction (spatial direction Y).
- z indicates the approximate value represented by the plane; equation (24) represents the plane as z = ax + by + c.
- a indicates the inclination of the plane in the spatial direction X, and b indicates the inclination of the plane in the spatial direction Y.
- c indicates the offset (intercept) of the plane.
- For example, the plane approximating unit 222 calculates the slope a, the slope b, and the offset c by regression processing, and approximates the pixel values of the pixels included in the block with the plane represented by equation (24).
- The plane approximating unit 222 calculates the slope a, the slope b, and the offset c by regression processing with rejection, and approximates the pixel values of the pixels included in the block with the plane represented by equation (24).
- For example, the plane approximating unit 222 finds, by the least squares method, the plane represented by equation (24) that minimizes the error with respect to the pixel values of the pixels of the block, and approximates the pixel values of the pixels included in the block with that plane.
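The least-squares fit of the plane of equation (24) can be sketched as follows; the 7 × 7 block follows the example given for the block extracting unit, while NumPy's `lstsq` is an implementation choice of this sketch, not something the document specifies.

```python
import numpy as np

def fit_plane(block):
    """Fit z = a*x + b*y + c (equation (24)) to a 2-D block of pixel
    values by least squares, returning the slopes a, b and offset c."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    return coeffs  # a, b, c

# a block that is exactly planar is recovered exactly
block = 2.0 * np.arange(7)[None, :] + 3.0 * np.arange(7)[:, None] + 5.0
a, b, c = fit_plane(block)
print(round(a, 6), round(b, 6), round(c, 6))  # 2.0 3.0 5.0
```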
- Although the plane approximating unit 222 has been described as approximating the block with the plane represented by equation (24), the approximation is not limited to the plane represented by equation (24); for example, the block may be approximated by a surface with a higher degree of freedom, represented by a polynomial of degree n (n is an arbitrary integer).
- the repetition determination unit 223 calculates an error between the approximate value indicated by the plane approximating the pixel value of the block and the pixel value of the corresponding pixel of the block.
- Equation (25) represents the error ei, which is the difference between the approximate value indicated by the plane approximating the pixel values of the block and the pixel value zi of the corresponding pixel of the block: ei = zi - zi hat = zi - (a hat · xi + b hat · yi + c hat).
- In equation (25), z hat (a letter with a circumflex appended to it is referred to as "hat") indicates the approximate value, given by the plane, of the pixel value of the block.
- a hat indicates the gradient in the spatial direction X of the plane approximating the pixel values of the block, and b hat indicates the gradient in the spatial direction Y of the plane approximating the pixel values of the block.
- c hat indicates the offset (intercept) of the plane approximating the pixel values of the block.
- The repetition determination unit 223 rejects the pixel having the largest error ei, shown in equation (25), between the approximate value and the pixel value of the corresponding pixel of the block. In this way, the pixel onto which the thin line is projected, that is, the pixel having stationarity, is rejected.
- The repetition determination unit 223 outputs rejection information indicating the rejected pixel to the plane approximating unit 222.
- Further, the repetition determination unit 223 calculates the standard error. When the standard error is equal to or greater than a predetermined threshold value for approximation end determination and half or more of the pixels of the block have not been rejected, the repetition determination unit 223 causes the plane approximating unit 222 to repeat the plane approximation processing on the pixels included in the block, excluding the rejected pixels.
- Since pixels having stationarity are rejected, approximating the remaining pixels with a plane means that the plane approximates the non-stationary component.
- When the standard error falls below the threshold value for approximation end determination, or when half or more of the pixels of the block have been rejected, the repetition determination unit 223 ends the approximation using the plane.
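The regression-with-rejection loop described above (reject the worst-fitting pixel, refit, stop when the standard error drops below the threshold or half of the pixels have been rejected) can be sketched as follows. This is a sketch under assumptions: the exact standard-error formula and the threshold value are not given in the text, and the names are illustrative.

```python
import numpy as np

def approximate_with_rejection(block, err_threshold):
    """Iteratively fit the plane z = a*x + b*y + c to a block, rejecting
    the pixel with the largest error e_i (equation (25)) each round,
    until the standard error falls below the threshold or half of the
    pixels of the block have been rejected."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs.ravel().astype(float)
    ys = ys.ravel().astype(float)
    zs = block.ravel().astype(float)
    active = np.ones(zs.size, dtype=bool)      # pixels not yet rejected
    rejections = np.zeros(zs.size, dtype=int)  # per-pixel rejection count
    while True:
        A = np.column_stack([xs[active], ys[active], np.ones(active.sum())])
        (a, b, c), *_ = np.linalg.lstsq(A, zs[active], rcond=None)
        errors = zs - (a * xs + b * ys + c)    # e_i of equation (25)
        std_err = np.sqrt(np.mean(errors[active] ** 2))
        if std_err < err_threshold or active.sum() <= zs.size // 2:
            return (a, b, c), rejections
        worst = np.argmax(np.abs(np.where(active, errors, 0.0)))
        active[worst] = False                  # reject the worst pixel
        rejections[worst] += 1

# a planar 7x7 block with one thin-line-like outlier at (3, 3)
block = 2.0 * np.arange(7)[None, :] + 3.0 * np.arange(7)[:, None] + 5.0
block[3, 3] += 100.0
(a, b, c), rejections = approximate_with_rejection(block, err_threshold=1e-3)
print(rejections.reshape(7, 7)[3, 3])  # 1: only the outlier was rejected
```

After the outlier is rejected, the remaining pixels lie exactly on the plane, so the recovered slopes and offset match the background, which is the sense in which the plane approximates the non-stationary component.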
- the repetition determination unit 223 may calculate not only the standard error but also the sum of the squares of the errors of all the pixels included in the block, and execute the following processing.
- When the approximation by the plane is finished, the repetition determination unit 223 outputs information indicating the plane approximating the pixel values of the block (the slope and intercept of the plane of equation (24)) as non-stationary component information.
- Note that the repetition determination unit 223 may compare the number of rejections for each pixel with a predetermined threshold value, determine that a pixel whose number of rejections is equal to or greater than the threshold value is a pixel including a stationary component, and output information indicating the pixels including the stationary component as stationary component information.
- the vertex detection unit 202 to the continuity direction detection unit 205 execute the respective processes on the pixels including the stationary component indicated by the stationary component information.
- FIG. 60 is a diagram illustrating an example of an input image in which an average value of pixel values of 2 ⁇ 2 pixels of an original image is generated as a pixel value from an image including a thin line.
- FIG. 61 is a diagram showing an image in which a standard error obtained as a result of approximating the image shown in FIG. 60 by a plane without rejection is used as a pixel value.
- Here, a block consisting of 5 × 5 pixels centered on one pixel of interest is approximated by a plane.
- a white pixel is a pixel having a larger pixel value, that is, a pixel having a larger standard error
- a black pixel is a pixel having a smaller pixel value, that is, a pixel having a smaller standard error.
- FIG. 62 is a diagram showing an image in which the standard error obtained when the image shown in FIG. 60 is approximated by a plane with rejection is used as the pixel value.
- white pixels are pixels having larger pixel values, that is, pixels having a larger standard error
- black pixels are pixels having smaller pixel values, that is, pixels having a smaller standard error. It can be seen that the standard error as a whole is smaller when rejection is performed than when no rejection is performed.
- FIG. 63 is a diagram showing an image in which, when the image shown in FIG. 60 is rejected and approximated by a plane, the number of rejections is set as a pixel value.
- white pixels are pixels having larger pixel values, that is, pixels having a larger number of rejections
- black pixels are pixels having smaller pixel values, that is, pixels having a smaller number of rejections.
- FIG. 64 is a diagram illustrating an image in which the inclination in the spatial direction X of the plane approximating the pixel value of the block is set as the pixel value.
- FIG. 65 is a diagram illustrating an image in which the inclination in the spatial direction Y of the plane approximating the pixel value of the block is set as the pixel value.
- FIG. 66 is a diagram illustrating an image including approximate values indicated by a plane approximating pixel values of a block. From the image shown in Fig. 66, it can be seen that the thin line has disappeared.
- FIG. 67 is a diagram showing an image composed of the difference between the image shown in FIG. 60 and the image composed of the approximate values shown in FIG. 66. Since the non-stationary component has been removed, the pixel values of the image in FIG. 67 include only the values onto which the thin line image is projected. As can be seen from FIG. 67, in the image composed of the difference between the original pixel values and the approximate values indicated by the approximating plane, the stationary component of the original image is successfully extracted.
- The number of rejections, the inclination in the spatial direction X of the plane approximating the pixel values of the pixels of the block, the inclination in the spatial direction Y of that plane, the approximate values indicated by that plane, and the errors ei can be used as feature amounts of the input image.
- FIG. 68 is a flowchart corresponding to step S201 and illustrating the processing of extracting the non-stationary component by the non-stationary component extraction unit 201 having the configuration shown in FIG. 58.
- the block extracting unit 221 extracts a block composed of a predetermined number of pixels from the input image, and supplies the extracted block to the plane approximating unit 222.
- For example, the block extracting unit 221 selects one pixel that has not yet been selected from the pixels of the input image, and extracts a block composed of 7 × 7 pixels centered on the selected pixel.
- the block extracting unit 221 can select pixels in raster scan order.
- the plane approximating unit 222 approximates the extracted block with a plane.
- the plane approximating unit 222 approximates the pixel values of the pixels of the extracted block by a plane, for example, by regression processing.
- the plane approximating unit 222 approximates, by a plane, the pixel values of the pixels excluding the rejected pixels among the pixels of the extracted block by the regression processing.
- the repetition determination unit 223 performs the repetition determination. For example, it calculates the standard error from the approximate values of the plane approximating the pixel values of the pixels of the block, counts the number of rejected pixels, and thereby performs the repetition determination.
- step S224 the repetition determination unit 223 determines whether or not the standard error is equal to or larger than the threshold. When it is determined that the standard error is equal to or larger than the threshold, the process proceeds to step S225.
- step S224 the repetition determination unit 223 determines whether or not more than half of the pixels in the block have been rejected, and whether or not the standard error is equal to or greater than a threshold. If it is determined that half or more of the pixels have not been rejected and the standard error is equal to or greater than the threshold, the process may proceed to step S225.
- step S225 the repetition determination unit 223 calculates, for each pixel of the block, the error between the pixel value of the pixel and the approximate value of the approximating plane, rejects the pixel with the largest error, and notifies the plane approximating unit 222.
- The procedure returns to step S222, and the approximation processing using a plane and the repetition determination processing are repeated for the pixels of the block excluding the rejected pixels.
- If blocks shifted by one pixel in the raster scan direction are successively extracted by the processing of step S221, as shown in FIG. 59, a pixel including a thin line component (the black circle in the figure) will be rejected multiple times.
- step S224 If it is determined in step S224 that the standard error is not equal to or larger than the threshold value, the block is approximated by a plane, and the process proceeds to step S226.
- In step S224, the repetition determination unit 223 may determine whether or not half or more of the pixels of the block have been rejected and whether or not the standard error is equal to or greater than the threshold value; if half or more of the pixels have been rejected, or if it is determined that the standard error is not equal to or greater than the threshold value, the process may proceed to step S226.
- step S226 the repetition determination unit 223 outputs the slope and intercept of the plane approximating the pixel values of the pixels of the block as non-stationary component information.
- step S227 the block extracting unit 221 determines whether or not the processing has been completed for all the pixels of one screen of the input image. If it is determined that there is a pixel that has not been processed yet, the process returns to step S221, a block is extracted for a pixel that has not been processed yet, and the above processing is repeated.
- step S227 If it is determined in step S227 that the processing has been completed for all the pixels of one screen of the input image, the processing ends.
- As described above, the non-stationary component extraction unit 201 having the configuration shown in FIG. 58 can extract the non-stationary component from the input image. Since the non-stationary component extraction unit 201 extracts the non-stationary component of the input image, the vertex detection unit 202 and the monotone increase / decrease detection unit 203 can calculate the difference between the input image and the non-stationary component extracted by the non-stationary component extraction unit 201, and perform their processing on the difference, which includes the stationary component.
- The standard error when rejection is performed, the standard error when rejection is not performed, the number of rejected pixels, the slope of the plane in the spatial direction X (a hat in equation (24)), the slope of the plane in the spatial direction Y (b hat in equation (24)), the level when replaced by the plane (c hat in equation (24)), all calculated in the approximation processing using the plane, and the difference between the pixel values of the input image and the approximate values indicated by the plane can be used as feature amounts.
- FIG. 69 is a flowchart illustrating the processing of extracting the stationary component by the non-stationary component extraction unit 201 shown in FIG. 58, executed instead of the processing of extracting the non-stationary component corresponding to step S201.
- The processing in steps S241 to S245 is the same as the processing in steps S221 to S225, and a description thereof will be omitted.
- step S246 the repetition determination unit 223 outputs the difference between the approximate value indicated by the plane and the pixel values of the input image as the stationary component of the input image. That is, the repetition determination unit 223 outputs the difference between the approximate value based on the plane and the pixel value, which is the true value.
- The repetition determination unit 223 may output, as the stationary component of the input image, only the pixel values of the pixels for which the difference between the approximate value indicated by the plane and the pixel value of the input image is equal to or greater than a predetermined threshold value.
- The processing in step S247 is the same as the processing in step S227, and a description thereof will not be repeated.
- In this way, since the non-stationary component extraction unit 201 subtracts, from the pixel value of each pixel of the input image, the approximate value indicated by the plane approximating the pixel values, the non-stationary component can be removed from the input image.
- the vertex detection unit 202 to the continuity detection unit 204 can process only the steady component of the input image, that is, the value on which the image of the thin line is projected, and the vertex detection unit Processing from 202 to the continuity detecting unit 204 becomes easier.
- FIG. 70 is a flowchart illustrating another processing of extracting the stationary component by the non-stationary component extraction unit 201 shown in FIG. 58, executed instead of the processing of extracting the non-stationary component corresponding to step S201. The processing of steps S261 to S265 is the same as the processing of steps S221 to S225, and a description thereof will be omitted.
- step S266 the repetition determination unit 223 stores the number of rejections for each pixel, returns to step S262, and repeats the processing.
- If it is determined in step S264 that the standard error is not equal to or greater than the threshold value, the block has been approximated by a plane, and the process proceeds to step S267. In step S267, the block extracting unit 221 determines whether or not the processing has been completed for all the pixels of one screen of the input image. If it is determined that there is a pixel that has not been processed yet, the process returns to step S261, a block is extracted for a pixel that has not been processed yet, and the above processing is repeated.
- If it is determined in step S267 that the processing has been completed for all the pixels of one screen of the input image, the process proceeds to step S268, where the repetition determination unit 223 selects one pixel from the pixels that have not yet been selected.
- In step S268, the repetition determination unit 223 determines whether or not the stored number of rejections for the selected pixel is equal to or greater than a threshold value.
- If it is determined in step S268 that the number of rejections for the selected pixel is equal to or greater than the threshold value, the selected pixel includes a stationary component, so the process proceeds to step S269, where the repetition determination unit 223 outputs the pixel value of the selected pixel (the pixel value in the input image) as a stationary component of the input image, and the process proceeds to step S270. If it is determined in step S268 that the number of rejections for the selected pixel is not equal to or greater than the threshold value, the selected pixel does not include a stationary component, so the processing in step S269 is skipped, and the procedure proceeds to step S270. That is, no pixel value is output for a pixel for which it is determined that the number of rejections is not equal to or greater than the threshold value. Note that the repetition determination unit 223 may output a pixel value set to 0 for a pixel for which the number of rejections is determined not to be equal to or greater than the threshold value.
- step S270 the repetition determination unit 223 determines whether or not the processing of determining whether the number of rejections is equal to or greater than the threshold value has been completed for all the pixels of one screen of the input image. If it is determined that the processing has not been completed for all the pixels, there is a pixel that has not been processed yet, so the process returns to step S268, one pixel is selected from the pixels that have not been processed yet, and the above processing is repeated. If it is determined in step S270 that the processing has been completed for all the pixels of one screen of the input image, the processing ends.
- the non-stationary component extraction unit 201 can output the pixel value of the pixel including the stationary component among the pixels of the input image as the stationary component information. That is, the non-stationary component extracting unit 201 can output the pixel value of the pixel including the component of the thin line image among the pixels of the input image.
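The rejection-count criterion of the FIG. 70 variant can be sketched as follows, assuming a per-pixel map of stored rejection counts has already been accumulated during the plane approximation with rejection; the option of outputting 0 for the remaining pixels is shown. The names and the example values are illustrative.

```python
import numpy as np

def stationary_pixels(image, rejection_counts, threshold):
    """Keep the input pixel value where the stored rejection count is at
    least the threshold (such pixels are taken to contain the stationary
    component); output 0 for the remaining pixels, one of the options
    described for the repetition determination unit."""
    return np.where(rejection_counts >= threshold, image, 0)

image = np.array([[10, 200, 12],
                  [11, 210, 13]])
counts = np.array([[0, 3, 1],
                   [0, 4, 0]])  # hypothetical per-pixel rejection counts
print(stationary_pixels(image, counts, threshold=2).tolist())
# [[0, 200, 0], [0, 210, 0]]
```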
- FIG. 71 is a flowchart illustrating still another processing of extracting the stationary component by the non-stationary component extraction unit 201 shown in FIG. 58, executed instead of the processing of extracting the non-stationary component corresponding to step S201. The processing from step S281 to step S288 is the same as the processing from step S261 to step S268, and a description thereof will be omitted.
- step S289 the repetition determination unit 223 outputs the difference between the approximate value indicated by the plane and the pixel value of the selected pixel as the stationary component of the input image. That is, the repetition determination unit 223 outputs, as the stationarity information, an image obtained by removing the non-stationary component from the input image.
- step S290 is the same as the processing in step S270, and a description thereof will be omitted.
- the non-stationary component extraction unit 201 can output an image obtained by removing the non-stationary component from the input image as the stationarity information.
- As described above, a real-world optical signal is projected, the continuity of the data is detected from first image data in which a part of the continuity of the real-world optical signal is lost, the continuity of the real-world optical signal is estimated based on the detected continuity of the data, and a model (function) that approximates the optical signal is generated based on the estimated continuity. When second image data is generated based on the generated function, a processing result that is more accurate and more precise with respect to the events of the real world can be obtained.
- FIG. 72 is a block diagram showing another configuration of the data continuity detecting unit 101.
- In the data continuity detecting unit 101 whose configuration is shown in FIG. 72, the change in the spatial direction of the pixel values of the input image with respect to the pixel of interest, that is, the activity in the spatial direction of the input image, is detected, and according to the detected activity, a plurality of pixel sets, each consisting of a predetermined number of pixels in one column in the vertical direction or one row in the horizontal direction, are extracted for each angle with respect to the pixel of interest and the reference axis. The correlation of the extracted pixel sets is detected, and the angle of data continuity with respect to the reference axis in the input image is detected based on the correlation.
- The angle of data continuity refers to the angle formed by the reference axis and the direction of the predetermined dimension, which data 3 has, in which a certain feature repeatedly appears. A certain feature repeatedly appearing means, for example, that the change in the value with respect to the change in position in data 3, that is, the cross-sectional shape, is the same.
- the reference axis may be, for example, an axis indicating the spatial direction X (horizontal direction of the screen) or an axis indicating the spatial direction Y (vertical direction of the screen).
- the input image is supplied to the activity detecting unit 401 and the data selecting unit 402.
- The activity detector 401 detects the change in the pixel values of the input image in the spatial direction, that is, the activity in the spatial direction, and supplies activity information indicating the detection result to the data selecting unit 402 and the stationary direction deriving unit 404.
- For example, the activity detector 401 detects the change in the pixel values in the horizontal direction of the screen and the change in the pixel values in the vertical direction of the screen, and compares the detected change in the horizontal direction with the detected change in the vertical direction, thereby detecting whether the change in the pixel values in the horizontal direction is larger than the change in the vertical direction, or the change in the pixel values in the vertical direction is larger than the change in the horizontal direction.
- The activity detector 401 supplies activity information indicating the result of the detection, namely that the change in the pixel values in the horizontal direction is larger than the change in the vertical direction, or that the change in the pixel values in the vertical direction is larger than the change in the horizontal direction, to the data selecting unit 402 and the stationary direction deriving unit 404.
- When the change in the pixel values in the horizontal direction is larger than the change in the pixel values in the vertical direction, for example, an arc shape (kamaboko shape) or a claw shape is formed on one column of pixels in the vertical direction, and the arc shape or the claw shape is repeatedly formed in a direction closer to the vertical. That is, when the change in the pixel values in the horizontal direction is large compared to the change in the pixel values in the vertical direction, assuming that the reference axis is the axis indicating the spatial direction X, the angle of data continuity with respect to the reference axis is any value between 45 degrees and 90 degrees.
- When the change in the pixel values in the vertical direction is larger than the change in the pixel values in the horizontal direction, for example, an arc shape or a claw shape is formed on one row of pixels in the horizontal direction, and the arc shape or the claw shape is repeatedly formed in a direction closer to the horizontal.
- That is, when the change in the pixel values in the vertical direction is large compared to the change in the pixel values in the horizontal direction, assuming that the reference axis is the axis indicating the spatial direction X, the angle of data continuity with respect to the reference axis is any value between 0 and 45 degrees.
- For example, the activity detection unit 401 extracts, from the input image, a block composed of 3 × 3 pixels (nine pixels) centered on the pixel of interest, as shown in FIG.
- the activity detection unit 401 calculates the sum of the differences between the pixel values of vertically adjacent pixels and the sum of the differences of the pixel values of horizontally adjacent pixels.
- The sum hdiff of the differences between the pixel values of horizontally adjacent pixels is obtained by equation (27).
- The activity detection unit 401 may compare the calculated sum hdiff of the differences of the pixel values of horizontally adjacent pixels with the sum vdiff of the differences of the pixel values of vertically adjacent pixels, and thereby determine the range of the angle of data continuity with respect to the reference axis in the input image. That is, in this case, the activity detection unit 401 determines whether the shape indicated by the change in the pixel values with respect to the position in the spatial direction is repeatedly formed in the horizontal direction or in the vertical direction. For example, the change in the pixel values in the horizontal direction for an arc formed on one row of pixels in the horizontal direction is larger than the change in the pixel values in the vertical direction.
- Likewise, the change in the pixel values in the vertical direction for an arc formed on one column of pixels in the vertical direction is larger than the change in the pixel values in the horizontal direction.
- In other words, it can be said that the change in the direction of data continuity, that is, in the direction of the predetermined dimension in which a certain feature of the input image, which is data 3, repeatedly appears, is smaller than the change in the direction orthogonal to the data continuity. Put differently, the difference in the direction orthogonal to the direction of data continuity (hereinafter also referred to as the non-stationary direction) is larger than the difference in the direction of data continuity.
- The activity detection unit 401 compares the calculated sum hdiff of the differences of the pixel values of horizontally adjacent pixels with the sum vdiff of the differences of the pixel values of vertically adjacent pixels. If the sum hdiff is larger, the angle of data continuity with respect to the reference axis is determined to be any value between 45 degrees and 135 degrees; if the sum vdiff is larger, the angle of data continuity with respect to the reference axis is determined to be any value between 0 and 45 degrees or between 135 degrees and 180 degrees.
- the activity detecting unit 401 supplies activity information indicating the result of the determination to the data selecting unit 402 and the stationary direction deriving unit 404.
- the activity detector 401 can detect the activity by extracting a block of an arbitrary size, such as a block of 5 × 5 pixels (25 pixels) or a block of 7 × 7 pixels (49 pixels).
- the data selection unit 402 selects the pixel of interest from the pixels of the input image in order, and based on the activity information supplied from the activity detection unit 401, for each angle with respect to the pixel of interest and the reference axis, A plurality of pixel sets consisting of a predetermined number of pixels in one column in the vertical direction or one column in the horizontal direction are extracted.
- the data selection unit 402 extracts, for each predetermined angle in the range of 45 degrees to 135 degrees with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one vertical column.
- when the angle of data continuity is some value between 0 and 45 degrees or between 135 and 180 degrees, the data selection unit 402 extracts, for each predetermined angle in the range of 0 to 45 degrees or 135 to 180 degrees with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one horizontal row.
- the data selection unit 402 extracts, for each predetermined angle in the range of 45 degrees to 135 degrees with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one vertical column.
- the data selection unit 402 extracts, for each predetermined angle in the range of 0 to 45 degrees or 135 to 180 degrees with respect to the pixel of interest and the reference axis, a plurality of pixel sets each consisting of a predetermined number of pixels in one horizontal row.
- the data selection unit 402 supplies a plurality of sets of the extracted pixels to the error estimation unit 403.
- the error estimator 403 detects the correlation of the pixel sets for each angle among the plurality of extracted pixel sets.
- the error estimator 403 detects the correlation between the pixel values of the pixels at corresponding positions in the sets, for a plurality of pixel sets each consisting of a predetermined number of pixels in one vertical column corresponding to one angle. Likewise, the error estimator 403 detects the correlation between the pixel values of the pixels at corresponding positions in the sets, for a plurality of pixel sets each consisting of a predetermined number of pixels in one horizontal row corresponding to one angle.
- the error estimating unit 403 supplies correlation information indicating the detected correlation to the stationary direction deriving unit 404.
- the error estimating unit 403 calculates, as a value indicating the correlation, the sum of the absolute values of the differences between the pixel values of the set of pixels including the pixel of interest, supplied from the data selecting unit 402, and the pixel values of the pixels at the corresponding positions in the other sets, and supplies the calculated sum of the absolute values of the differences to the stationary direction deriving unit 404 as correlation information.
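The correlation measure described here can be sketched as follows (an illustration, not the original implementation; a smaller sum means a stronger correlation):

```python
def correlation_error(center_set, other_sets):
    """Sum of absolute differences between the pixel values of the set
    containing the pixel of interest and the pixel values at the
    corresponding positions in each of the other sets.

    A smaller result indicates a stronger correlation.
    """
    total = 0
    for other in other_sets:
        total += sum(abs(a - b) for a, b in zip(center_set, other))
    return total
```

With identical pixel sets the error is 0, i.e. the correlation is strongest.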
- based on the correlation information supplied from the error estimator 403, the stationary direction deriving unit 404 detects the angle of data continuity with respect to the reference axis in the input image, corresponding to the continuity of the missing optical signal of the real world 1, and outputs data continuity information indicating the angle. For example, the stationary direction deriving unit 404 detects, as the angle of data continuity, the angle for the set of pixels having the strongest correlation, and outputs data continuity information indicating the angle for the detected set of pixels having the strongest correlation.
- FIG. 76 is a block diagram showing a more detailed configuration of the data continuity detector 101 shown in FIG. 72.
- the data selection section 402 includes pixel selection sections 411-1 to 411-L.
- the error estimating section 403 includes estimation error calculating sections 412-1 to 412-L.
- the stationary direction deriving unit 404 includes a minimum error angle selecting unit 413.
- the processing of the pixel selection units 411-1 to 411-L will be described.
- each of the pixel selection units 411-1 to 411-L sets a straight line having a different predetermined angle that passes through the pixel of interest, with the axis indicating the spatial direction X as the reference axis.
- the pixel selection units 411-1 to 411-L select, as a set of pixels, from the pixels belonging to the one vertical column of pixels to which the pixel of interest belongs, a predetermined number of pixels above the pixel of interest, a predetermined number of pixels below the pixel of interest, and the pixel of interest.
- for example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel of interest from the pixels belonging to the one vertical column of pixels to which the pixel of interest belongs.
- one square indicates one pixel.
- the circle shown at the center indicates the pixel of interest.
- the pixel selection units 411-1 to 411-L select, from the pixels belonging to the vertical column of pixels one column to the left of the column to which the pixel of interest belongs, the pixel closest to the straight line set for each.
- the circle to the lower left of the pixel of interest indicates an example of the selected pixel.
- the pixel selection units 411-1 to 411-L then select, as a set of pixels, from the pixels belonging to the vertical column one column to the left of the column to which the pixel of interest belongs, a predetermined number of pixels above the selected pixel, a predetermined number of pixels below the selected pixel, and the selected pixel.
- for example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel closest to the straight line, from the pixels belonging to the vertical column one column to the left of the column to which the pixel of interest belongs.
- the pixel selection units 411-1 to 411-L select, from the pixels belonging to the vertical column of pixels two columns to the left of the column to which the pixel of interest belongs, the pixel closest to the straight line set for each.
- the leftmost circle shows an example of the selected pixel.
- the pixel selection units 411-1 to 411-L then select, as a set of pixels, from the pixels belonging to the vertical column two columns to the left of the column to which the pixel of interest belongs, a predetermined number of pixels above the selected pixel, a predetermined number of pixels below the selected pixel, and the selected pixel.
- for example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel closest to the straight line, from the pixels belonging to the vertical column two columns to the left of the column to which the pixel of interest belongs.
- the pixel selection units 411-1 to 411-L select, from the pixels belonging to the vertical column of pixels one column to the right of the column to which the pixel of interest belongs, the pixel closest to the straight line set for each.
- the circle to the upper right of the pixel of interest indicates an example of the selected pixel.
- the pixel selection units 411-1 to 411-L then select, as a set of pixels, from the pixels belonging to the vertical column one column to the right of the column to which the pixel of interest belongs, a predetermined number of pixels above the selected pixel, a predetermined number of pixels below the selected pixel, and the selected pixel.
- for example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel closest to the straight line, from the pixels belonging to the vertical column one column to the right of the column to which the pixel of interest belongs.
- the pixel selection units 411-1 to 411-L select, from the pixels belonging to the vertical column of pixels two columns to the right of the column to which the pixel of interest belongs, the pixel closest to the straight line set for each, and then select, as a set of pixels, a predetermined number of pixels above the selected pixel, a predetermined number of pixels below the selected pixel, and the selected pixel.
- for example, the pixel selection units 411-1 to 411-L select, as a set of pixels, nine pixels centered on the pixel closest to the straight line, from the pixels belonging to the vertical column two columns to the right of the column to which the pixel of interest belongs.
- in this way, each of the pixel selection units 411-1 to 411-L selects five sets of pixels.
- the pixel selection units 411-1 to 411-L select sets of pixels for mutually different angles (straight lines set at mutually different angles). For example, the pixel selection unit 411-1 selects sets of pixels for 45 degrees, the pixel selection unit 411-2 selects sets of pixels for 47.5 degrees, and the pixel selection unit 411-3 selects sets of pixels for 50 degrees.
- the pixel selection units 411-4 to 411-L select sets of pixels for angles at intervals of 2.5 degrees, from 52.5 degrees to 135 degrees.
- the number of pixel sets can be any number, for example three or seven, and does not limit the present invention. Further, the number of pixels selected as one set can be any number, for example 5 or 13, and does not limit the present invention.
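The column-wise selection procedure described above can be sketched as follows (an illustration under assumptions: the image is indexed as image[y][x] with y increasing upward, the angle is in the 45-to-135-degree range handled here, and the function name and defaults are hypothetical):

```python
import math

def select_pixel_sets(image, cx, cy, angle_deg,
                      n_columns=2, pixels_per_set=9):
    """Select pixel sets along a straight line through (cx, cy) at
    angle_deg, measured from the axis indicating the spatial direction X.

    For the column containing the pixel of interest and for n_columns
    columns on each side, take the pixel closest to the line and a run of
    pixels_per_set pixels centered on it.  Returns a list of value lists.
    """
    half = pixels_per_set // 2
    slope = math.tan(math.radians(angle_deg))  # dy/dx of the line
    sets = []
    for dx in range(-n_columns, n_columns + 1):
        # Row closest to the line in this column (degenerate near 90 deg).
        y0 = cy + int(round(slope * dx)) if abs(slope) < 1e6 else cy
        column = [image[y0 + dy][cx + dx] for dy in range(-half, half + 1)]
        sets.append(column)
    return sets
```

For dx = 0 this reduces to the nine pixels centered on the pixel of interest; for the neighboring columns the run is centered on the pixel closest to the set straight line.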
- the pixel selection units 411-1 to 411-L can select the sets of pixels from pixels within a predetermined range in the vertical direction.
- for example, the pixel selection units 411-1 to 411-L select the sets of pixels from 121 pixels in the vertical direction (60 pixels above and 60 pixels below the pixel of interest).
- in this case, the data continuity detection unit 101 can detect the angle of data continuity up to 88.09 degrees with respect to the axis indicating the spatial direction X.
- the pixel selection unit 411-1 supplies the selected sets of pixels to the estimation error calculation unit 412-1, and the pixel selection unit 411-2 supplies the selected sets of pixels to the estimation error calculation unit 412-2. Similarly, each of the pixel selection units 411-3 to 411-L supplies the selected sets of pixels to each of the estimation error calculation units 412-3 to 412-L.
- each of the estimation error calculation units 412-1 to 412-L detects the correlation between the pixel values of the pixels at corresponding positions in the plurality of sets supplied from the corresponding one of the pixel selection units 411-1 to 411-L.
- as a value indicating the correlation, each of the estimation error calculation units 412-1 to 412-L calculates the sum of the absolute values of the differences between the pixel values of the set including the pixel of interest, supplied from the corresponding one of the pixel selection units 411-1 to 411-L, and the pixel values at the corresponding positions in the other sets.
- for example, based on the pixel values of the set of pixels including the pixel of interest and the pixel values of the set of pixels belonging to the vertical column one column to the left of the pixel of interest, supplied from any of the pixel selection units 411-1 to 411-L, the estimation error calculation units 412-1 to 412-L calculate the differences between the pixel values in order from the uppermost pixel (the difference for the uppermost pixels, then the difference for the second pixels from the top, and so on), and calculate the sum of the absolute values of the calculated differences.
- the estimation error calculation units 412-1 to 412-L, based on the pixel values of the set of pixels including the pixel of interest and the pixel values of the set of pixels belonging to the vertical column two columns to the left of the pixel of interest, supplied from any of the pixel selection units 411-1 to 411-L, calculate the differences between the pixel values in order from the uppermost pixel, and calculate the sum of the absolute values of the calculated differences.
- the estimation error calculation units 412-1 to 412-L, based on the pixel values of the set of pixels including the pixel of interest and the pixel values of the set of pixels belonging to the vertical column one column to the right of the pixel of interest, supplied from any of the pixel selection units 411-1 to 411-L, calculate the differences between the pixel values in order from the uppermost pixel, and calculate the sum of the absolute values of the calculated differences.
- the estimation error calculation units 412-1 to 412-L, based on the pixel values of the set of pixels including the pixel of interest and the pixel values of the set of pixels belonging to the vertical column two columns to the right of the pixel of interest, supplied from any of the pixel selection units 411-1 to 411-L, calculate the differences between the pixel values in order from the uppermost pixel, and calculate the sum of the absolute values of the calculated differences.
- the estimation error calculation units 412-1 to 412-L add together all the sums of the absolute values of the pixel value differences calculated in this way, and thereby calculate the total sum of the absolute values of the pixel value differences.
- the estimation error calculation units 412-1 to 412-L supply information indicating the detected correlation to the minimum error angle selection unit 413. For example, the estimation error calculation units 412-1 to 412-L supply the calculated total sum of the absolute values of the pixel value differences to the minimum error angle selection unit 413.
- the estimation error calculation units 412-1 to 412-L are not limited to the sum of the absolute values of the pixel value differences; they can calculate other values as the correlation value, such as the sum of the squares of the pixel value differences or a correlation coefficient calculated based on the pixel values.
- the minimum error angle selection unit 413 detects, based on the correlations detected by the estimation error calculation units 412-1 to 412-L for the mutually different angles, the angle of data continuity with respect to the reference axis in the input image, corresponding to the continuity of the image that is the missing optical signal of the real world 1.
- that is, based on the correlations detected by the estimation error calculation units 412-1 to 412-L for the mutually different angles, the minimum error angle selection unit 413 selects the strongest correlation, and takes the angle for which the selected correlation was detected as the angle of data continuity.
- the minimum error angle selection unit 413 selects the smallest of the sums of the absolute values of the pixel value differences supplied from the estimation error calculation units 412-1 to 412-L. For the selected sets of pixels for which that sum was calculated, the minimum error angle selection unit 413 refers to the position of the pixel that belongs to the vertical column two columns to the left of the pixel of interest and is closest to the straight line, and the position of the pixel that belongs to the vertical column two columns to the right of the pixel of interest and is closest to the straight line.
- the minimum error angle selection unit 413 obtains the vertical distance S between those referenced positions and the position of the pixel of interest, and from the distance S detects the angle θ of data continuity with respect to the axis indicating the spatial direction X, which is the reference axis, in the input image that is image data.
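Since the referenced pixels lie in the columns two pixels to the left and right of the pixel of interest, the angle can be recovered from the vertical offset S; the use of arctan(S / 2) below is an assumption consistent with that geometry, and the function name is illustrative:

```python
import math

def continuity_angle_from_offset(s, column_distance=2):
    """Angle (in degrees, from the axis indicating the spatial
    direction X) of a line that rises s pixels over column_distance
    pixels horizontally."""
    return math.degrees(math.atan2(s, column_distance))
```

For example, an offset of 2 pixels over the 2-pixel column distance gives 45 degrees, and an offset of 60 pixels (the 121-pixel selection range mentioned above) gives about 88.09 degrees.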
- the pixel selection units 411-1 to 411-L set a straight line at a predetermined angle passing through the pixel of interest, with the axis indicating the spatial direction X as the reference axis, and select, as a set of pixels, from the pixels belonging to the one horizontal row of pixels to which the pixel of interest belongs, a predetermined number of pixels to the left of the pixel of interest, a predetermined number of pixels to the right of the pixel of interest, and the pixel of interest.
- the pixel selection units 411-1 to 411-L select, from the pixels belonging to the horizontal row of pixels one row above the row to which the pixel of interest belongs, the pixel closest to the straight line set for each.
- the pixel selection units 411-1 to 411-L then select, as a set of pixels, from the pixels belonging to the horizontal row one row above the row to which the pixel of interest belongs, a predetermined number of pixels to the left of the selected pixel, a predetermined number of pixels to the right of the selected pixel, and the selected pixel.
- the pixel selection units 411-1 to 411-L select, from the pixels belonging to the horizontal row of pixels two rows above the row to which the pixel of interest belongs, the pixel closest to the straight line set for each.
- the pixel selection units 411-1 to 411-L then select, as a set of pixels, from the pixels belonging to the horizontal row two rows above the row to which the pixel of interest belongs, a predetermined number of pixels to the left of the selected pixel, a predetermined number of pixels to the right of the selected pixel, and the selected pixel.
- the pixel selection units 411-1 to 411-L select, from the pixels belonging to the horizontal row of pixels one row below the row to which the pixel of interest belongs, the pixel at the position closest to the straight line set for each.
- the pixel selection units 411-1 to 411-L then select, as a set of pixels, from the pixels belonging to the horizontal row one row below the row to which the pixel of interest belongs, a predetermined number of pixels to the left of the selected pixel, a predetermined number of pixels to the right of the selected pixel, and the selected pixel.
- the pixel selection units 411-1 to 411-L select, from the pixels belonging to the horizontal row of pixels two rows below the row to which the pixel of interest belongs, the pixel closest to the straight line set for each.
- the pixel selection units 411-1 to 411-L then select, as a set of pixels, from the pixels belonging to the horizontal row two rows below the row to which the pixel of interest belongs, a predetermined number of pixels to the left of the selected pixel, a predetermined number of pixels to the right of the selected pixel, and the selected pixel.
- in this way, each of the pixel selection units 411-1 to 411-L selects five sets of pixels.
- the pixel selection units 411-1 to 411-L select sets of pixels for mutually different angles.
- for example, the pixel selection unit 411-1 selects sets of pixels for 0 degrees, the pixel selection unit 411-2 selects sets of pixels for 2.5 degrees, and the pixel selection unit 411-3 selects sets of pixels for 5 degrees.
- the pixel selection units 411-4 to 411-L select sets of pixels for angles at intervals of 2.5 degrees, from 7.5 degrees to 45 degrees and from 135 degrees to 180 degrees.
- the pixel selection unit 411-1 supplies the selected sets of pixels to the estimation error calculation unit 412-1, and the pixel selection unit 411-2 supplies the selected sets of pixels to the estimation error calculation unit 412-2. Similarly, each of the pixel selection units 411-3 to 411-L supplies the selected sets of pixels to each of the estimation error calculation units 412-3 to 412-L.
- the estimation error calculation units 412-1 to 412-L detect the correlation between the pixel values of the pixels at corresponding positions in the plurality of sets supplied from any of the pixel selection units 411-1 to 411-L.
- the estimation error calculation units 412-1 to 412-L supply information indicating the detected correlation to the minimum error angle selection unit 413.
- the minimum error angle selection unit 413 detects, based on the correlations detected by the estimation error calculation units 412-1 to 412-L, the angle of data continuity with respect to the reference axis in the input image, corresponding to the continuity of the missing optical signal of the real world 1.
- the processing of detecting data continuity by the data continuity detecting unit 101 shown in FIG. 72, corresponding to the processing of step S101, will now be described.
- in step S401, the activity detection unit 401 and the data selection unit 402 select the pixel of interest from the input image.
- the activity detector 401 and the data selector 402 select the same target pixel.
- the activity detection unit 401 and the data selection unit 402 select a pixel of interest from the input image in raster scan order.
- in step S402, the activity detection unit 401 detects activity for the pixel of interest. For example, the activity detection unit 401 detects activity based on the differences between the pixel values of vertically arranged pixels and the differences between the pixel values of horizontally arranged pixels in a block composed of a predetermined number of pixels centered on the pixel of interest. The activity detection unit 401 detects the activity in the spatial direction with respect to the pixel of interest, and supplies activity information indicating the detection result to the data selection unit 402 and the stationary direction deriving unit 404.
- in step S403, the data selection unit 402 selects, as a set of pixels, a predetermined number of pixels centered on the pixel of interest from the column of pixels including the pixel of interest.
- for example, the data selection unit 402 selects, as a set of pixels, from the pixels belonging to the one vertical column or one horizontal row of pixels to which the pixel of interest belongs, a predetermined number of pixels above or to the left of the pixel of interest, a predetermined number of pixels below or to the right of the pixel of interest, and the pixel of interest.
- in step S404, based on the activity detected in the processing of step S402, the data selection unit 402 selects, as sets of pixels, a predetermined number of pixels each from a predetermined number of pixel columns, for each angle in a predetermined range.
- for example, the data selection unit 402 sets straight lines passing through the pixel of interest, having angles in the predetermined range, with the axis indicating the spatial direction X as the reference axis; selects the pixels closest to each straight line that are one or two columns or rows away from the pixel of interest in the horizontal or vertical direction; and selects, as a set of pixels, a predetermined number of pixels above or to the left of each selected pixel, a predetermined number of pixels below or to the right of each selected pixel, and the selected pixel closest to the line.
- the data selection unit 402 selects a set of pixels for each angle.
- the data selection unit 402 supplies the selected pixel set to the error estimation unit 403.
- in step S405, the error estimator 403 calculates the correlation between the set of pixels centered on the pixel of interest and the sets of pixels selected for each angle. For example, the error estimator 403 calculates, for each angle, the sum of the absolute values of the differences between the pixel values of the set including the pixel of interest and the pixel values of the pixels at the corresponding positions in the other sets.
- the angle of data continuity may also be detected based on the mutual correlation of the sets of pixels selected for each angle.
- the error estimating unit 403 supplies information indicating the calculated correlation to the stationary direction deriving unit 404.
- in step S406, based on the correlation calculated in the processing of step S405, the stationary direction deriving unit 404 detects, from the position of the set of pixels having the strongest correlation, the angle of data continuity with respect to the reference axis in the input image, which is image data, corresponding to the continuity of the optical signal.
- for example, the stationary direction deriving unit 404 selects the minimum of the sums of the absolute values of the pixel value differences, and detects the angle θ of data continuity from the position of the set of pixels for which the selected sum was calculated.
- the stationary direction deriving unit 404 outputs data continuity information indicating the continuity angle of the detected data.
- in step S407, the data selection unit 402 determines whether or not the processing of all pixels has been completed. If it is determined that the processing of all pixels has not been completed, the processing returns to step S401, a pixel of interest is selected from the pixels not yet selected as the pixel of interest, and the above-described processing is repeated.
- if it is determined in step S407 that the processing of all pixels has been completed, the processing ends.
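The processing of steps S401 through S407 can be summarized in the following sketch; the helper callables stand in for the units described above and their names and signatures are assumptions, not part of the disclosure:

```python
def detect_data_continuity(image, detect_activity, select_sets, sad):
    """Steps S401..S407: for every pixel (raster order), detect
    activity, build pixel sets for each candidate angle, score them by
    the sum of absolute differences, and keep the angle with the
    smallest error (strongest correlation).

    detect_activity(image, x, y) -> iterable of candidate angles
    select_sets(image, x, y, a)  -> (center_set, other_sets) for angle a
    sad(center, others)          -> correlation error (smaller = stronger)
    Returns a dict mapping (x, y) -> detected angle.
    """
    angles = {}
    for y in range(len(image)):            # S401: select pixel of interest
        for x in range(len(image[0])):
            best_angle, best_err = None, float("inf")
            for a in detect_activity(image, x, y):           # S402
                center, others = select_sets(image, x, y, a)  # S403/S404
                err = sad(center, others)                     # S405
                if err < best_err:                            # S406
                    best_angle, best_err = a, err
            angles[(x, y)] = best_angle
    return angles                           # S407: all pixels processed
```

The loop structure mirrors the flowchart: the angle with the minimum error is kept per pixel, and the outer loops implement the "all pixels completed?" check of step S407.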
- in this way, the data continuity detection unit 101 can detect the angle of data continuity with respect to the reference axis in the image data, corresponding to the continuity of the missing optical signal of the real world 1.
- the data continuity detection unit 101 shown in FIG. 72 may also detect activity in the spatial direction of the input image with respect to the pixel of interest in the frame of interest and, according to the detected activity, extract, for each angle with respect to the pixel of interest and the reference axis in the spatial direction and for each motion vector, a plurality of pixel sets each consisting of a predetermined number of pixels in one vertical column or one horizontal row, from the frame of interest and from the frames temporally before or after the frame of interest; detect the correlation between the extracted pixel sets; and, based on the correlation, detect the angle of data continuity in the time direction and the spatial direction in the input image.
- for example, the data selection unit 402 extracts, based on the detected activity, for each angle with respect to the pixel of interest and the reference axis in the spatial direction and for each motion vector, a set of pixels consisting of a predetermined number of pixels in one vertical column or one horizontal row from each of frame #n, which is the frame of interest, frame #n-1, and frame #n+1.
- the frame #n-1 is the frame temporally preceding the frame #n, and the frame #n+1 is the frame temporally following the frame #n. That is, the frame #n-1, the frame #n, and the frame #n+1 are displayed in the order frame #n-1, frame #n, frame #n+1.
- the error estimating unit 403 detects, for each single angle and single motion vector, the correlation of the pixel sets among the plurality of extracted pixel sets.
- the stationary direction deriving unit 404 detects, based on the correlation of the pixel sets, the angle of data continuity in the time direction and the spatial direction in the input image, corresponding to the continuity of the missing optical signal of the real world 1, and outputs data continuity information indicating the angle.
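The extraction of pixel sets across frames along a candidate motion vector can be sketched as follows; the exact set geometry per frame is not specified above, so the vertical-run layout, the function name, and the parameters here are illustrative assumptions:

```python
def select_spatiotemporal_sets(frames, cx, cy, motion, length=9):
    """Extract one vertical set of `length` pixels from each of frame
    #n-1, #n, and #n+1, shifted per frame by the candidate motion
    vector motion = (mx, my) in pixels.  frames = (prev, cur, nxt),
    each indexed as frame[y][x].
    """
    half = length // 2
    mx, my = motion
    sets = []
    for dt, frame in zip((-1, 0, 1), frames):
        # Center of the set in this frame, displaced by the motion.
        x, y = cx + dt * mx, cy + dt * my
        sets.append([frame[y + dy][x] for dy in range(-half, half + 1)])
    return sets
```

The resulting sets can then be scored with the same sum-of-absolute-differences correlation used in the purely spatial case, per candidate angle and motion vector.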
- FIG. 81 is a block diagram showing another more detailed configuration of the data continuity detector 101 shown in FIG. 72.
- the same portions as those shown in FIG. 76 are denoted by the same reference numerals, and description thereof will be omitted.
- the data selection section 402 includes pixel selection sections 421-1 to 421-L.
- the error estimating section 403 includes estimation error calculating sections 422-1 to 422-L.
- in the data continuity detector 101 shown in FIG. 81, pixel sets each consisting of a number of pixels corresponding to the range of the angle are extracted, in a number of sets corresponding to the range of the angle; the correlations of the extracted pixel sets are detected; and the angle of data continuity with respect to the reference axis in the input image is detected based on the detected correlations.
- first, the processing of the pixel selection units 421-1 to 421-L will be described.
- whereas in the example shown in FIG. 76 a set consisting of a fixed number of pixels is extracted regardless of the angle of the set straight line, here each set of pixels is extracted from a number of pixels corresponding to the range of the angle of the set straight line, and the sets of pixels are extracted in a number corresponding to the range of the angle of the set straight line.
- the pixel selection units 421-1 to 421-L select, as a set of pixels, from the pixels belonging to the one vertical column of pixels to which the pixel of interest belongs, a number of pixels above the pixel of interest and a number of pixels below the pixel of interest corresponding to the range of the angle of the straight line set for each, together with the pixel of interest.
- for the vertical columns of pixels at predetermined horizontal distances to the left and right of the one vertical column to which the pixel of interest belongs, the pixel selection units 421-1 to 421-L select the pixels belonging to those columns that are closest to the straight line set for each, and select, as a set of pixels, a number of pixels above the selected pixel, a number of pixels below the selected pixel, and the selected pixel, according to the range of the angle of the set straight line.
- that is, the pixel selection units 421-1 to 421-L select, as each set of pixels, a number of pixels corresponding to the range of the angle of the set straight line.
- the pixel selection units 421-1 to 421-L select the sets of pixels in a number corresponding to the range of the angle of the set straight line.
- if each set of pixels contained the same number of pixels regardless of the angle, then, for a thin line located at an angle of approximately 45 degrees with respect to the spatial direction X, the number of pixels in each set onto which the image of the thin line is projected would decrease, and the resolution would be reduced. Conversely, for a line close to vertical, processing would be performed on only some of the pixels onto which the image of the thin line is projected, and the accuracy could be reduced.
- the pixel selection units 4 2 1-1 to 4 2 1-L are set such that the set straight line is 45 degrees with respect to the spatial direction X so that the pixels projected with the thin line image are almost equal.
- the angle is closer to the angle, the number of pixels included in each pixel set is reduced, and the number of pixel sets is increased. Pixels and pixel sets are selected such that the number of pixels included in is increased and the number of pixel sets is reduced.
- for example, when the angle of the set straight line is in the range of 45 degrees or more and less than 63.4 degrees (the range indicated by A in FIGS. 83 and 84), the pixel selection units 421-1 through 421-L select, from the one vertical column of pixels containing the pixel of interest, five pixels centered on the pixel of interest as a set of pixels, and also select five pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within five pixels horizontally of the pixel of interest.
- that is, the pixel selection units 421-1 through 421-L each select eleven sets of pixels, each consisting of five pixels, from the input image.
- in this case, the pixel selected as being closest to the set straight line is located 5 to 9 pixels vertically away from the pixel of interest.
- in FIG. 84, the number of columns indicates the number of columns of pixels, on the left or right side of the pixel of interest, from which pixels are selected as sets of pixels.
- the number of pixels in one column indicates the number of pixels selected as a set of pixels from the vertical column containing the pixel of interest or from a column on its left or right side.
- the selection range of pixels indicates the vertical position, relative to the pixel of interest, of the pixels selected as being closest to the set straight line.
- the pixel selection unit 421-1 selects, from the one vertical column of pixels containing the pixel of interest, five pixels centered on the pixel of interest as a set of pixels, and also selects five pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within five pixels horizontally of the pixel of interest. That is, the pixel selection unit 421-1 selects eleven sets of pixels, each consisting of five pixels, from the input image. In this case, among the pixels selected as being closest to the set straight line, the pixel farthest from the pixel of interest is located 5 pixels vertically away from the pixel of interest.
- a square represented by dotted lines (one cell separated by dotted lines) represents one pixel, and a square represented by solid lines represents a set of pixels.
- the coordinate of the pixel of interest in the spatial direction X is set to 0, and its coordinate in the spatial direction Y is set to 0.
- a hatched square indicates the pixel of interest or a pixel closest to the set straight line.
- squares represented by thick lines indicate the set of pixels selected with the pixel of interest at its center.
- the pixel selection unit 421-2 selects, from the one vertical column of pixels containing the pixel of interest, five pixels centered on the pixel of interest as a set of pixels, and also selects five pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within five pixels horizontally of the pixel of interest. That is, the pixel selection unit 421-2 selects eleven sets of pixels, each consisting of five pixels, from the input image. In this case, among the pixels selected as being closest to the set straight line, the pixel farthest from the pixel of interest is located 9 pixels vertically away from the pixel of interest.
- when the angle of the set straight line is in the range of 63.4 degrees or more and less than 71.6 degrees (the range indicated by B in FIGS. 83 and 84), the pixel selection units 421-1 through 421-L select, from the one vertical column of pixels containing the pixel of interest, seven pixels centered on the pixel of interest as a set of pixels, and also select seven pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within four pixels horizontally of the pixel of interest.
- that is, the pixel selection units 421-1 through 421-L select nine sets of pixels, each consisting of seven pixels, from the input image. In this case, the vertical position of the pixels closest to the set straight line is 8 to 11 pixels relative to the pixel of interest.
- the pixel selection unit 421-3 selects, from the one vertical column of pixels containing the pixel of interest, seven pixels centered on the pixel of interest as a set of pixels, and also selects seven pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within four pixels horizontally of the pixel of interest. That is, the pixel selection unit 421-3 selects nine sets of pixels, each consisting of seven pixels, from the input image. In this case, among the pixels selected as being closest to the set straight line, the pixel farthest from the pixel of interest is located 8 pixels vertically away from the pixel of interest.
- the pixel selection unit 421-4 selects, from the one vertical column of pixels containing the pixel of interest, seven pixels centered on the pixel of interest as a set of pixels, and also selects seven pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within four pixels horizontally of the pixel of interest. That is, the pixel selection unit 421-4 selects nine sets of pixels, each consisting of seven pixels, from the input image. In this case, among the pixels selected as being closest to the set straight line, the pixel farthest from the pixel of interest is located 11 pixels vertically away from the pixel of interest.
- when the angle of the set straight line is in the range of 71.6 degrees or more and less than 76.0 degrees (the range indicated by C in FIGS. 83 and 84), the pixel selection units 421-1 through 421-L select, from the one vertical column of pixels containing the pixel of interest, nine pixels centered on the pixel of interest as a set of pixels, and also select nine pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within three pixels horizontally of the pixel of interest.
- that is, the pixel selection units 421-1 through 421-L select seven sets of pixels, each consisting of nine pixels, from the input image. In this case, the vertical position of the pixels closest to the set straight line is 9 to 11 pixels relative to the pixel of interest.
- the pixel selection unit 421-5 selects, from the one vertical column of pixels containing the pixel of interest, nine pixels centered on the pixel of interest as a set of pixels, and also selects nine pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within three pixels horizontally of the pixel of interest. That is, the pixel selection unit 421-5 selects seven sets of pixels, each consisting of nine pixels, from the input image. In this case, among the pixels selected as being closest to the set straight line, the pixel farthest from the pixel of interest is located 9 pixels vertically away from the pixel of interest.
- the pixel selection unit 421-6 selects, from the one vertical column of pixels containing the pixel of interest, nine pixels centered on the pixel of interest as a set of pixels, and also selects nine pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within three pixels horizontally of the pixel of interest. That is, the pixel selection unit 421-6 selects seven sets of pixels, each consisting of nine pixels, from the input image. In this case, among the pixels selected as being closest to the set straight line, the pixel farthest from the pixel of interest is located 11 pixels vertically away from the pixel of interest.
- when the angle of the set straight line is in the range of 76.0 degrees or more and 87.7 degrees or less (the range indicated by D in FIGS. 83 and 84), the pixel selection units 421-1 through 421-L select, from the one vertical column of pixels containing the pixel of interest, eleven pixels centered on the pixel of interest as a set of pixels, and also select eleven pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within two pixels horizontally of the pixel of interest.
- that is, the pixel selection units 421-1 through 421-L select five sets of pixels, each consisting of eleven pixels, from the input image. In this case, the vertical position of the pixels closest to the set straight line is 8 to 50 pixels relative to the pixel of interest.
- the pixel selection unit 421-7 selects, from the one vertical column of pixels containing the pixel of interest, eleven pixels centered on the pixel of interest as a set of pixels, and also selects eleven pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within two pixels horizontally of the pixel of interest. That is, the pixel selection unit 421-7 selects five sets of pixels, each consisting of eleven pixels, from the input image. In this case, among the pixels selected as being closest to the set straight line, the pixel farthest from the pixel of interest is located 8 pixels vertically away from the pixel of interest.
- the pixel selection unit 421-8 selects, from the one vertical column of pixels containing the pixel of interest, eleven pixels centered on the pixel of interest as a set of pixels, and also selects eleven pixels each from the pixels belonging to the vertical columns of pixels on the left and right sides within two pixels horizontally of the pixel of interest. That is, the pixel selection unit 421-8 selects five sets of pixels, each consisting of eleven pixels, from the input image. In this case, among the pixels selected as being closest to the set straight line, the pixel farthest from the pixel of interest is located 50 pixels vertically away from the pixel of interest.
- in this way, each of the pixel selection units 421-1 through 421-L selects a predetermined number of sets of pixels corresponding to the range of the angle, each set consisting of a predetermined number of pixels corresponding to the range of the angle.
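The angle-dependent trade-off for the vertical case (ranges A through D in FIGS. 83 and 84) can be summarized in a short sketch. This is an illustrative function of our own, not part of the described apparatus; the boundary values and counts are taken from the text above:

```python
def set_configuration(angle_deg):
    """Return (pixels_per_set, number_of_sets) for the vertical case,
    where the set straight line is between 45 and 87.7 degrees.
    The closer the angle is to 45 degrees, the fewer pixels per set
    and the more sets are selected, as described above."""
    if not 45.0 <= angle_deg <= 87.7:
        raise ValueError("angle outside the vertical case")
    if angle_deg < 63.4:
        return 5, 11   # range A: 45.0 or more, less than 63.4 degrees
    if angle_deg < 71.6:
        return 7, 9    # range B: 63.4 or more, less than 71.6 degrees
    if angle_deg < 76.0:
        return 9, 7    # range C: 71.6 or more, less than 76.0 degrees
    return 11, 5       # range D: 76.0 or more, up to 87.7 degrees
```

Note that the product of the two values stays close to constant (55, 63, 63, 55), so roughly the same total number of pixels is examined at every angle.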
- the pixel selection unit 421-1 supplies the selected sets of pixels to the estimation error calculation unit 422-1, and the pixel selection unit 421-2 supplies the selected sets of pixels to the estimation error calculation unit 422-2. Similarly, the pixel selection units 421-3 through 421-L supply the selected sets of pixels to the estimation error calculation units 422-3 through 422-L, respectively.
- the estimation error calculation units 422-1 through 422-L detect the correlation of the pixel values of the pixels at corresponding positions in the plurality of sets supplied from the pixel selection units 421-1 through 421-L, respectively.
- for example, the estimation error calculation units 422-1 through 422-L calculate the sum of the absolute values of the differences between the pixel values of the set of pixels containing the pixel of interest, supplied from one of the pixel selection units 421-1 through 421-L, and the pixel values of the pixels at corresponding positions in the other sets, and divide the calculated sum by the number of pixels included in the sets other than the set containing the pixel of interest. The calculated sum is divided in this way because the number of pixels selected varies depending on the angle of the set straight line, so the value indicating the correlation must be normalized.
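The normalization just described can be sketched as follows. The function name `correlation_error` and the list-based interface are our own assumptions; smaller values indicate stronger correlation:

```python
def correlation_error(target_set, other_sets):
    """Sum of absolute differences between the pixel values of the set
    containing the pixel of interest and the pixel values at corresponding
    positions in the other sets, divided by the number of pixels in those
    other sets, so that angles with different pixel counts can be compared
    on an equal footing."""
    total = 0.0
    count = 0
    for other in other_sets:
        total += sum(abs(a - b) for a, b in zip(target_set, other))
        count += len(other)
    return total / count
```

For example, a target set of `[10, 10, 10]` compared against `[[10, 10, 10], [12, 8, 10]]` gives a total difference of 4 over 6 pixels.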
- the estimation error calculation units 422-1 through 422-L supply information indicating the detected correlation to the minimum error angle selection unit 413.
- for example, the estimation error calculation units 422-1 through 422-L supply the normalized sum of the absolute values of the pixel value differences to the minimum error angle selection unit 413.
- next, the processing of the pixel selection units 421-1 through 421-L for the case where the angle of data continuity is in the range of 0 to 45 degrees or 135 to 180 degrees will be described.
- the pixel selection units 421-1 through 421-L set straight lines passing through the pixel of interest, each having a different predetermined angle in the range of 0 to 45 degrees or 135 to 180 degrees, with the axis indicating the spatial direction X as the reference axis.
- the pixel selection units 421-1 through 421-L select, as a set of pixels, a number of pixels on the left side of the pixel of interest and a number of pixels on the right side of the pixel of interest corresponding to the range of the angle of the set straight line, together with the pixel of interest, from the pixels belonging to the one horizontal row of pixels to which the pixel of interest belongs.
- the pixel selection units 421-1 through 421-L select, from the pixels belonging to the one horizontal row of pixels on each of the upper and lower sides located a predetermined vertical distance from the one horizontal row of pixels to which the pixel of interest belongs, the pixel closest to the set straight line, and then select, as a set of pixels, a number of pixels on the left side of the selected pixel and a number of pixels on the right side of the selected pixel corresponding to the range of the angle of the set straight line, together with the selected pixel.
- the pixel selection units 421-1 through 421-L each select a number of sets of pixels corresponding to the set range of the angle of the straight line.
- the pixel selection unit 421-1 supplies the selected sets of pixels to the estimation error calculation unit 422-1, and the pixel selection unit 421-2 supplies the selected sets of pixels to the estimation error calculation unit 422-2. Similarly, the pixel selection units 421-3 through 421-L supply the selected sets of pixels to the estimation error calculation units 422-3 through 422-L, respectively.
- the estimation error calculation units 422-1 through 422-L detect the correlation of the pixel values of the pixels at corresponding positions in the plurality of sets supplied from the pixel selection units 421-1 through 421-L, respectively.
- the estimation error calculation units 422-1 through 422-L supply information indicating the detected correlation to the minimum error angle selection unit 413.
- next, the data continuity detection process corresponding to the processing of step S101 will be described.
- the processing in steps S421 and S422 is the same as the processing in steps S401 and S402, and a description thereof will be omitted.
- in step S423, the data selection unit 402 selects, for each angle in a predetermined range based on the activity detected in the processing of step S422, a number of pixels determined for the range of the angle, centered on the pixel of interest, from the pixel column or row containing the pixel of interest, as a set of pixels.
- for example, the data selection unit 402 selects, as a set of pixels, a number of pixels determined by the range of the angle of the straight line to be set, above or to the left of the pixel of interest, and below or to the right of the pixel of interest, together with the pixel of interest, from the pixels belonging to the one vertical column or one horizontal row of pixels to which the pixel of interest belongs.
- in step S424, the data selection unit 402 selects, for each angle in a predetermined range based on the activity detected in the processing of step S422, sets of pixels, each consisting of a number of pixels determined for the range of the angle, from a number of pixel columns or rows determined for that range. For example, the data selection unit 402 sets a straight line passing through the pixel of interest, having an angle within the predetermined range, with the axis indicating the spatial direction X as the reference axis; selects, in a column or row located a horizontal or vertical distance from the pixel of interest determined by the range of the angle of the straight line to be set, the pixel closest to that straight line; and then selects, as a set of pixels, a number of pixels above or to the left of the selected pixel determined by the range of the angle of the straight line to be set, a number of pixels below or to the right of the selected pixel determined by that range, and the selected pixel closest to the line.
- the data selection unit 402 selects sets of pixels for each angle.
- the data selection unit 402 supplies the selected pixel set to the error estimation unit 403.
- in step S425, the error estimation unit 403 calculates the correlation between the set of pixels centered on the pixel of interest and the sets of pixels selected for each angle. For example, the error estimation unit 403 calculates the sum of the absolute values of the differences between the pixel values of the pixels of the set containing the pixel of interest and the pixel values of the pixels at corresponding positions in the other sets, and calculates the correlation by dividing the sum of the absolute values of the differences by the number of pixels belonging to those other sets.
- the continuity angle of the data may be detected based on the mutual correlation of a set of pixels selected for each angle.
- the error estimation unit 403 supplies information indicating the calculated correlation to the continuity direction deriving unit 404.
- the processing in steps S426 and S427 is the same as the processing in steps S406 and S407, and a description thereof will be omitted.
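The final steps amount to choosing, among the candidate angles, the one whose sets of pixels correlate most strongly, i.e. the one with the smallest normalized error. A minimal sketch; the dictionary-based interface and the function name are our own assumptions:

```python
def detect_continuity_angle(error_by_angle):
    """Given a mapping from candidate angle (degrees) to the normalized
    sum of absolute pixel-value differences computed for that angle,
    return the angle with the smallest error, i.e. the candidate whose
    sets of pixels show the strongest correlation."""
    return min(error_by_angle, key=error_by_angle.get)
```

For example, with errors `{45.0: 3.2, 63.4: 0.8, 71.6: 1.5}` the detected angle of data continuity would be 63.4 degrees.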
- as described above, the data continuity detecting unit 101 can detect, with higher accuracy, the angle of data continuity with respect to the reference axis in the image data, corresponding to the missing continuity of the optical signal of the real world 1.
- in particular, the data continuity detecting unit 101 whose configuration is shown in FIG. 81 can evaluate the correlation of a larger number of sets of pixels when the angle of data continuity is around 45 degrees, so that the angle of data continuity can be detected with higher accuracy.
- in the data continuity detecting unit 101 as well, the activity in the spatial direction of the input image may be detected for the pixel of interest, which is the target pixel of a frame of interest; in accordance with the detected activity, for each angle with respect to the pixel of interest and the reference axis in the spatial direction and for each motion vector, sets of pixels, each consisting of a determined number of pixels in one vertical column or one horizontal row, may be extracted in a number determined for the range of the angle from the frame of interest and from the frames before and after the frame of interest; the correlation of the extracted sets of pixels may be detected; and, based on the correlation, the angle of data continuity in the time direction and the spatial direction in the input image may be detected.
- FIG. 94 is a block diagram showing still another configuration of the data continuity detecting unit 101.
- in the data continuity detecting unit 101 shown in FIG. 94, a block consisting of a predetermined number of pixels centered on the pixel of interest, which is the target pixel, and a plurality of blocks, each consisting of a predetermined number of pixels around the pixel of interest, are extracted; the correlation between the block centered on the pixel of interest and the surrounding blocks is detected; and, based on the correlation, the angle of data continuity with respect to the reference axis in the input image is detected.
- the data selection unit 441 sequentially selects the pixel of interest from the pixels of the input image, extracts a block consisting of a predetermined number of pixels centered on the pixel of interest and a plurality of blocks each consisting of a predetermined number of pixels around the pixel of interest, and supplies the extracted blocks to the error estimation unit 442.
- for example, the data selection unit 441 extracts a block of 5 × 5 pixels centered on the pixel of interest and, for each predetermined angle range based on the pixel of interest and the reference axis, two blocks of 5 × 5 pixels from the periphery of the pixel of interest.
- the error estimation unit 442 detects the correlation between the block centered on the pixel of interest and the blocks around the pixel of interest supplied from the data selection unit 441, and supplies correlation information indicating the detected correlation to the continuity direction deriving unit 443.
- for example, the error estimation unit 442 detects the correlation of pixel values between the block of 5 × 5 pixels centered on the pixel of interest and the two blocks of 5 × 5 pixels corresponding to each angle range.
- the continuity direction deriving unit 443 detects, from the position of the blocks around the pixel of interest having the strongest correlation, the angle of data continuity with respect to the reference axis in the input image, corresponding to the continuity of the optical signal of the real world 1, and outputs data continuity information indicating the angle. For example, based on the correlation information supplied from the error estimation unit 442, the continuity direction deriving unit 443 detects, as the angle of data continuity, the angle range for the two blocks of 5 × 5 pixels having the strongest correlation with the block of 5 × 5 pixels centered on the pixel of interest, and outputs data continuity information indicating the detected angle.
- FIG. 95 is a block diagram showing a more detailed configuration of the data continuity detecting unit 101 shown in FIG. 94.
- the data selection unit 441 includes pixel selection units 461-1 through 461-L. The error estimation unit 442 includes estimation error calculation units 462-1 through 462-L. The continuity direction deriving unit 443 includes a minimum error angle selection unit 463.
- hereinafter, an example will be described in which the data selection unit 441 includes pixel selection units 461-1 through 461-8, and the error estimation unit 442 includes estimation error calculation units 462-1 through 462-8.
- each of the pixel selection units 461-1 through 461-L extracts a block consisting of a predetermined number of pixels centered on the pixel of interest, and two blocks consisting of a predetermined number of pixels corresponding to a predetermined angle range based on the pixel of interest and the reference axis.
- FIG. 96 is a diagram for describing an example of the blocks of 5 × 5 pixels extracted by the pixel selection units 461-1 through 461-L.
- the center position in FIG. 96 indicates the position of the pixel of interest.
- a block of 5 ⁇ 5 pixels is an example, and the number of pixels included in the block does not limit the present invention.
- for example, the pixel selection unit 461-1 extracts a block of 5 × 5 pixels centered on the pixel of interest and, corresponding to the range of 0 to 18.4 degrees and 161.6 to 180 degrees, extracts a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the right of the pixel of interest (indicated by A in FIG. 96) and a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the left of the pixel of interest (indicated by A' in FIG. 96).
- the pixel selection unit 461-1 supplies the three extracted blocks of 5 × 5 pixels to the estimation error calculation unit 462-1.
- the pixel selection unit 461-2 extracts a block of 5 × 5 pixels centered on the pixel of interest and, corresponding to the range of 18.4 to 33.7 degrees, extracts a block of 5 × 5 pixels centered on the pixel at the position shifted 10 pixels to the right and 5 pixels upward from the pixel of interest (indicated by B in FIG. 96) and a block of 5 × 5 pixels centered on the pixel at the position shifted 10 pixels to the left and 5 pixels downward (indicated by B' in FIG. 96).
- the pixel selection unit 461-2 supplies the extracted three blocks of 5 ⁇ 5 pixels to the estimation error calculation unit 462-2.
- the pixel selection unit 461-3 extracts a block of 5 × 5 pixels centered on the pixel of interest and, corresponding to the range of 33.7 to 56.3 degrees, extracts a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the right and 5 pixels upward from the pixel of interest (indicated by C in FIG. 96) and a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the left and 5 pixels downward (indicated by C' in FIG. 96).
- the pixel selection unit 461-3 supplies the extracted three blocks of 5 ⁇ 5 pixels to the estimation error calculation unit 462-3.
- the pixel selection unit 461-4 extracts a block of 5 × 5 pixels centered on the pixel of interest and, corresponding to the range of 56.3 to 71.6 degrees, extracts a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the right and 10 pixels upward from the pixel of interest (indicated by D in FIG. 96) and a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the left and 10 pixels downward (indicated by D' in FIG. 96).
- the pixel selection unit 461-4 supplies the three extracted blocks of 5 × 5 pixels to the estimation error calculation unit 462-4.
- the pixel selection unit 461-5 extracts a block of 5 × 5 pixels centered on the pixel of interest and, corresponding to the range of 71.6 to 108.4 degrees, extracts a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels upward from the pixel of interest (indicated by E in FIG. 96) and a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels downward (indicated by E' in FIG. 96).
- the pixel selection unit 461-5 supplies the three extracted blocks of 5 × 5 pixels to the estimation error calculation unit 462-5.
- the pixel selection unit 461-6 extracts a block of 5 × 5 pixels centered on the pixel of interest and, corresponding to the range of 108.4 to 123.7 degrees, extracts a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the left and 10 pixels upward from the pixel of interest (indicated by F in FIG. 96) and a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the right and 10 pixels downward (indicated by F' in FIG. 96).
- the pixel selection unit 461-6 supplies the three extracted blocks of 5 × 5 pixels to the estimation error calculation unit 462-6.
- the pixel selection unit 461-7 extracts a block of 5 × 5 pixels centered on the pixel of interest and, corresponding to the range of 123.7 to 146.3 degrees, extracts a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the left and 5 pixels upward from the pixel of interest (indicated by G in FIG. 96) and a block of 5 × 5 pixels centered on the pixel at the position shifted 5 pixels to the right and 5 pixels downward (indicated by G' in FIG. 96).
- the pixel selection unit 461-7 supplies the three extracted blocks of 5 × 5 pixels to the estimation error calculation unit 462-7.
- the pixel selection unit 461-8 extracts a block of 5 × 5 pixels centered on the pixel of interest and, corresponding to the range of 146.3 to 161.6 degrees, extracts a block of 5 × 5 pixels centered on the pixel at the position shifted 10 pixels to the left and 5 pixels upward from the pixel of interest (indicated by H in FIG. 96) and a block of 5 × 5 pixels centered on the pixel at the position shifted 10 pixels to the right and 5 pixels downward (indicated by H' in FIG. 96).
- the pixel selection unit 461-8 supplies the three extracted blocks of 5 × 5 pixels to the estimation error calculation unit 462-8.
- hereinafter, the block consisting of a predetermined number of pixels centered on the pixel of interest is referred to as the target block.
- a block consisting of a predetermined number of pixels corresponding to a predetermined angle range based on the pixel of interest and the reference axis is referred to as a reference block.
- the pixel selection units 461-1 through 461-8 extract the target block and the reference blocks from, for example, a range of 25 × 25 pixels centered on the pixel of interest.
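The block extraction of FIG. 96 can be sketched as below. The offsets follow the eight angle ranges listed above (with row indices growing downward, so "upward" is a negative row offset); the names and the list-of-lists image format are our own assumptions, not part of the described apparatus:

```python
# (dx, dy) of the first 5x5 reference block for each angle range of FIG. 96;
# the second reference block uses the negated offset (A', B', ..., H').
REFERENCE_OFFSETS = {
    "0-18.4 / 161.6-180": (5, 0),     # A: 5 right
    "18.4-33.7":          (10, -5),   # B: 10 right, 5 up
    "33.7-56.3":          (5, -5),    # C: 5 right, 5 up
    "56.3-71.6":          (5, -10),   # D: 5 right, 10 up
    "71.6-108.4":         (0, -5),    # E: 5 up
    "108.4-123.7":        (-5, -10),  # F: 5 left, 10 up
    "123.7-146.3":        (-5, -5),   # G: 5 left, 5 up
    "146.3-161.6":        (-10, -5),  # H: 10 left, 5 up
}

def extract_blocks(image, cx, cy, dx, dy, half=2):
    """Return (target, reference_a, reference_b): the 5x5 target block
    around the pixel of interest at column cx, row cy, and the two
    opposed 5x5 reference blocks, taken from a list-of-lists image."""
    def block(x, y):
        return [row[x - half:x + half + 1] for row in image[y - half:y + half + 1]]
    return block(cx, cy), block(cx + dx, cy + dy), block(cx - dx, cy - dy)
```

Because the largest offset plus the block half-width is 12 pixels, all three blocks indeed fit inside the 25 × 25 pixel range around the pixel of interest.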
- the estimation error calculation units 462-1 through 462-8 detect the correlation between the target block and the two reference blocks supplied from the pixel selection units 461-1 through 461-8, and supply correlation information indicating the detected correlation to the minimum error angle selection unit 463.
- for example, the estimation error calculation unit 462-1 calculates, for the target block consisting of 5 × 5 pixels centered on the pixel of interest and the 5 × 5 pixel reference block centered on the pixel at the position shifted 5 pixels to the right of the pixel of interest, extracted corresponding to the range of 0 to 18.4 degrees and 161.6 to 180 degrees, the absolute values of the differences between the pixel values of the pixels included in the target block and the pixel values of the pixels included in the reference block.
- in this case, so that the pixel value of the pixel of interest is used, the estimation error calculation unit 462-1 calculates, based on the position where the center pixel of the target block and the center pixel of the reference block overlap, the absolute values of the differences between the pixel values of the pixels at the overlapping positions when the position of the target block is moved relative to the reference block by up to 2 pixels to the left or right and up to 2 pixels up or down. That is, the absolute values of the differences between the pixel values of the pixels at corresponding positions are calculated for the 25 kinds of relative positions of the target block and the reference block.
- FIG. 97 is a diagram illustrating an example in which the target block is moved 2 pixels to the right and 1 pixel upward with respect to the reference block.
- similarly, the estimation error calculation unit 462-1 calculates, for the target block consisting of 5 × 5 pixels centered on the pixel of interest and the 5 × 5 pixel reference block centered on the pixel 5 pixels to the left of the pixel of interest, extracted corresponding to the range of 0 to 18.4 degrees and 161.6 to 180 degrees, the absolute values of the differences between the pixel values of the pixels included in the target block and the pixel values of the pixels included in the reference block.
- the estimation error calculation unit 462-1 then finds the sum of the calculated absolute values of the differences, and supplies the sum of the absolute values of the differences to the minimum error angle selection unit 463 as correlation information indicating the correlation.
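Our reading of the 25-position comparison is sketched below (using NumPy for brevity): the target block is slid up to 2 pixels in each direction over the reference block, absolute differences are taken over the overlapping pixels at each of the 25 relative positions, and everything is summed into a single correlation value, where smaller means stronger correlation. The function name is hypothetical:

```python
import numpy as np

def shifted_sad(target, reference, max_shift=2):
    """Sum, over the (2*max_shift+1)**2 relative positions, of the
    absolute pixel-value differences on the overlapping region between
    the target block and the reference block (both n x n arrays)."""
    n = target.shape[0]
    total = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping region when the target block is shifted by (dx, dy)
            t = target[max(0, dy):min(n, n + dy), max(0, dx):min(n, n + dx)]
            r = reference[max(0, -dy):min(n, n - dy), max(0, -dx):min(n, n - dx)]
            total += float(np.abs(t - r).sum())
    return total
```

Per the text, this value is computed for each of the two reference blocks of an angle range, and the two sums are combined into the correlation information for that range.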
- the estimation error calculation unit 462-2 calculates the absolute values of the differences between the pixel values for the target block consisting of 5 × 5 pixels and the two 5 × 5 pixel reference blocks extracted corresponding to the range of 18.4 to 33.7 degrees, and calculates the sum of the absolute values of the calculated differences.
- the estimation error calculation unit 462-2 supplies the sum of the absolute values of the calculated differences to the minimum error angle selection unit 463 as correlation information indicating the correlation.
- similarly, each of the estimation error calculation units 462-3 through 462-8 calculates the absolute values of the differences between the pixel values of the block of interest composed of 5×5 pixels and the two reference blocks of 5×5 pixels extracted corresponding to its predetermined angle range, and further calculates the sum of the absolute values of the calculated differences.
- Each of the estimation error calculation units 462-3 through 462-8 supplies the sum of the absolute values of the calculated differences to the minimum error angle selection unit 463 as correlation information indicating the correlation.
- the minimum error angle selection unit 463 selects, from the sums of the absolute values of the pixel value differences supplied as correlation information from the estimation error calculation units 462-1 through 462-8, the smallest value, which indicates the strongest correlation, detects the angle of the corresponding two reference blocks with respect to the block of interest as the data continuity angle, and outputs data continuity information indicating the detected angle.
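- The correlation computation described above can be sketched in code. The following is a minimal illustration, not the patent's implementation (the function name and the NumPy array representation are assumptions): it accumulates the absolute pixel-value differences over the 25 relative positions of a 5×5 block of interest and a 5×5 reference block, so that a smaller sum indicates a stronger correlation.

```python
import numpy as np

def block_sad(target_block, reference_block, max_shift=2):
    """Accumulate the absolute pixel-value differences between two 5x5
    blocks over every relative shift of up to max_shift pixels left/right
    and up/down (the 25 overlap positions).  Smaller = stronger correlation."""
    total = 0.0
    n = target_block.shape[0]
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Region where the two blocks overlap when the block of
            # interest is shifted by (dx, dy) relative to the reference.
            ty0, ty1 = max(0, dy), min(n, n + dy)
            tx0, tx1 = max(0, dx), min(n, n + dx)
            ry0, ry1 = max(0, -dy), min(n, n - dy)
            rx0, rx1 = max(0, -dx), min(n, n - dx)
            total += np.abs(target_block[ty0:ty1, tx0:tx1]
                            - reference_block[ry0:ry1, rx0:rx1]).sum()
    return total
```

For each angle range, such a sum would be computed against both of the range's two reference blocks, and the range yielding the smallest total would be selected.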
- γ indicates the ratio of the change in position in the spatial direction X to the change in position in the spatial direction Y.
- γ is also referred to as the shift amount.
- FIG. 98 is a diagram showing the distances in the spatial direction X between the positions of the pixels around the pixel of interest and a straight line having the angle θ, in the case where the distance in the spatial direction X between the position of the pixel of interest and the straight line is 0, that is, where the straight line passes through the pixel of interest.
- the position of the pixel is the position of the center of the pixel.
- the distance between a position and the straight line is indicated by a negative value when the position is on the left side of the straight line, and by a positive value when the position is on the right side of the straight line.
- the distance in the spatial direction X between the position of the pixel adjacent to the right of the pixel of interest, that is, the position where the coordinate x in the spatial direction X increases by 1, and the straight line having the angle θ is 1, and the distance in the spatial direction X between the position of the pixel adjacent to the left of the pixel of interest, that is, the position where the coordinate x in the spatial direction X decreases by 1, and the straight line having the angle θ is -1.
- the distance in the spatial direction X between the position of the pixel adjacent above the pixel of interest, that is, the position where the coordinate y in the spatial direction Y increases by 1, and the straight line having the angle θ is -γ, and the distance in the spatial direction X between the position of the pixel adjacent below the pixel of interest, that is, the position where the coordinate y in the spatial direction Y decreases by 1, and the straight line having the angle θ is γ.
- FIG. 99 is a diagram showing the relationship between the shift amount γ and the angle θ.
- FIG. 100 is a diagram showing the distance in the spatial direction X between the positions of the pixels around the pixel of interest and a straight line that passes through the pixel of interest and has the angle θ, with respect to the shift amount γ.
- the one-dot chain line rising to the right indicates the distance in the spatial direction X between the position of the pixel adjacent below the pixel of interest and the straight line with respect to the shift amount γ, and the one-dot chain line falling to the left indicates the distance in the spatial direction X between the position of the pixel adjacent above the pixel of interest and the straight line with respect to the shift amount γ.
- the two-dot chain line rising to the right indicates the distance in the spatial direction X between the position of the pixel located one pixel to the left and two pixels below the pixel of interest and the straight line with respect to the shift amount γ.
- the two-dot chain line falling to the left indicates the distance in the spatial direction X between the position of the pixel located two pixels above and one pixel to the right of the pixel of interest and the straight line with respect to the shift amount γ.
- the three-dot chain line rising to the right indicates the distance in the spatial direction X between the position of the pixel located one pixel below and one pixel to the left of the pixel of interest and the straight line with respect to the shift amount γ.
- the three-dot chain line falling to the left indicates the distance in the spatial direction X between the position of the pixel located one pixel above and one pixel to the right of the pixel of interest and the straight line with respect to the shift amount γ. From FIG. 100, the pixel whose distance to the straight line is smallest can be determined for each value of the shift amount γ.
- When the shift amount γ is 0 to 1/3, the distance from the pixel adjacent above the pixel of interest and the pixel adjacent below the pixel of interest to the straight line is the smallest. That is, when the angle θ is 71.6 degrees to 90 degrees, the distance from the pixel adjacent above the pixel of interest and the pixel adjacent below the pixel of interest to the straight line is the smallest.
- When the shift amount γ is 1/3 to 2/3, the distance from the pixel located two pixels above and one pixel to the right of the pixel of interest and the pixel located two pixels below and one pixel to the left of the pixel of interest to the straight line is the smallest. That is, when the angle θ is 56.3 degrees to 71.6 degrees, the distance from these pixels to the straight line is the smallest.
- When the shift amount γ is 2/3 to 1, the distance from the pixel located one pixel above and one pixel to the right of the pixel of interest and the pixel located one pixel below and one pixel to the left of the pixel of interest to the straight line is the smallest. That is, when the angle θ is 45 degrees to 56.3 degrees, the distance from these pixels to the straight line is the smallest.
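- As an illustration of the ranges above, the shift amount and the per-pixel distances can be computed directly (a hypothetical sketch; the function names are assumptions, and γ is taken as cot θ so that a vertical line at 90 degrees has a shift amount of 0):

```python
import math

def shift_amount(theta_deg):
    """gamma = cot(theta): the change in X per unit change in Y along a
    straight line at angle theta to the spatial direction X (0 for 90 deg)."""
    return 1.0 / math.tan(math.radians(theta_deg))

def x_distance(dx, dy, gamma):
    """Distance in the spatial direction X from the pixel at offset
    (dx, dy) from the pixel of interest to the line x = gamma * y
    (negative when the pixel lies to the left of the line)."""
    return dx - gamma * dy

def closest_upper_pixel(gamma):
    """Of the candidate pixels above the pixel of interest, return the
    offset (dx, dy) whose |x-distance| to the line is smallest; the
    matching lower pixel follows by point symmetry."""
    candidates = [(0, 1),   # adjacent above
                  (1, 2),   # two above, one to the right
                  (1, 1)]   # one above, one to the right
    return min(candidates, key=lambda p: abs(x_distance(p[0], p[1], gamma)))
```

For example, θ = 60 degrees gives γ ≈ 0.577, which lies between 1/3 and 2/3, so the pixel two above and one to the right is closest, matching the 56.3-to-71.6-degree case above.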
- similarly, the distance in the spatial direction X between each reference block and the straight line can be considered.
- FIG. 101 shows the reference blocks whose distance from a straight line that passes through the pixel of interest and has an angle θ with respect to the axis in the spatial direction X is the smallest.
- A to H and A' to H' in FIG. 101 indicate the reference blocks A to H and A' to H'. That is, of the distances in the spatial direction X between a straight line passing through the pixel of interest and having an angle θ of 0 to 18.4 degrees or 161.6 to 180.0 degrees with respect to the axis in the spatial direction X and each of the reference blocks A to H and A' to H', the distance between the straight line and the reference blocks A and A' is the smallest. Therefore, considered conversely, when the correlation between the block of interest and the reference blocks A and A' is the strongest, certain features appear repeatedly in the direction connecting the block of interest and the reference blocks A and A', so the angle of data continuity can be said to be in the range of 0 to 18.4 degrees or 161.6 to 180.0 degrees.
- similarly, when the correlation between the block of interest and the reference blocks D and D' is the strongest, the angle of data continuity can be said to be in the range of 56.3 degrees to 71.6 degrees.
- of the distances in the spatial direction X between a straight line passing through the pixel of interest and having an angle θ of 71.6 degrees to 108.4 degrees with respect to the axis in the spatial direction X and each of the reference blocks, the distance between the straight line and the reference blocks E and E' is the smallest. Therefore, considered conversely, when the correlation between the block of interest and the reference blocks E and E' is the strongest, certain features appear repeatedly in the direction connecting the block of interest and the reference blocks E and E', so the angle of data continuity can be said to be in the range of 71.6 degrees to 108.4 degrees.
- of the distances in the spatial direction X between a straight line passing through the pixel of interest and having an angle θ of 123.7 degrees to 146.3 degrees with respect to the axis in the spatial direction X and each of the reference blocks A to H and A' to H', the distance between the straight line and the reference blocks G and G' is the smallest. Therefore, considered conversely, when the correlation between the block of interest and the reference blocks G and G' is the strongest, the angle of data continuity can be said to be in the range of 123.7 degrees to 146.3 degrees.
- of the distances in the spatial direction X between a straight line passing through the pixel of interest and having an angle θ of 146.3 degrees to 161.6 degrees with respect to the axis in the spatial direction X and each of the reference blocks A to H and A' to H', the distance between the straight line and the reference blocks H and H' is the smallest. Therefore, considered conversely, when the correlation between the block of interest and the reference blocks H and H' is the strongest, certain features appear repeatedly in the direction connecting the block of interest and the reference blocks H and H', so the angle of data continuity can be said to be in the range of 146.3 degrees to 161.6 degrees.
- the data continuity detecting unit 101 can detect the continuity angle of the data based on the correlation between the block of interest and the reference block.
- the data continuity detecting unit 101 shown in FIG. 94 may output a data continuity angle range as data continuity information.
- a representative value indicating the range may be output as data continuity information.
- for example, the median of the angle range of the data continuity can be used as the representative value.
- furthermore, by using the correlations of the reference blocks adjacent to the reference block having the strongest correlation, the data continuity detection unit 101 shown in FIG. 94 can halve the range of the detected data continuity angle, that is, double the resolution of the detected data continuity angle.
- for example, as shown in FIG. 102, the minimum error angle selection unit 463 compares the correlation of the reference blocks D and D' with the block of interest against the correlation of the reference blocks F and F' with the block of interest. If the correlation of the reference blocks D and D' with the block of interest is stronger than the correlation of the reference blocks F and F' with the block of interest, the minimum error angle selection unit 463 sets the data continuity angle in the range of 71.6 degrees to 90 degrees. In this case, the minimum error angle selection unit 463 may set 81 degrees as the representative value of the data continuity angle.
- conversely, if the correlation of the reference blocks F and F' with the block of interest is stronger, the minimum error angle selection unit 463 sets the data continuity angle in the range of 90 degrees to 108.4 degrees, and may set 99 degrees as the representative value of the data continuity angle.
- the minimum error angle selection unit 463 can halve the range of the detected data continuity angle for the other angle ranges as well by the same processing.
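- This range-halving step can be sketched as follows (a hypothetical helper, assuming the correlation is measured by a sum-of-absolute-differences error in which a smaller value means a stronger correlation):

```python
def refine_angle_range(range_low, range_high, err_lower_neighbor, err_upper_neighbor):
    """Halve a detected continuity-angle range by comparing the errors of
    the two reference-block pairs adjacent to the best-matching one, and
    return the halved range together with its midpoint as the
    representative angle."""
    mid = (range_low + range_high) / 2.0
    if err_lower_neighbor < err_upper_neighbor:  # lower-angle neighbor correlates better
        low, high = range_low, mid
    else:
        low, high = mid, range_high
    return low, high, (low + high) / 2.0
```

With the detected range 71.6 to 108.4 degrees, a stronger D and D' correlation yields the range 71.6 to 90 degrees with a representative angle of about 81 degrees, and a stronger F and F' correlation yields 90 to 108.4 degrees with about 99 degrees, as in the text.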
- the method described with reference to FIG. 102 is also referred to as a simple 16-direction detection method.
- in this manner, the data continuity detecting unit 101 shown in FIG. 94 can detect the angle of data continuity within a narrower range through simple processing.
- next, the processing of detecting data continuity in step S101 will be described.
- in step S441, the data selection unit 441 selects a pixel of interest from the input image. For example, the data selection unit 441 selects the pixel of interest from the input image in raster scan order.
- in step S442, the data selection unit 441 selects a block of interest consisting of a predetermined number of pixels centered on the pixel of interest. For example, the data selection unit 441 selects a block of interest composed of 5×5 pixels centered on the pixel of interest.
- in step S443, the data selection unit 441 selects reference blocks consisting of a predetermined number of pixels at predetermined positions around the pixel of interest. For example, for each predetermined angle range based on the pixel of interest and the reference axis, the data selection unit 441 selects reference blocks composed of 5×5 pixels centered on pixels at predetermined positions determined according to the size of the block of interest.
- the data selection unit 441 supplies the block of interest and the reference blocks to the error estimation unit 442.
- in step S444, the error estimating unit 442 calculates, for each predetermined angle range based on the pixel of interest and the reference axis, the correlation between the block of interest and the reference blocks corresponding to that angle range.
- the error estimating unit 442 supplies correlation information indicating the calculated correlation to the stationary direction deriving unit 443.
- in step S445, the stationary direction deriving unit 443 detects, from the position of the reference block having the strongest correlation with the block of interest, the angle of data continuity in the input image with respect to the reference axis, corresponding to the lost continuity of the image that is the optical signal of the real world 1.
- the stationary direction deriving unit 443 outputs data continuity information indicating the continuity angle of the detected data.
- in step S446, the data selection unit 441 determines whether the processing of all pixels has been completed. If it is determined that the processing of all pixels has not been completed, the procedure returns to step S441, a pixel of interest is selected from the pixels that have not yet been selected as the pixel of interest, and the above processing is repeated.
- if it is determined in step S446 that the processing of all pixels has been completed, the processing ends.
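- The selection performed in steps S441 through S445 can be sketched as follows (an illustrative simplification rather than the patent's implementation: each angle range is reduced to a single reference-block center offset, and the names and offsets are assumptions). The direction from the block of interest to the best-correlating reference block indicates the continuity direction.

```python
import numpy as np

def extract_block(image, cx, cy, half=2):
    """Return the 5x5 block centered at (cx, cy), or None at the border."""
    h, w = image.shape
    if cx - half < 0 or cy - half < 0 or cx + half >= w or cy + half >= h:
        return None
    return image[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)

def detect_angle_range(image, cx, cy, offsets):
    """For the pixel of interest at (cx, cy), return the key of `offsets`
    (an angle range) whose reference block, centered at the given (dx, dy)
    offset, has the smallest sum of absolute differences with the block
    of interest, i.e. the strongest correlation."""
    target = extract_block(image, cx, cy)
    best_key, best_err = None, None
    for key, (dx, dy) in offsets.items():
        ref = extract_block(image, cx + dx, cy + dy)
        if target is None or ref is None:
            continue
        err = float(np.abs(target - ref).sum())
        if best_err is None or err < best_err:
            best_key, best_err = key, err
    return best_key
```

For an image containing a vertical line, the vertically offset reference block matches the block of interest exactly, so a near-vertical angle range is selected; raster-scanning (cx, cy) over all pixels corresponds to the loop of steps S441 through S446.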
- in this manner, the data continuity detection unit 101 shown in FIG. 94 can detect, through simpler processing, the angle of data continuity with respect to the reference axis in the image data, corresponding to the lost continuity of the optical signal of the real world 1.
- the data continuity detection unit 101 shown in FIG. 94 can detect the continuity angle of data using the pixel values of pixels in a relatively narrow range in the input image. Therefore, even if the input image includes noise or the like, the continuity angle of the data can be detected more accurately.
- note that the data continuity detection unit 101 shown in FIG. 94 may detect the correlation between a block consisting of a predetermined number of pixels of the frame of interest, centered on the pixel of interest, and blocks consisting of a predetermined number of pixels around the pixel of interest, spatially or temporally, and detect the angle of data continuity in the time direction and the spatial direction in the input image based on the detected correlation.
- for example, the data selecting unit 441 sequentially selects a pixel of interest from frame #n, which is the frame of interest, and extracts, from frame #n, a block consisting of a predetermined number of pixels centered on the pixel of interest and a plurality of blocks consisting of a predetermined number of pixels around the pixel of interest.
- the data selection unit 441 also extracts, from each of frame #n-1 and frame #n+1, a block consisting of a predetermined number of pixels centered on the pixel at the position corresponding to the position of the pixel of interest, and a plurality of blocks consisting of a predetermined number of pixels around the pixel at the position corresponding to the position of the pixel of interest.
- the data selector 441 supplies the extracted block to the error estimator 442.
- the error estimator 442 detects the correlation between the block centered on the pixel of interest supplied from the data selector 441 and the blocks around it spatially or temporally, and supplies correlation information indicating the detected correlation to the stationary direction deriving unit 443.
- based on the correlation information supplied from the error estimator 442, the stationary direction deriving unit 443 detects the angle of data continuity in the time direction and the spatial direction in the input image, corresponding to the lost continuity of the optical signal of the real world 1, and outputs data continuity information indicating the angle.
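- The frame-based variant can be sketched in the same style (a hypothetical illustration; the names, the offset convention, and the assumption that frames[1] is the frame of interest #n are not from the patent):

```python
import numpy as np

def spatiotemporal_best_offset(frames, cx, cy, offsets, half=2):
    """Compare the 5x5 block centered on the pixel of interest in the
    frame of interest with blocks at (frame_index, dx, dy) offsets in
    frames #n-1, #n and #n+1, and return the offset whose block has the
    smallest sum of absolute differences (strongest correlation)."""
    def block(f, x, y):
        return frames[f][y - half:y + half + 1, x - half:x + half + 1].astype(float)
    target = block(1, cx, cy)  # frames[1] is the frame of interest #n
    best, best_err = None, None
    for f, dx, dy in offsets:
        err = float(np.abs(target - block(f, cx + dx, cy + dy)).sum())
        if best_err is None or err < best_err:
            best, best_err = (f, dx, dy), err
    return best
```

For a bright pixel moving one pixel to the right per frame, the best offset points one pixel to the left in the previous frame, indicating the direction of continuity in time and space.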
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Claims
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04710975A EP1598775A4 (en) | 2003-02-28 | 2004-02-13 | IMAGE PROCESSING DEVICE, METHOD AND PROGRAM |
US10/546,724 US7599573B2 (en) | 2003-02-28 | 2004-02-13 | Image processing device, method, and program |
US11/626,662 US7672534B2 (en) | 2003-02-28 | 2007-01-24 | Image processing device, method, and program |
US11/627,243 US7602992B2 (en) | 2003-02-28 | 2007-01-25 | Image processing device, method, and program |
US11/627,155 US7668395B2 (en) | 2003-02-28 | 2007-01-25 | Image processing device, method, and program |
US11/627,230 US7567727B2 (en) | 2003-02-28 | 2007-01-25 | Image processing device, method, and program |
US11/627,195 US7596268B2 (en) | 2003-02-28 | 2007-01-25 | Image processing device, method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003052290A JP4144378B2 (ja) | 2003-02-28 | 2003-02-28 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP2003-052290 | 2003-02-28 |
Related Child Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10546724 A-371-Of-International | 2004-02-13 | ||
US11/626,662 Continuation US7672534B2 (en) | 2003-02-28 | 2007-01-24 | Image processing device, method, and program |
US11/627,155 Continuation US7668395B2 (en) | 2003-02-28 | 2007-01-25 | Image processing device, method, and program |
US11/627,195 Continuation US7596268B2 (en) | 2003-02-28 | 2007-01-25 | Image processing device, method, and program |
US11/627,243 Continuation US7602992B2 (en) | 2003-02-28 | 2007-01-25 | Image processing device, method, and program |
US11/627,230 Continuation US7567727B2 (en) | 2003-02-28 | 2007-01-25 | Image processing device, method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004077353A1 true WO2004077353A1 (ja) | 2004-09-10 |
Family
ID=32923397
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/001584 WO2004077353A1 (ja) | 2003-02-28 | 2004-02-13 | 画像処理装置および方法、並びにプログラム |
Country Status (6)
Country | Link |
---|---|
US (6) | US7599573B2 (ja) |
EP (1) | EP1598775A4 (ja) |
JP (1) | JP4144378B2 (ja) |
KR (1) | KR101002999B1 (ja) |
CN (1) | CN100350429C (ja) |
WO (1) | WO2004077353A1 (ja) |
Families Citing this family (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4214459B2 (ja) * | 2003-02-13 | 2009-01-28 | ソニー株式会社 | 信号処理装置および方法、記録媒体、並びにプログラム |
JP4144374B2 (ja) * | 2003-02-25 | 2008-09-03 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP4144377B2 (ja) * | 2003-02-28 | 2008-09-03 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP2006185032A (ja) * | 2004-12-27 | 2006-07-13 | Kyocera Mita Corp | 画像処理装置 |
JP4523926B2 (ja) * | 2006-04-05 | 2010-08-11 | 富士通株式会社 | 画像処理装置、画像処理プログラムおよび画像処理方法 |
US7414795B2 (en) * | 2006-05-15 | 2008-08-19 | Eastman Kodak Company | Method for driving display with reduced aging |
US7777708B2 (en) * | 2006-09-21 | 2010-08-17 | Research In Motion Limited | Cross-talk correction for a liquid crystal display |
US20080170767A1 (en) * | 2007-01-12 | 2008-07-17 | Yfantis Spyros A | Method and system for gleason scale pattern recognition |
JP4861854B2 (ja) * | 2007-02-15 | 2012-01-25 | 株式会社バンダイナムコゲームス | 指示位置演算システム、指示体及びゲームシステム |
US20090153579A1 (en) * | 2007-12-13 | 2009-06-18 | Hirotoshi Ichikawa | Speckle reduction method |
JP5288214B2 (ja) * | 2007-12-18 | 2013-09-11 | ソニー株式会社 | データ処理装置、データ処理方法、及びプログラム |
JP4882999B2 (ja) * | 2007-12-21 | 2012-02-22 | ソニー株式会社 | 画像処理装置、画像処理方法、プログラム、および学習装置 |
US9478685B2 (en) | 2014-06-23 | 2016-10-25 | Zena Technologies, Inc. | Vertical pillar structured infrared detector and fabrication method for the same |
US8299472B2 (en) | 2009-12-08 | 2012-10-30 | Young-June Yu | Active pixel sensor with nanowire structured photodetectors |
US8274039B2 (en) * | 2008-11-13 | 2012-09-25 | Zena Technologies, Inc. | Vertical waveguides with various functionality on integrated circuits |
US9343490B2 (en) | 2013-08-09 | 2016-05-17 | Zena Technologies, Inc. | Nanowire structured color filter arrays and fabrication method of the same |
US8866065B2 (en) | 2010-12-13 | 2014-10-21 | Zena Technologies, Inc. | Nanowire arrays comprising fluorescent nanowires |
US9515218B2 (en) | 2008-09-04 | 2016-12-06 | Zena Technologies, Inc. | Vertical pillar structured photovoltaic devices with mirrors and optical claddings |
US8735797B2 (en) | 2009-12-08 | 2014-05-27 | Zena Technologies, Inc. | Nanowire photo-detector grown on a back-side illuminated image sensor |
US8229255B2 (en) | 2008-09-04 | 2012-07-24 | Zena Technologies, Inc. | Optical waveguides in image sensors |
US9000353B2 (en) | 2010-06-22 | 2015-04-07 | President And Fellows Of Harvard College | Light absorption and filtering properties of vertically oriented semiconductor nano wires |
US9299866B2 (en) | 2010-12-30 | 2016-03-29 | Zena Technologies, Inc. | Nanowire array based solar energy harvesting device |
US9406709B2 (en) | 2010-06-22 | 2016-08-02 | President And Fellows Of Harvard College | Methods for fabricating and using nanowires |
US8748799B2 (en) | 2010-12-14 | 2014-06-10 | Zena Technologies, Inc. | Full color single pixel including doublet or quadruplet si nanowires for image sensors |
US8386547B2 (en) * | 2008-10-31 | 2013-02-26 | Intel Corporation | Instruction and logic for performing range detection |
TWI405145B (zh) * | 2008-11-20 | 2013-08-11 | Ind Tech Res Inst | 以圖素之區域特徵為基礎的影像分割標記方法與系統,及其電腦可記錄媒體 |
JP2010193420A (ja) * | 2009-01-20 | 2010-09-02 | Canon Inc | 装置、方法、プログラムおよび記憶媒体 |
US8520956B2 (en) * | 2009-06-09 | 2013-08-27 | Colorado State University Research Foundation | Optimized correlation filters for signal processing |
GB2470942B (en) * | 2009-06-11 | 2014-07-16 | Snell Ltd | Detection of non-uniform spatial scaling of an image |
US8823797B2 (en) * | 2010-06-03 | 2014-09-02 | Microsoft Corporation | Simulated video with extra viewpoints and enhanced resolution for traffic cameras |
TWI481811B (zh) * | 2011-01-24 | 2015-04-21 | Hon Hai Prec Ind Co Ltd | 機台狀態偵測系統及方法 |
JP5836628B2 (ja) * | 2011-04-19 | 2015-12-24 | キヤノン株式会社 | 制御系の評価装置および評価方法、並びに、プログラム |
JP2012253667A (ja) * | 2011-06-06 | 2012-12-20 | Sony Corp | 画像処理装置、画像処理方法、及びプログラム |
JP5988143B2 (ja) * | 2011-06-24 | 2016-09-07 | 国立大学法人信州大学 | 移動体の動作制御装置及びこれを用いたスロッシング制御装置 |
KR20130010255A (ko) * | 2011-07-18 | 2013-01-28 | 삼성전자주식회사 | 엑스선 장치 및 화소맵 업데이트 방법 |
JP5558431B2 (ja) * | 2011-08-15 | 2014-07-23 | 株式会社東芝 | 画像処理装置、方法及びプログラム |
JP5412692B2 (ja) * | 2011-10-04 | 2014-02-12 | 株式会社モルフォ | 画像処理装置、画像処理方法、画像処理プログラム及び記録媒体 |
KR101909544B1 (ko) * | 2012-01-19 | 2018-10-18 | 삼성전자주식회사 | 평면 검출 장치 및 방법 |
JP5648647B2 (ja) * | 2012-03-21 | 2015-01-07 | カシオ計算機株式会社 | 画像処理装置、画像処理方法及びプログラム |
RU2528082C2 (ru) * | 2012-07-23 | 2014-09-10 | Общество с ограниченной ответственностью "Фирма Фото-Тревел" | Способ автоматического ретуширования цифровых фотографий |
US8903163B2 (en) * | 2012-08-09 | 2014-12-02 | Trimble Navigation Limited | Using gravity measurements within a photogrammetric adjustment |
US9709990B2 (en) * | 2012-12-21 | 2017-07-18 | Toyota Jidosha Kabushiki Kaisha | Autonomous navigation through obstacles |
JP6194903B2 (ja) * | 2015-01-23 | 2017-09-13 | コニカミノルタ株式会社 | 画像処理装置及び画像処理方法 |
KR102389196B1 (ko) * | 2015-10-05 | 2022-04-22 | 엘지디스플레이 주식회사 | 표시장치와 그 영상 렌더링 방법 |
EP3450910B1 (en) * | 2016-04-27 | 2023-11-22 | FUJIFILM Corporation | Index generation method, measurement method, and index generation device |
CN109949332B (zh) * | 2017-12-20 | 2021-09-17 | 北京京东尚科信息技术有限公司 | 用于处理图像的方法和装置 |
KR102444680B1 (ko) * | 2018-02-18 | 2022-09-19 | 에이에스엠엘 네델란즈 비.브이. | 이진화 방법 및 프리폼 마스크 최적화 흐름 |
CN109696702B (zh) * | 2019-01-22 | 2022-08-26 | 山东省科学院海洋仪器仪表研究所 | 一种海水放射性核素k40检测的重叠峰判断方法 |
JP7414455B2 (ja) * | 2019-10-10 | 2024-01-16 | キヤノン株式会社 | 焦点検出装置及び方法、及び撮像装置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0951427A (ja) * | 1995-08-09 | 1997-02-18 | Fuji Photo Film Co Ltd | 画像データ補間演算方法および装置 |
JP2000201283A (ja) * | 1999-01-07 | 2000-07-18 | Sony Corp | 画像処理装置および方法、並びに提供媒体 |
JP2001084368A (ja) * | 1999-09-16 | 2001-03-30 | Sony Corp | データ処理装置およびデータ処理方法、並びに媒体 |
EP1096424A2 (en) | 1995-05-23 | 2001-05-02 | Hewlett-Packard Company, A Delaware Corporation | Area based interpolation for image scaling |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4648120A (en) * | 1982-07-02 | 1987-03-03 | Conoco Inc. | Edge and line detection in multidimensional noisy, imagery data |
JPH03258164A (ja) * | 1990-03-08 | 1991-11-18 | Yamatoya & Co Ltd | 画像形成装置 |
US5617489A (en) * | 1993-08-04 | 1997-04-01 | Richard S. Adachi | Optical adaptive thresholder for converting analog signals to binary signals |
JPH07200819A (ja) | 1993-12-29 | 1995-08-04 | Toshiba Corp | 画像処理装置 |
JP3125124B2 (ja) * | 1994-06-06 | 2001-01-15 | 松下電器産業株式会社 | 欠陥画素傷補正回路 |
US5627953A (en) * | 1994-08-05 | 1997-05-06 | Yen; Jonathan | Binary image scaling by piecewise polynomial interpolation |
TW361046B (en) * | 1996-10-31 | 1999-06-11 | Matsushita Electric Ind Co Ltd | Dynamic picture image decoding apparatus and method of decoding dynamic picture image |
US6188804B1 (en) * | 1998-05-18 | 2001-02-13 | Eastman Kodak Company | Reconstructing missing pixel information to provide a full output image |
US6678405B1 (en) * | 1999-06-08 | 2004-01-13 | Sony Corporation | Data processing apparatus, data processing method, learning apparatus, learning method, and medium |
JP2001318745A (ja) * | 2000-05-11 | 2001-11-16 | Sony Corp | データ処理装置およびデータ処理方法、並びに記録媒体 |
JP2002185704A (ja) * | 2000-12-15 | 2002-06-28 | Canon Inc | 画像読取装置及び方法 |
JP4143916B2 (ja) * | 2003-02-25 | 2008-09-03 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP4144374B2 (ja) * | 2003-02-25 | 2008-09-03 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
JP4265237B2 (ja) * | 2003-02-27 | 2009-05-20 | ソニー株式会社 | 画像処理装置および方法、学習装置および方法、記録媒体、並びにプログラム |
JP4144377B2 (ja) * | 2003-02-28 | 2008-09-03 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
-
2003
- 2003-02-28 JP JP2003052290A patent/JP4144378B2/ja not_active Expired - Fee Related
-
2004
- 2004-02-13 KR KR1020057015846A patent/KR101002999B1/ko not_active IP Right Cessation
- 2004-02-13 US US10/546,724 patent/US7599573B2/en not_active Expired - Fee Related
- 2004-02-13 EP EP04710975A patent/EP1598775A4/en not_active Withdrawn
- 2004-02-13 CN CNB2004800052157A patent/CN100350429C/zh not_active Expired - Fee Related
- 2004-02-13 WO PCT/JP2004/001584 patent/WO2004077353A1/ja active Application Filing
-
2007
- 2007-01-24 US US11/626,662 patent/US7672534B2/en not_active Expired - Fee Related
- 2007-01-25 US US11/627,243 patent/US7602992B2/en not_active Expired - Fee Related
- 2007-01-25 US US11/627,195 patent/US7596268B2/en not_active Expired - Fee Related
- 2007-01-25 US US11/627,155 patent/US7668395B2/en not_active Expired - Fee Related
- 2007-01-25 US US11/627,230 patent/US7567727B2/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1096424A2 (en) | 1995-05-23 | 2001-05-02 | Hewlett-Packard Company, A Delaware Corporation | Area based interpolation for image scaling |
JPH0951427A (ja) * | 1995-08-09 | 1997-02-18 | Fuji Photo Film Co Ltd | 画像データ補間演算方法および装置 |
JP2000201283A (ja) * | 1999-01-07 | 2000-07-18 | Sony Corp | 画像処理装置および方法、並びに提供媒体 |
JP2001084368A (ja) * | 1999-09-16 | 2001-03-30 | Sony Corp | データ処理装置およびデータ処理方法、並びに媒体 |
Non-Patent Citations (4)
Title |
---|
HSIEH S HOU: "Cubic splines for image interpolation and digital filtering", IEEE TRANSACTION ON ACOUSTICS |
KLASSEN R V: "Using B-Splines for Re-Sizing Images", COMPUTER SCIENCE DEPARTMENT TECHNICAL REPORT |
See also references of EP1598775A4 |
ULICHNEY R A: "Scaling binary images with the telescoping template", IEEE |
Also Published As
Publication number | Publication date |
---|---|
US20070121138A1 (en) | 2007-05-31 |
EP1598775A4 (en) | 2011-12-28 |
CN100350429C (zh) | 2007-11-21 |
US7668395B2 (en) | 2010-02-23 |
US20070116378A1 (en) | 2007-05-24 |
US7602992B2 (en) | 2009-10-13 |
JP2004264925A (ja) | 2004-09-24 |
US7596268B2 (en) | 2009-09-29 |
US7599573B2 (en) | 2009-10-06 |
JP4144378B2 (ja) | 2008-09-03 |
EP1598775A1 (en) | 2005-11-23 |
US7567727B2 (en) | 2009-07-28 |
US20070196029A1 (en) | 2007-08-23 |
US20070189634A1 (en) | 2007-08-16 |
US20060147128A1 (en) | 2006-07-06 |
KR101002999B1 (ko) | 2010-12-21 |
US20070116377A1 (en) | 2007-05-24 |
KR20050103507A (ko) | 2005-10-31 |
US7672534B2 (en) | 2010-03-02 |
CN1754187A (zh) | 2006-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004077353A1 (ja) | 画像処理装置および方法、並びにプログラム | |
WO2004072898A1 (ja) | 信号処理装置および方法、並びにプログラム | |
WO2004077351A1 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP2004264918A (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
WO2004077354A1 (ja) | 画像処理装置および方法、並びにプログラム | |
JP4214460B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4214462B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4214461B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161729B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4264632B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161734B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4264631B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4182776B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161727B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161731B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161735B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161733B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161732B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161730B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4175131B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4155046B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4178983B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161728B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP4161254B2 (ja) | 画像処理装置および方法、記録媒体、並びにプログラム | |
JP2004246590A (ja) | 画像処理装置および方法、記録媒体、並びにプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
ENP | Entry into the national phase |
Ref document number: 2006147128 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10546724 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020057015846 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004710975 Country of ref document: EP Ref document number: 20048052157 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057015846 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2004710975 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10546724 Country of ref document: US |