WO2005010818A1 - Image Processing Apparatus, Image Processing Method, and Distortion Correction Method - Google Patents
- Publication number
- WO2005010818A1 (PCT/JP2004/011010)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- distortion correction
- data
- image processing
- coordinates
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
Definitions
- Image processing apparatus, image processing method, and distortion correction method
- the present invention relates mainly to an image processing device and an image processing method used in an electronic imaging device such as a digital camera, and more particularly to an image processing device, an image processing method, and a distortion correction method that realize a distortion correction function without increasing circuit scale or data transfer amount,
- and that can calculate the spatial position of a small region when distortion correction processing is performed in units of small regions (for example, block lines).
- distortion occurs in the optical system of any camera, whether a digital camera or a silver halide (film) camera.
- most cameras currently on the market are capable of optical zoom, in which case the state of distortion changes from the wide-angle end to the telephoto end: barrel distortion is typical at the wide-angle end and pincushion distortion at the telephoto end.
- the distortion is observed as barrel or pincushion distortion when, for example, a lattice-shaped object is photographed.
- FIG. 56 shows a grid-shaped object
- FIG. 57 shows a photographed image with barrel distortion
- FIG. 58 shows a photographed image with pincushion distortion.
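For illustration only (the patent's own correction formula, [Equation 5], is not reproduced in this excerpt), a common third-order radial model sketches how barrel and pincushion distortion arise; the function name `distort` and coefficient `k` are assumptions, not the patent's notation:

```python
def distort(x, y, k, cx=0.0, cy=0.0):
    """Map an ideal (undistorted) point to its distorted position.

    k < 0 pulls points toward the center (barrel distortion);
    k > 0 pushes points outward (pincushion distortion).
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy          # squared distance from the optical center
    scale = 1.0 + k * r2            # third-order radial term
    return cx + dx * scale, cy + dy * scale
```

With such a model, a straight grid line photographed off-center bows toward the image center for k < 0 and away from it for k > 0, matching FIGS. 57 and 58.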
- data compressed by a compression method such as JPEG is recorded on a recording medium such as a memory card.
- Figure 59 shows the concept of the image processing procedure performed by a typical digital camera. The image signal captured by the CCD is pre-processed (pixel defect processing, A/D conversion, etc.), and the resulting image data is temporarily stored in a frame memory such as an SDRAM. Next, various kinds of image processing are performed on the image data read from the frame memory, after which the image is compressed by JPEG processing and recorded on a memory card or other recording medium.
- Figure 60 is a block diagram of a conventional digital camera image processing device.
- the conventional image processing apparatus connects a pre-processing circuit 102, a plurality of image processing circuits 106-1 to 106-n, a JPEG processing unit 107, a frame memory 105, and a memory card 108 as a recording medium to a bus 103, together with a CPU 104.
- under the control of the CPU 104, the imaging signal from the CCD 101 is subjected to pixel defect processing, A/D conversion, and the like by the pre-processing circuit 102, and is then temporarily stored in the frame memory 105 through the bus 103.
- the image data is read out from the frame memory 105, input to the image processing circuit 106-1 through the bus 103, subjected to predetermined image processing, and written back into the frame memory 105 through the bus 103.
- data is sequentially exchanged in the same way between the frame memory 105 and the image processing circuits 106-2 to 106-n via the bus 103, and finally the JPEG processing unit 107 performs compression processing.
- the compressed image data is temporarily stored in the frame memory 105, and the processed data read from the frame memory 105 is recorded on the memory card 108 or the like.
- image processing is performed in units of small areas (block lines).
- FIGS. 61 and 62 illustrate the operation of correcting a captured image in which barrel distortion and pincushion distortion have occurred according to the above [Equation 5].
- Fig. 61 shows the case where an original image (left in the figure) with barrel distortion, indicated by the dotted line, is corrected by [Equation 5]; part of the corrected image (right in the figure) falls outside the image output range and is wasted.
- Fig. 62 shows the case where an original image (left in the figure) with pincushion distortion, indicated by the dotted line, is corrected by [Equation 5]; the corrected image (right in the figure) becomes smaller than the image output range, so part of the image data is missing.
- Japanese Patent Application Laid-Open No. 9-098340, a prior application, discloses performing electronic zoom after distortion correction so that the amount by which the image was reduced is restored, thereby minimizing the image information lost by distortion correction.
- Japanese Unexamined Patent Publication No. Hei 6-181530 describes that, when detecting means detects that the imaging position of the imaging zoom lens during imaging is within a range where distortion is large, the geometric distortion of the image caused by the zoom lens is corrected by reading out the image data of the solid-state image sensor in accordance with the geometric deformation.
- Japanese Patent Application Laid-Open No. H10-224648 describes a solid-state imaging device that receives light passing through an optical system.
- the data captured by the imaging device is stored in a random-access video memory, and a random read timing generation circuit, which holds correction data for the aberration caused by the optical system, reads out the signal generated by the solid-state imaging device in a predetermined order to generate a video signal, thereby correcting the distortion of the optical system.
- Japanese Patent Application Laid-Open No. Hei 9-19963 mentions only the correction of barrel distortion with reference to the point closest to the imaging center, and in principle handles barrel distortion only. For pincushion distortion, the corrected image becomes smaller than the image output range, and part of the image data is insufficient. Furthermore, to correct jinkasa-type distortion as shown in Fig. 22, the distortion correction equation requires a fourth-order or higher term, and it was difficult to analytically determine the correction magnification M that makes maximal use of the original image data.
- Japanese Patent Application Laid-Open No. Hei 6-181530 discloses performing distortion correction processing. However, as shown in FIG. 36, when the subject image, which should be straight along the pixels of the CCD imaging surface (the vertical and horizontal grid indicated by black circles), is curved by the distortion of the optical lens and imaged as indicated by reference numeral 32, a large-capacity buffer memory is required for the line memory in the distortion correction circuit. In addition, the image size that can be processed is limited by the capacity of the buffer memory. Furthermore, there is no description of a method for determining the spatial position of the data stored in the buffer memory, and no description at all of distortion correction processing in small-area (block-line) units.
- a buffer memory is not required for processing by random access, but when a memory such as an SDRAM is accessed randomly, the transfer generally takes a long time. Therefore, a first object of the present invention is to provide an image processing apparatus and an image processing method that perform distortion correction processing while making effective use of the captured original image, output the corrected image to the image output range without waste, and can cope with pincushion distortion, barrel distortion, and jinkasa-type distortion.
- a second object of the present invention is to provide an image processing apparatus and a distortion correction method capable of calculating the spatial position of a small area (for example, a block line) and thereby realizing distortion correction processing in small-area units.
- a third object of the present invention is to provide an image processing apparatus and a distortion correction method capable of performing accurate distortion correction both on an image obtained by thinning out the captured image and on an image obtained by cutting out an arbitrary area, as in digital zoom.
- a fourth object of the present invention is to provide an image processing apparatus and an image processing method capable of realizing distortion correction processing without greatly increasing the bus transfer amount or the memory capacity.
- An image processing device according to the present invention is an image processing device having a distortion correction unit, further comprising a distortion correction range calculation unit that calculates the input image range in which the distortion correction unit performs distortion correction processing.
- the distortion correction range calculation unit calculates the input image range so that it covers the area corresponding to part or the whole of the output range of the output image (that is, the image after distortion correction).
- the corrected image obtained by the distortion correction can therefore be output to the intended output range without excess or deficiency.
- the corrected image can thus be prevented from protruding from, or falling short of, the output range.
- the distortion correction range calculation unit includes a coordinate generation unit configured to generate interpolation coordinates, and a coordinate conversion unit configured to output the coordinates obtained by applying a predetermined distortion correction formula to the generated interpolation coordinates.
- that is, the positions (interpolation coordinates) of the corrected image (output image) to be output are generated in advance in order to obtain the corresponding positions in the captured image (input image) before distortion correction.
- the distortion correction formula is applied to the interpolation coordinates to generate (convert) and output the image positions (coordinates) before distortion correction, and the input image range to be subjected to the distortion correction processing is calculated from the converted pre-correction coordinate positions.
- the distortion correction range calculation unit calculates the input image range from at least one of the maximum and minimum coordinate values of the pixels corresponding to each of the four sides of the output image range, and the coordinates corresponding to the four vertices of that range, with respect to the coordinates generated by the coordinate conversion.
- the four vertices are the vertices of the sides of the output image after distortion correction.
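The range calculation described above can be sketched as follows, assuming `to_source` stands in for the patent's distortion correction formula (the names and the sampling count are illustrative, not from the patent). Points along all four sides are mapped, not only the corners, because distortion curves the edges and the extrema may lie mid-side:

```python
def input_range(x0, y0, x1, y1, to_source, samples=16):
    """Bounding box of the input region needed to produce the output
    rectangle (x0, y0)-(x1, y1), given a mapping `to_source` from
    corrected coordinates back to pre-correction coordinates."""
    pts = []
    for i in range(samples + 1):
        t = i / samples
        xs = x0 + t * (x1 - x0)
        ys = y0 + t * (y1 - y0)
        # sample the top, bottom, left, and right sides of the rectangle
        pts += [to_source(xs, y0), to_source(xs, y1),
                to_source(x0, ys), to_source(x1, ys)]
    us = [p[0] for p in pts]
    vs = [p[1] for p in pts]
    return min(us), min(vs), max(us), max(vs)
```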
- the distortion correction range calculation unit calculates the input image range by sequentially repeating the range calculation for a plurality of input signals to be subjected to distortion correction processing.
- while the correction processing is performed on the plurality of input signals simultaneously, the distortion correction range calculation is performed for each input signal on the block line to be processed next.
- the distortion correction range calculation is performed iteratively, and a correction magnification M is determined so that the image range after distortion correction for the input image range becomes a predetermined range.
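A minimal sketch of one way such an iterative determination of the correction magnification M could proceed, assuming a monotone fit test supplied by the caller (`fits` is a hypothetical helper, not from the patent):

```python
def find_magnification(fits, lo=0.1, hi=10.0, iters=40):
    """Binary-search the largest magnification M for which the corrected
    image still fits the predetermined output range; `fits(M)` is assumed
    to be True for all M up to some threshold and False beyond it."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if fits(mid):
            lo = mid    # still fits: M can be increased
        else:
            hi = mid    # overshoots the range: M must shrink
    return lo
```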
- the distortion correction range calculation unit calculates the next input image range for distortion correction while the distortion correction processing unit is performing distortion correction.
- An image processing method is an image processing method for performing a distortion correction process, wherein when performing the distortion correction process, an input image range in which the distortion correction process is performed is calculated.
- the input image range in which the distortion correction processing is performed is calculated so that it covers the area corresponding to part or the whole of the output range of the image after the distortion correction.
- the corrected image obtained by the distortion correction can thus be output to the image output range without excess or deficiency, so that the corrected image neither protrudes from nor falls short of the image output range.
- An image processing device according to the present invention is an image processing device having distortion correction means for performing distortion correction processing on image data, wherein, when calculating the position corresponding to each pixel according to a predetermined correction formula, the distortion correction means calculates the position in a coordinate system that can describe the spatial position on the imaging surface.
- that is, the calculation is performed with the coordinate system corresponding to the imaging surface (that is, the position in two-dimensional space) as the reference.
- the distortion correction means performs the processing in units of a first small area included in the corrected image; the coordinate position of each pixel of the first small area is converted, according to the correction formula, into a position in the coordinate system corresponding to the imaging surface with respect to a second small area on the imaging surface containing the corresponding image data, and after this position is further converted into coordinates within the second small area, the data of each pixel of the corrected image is generated.
- the interpolation processing can be performed based on the coordinates within the converted small area.
- the image data may be partial image data obtained by capturing only a part of the image.
- that is, the method can be applied not only to all of the captured data but also to a part of it, for example, data cut out from the center of the captured data during digital zoom, or reduced data generated by thinning out the captured data.
- the image data may be data generated by performing at least one of filtering, thinning out (subsampling), and interpolation on the captured data.
- a distortion correction method according to the present invention is a distortion correction method for an image processing apparatus having distortion correction means for performing distortion correction processing on image data.
- in this distortion correction method, for example, when obtaining the data of each pixel of the corrected image by a correction formula for correcting geometric distortion caused by the imaging lens, a first position in the coordinate system corresponding to each pixel of the corrected image is used.
- the first position is converted into a second position in the coordinate system of the image data before distortion correction according to the distortion correction formula; the converted second position is then converted into coordinates within a set area with reference to the coordinate system corresponding to the imaging surface, and interpolation processing is performed based on those coordinates to generate the pixel data of the corrected image.
- the processing is performed for each small area included in the corrected image.
- processing is performed on a small area (for example, an area called a block line) included in the corrected image.
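The per-block procedure described above can be sketched as follows; `to_source`, `src_origin`, and the bilinear kernel are illustrative assumptions (this passage does not fix a particular interpolation method), and the caller is assumed to have buffered a source region with enough margin for the 2x2 neighbourhood:

```python
def correct_block(out_block, to_source, src, src_origin):
    """For each output pixel of a small area (block line), map its
    coordinates back to the pre-correction image, convert to local
    coordinates within the buffered source region, and interpolate."""
    ox, oy, w, h = out_block
    sx0, sy0 = src_origin                   # spatial origin of the buffer
    out = [[0.0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            u, v = to_source(ox + i, oy + j)  # position on the imaging surface
            lu, lv = u - sx0, v - sy0         # coordinates inside the buffer
            iu, iv = int(lu), int(lv)
            fu, fv = lu - iu, lv - iv
            # bilinear interpolation over the 2x2 neighbourhood
            out[j][i] = (src[iv][iu] * (1 - fu) * (1 - fv)
                         + src[iv][iu + 1] * fu * (1 - fv)
                         + src[iv + 1][iu] * (1 - fu) * fv
                         + src[iv + 1][iu + 1] * fu * fv)
    return out
```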
- An image processing apparatus according to the present invention is an image processing apparatus having distortion correction means for performing distortion correction processing on image data, wherein the distortion correction means includes a memory unit that stores a part of the image data and a memory control unit that controls the writing and reading of data to and from the memory unit, and performs an interpolation operation on the image data read from the memory unit.
- a part of the image data can be stored in the memory unit serving as an internal buffer, and the data can be used to perform an interpolation operation for distortion correction processing.
- the memory control unit controls writing so that image data composed of a fixed number of pixels arranged in a line in the column direction (a unit line: UL) is written as a unit.
- read control is performed on the image data stored in the memory unit so that the image after the distortion correction processing is output in UL units.
- with this configuration, the buffer capacity of the memory unit need only be several ULs, or at least one UL, in accordance with the amount of optical distortion, so that distortion correction can be realized without greatly increasing the bus transfer amount or the memory capacity.
- the memory control unit provides areas of a predetermined width on the front and rear sides in the row direction of the coordinate position of the first pixel to be processed in a UL (preULB and postULB, respectively) so that these areas are not overwritten by other processing while the UL is being processed.
- the image processing apparatus further comprises a buffer free-space monitoring circuit for detecting free space in the buffer; when the monitoring circuit detects free space in the buffer, writing of data into the buffer is enabled.
- This configuration enables a pipeline-like operation that allows input during data output.
- the memory unit includes a plurality of memories each capable of performing a read operation and a write operation simultaneously.
- the memory control unit further includes a write address generation circuit that performs data write control on the memory unit, and a read address generation circuit for simultaneously reading out the data necessary for interpolation from the image data stored in the memory unit; the data write control writes data that are to be read simultaneously into different memories.
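One conceivable bank-interleaving scheme consistent with writing simultaneously-read data to different memories is sketched below; the 16-bank 4x4 mapping is an assumption for illustration (it matches the 16 2-port SRAMs mentioned for FIG. 41, but the patent's actual address mapping may differ):

```python
def bank_of(x, y):
    """Assign pixel (x, y) to one of 16 memory banks so that any 4x4
    window touches each bank exactly once."""
    return (x % 4) + 4 * (y % 4)

# any 4x4 interpolation neighbourhood hits all 16 banks exactly once,
# so its 16 pixels can be read out simultaneously
window = {bank_of(x, y) for x in range(7, 11) for y in range(3, 7)}
assert window == set(range(16))
```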
- An image processing method according to the present invention is an image processing method for performing distortion correction processing on image data, wherein, when the distortion correction processing is performed, a part of the image data is stored in a memory unit in which the writing and reading of data are controlled, and an interpolation operation is performed on the image data read from the memory unit.
- a part of the image data is stored in a memory unit as an internal buffer, and an interpolation operation for a distortion correction process can be performed using the data.
- FIG. 1 is a block diagram showing the overall configuration of the image processing apparatus according to the first embodiment of the present invention.
- FIG. 2 is a block diagram illustrating a configuration of the distortion correction processing unit.
- FIGS. 3 to 5 are conceptual diagrams of coordinate conversion in the distortion correction processing unit.
- FIG. 3 is a diagram showing captured image data
- FIG. 4 is a diagram showing a corrected image
- FIG. 5 is a diagram explaining an interpolation process.
- FIG. 6 is a diagram showing the reading order of image data.
- FIG. 7 is a diagram for explaining the data order conversion processing in the first data order conversion unit.
- FIG. 8 is a view for explaining a data order conversion process in the second data order conversion unit.
- Figure 9 is a diagram showing the relationship between block lines and the memory capacity (buffer capacity) required for distortion correction processing.
- FIG. 10 is a diagram showing a method of setting a block line width.
- FIG. 11 is a block diagram illustrating a configuration of the distortion correction range calculation unit.
- FIGS. 12 and 13 are diagrams illustrating the operation of calculating the input image range when performing the distortion correction process in the distortion correction range calculation unit.
- FIG. 12 is a diagram illustrating a range on the corrected image.
- Fig. 13 is a diagram showing the range on the captured image data.
- FIG. 14 is a flowchart for explaining the operation of the distortion correction processing.
- FIG. 15 is a flowchart illustrating a method of calculating the correction magnification M in step S11 of FIG.
- FIG. 16 is a block diagram showing a configuration of the distortion correction range calculation unit.
- FIG. 17 is a timing chart of the data output.
- Fig. 18 is a timing chart of pipelined data output, an improvement over the operation of Fig. 17.
- FIG. 19 is a diagram illustrating a method of calculating an input image range when performing distortion correction processing for three channels.
- FIG. 20 is a flowchart illustrating the operation of the distortion correction processing in the case of three channels.
- FIG. 21 is a block diagram showing the overall configuration of the image processing apparatus according to the second embodiment of the present invention.
- FIG. 22 is a view showing a photographed image in which jinkasa-type distortion has occurred.
- FIGS. 23 and 24 are diagrams showing the relationship between the captured image data before correction and the corrected image after correction
- FIG. 23 is a diagram showing the captured image data before correction
- FIG. 24 is a diagram showing the corrected image after correction.
- FIG. 25 is a flowchart of the distortion correction processing of FIGS. 23 and 24.
- FIGS. 26 and 27 show the relationship between the captured image data before correction and the corrected image after correction in the case of digital zoom.
- FIG. 26 shows the captured image data before correction.
- FIG. 27 is a diagram showing a corrected image after the correction.
- FIGS. 28 and 29 show the relationship between the image data before correction and the corrected image after correction when a certain area is cut out from the image data.
- FIG. 28 is a view showing the captured image data
- FIG. 29 is a view showing the corrected image after correction.
- FIGS. 30 to 32 are diagrams for explaining line thinning in the CCD monitor mode.
- FIG. 30 is a diagram for explaining a state in which data at the time of imaging is thinned out from the CCD by three lines in the vertical direction and taken into the memory.
- FIG. 31 is a view showing the captured image data stored in the memory
- FIG. 32 is a view showing the corrected image, corrected to be asymmetric in the vertical and horizontal directions.
- FIGS. 33 and 34 are diagrams for explaining horizontal thinning of color data in YC422 format data.
- FIG. 33 is a diagram showing luminance data
- FIG. 34 is a diagram showing color data.
- FIG. 35 is a diagram for explaining center shift of an image.
- FIG. 36 is a diagram showing the relationship between CCD pixels and captured images.
- FIG. 37 is a block diagram illustrating a detailed configuration of the distortion correction processing unit in the image processing device according to the third embodiment of the present invention.
- FIG. 38 is an image diagram of the interpolation operation in the interpolation circuit.
- FIG. 39 is a diagram for explaining the internal memory unit in the distortion correction processing unit.
- FIGS. 40 and 41 are diagrams for explaining the supplementary explanation of FIG. 39 and explaining how data is written to the 2-port SRAM.
- FIG. 40 is a diagram showing the write order
- FIG. 41 is a diagram showing where data in that write order is written on the 16 2-port SRAMs.
- FIG. 42 is an explanatory diagram for obtaining the D0 required to calculate the corrected coordinate position.
- FIG. 43 is a diagram showing an example of the error processing.
- FIG. 44 shows another example of the error processing.
- FIG. 45 shows another example of the error processing.
- FIG. 46 is a diagram illustrating the buffer amount required for the distortion correction processing.
- FIG. 47 is a diagram illustrating the buffer amount required for the distortion correction processing.
- FIGS. 48 and 49 are diagrams for explaining the buffer amount required for the distortion correction processing
- FIG. 48 is a diagram explaining the definitions of preULB and postULB, and FIG. 49 is a diagram explaining the buffer release amount when a UL straddles the image center.
- FIG. 50 is a view for explaining a method of calculating the buffer release amount associated with the UL processing.
- FIG. 51 is a diagram for explaining a method of calculating the buffer release amount associated with the UL processing.
- Fig. 52 is a diagram explaining the empty space of the pipeline processing accompanying the processing of Fig. 51.
- FIG. 53 is a view for explaining a method of calculating the buffer release amount associated with the UL processing.
- FIG. 54 is a diagram for explaining the empty space of the pipeline processing accompanying the processing of FIG. 53.
- FIG. 55 is a diagram for explaining a method of calculating the buffer release amount associated with the UL processing.
- FIG. 56 is a diagram showing a grid-like subject.
- FIG. 57 is a diagram showing a captured image in which barrel distortion has occurred.
- FIG. 58 is a view showing a photographed image in which pincushion distortion has occurred.
- FIG. 59 is a diagram showing the concept of the image processing procedure of a general digital camera.
- FIG. 60 is a diagram showing a block configuration in an image processing device of a conventional digital camera.
- FIG. 61 is a diagram for explaining an operation of correcting a captured image having a barrel distortion using [Equation 5].
- FIG. 62 is a diagram for explaining an operation of correcting a photographed image having pincushion distortion by [Equation 5].
- FIG. 1 is a block diagram showing the overall configuration of the image processing apparatus according to the first embodiment of the present invention.
- the image signal from the CCD 1 is subjected to pixel defect processing, A/D conversion, and the like by the pre-processing circuit 2 under the control of the CPU 4, which controls each unit connected to the bus 3.
- the resulting image data is temporarily stored in the frame memory 5 via the bus 3.
- the frame memory 5 is a memory configured by SDRAM or the like, and stores data before image processing and data after image processing.
- the image data read from the frame memory 5 is input to the first data order converter 6 via the bus 3.
- the first data order conversion unit 6 includes a plurality of memories, here two, capable of storing data in block units.
- the first data order conversion unit 6 reads data from the frame memory 5 in the row direction, stores it, then sequentially reads it out in the column direction and outputs it to the image processing circuit 7.
- the image processing circuit 7 performs predetermined image processing on the input data and transfers the processed data to a distortion correction processing unit 8 as a distortion correction unit in the next stage.
- the distortion correction processing unit 8 performs a distortion correction process on the input data and transfers it to the second data order conversion unit 9 in the next stage.
- the second data order conversion unit 9 includes a plurality of memories, here two, capable of storing data in block units.
- the second data order converter 9 writes the data from the distortion correction processor 8 in the column direction, then sequentially reads it out in the row direction and transfers it to the JPEG processor 10.
- the JPEG compression unit 10 performs JPEG compression processing, temporarily stores the processed data in the frame memory 5, and records the processed data read from the frame memory 5 on a memory card 11 or the like.
- the distortion correction processing unit 8 includes an interpolation coordinate generation section 81 that calculates, for each position (X, Y) of the corrected image (called the interpolation position), the corresponding position (X', Y') of the original image before distortion correction, an internal memory unit 82 serving as a buffer memory (hereinafter simply referred to as a buffer) that temporarily stores a part of the image data from the circuit of the block preceding the distortion correction processing unit 8, a memory controller 83 that controls writing to and reading from the internal memory unit 82, and an interpolation operation unit 84 that performs interpolation processing according to the converted pre-correction coordinates (X', Y') and outputs distortion-corrected data.
- the interpolation coordinate generation section 81 includes an interpolation position generation section 811 that generates the interpolation coordinates (X, Y), a distortion correction coordinate conversion unit 812 that outputs the pre-correction coordinates (X', Y') obtained by applying a predetermined distortion correction expression [Equation 1] (described later) to the generated interpolation coordinates (X, Y), and a selector 813 that can selectively output either the interpolation coordinates (X, Y) from the interpolation position generation unit 811 or the transformed coordinates (X', Y') from the distortion correction coordinate transformation unit 812.
- each block operates according to the set values stored for it in the control register 85, which holds the control data. In addition, the status of the processing result can be referenced from the CPU.
- the distortion correction processing unit 8 includes a distortion correction range calculation unit 12 that calculates an input image range in which the distortion correction processing unit 8 performs a distortion correction process.
- the distortion correction range calculation unit 12 comprises a coordinate generation unit 91 that generates interpolation coordinates, a distortion correction coordinate conversion section 92 that outputs the converted pre-correction coordinates obtained by applying a predetermined distortion correction formula to the generated interpolation coordinates, and a correction range detection section 93 that calculates the input image range from the converted pre-correction coordinate positions.
- the first data order converter 6 through the JPEG processor 10 are connected so that they can be pipelined through an information transmission path separate from the bus 3, and they transfer and process the image data in predetermined block units of a two-dimensional pixel array.
- data transfer via the bus 3 is thus limited to three paths: from the frame memory 5 to the first data order converter 6, from the JPEG processor 10 to the frame memory 5, and from the frame memory 5 to the memory card 11.
- because no data is transferred over the bus between the frame memory and each individual image processing circuit, the data traffic, and hence the load on the bus 3, can be greatly reduced.
- the distortion correction processing unit 8 is provided after the image processing circuit 7, but the configuration may be reversed.
- in the image processing circuit section composed of the first-stage image processing circuit 7 and the second-stage distortion correction processing section 8, a small-capacity memory serving as a pipeline register is provided before or inside each of the image processing circuits 7 and 8, and the circuits perform a pipeline processing operation via these small memories.
- these small-capacity memories are provided because each of the image processing circuits 7 and 8 must store the peripheral data required for spatial image processing and must rearrange the image data read in block units.
- FIGS. 3 to 5 show conceptual diagrams of the coordinate conversion in the distortion correction processing unit 8.
- Fig. 3 shows the original image before correction
- Fig. 4 shows the corrected image
- Fig. 5 shows the position in the original image of Fig. 3 corresponding to the coordinate position (X, Y) of the corrected image in Fig. 4.
- the converted coordinate position P, with data coordinates (X', Y'), is a coordinate position that does not always exactly match the positions of the pixels that actually constitute the original data.
- therefore, the data at the coordinate position (X', Y') of the point P is interpolated using the pixel data of the 16 pixels surrounding the point P.
- the interpolation calculation unit 84 performs the processing that interpolates the data at the point P from the pixel values (brightness data) of the 16 points around it.
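The 16-point interpolation described above can be sketched in software. The excerpt does not specify the interpolation weights, so this minimal Python sketch assumes a Catmull-Rom-style bicubic kernel over the 4×4 neighbourhood; `interpolate_16pt` and `cubic_weight` are hypothetical names standing in for the interpolation operation unit 84, not the patent's actual implementation.

```python
import math

def cubic_weight(t, a=-0.5):
    # Catmull-Rom-style cubic kernel (assumed; the patent does not fix the weights)
    t = abs(t)
    if t < 1.0:
        return (a + 2.0) * t**3 - (a + 3.0) * t**2 + 1.0
    if t < 2.0:
        return a * t**3 - 5.0 * a * t**2 + 8.0 * a * t - 4.0 * a
    return 0.0

def interpolate_16pt(img, xp, yp):
    """Interpolate the brightness at a non-integer point (X', Y') from the
    4x4 pixel neighbourhood surrounding it (16-point interpolation)."""
    x0, y0 = math.floor(xp), math.floor(yp)
    value = 0.0
    for j in range(-1, 3):        # 4 rows around the point
        for i in range(-1, 3):    # 4 columns around the point
            value += (img[y0 + j][x0 + i]
                      * cubic_weight(xp - (x0 + i))
                      * cubic_weight(yp - (y0 + j)))
    return value
```

Since the kernel weights sum to one over any 4-tap window, a constant image interpolates to the same constant, which is a quick sanity check for any kernel choice.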
- the generation of the interpolation coordinates by the interpolation position generation unit 811 in FIG. 2 means specifying which pixel position (X, Y) is pointed to on the corrected-image side in FIG. 4.
- the pixel position (X', Y') before the distortion correction corresponding to the pixel position (X, Y) after the distortion correction can then be calculated.
- the pixel position (X', Y') before distortion correction does not always take integer values corresponding to a pixel position on the original image.
- Z in [Equation 1] is the distance from the distortion center (Xd, Yd) to the point of interest (X, Y).
- [Equation 1] calculates the coordinates (X', Y') of the distorted original image corresponding to the point (X, Y) of the corrected image.
- M is a correction magnification for correcting the phenomenon in which, when the image is corrected theoretically using the data of the optical system, the corrected image protrudes beyond, or falls short of, the output range.
- Sx and Sy are sampling ratios for correcting phenomena, such as thinning, in which the vertical and horizontal spatial sampling intervals differ.
- Xoff and Yoff are center-shift values for correcting the phenomenon in which the object position after the distortion correction processing deviates from its position at the time of shooting.
- for the barrel distortion shown in Fig. 61, the image must be shrunk slightly to correct the distortion, so M < 1 is used; for the pincushion distortion shown in Fig. 62, the image must be stretched in the opposite direction, so M > 1 is set.
- [Equation 5] described in the conventional example is formulated with the correction of barrel distortion and pincushion distortion in mind.
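[Equation 1] itself is not reproduced in this excerpt, but the text names its ingredients: the distance Z from the distortion center (Xd, Yd), the correction magnification M, the sampling ratios Sx and Sy, the center-shift values Xoff and Yoff, and (from the pipeline discussion later) correction coefficients A, B, C applied to Z², Z⁴, Z⁶. A common radial-polynomial form consistent with these ingredients is sketched below; the exact form and the placement of each factor are assumptions, not the patent's actual equation.

```python
def distortion_correct_coords(X, Y, Xd=0.0, Yd=0.0, M=1.0,
                              Sx=1.0, Sy=1.0, Xoff=0.0, Yoff=0.0,
                              A=0.0, B=0.0, C=0.0):
    """Assumed radial-polynomial form of [Equation 1]: map a corrected-image
    point (X, Y) back to the pre-correction original-image point (X', Y')."""
    dx = Sx * (X - Xd)        # sampling-ratio scaling about the distortion centre
    dy = Sy * (Y - Yd)
    Z2 = dx * dx + dy * dy    # Z is the distance from (Xd, Yd) to the point
    g = M * (1.0 + A * Z2 + B * Z2**2 + C * Z2**3)   # magnification * radial terms
    Xp = g * dx + Xd + Xoff   # X' with centre shift restored
    Yp = g * dy + Yd + Yoff   # Y'
    return Xp, Yp
```

With all coefficients zero and M = 1 the mapping reduces to the identity, and M alone scales the image about the distortion center, matching the M < 1 / M > 1 behaviour described for barrel and pincushion distortion.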
- FIG. 6 is a diagram for explaining the reading order of the image data from the frame memory according to the present embodiment.
- ordinarily, the image data is swept and written in the line direction, that is, the row direction; when reading, the image data of one line is read in the row direction, and the operation of reading all the image data of the next adjacent line is repeated.
- the image processing apparatus of the present embodiment instead reads the image data, which was written by sweeping in the row direction, in the column direction in units of a certain length.
- this image data is input to the image processing unit 7 column by column, with each successive column input in turn until the right end of the image is reached.
- such a small rectangular area of image data is called a block line (BL).
- FIGS. 7 and 8 are block diagrams showing the configurations of the first and second data order converters.
- the first data order converter 6 has a plurality of memories, here two, capable of storing the image data in blocks; writing and reading for the two memories 6a and 6b are switched alternately by switches on the writing and reading sides. That is, the frame memory 5 is switchably connected to the memories 6a and 6b by a switch on the writing side, and the image processing section 7 is switchably connected to them by a switch on the reading side.
- when the frame memory 5 is connected to one of the memory 6a and the memory 6b, the other is connected to the image processing unit 7. That is, the memories 6a and 6b are switched so that neither is connected to both the frame memory 5 and the image processing unit 7 at the same time, and writing and reading are performed alternately.
- a part of the frame image stored in the frame memory 5 is read in the line direction in block units and stored in one memory, here, for example, the memory 6a.
- meanwhile, the block-unit image data already read from the frame memory 5 and stored in the other memory is sequentially read out in the column direction (vertical direction) and output to the image processing unit 7.
- when these operations are completed, the switch on the writing side and the switch on the reading side are toggled: writing of the next block of image data to the memory 6b is started, and reading of the block image data from the memory 6a to the image processing unit 7 is started.
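The alternating write/read behaviour of the two block memories 6a and 6b can be modelled as a simple ping-pong buffer. The class below is an illustrative sketch with hypothetical names, writing a block as rows and reading it back column-wise as the first data order converter 6 does; the real unit is hardware with switches, not a Python object.

```python
class DataOrderConverter:
    """Two block memories (like 6a and 6b): one is written row-wise while
    the other is read out column-wise, then the roles are swapped."""

    def __init__(self):
        self.banks = [None, None]   # the two block memories
        self.write_bank = 0         # index of the bank currently being written

    def write_block(self, block):
        # block: list of rows, as swept line-by-line from the frame memory
        self.banks[self.write_bank] = block
        self.write_bank ^= 1        # toggle the write/read switches

    def read_block_columnwise(self):
        bank = self.banks[self.write_bank ^ 1]   # the bank just written
        rows, cols = len(bank), len(bank[0])
        # read out in the column (vertical) direction
        return [[bank[r][c] for r in range(rows)] for c in range(cols)]
```

Writing `[[1, 2], [3, 4]]` and reading it back yields `[[1, 3], [2, 4]]`: the row-major block comes out column by column, which is exactly the data-order conversion the block-line pipeline needs.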
- the second data order converter 9 is configured in substantially the same way as the first data order converter 6, and operates almost in the same way.
- the second data order converter 9 likewise has a memory 9a, a memory 9b, a writing-side switch, and a reading-side switch.
- writing from the distortion correction processing unit 8 is performed into one of the memories 9a and 9b in the column direction (vertical direction), while reading is performed from the other in the row direction (horizontal direction) and output to the JPEG processing unit 10.
- Figure 9 shows the relationship between the block line and the memory capacity (buffer capacity) required for distortion correction processing.
- the four distorted solid lines in the dotted-line frame show data that should be straight vertical lines in the output after distortion correction but are distorted in the original data; among the four, the leftmost line, farthest from the image center, is the most distorted. The black circles indicate the positions before distortion correction corresponding to the pixel positions of the output data after distortion correction.
- the horizontal width over which the distorted input data reaches its maximum amount of distortion within the block line, with a left and right margin for the 16-point interpolation, is secured as the buffer amount necessary for the distortion correction processing. In other words, this is the buffer capacity with which the distortion can be corrected, that is, with which a correct straight line can be produced when the distortion is corrected.
- FIG. 10 shows a method of setting the block line width.
- An example is shown in which the block line width is variably set according to the target position of the distortion correction processing.
- the curved dotted lines indicate the distorted image on the input side; the degree of distortion increases with the distance from the center of the image, that is, toward the outside. Therefore, when setting the block line width for the input data on the frame memory, the width is made larger farther from the image center and smaller closer to it.
- since only the necessary width is transferred, the bus occupation time can be reduced.
- the amount of deformation due to distortion is small at the center of the image.
- the set value of the block line width (the vertical sweep width shown in the figure) is changed depending on the position of the distortion correction processing target. Since [Equation 1] contains higher-order terms, the input range necessary for processing a block line cannot be determined analytically.
- the block line width is therefore set based on the processing result of the distortion correction range calculation unit 12, with the CPU performing a predetermined operation on that result.
- FIG. 11 shows the configuration of the distortion correction range calculation unit 12.
- the distortion correction range calculation unit 12 includes a coordinate generation unit 91 that generates interpolation coordinates (X, Y), a distortion correction coordinate conversion section 92 that outputs the transformed coordinates (X', Y') obtained by applying a predetermined distortion correction expression (e.g., [Equation 1]) to the generated interpolation coordinates (X, Y), and a correction range detection unit 93 that calculates, from the transformed pre-correction coordinate positions (X', Y'), the input image range necessary for the distortion correction processing.
- the distortion correction range calculation unit 12 thus comprises the coordinate generation unit 91, the distortion correction coordinate conversion unit 92, and the correction range detection unit 93; the CPU 4 controls its operation through the control register 94, and the range calculation results are obtained through the result storage register 95.
- the distortion correction range calculation unit 12 is added to the distortion correction processing function and serves as a support function that allows the input range of the image data to be calculated in consideration of the distortion deformation.
- the intersection coordinates corresponding to the lattice points of the corrected image lie on the distorted lines; what is needed is to use the distorted area corresponding to the block line, that is, the dotted-line region, as the input image range.
- to do so, the maximum and minimum values are obtained for each of the four sides that define the position range; these values indicate the input image range.
- for the upper side, it is detected that the side falls between YTmax and YTmin.
- similarly, it is detected that the lower side falls between YBmax and YBmin, the left side between XLmax and XLmin, and the right side between XRmax and XRmin, and these values are stored in the result storage register 95.
- the block line processing requires the range (XLmin to XRmax, YTmin to YBmax) plus the pixels required for interpolation.
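The range detection just described amounts to taking the minima and maxima of the transformed boundary coordinates and padding for the interpolation neighbourhood. A minimal sketch follows; `input_range` is a hypothetical helper, and the 2-pixel margin for the 4×4 interpolation window is an assumption (the excerpt only says "plus the pixels required for interpolation").

```python
import math

def input_range(boundary_pts, margin=2):
    """Correction range detection (like unit 93): given the pre-correction
    coordinates (X', Y') of pixels along the four sides of a block line,
    return the input rectangle, padded for 16-point interpolation."""
    xs = [p[0] for p in boundary_pts]
    ys = [p[1] for p in boundary_pts]
    x_min = math.floor(min(xs)) - margin   # XLmin side, with margin
    x_max = math.ceil(max(xs)) + margin    # XRmax side
    y_min = math.floor(min(ys)) - margin   # YTmin side
    y_max = math.ceil(max(ys)) + margin    # YBmax side
    return x_min, x_max, y_min, y_max
```

Only the boundary of the block line needs to be transformed: for a continuous distortion mapping, the extreme transformed positions occur on the region's four sides, which is why the hardware evaluates sides rather than every interior pixel.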
- the converted positions of the vertices are (X'TL, Y'TL), (X'TR, Y'TR), (X'BL, Y'BL), and (X'BR, Y'BR).
- the coordinate generation unit 91 generates the coordinate positions (X, Y) required for the corrected image by [Equation 2] and converts them into (X', Y') by [Equation 1].
- the converted positions of the vertices, (X'TL, Y'TL), (X'TR, Y'TR), (X'BL, Y'BL), and (X'BR, Y'BR), are detected by the correction range detection unit 93 and stored in the result storage register 95.
- FIG. 14 is a flowchart for explaining the operation of the distortion correction processing.
- in step S11, the correction magnification M of the distortion correction [Equation 1] is determined. How to determine the correction magnification M will be described later with reference to the flowchart of FIG. 15.
- in step S12, the input range of the first block line is calculated using the determined correction magnification M, as described with reference to the preceding figures.
- in step S13, the calculated input range and the set values required for the distortion correction processing are set.
- in step S14, the distortion correction processing for the current block line and the input range calculation for the next block line are executed simultaneously.
- in step S15, it is determined whether the distortion correction processing has been completed for the entire output image; step S14 is repeated until the whole image has been processed, and the processing then ends.
- in parallel with the distortion correction of the current block line, the image input range for the next distortion correction is calculated.
- consequently, when the distortion correction for one block line is completed, the image input range for the next distortion correction processing is already known, and the distortion correction processing can proceed to the next block line sequentially and smoothly, without delay.
- if the correction magnification M is properly determined in step S11, either of the barrel-type and pincushion-type distortion images described with reference to FIGS. 61 and 62 can be fitted exactly into the output range.
- FIG. 15 is a flowchart illustrating the method of calculating the correction magnification M in step S11 of FIG.
- in step S21, 1.0 is set as the initial value of the correction magnification M.
- the input image range is then calculated by the distortion correction range calculation unit 12 by performing the coordinate transformation on, for example, the four sides of the output image (step S22), and it is determined whether the result is within the range of the original image (step S23). If the output range is exceeded, as in FIG. 61, M is reduced by a predetermined step ΔM (step S24), and the process returns to step S22 to recalculate the input image range and repeat the in-range judgment of step S23.
- if the result of step S23 is within the range of the original image, the process proceeds to step S25, where it is determined whether M is the maximum value for which the input image range remains within the original image. If it is not the maximum, M is increased by ΔM (step S26) and the process returns to step S22 to repeat steps S23 to S26. If it is the maximum, the M at that time is determined as the correction magnification. In this determination, the area used for the range calculation is the entire corrected image.
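The Fig. 15 loop can be sketched as an iterative search over M. In this sketch, `in_range_fn` stands in for the distortion correction range calculation of steps S22-S23 (returning True when the required input range fits inside the original image), and the fixed step size `dM` is an assumption, since the excerpt does not state how much M is changed per iteration.

```python
def find_correction_magnification(in_range_fn, dM=0.01, M=1.0, max_iter=1000):
    """Sketch of the Fig. 15 flow: shrink M while the required input range
    spills outside the original image, then grow it while it still fits,
    returning the largest M for which the range stays in bounds."""
    for _ in range(max_iter):
        if not in_range_fn(M):
            M -= dM                    # step S24: range exceeded, reduce M
        elif in_range_fn(M + dM):
            M += dM                    # step S26: not yet maximal, increase M
        else:
            return M                   # step S25: maximal M that still fits
    return M                           # safety cap for this software sketch
```

Starting from the initial value 1.0 (step S21), the search converges to the boundary magnification regardless of whether the image is barrel-type (M ends below 1) or pincushion-type (M ends above 1).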
- the flow of FIG. 15 may be replaced by another method.
- the circuit scale can be reduced by using the multiplier for the correction operation in time series.
- when performing the distortion correction processing of [Equation 1], the calculations involve a great many multiplications, which makes the circuit scale extremely large in hardware.
- multipliers increase the hardware circuit scale, so it is preferable to reduce the number of multipliers as much as possible.
- a multiplier control section is provided to control the multiplier, and processing is performed in a pipeline while taking timing, thereby reducing the circuit scale without reducing the overall processing speed.
- FIG. 16 shows a specific configuration of a portion of the distortion correction coordinate conversion section 92 in FIG.
- the other parts are the same as in FIG.
- the distortion correction coordinate conversion unit 92 performs calculations containing many multiplications, as shown in [Equation 1]; it is therefore configured so that the number of multipliers can be reduced by processing the multiplications in order, in time series, while maintaining the timing.
- the distortion correction coordinate conversion section 92 comprises a Z² calculator 921 that receives the interpolation coordinates (X, Y) from the coordinate generating unit 91 and calculates Z², a multiplier control unit 922 that receives Z² from the calculator 921 and γ from the multiplier 923 and outputs α and β, a multiplier 923 that receives α and β and outputs their product γ, a multiplier control unit 924 that outputs δ corresponding to one of the correction coefficients A, B, C, ..., a multiplier 925 that multiplies δ by a power of Z² and outputs the product ε, and an output stage that receives ε from the multiplier 925 and the interpolation coordinates (X, Y) from the coordinate generator 91 and outputs the coordinates (X', Y') of the original data before correction.
- FIG. 17 shows a timing chart of the data output of FIG. 16.
- the Z² calculator 921 receives the interpolation coordinates (X, Y) at the timing when the clock CLK is '1'.
- the multiplier control unit 922 outputs Z² as β at the clock timing '2', and the multiplier control unit 924 outputs the distortion correction coefficient A as δ at the same timing '2'.
- the multiplier 923, receiving α and β, outputs Z⁴ as γ, and the multiplier 925, receiving α and δ, outputs AZ² as ε at the same timing '3'.
- the timing chart of FIG. 17 shows that there are idle clock cycles (gaps) in the output of each variable during the distortion correction coordinate conversion processing. It is therefore conceivable to output the data of each variable continuously by pipeline processing so that no gaps remain in time.
- FIG. 18 improves on the operation of FIG. 17 and shows a timing chart of the data output under pipeline processing.
- the coordinate position (X', Y') of the original data is obtained as an output 7 clocks after the input of the interpolation coordinates (X, Y); that is, it takes 7 clocks until one result is obtained.
- in FIG. 18, since the same state as the first cycle recurs at the tenth cycle, three pixels are processed in nine cycles; in effect, one pixel is processed every three cycles.
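The idea of reusing one multiplier in time series to evaluate the correction polynomial can be illustrated in software. The sketch below counts multiplications to show that the A·Z² + B·Z⁴ + C·Z⁶ terms need only five passes through a shared multiplier; the scheduling and names are illustrative, not the actual hardware pipeline of Fig. 16.

```python
def polynomial_with_shared_multiplier(Z2, A, B, C):
    """Evaluate A*Z^2 + B*Z^4 + C*Z^6 using a single multiply routine
    reused in time series, mimicking how multipliers 923/925 are shared."""
    mults = 0

    def mul(a, b):
        nonlocal mults
        mults += 1          # one pass through the shared multiplier
        return a * b

    Z4 = mul(Z2, Z2)        # power-raising pass (like multiplier 923)
    Z6 = mul(Z4, Z2)        # second power-raising pass
    # coefficient passes (like multiplier 925), one per cycle
    acc = mul(A, Z2) + mul(B, Z4) + mul(C, Z6)
    return acc, mults
```

Five sequential multiplications replace five parallel hardware multipliers; the cost is latency (more cycles per result), which the pipelining of Fig. 18 then hides by keeping the shared multiplier busy every cycle.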
- the distortion correction range calculation unit has been described as an example in which the distortion correction range calculation processing is performed only for one channel.
- next, a method is described for calculating three channels in the same time as the single-channel case described above, using one distortion correction range calculation unit.
- to do this, as shown in FIG. 19, in the range setting on the corrected image of FIG. 12, the pixels used for the range calculation on the four sides of the region of interest are thinned out, for example to one in three, before processing; the calculation for all three channels can then be completed in almost the same time as the processing time for one channel.
- the thinning amount is set in a register.
- FIG. 20 is a flowchart illustrating the operation of the distortion correction processing in the case of three channels.
- the distortion correction processing shown in Fig. 14 is extended to three channels.
- Steps S31 to S33 and step S35 in FIG. 20 are the same as steps S11 to S13 and step S15 in FIG.
- in step S34, the distortion correction processing for each block line of the three channels and the calculations of the input range of the next block line for the first channel (for example, R), the second channel (for example, G), and the third channel (for example, B) are executed simultaneously.
- the distortion correction range calculation is performed for the next processing target block line for multiple input signals.
- one distortion correction range calculation unit calculates the range for a plurality of channels.
- since the image input range for the next distortion correction is calculated in advance, the input range for the next distortion correction processing is known as soon as the distortion correction for one block line is completed, and the distortion correction processing can proceed to the next block line sequentially and smoothly, without delay.
- as described above, since a distortion correction range calculation unit that calculates the input image range for the distortion correction processing is provided, the corrected image obtained by the distortion correction can be output without any excess or deficiency in the output range. The distortion correction processing can make effective use of the original data, and can cope with pincushion distortion, barrel distortion, and jinkasa (mixed) distortion. In addition, since the input range required for processing each block line can be calculated, the amount of data transfer can be minimized.
- FIG. 21 is a block diagram showing the overall configuration of the image processing device according to the second embodiment of the present invention.
- FIG. 21 shows a configuration in which the distortion correction range calculation unit 12 is deleted from FIG.
- the configuration of the distortion correction processing unit in FIG. 21 is the same as that in FIG.
- [Equation 1] of the distortion correction coordinate transformation always operates in whole-screen coordinates: even for a corrected image, the coordinates (X', Y') of the transformation result are calculated as positions within the entire screen.
- the image data input to the distortion correction processing unit 8 is in units of block lines.
- the distortion correction processing unit 8 needs to know the spatial positional relationship (position in the two-dimensional space) of the input image data.
- the control register 85 specifies the position of the block line to be processed by the distortion correction processing unit 8.
- FIG. 23 and FIG. 24 show the relationship between the captured image data before correction and the corrected image after correction.
- FIG. 23 shows a photographed image
- FIG. 24 shows a corrected image.
- FIGS. 23 and 24 are conceptual diagrams in which attention is paid to a certain pixel position A on a certain block line BL of the corrected image (output side image).
- the coordinates (X, Y) of position A in the corrected image (output image) are converted into the coordinates (X', Y') of position A' in the captured image (input image) using [Equation 1] of the distortion correction coordinate transformation.
- the coordinate positions (X, Y) and (Xblst, Yblst), and the coordinate positions (X', Y') and (X'blst, Y'blst), all indicate positions in the entire screen measured from the origin.
- the origin is the upper left corner of the entire screen.
- the image data input to the distortion correction block is in units of the block line indicated by the code BL. For this reason, to perform the distortion coordinate conversion processing on, for example, the pixel A of the corrected image, the pixel must first be converted to a coordinate position (x', y') in the coordinate system within the block line BL so that the final interpolation processing can be performed.
- Δx and Δy are interpolation pitches: if they are 1.0 or more, a reduced image is generated, and if they are 1.0 or less, an enlarged image is generated.
- the coordinate position (X blst, Y blst) corresponds to the position of the left shoulder of the output-side block line of the distortion correction block, that is, the start position of the block line.
- (m, n) starts from (0, 0).
- the coordinate position (X, Y) of the corrected output image generated by [Equation 3] is transformed by [Equation 1] to obtain the coordinate position (X', Y') in the pre-correction input image of Fig. 23. Then, by taking the difference between the coordinate position (X', Y') and the coordinate position (X'blst, Y'blst), the position (x', y') is obtained in the coordinate system whose origin is the left shoulder of the block line (thick dotted rectangle).
- interpolation processing can be performed in block line units.
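Steps S43-S45 of this per-pixel flow can be sketched as follows. The sketch assumes the [Equation 3]-style generation X = Xblst + m·Δx, Y = Yblst + n·Δy, and takes an arbitrary coordinate-transform callable in place of [Equation 1]; all names here are hypothetical.

```python
def block_line_pixels(Xblst, Yblst, width, height, dx=1.0, dy=1.0,
                      correct=lambda X, Y: (X, Y),
                      Xp_blst=0.0, Yp_blst=0.0):
    """Per-pixel flow of Fig. 25: generate corrected-image coordinates,
    map them to the input image, then shift into block-line-local
    coordinates whose origin is the block line's left shoulder."""
    out = []
    for n in range(height):
        for m in range(width):
            X = Xblst + m * dx                      # step S43: [Equation 3]
            Y = Yblst + n * dy
            Xp, Yp = correct(X, Y)                  # step S44: [Equation 1]
            out.append((Xp - Xp_blst, Yp - Yp_blst))  # step S45: local (x', y')
    return out
```

With an identity transform and matching left-shoulder offsets, the local coordinates simply enumerate the block-line grid from (0, 0), confirming the origin shift is the only thing step S45 does.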
- the block line BL' shown by the thick dotted line on the input image side in Fig. 23 is set by the register 85; to generate its value, the input range (distortion correction range) of the image must be calculated.
- the support function may be provided by adding to the distortion correction processing function of the distortion correction processing unit 8 or the like.
- FIG. 25 is a flowchart of the distortion correction processing of FIGS. 23 and 24.
- the block line processing for the hatched area in FIG. 24 is started (step S41).
- in step S42, the left shoulder of the hatched area in FIG. 24, that is, the start coordinate position (Xblst, Yblst), is set.
- the coordinate position (X, Y) of the corrected image can be obtained from [Equation 3].
- by incrementing m and n one at a time, the coordinate position (X, Y) of each pixel in the block line BL indicated by hatching in FIG. 24 can be generated (step S43).
- the position (X, Y) of the coordinate system in the corrected image is converted into the position ( ⁇ ′, ⁇ ′) of the coordinate system in the captured image before correction using [Equation 1] (step S44).
- the position ( ⁇ ', ⁇ ') of the coordinate system in the captured image is converted into the position (x ', y') of the coordinate system in the input block line BL by the method described above (step S45).
- an interpolation process is performed based on the coordinate position (x', y') in the block line BL: by calculating interpolation data from the plural neighboring pixel data using 16-point interpolation at the coordinate position (X', Y'), the corrected image data for one pixel is obtained (step S46).
- it is then determined whether the above steps have been performed for all pixels in the block line BL of the corrected-image coordinate system (step S47); if not, the next m and n are set (step S48) and steps S43 to S47 are repeated, and once the processing for all pixels is complete, the process proceeds to step S49.
- in step S49, it is determined whether the distortion correction processing has been completed for all the block lines BL in the entire screen (frame). If not, the next input block line BL is selected and steps S41 to S49 are repeated; once the processing has been completed for the entire screen, the processing ends.
- FIGS. 26 to 29 can be regarded as modifications of FIG. 25.
- Only a part of the imaging data may be used.
- Memory can be saved by selectively capturing part of the image data from the CCD.
- information (X'imgst, Y'imgst) on where the captured data is located on the image sensor is required.
- FIGS. 26 and 27 show the case of digital zoom, in which only the central portion is taken from the captured image data.
- the difference from the case of FIG. 25 is that, as described above, the preprocessing circuit 2 sets which part of the entire screen to capture; based on this setting, only the necessary part is captured, and the range is indicated by E' in FIG. 26.
- the start position (X imgst, Y imgst) that corresponds to the left shoulder of the digital zoom range E in the entire screen is set.
- the block line BL is set.
- position coordinates (Xblst, Y blst) are determined with the left shoulder of the capture range E as the origin, and by changing m and n as in [Equation 2], each pixel in the block line BL is changed. Coordinates can be generated.
- the position ( ⁇ , ⁇ ) in the coordinates of the entire screen is determined from the coordinate position (X imgst, Y imgst), (X blst, Y blst), ( ⁇ , ⁇ ⁇ ). .
- the coordinate position (x', y') in the captured image can be obtained from the coordinate position (X, Y) determined as described above using [Equation 1].
- the coordinates (X'imgst, Y'imgst) of the upper left corner can be determined from the set range E' on the image plane of the captured image data.
- with the coordinates of the upper left corner of the capture range E defined as the origin, the coordinate positions (X'blst, Y'blst) of the upper left corner of the block line BL are obtained in advance using the support function, and the target position (x', y') in those coordinates is determined. As a result, interpolation processing is performed for each block line.
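The chain of offsets described above, from the capture-range origin through the block-line origin to the individual pixel, can be sketched as follows. [Equation 1] and [Equation 2] are not reproduced in this text, so the radial mapping and all names below are illustrative assumptions rather than the patent's actual formulas.

```python
# Sketch of the coordinate chain for a captured sub-range (e.g. digital zoom).
# The radial model stands in for [Equation 1]; names are illustrative.

def full_screen_coords(x_imgst, y_imgst, x_blst, y_blst, m, n):
    """Position (X, Y) in full-screen coordinates of pixel (m, n) of a
    block line whose origin is (x_blst, y_blst) inside a capture range
    whose upper left corner is (x_imgst, y_imgst)."""
    return x_imgst + x_blst + m, y_imgst + y_blst + n

def distort(x, y, cx, cy, k=1e-7):
    """Assumed radial mapping standing in for [Equation 1]: corrected
    output position (x, y) -> source position (x', y')."""
    dx, dy = x - cx, y - cy
    scale = 1.0 + k * (dx * dx + dy * dy)
    return cx + dx * scale, cy + dy * scale

# Pixel (m, n) = (3, 2) of a block line at (0, 16) in a zoom range
# whose upper left corner is at (640, 360) on the full screen:
x, y = full_screen_coords(640, 360, 0, 16, 3, 2)
xs, ys = distort(x, y, cx=960.0, cy=540.0)
```

A pixel's full-screen position is simply the sum of the three offsets; only then is the distortion mapping applied, so the same correction logic serves full-frame readout, digital zoom, and edge cutout alike.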
- FIGS. 28 and 29 show a case where a certain area, for example an edge of the screen, is cut out from the image data. This corresponds to a case where a surveillance camera or the like captures an end portion of the image data.
- in FIGS. 28 and 29, the same reference numerals are used as in FIGS. 26 and 27. The operation is the same as in FIGS. 26 and 27, except that the cutout positions differ.
- the CCD as an image pickup device is provided with a monitor mode (a mode in which an image is captured without reading out the entire screen).
- in monitor mode, lines are often thinned out, so the size of the data may differ even though the two-dimensional space on the image sensor is the same.
- the image data obtained from the CCD may be subjected to a filtering process such as low-pass filtering (LPF), or may be interpolated, before being loaded into the frame memory.
- the vertical / horizontal sampling ratio may not be an integer.
- FIG. 30 shows a state in which, when capturing data at the time of imaging, the data actually sent from the CCD is located only at the black circles, the lines being thinned out in the vertical direction by a factor of three.
- in the monitor mode, only the thinned image is output from the CCD in this way; when it is stored in the memory, the thinning makes the image appear shorter in the vertical direction than in the horizontal direction, as shown in FIG. 31.
- in the distortion correction according to [Equation 1], the effect of the correction increases with the distance from the center, so the effect of the distortion correction would change differently in the vertical and horizontal directions.
- to handle this, Sx and Sy may be set as non-integers. This also makes it possible to handle image sensors whose pixel pitch differs vertically and horizontally (for example, rectangular pixels).
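The effect of a non-square sampling ratio can be sketched as follows: stored (thinned) coordinates are scaled back into sensor space by the factors Sx and Sy before the radial term is evaluated, and the result is scaled back afterwards. The radial model and all names are illustrative assumptions.

```python
# Sketch: compensating vertical/horizontal sampling ratios (Sx, Sy) so the
# radial correction sees true sensor-space distances. Values illustrative.

def memory_to_sensor(xm, ym, sx, sy):
    """Map coordinates in the (possibly thinned) memory image to
    sensor-space coordinates; sx and sy need not be integers."""
    return xm * sx, ym * sy

def sensor_to_memory(xs, ys, sx, sy):
    return xs / sx, ys / sy

def corrected_source(xm, ym, cx_s, cy_s, sx, sy, k=1e-7):
    """Correct distortion in sensor space, then return to memory space."""
    xs, ys = memory_to_sensor(xm, ym, sx, sy)
    dx, dy = xs - cx_s, ys - cy_s
    scale = 1.0 + k * (dx * dx + dy * dy)
    return sensor_to_memory(cx_s + dx * scale, cy_s + dy * scale, sx, sy)

# With 1/3 vertical thinning the stored image is three times shorter
# vertically, so sy = 3.0 restores sensor-space distances:
x_src, y_src = corrected_source(100.0, 60.0, cx_s=960.0, cy_s=540.0,
                                sx=1.0, sy=3.0)
```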
- JPEG images are generally recorded as YCbCr with the color difference data thinned, and the settings at that time may be as shown in FIGS. 33 and 34.
- the resolution of the color difference data is 1/2 of that of the luminance data in the horizontal direction.
- FIG. 35 is a diagram for explaining center shift of an image.
- Figure 35 shows the corrected image.
- normally, the center of the distortion coincides with the center of the image; that is, the center of the CCD (the center of the image) coincides with the optical axis of the lens (the center of the distortion).
- FIG. 35 shows a case where the center of the distortion is shifted from the center of the CCD.
- the center P of the image with respect to the output range H and the distortion center Q are shown by black circles.
- the way in which the distortion-corrected image data is cut out is changed.
- the center shift of the image is defined as (Xoff, Yoff), and the way the image is cropped is changed accordingly.
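A minimal sketch of the center shift: the radius used by the correction is measured from the distortion center Q = P + (Xoff, Yoff) rather than from the image center P. The radial model and names are assumptions for illustration.

```python
# Sketch: incorporating a center shift (x_off, y_off) between the image
# center and the distortion center (lens optical axis). Illustrative model.

def source_position(x, y, width, height, x_off, y_off, k=1e-7):
    """For an output pixel (x, y), measure the radius from the shifted
    distortion center Q rather than from the image center P."""
    cx = width / 2.0 + x_off   # distortion center Q, x
    cy = height / 2.0 + y_off  # distortion center Q, y
    dx, dy = x - cx, y - cy
    scale = 1.0 + k * (dx * dx + dy * dy)
    return cx + dx * scale, cy + dy * scale

# With (x_off, y_off) = (12, -8), the point left unmoved by the
# correction is Q, not the image center P:
qx, qy = 1920 / 2.0 + 12, 1080 / 2.0 - 8
src = source_position(qx, qy, 1920, 1080, 12, -8)
```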
- the spatial position of a small area (for example, a block line) can be calculated, and the distortion correction processing can be performed in small area units.
- accurate distortion correction can be performed for images captured with thinning.
- accurate distortion correction can also be performed for images obtained by cutting out an arbitrary area, as in digital zoom.
- FIG. 37 shows a detailed configuration of the distortion correction processing unit 8 in FIG.
- the interpolation position calculation circuit 22 in FIG. 37 corresponds to the interpolation position generation unit 811 in FIG. 2, the selector 24 corresponds to the selector 813 in FIG. 2, and the distortion correction coefficient calculation circuit 21 and the interpolation position correction circuit 23 correspond to their respective counterparts in FIG. 2.
- the 2-port SRAM 26 in FIG. 37 corresponds to the internal memory section (internal buffer) 82 in FIG. 2; the write address generation circuit 28, the buffer free space monitoring circuit 29, the data transmission availability determination circuit 30, the buffer release amount calculation circuit 31, and the read address generation circuit 25 correspond to the memory control section 83 of FIG. 2; and the interpolation circuit 27 corresponds to the interpolation calculation section 84 of FIG. 2.
- the error detection circuit 32 is connected to the memory control unit 83, although this is not shown in FIG. 2.
- the error detection circuit 32 detects errors when the amount of distortion in the distortion correction processing is large, and will be described later.
- the interpolation position calculation circuit 22, using the grant as a trigger, calculates the interpolation position (X1, Y1) for one unit line (hereinafter, 1UL).
- 1UL is defined as one unit of writing or reading a fixed number of pixels in a row in the column direction when writing to or reading from the memory unit in the block line processing described above. In other words, 1UL refers to a fixed number of pixels arranged in a line in the column direction on the block line (BL).
- the interpolation position correction circuit 23 multiplies the interpolation position (X1, Y1) by the distortion correction coefficient F from the distortion correction coefficient calculation circuit 21 to calculate the coordinate position (X', Y').
- the selector 24 selects between (X1, Y1) and (X', Y'): when performing distortion correction, it selects and outputs (X', Y'); when performing enlargement/reduction processing (resize), it selects and outputs (X1, Y1).
- the two-port SRAM 26 is a buffer for storing data in the distortion correction processing unit 8.
- the read address generation circuit 25 generates the address (ADR) in the 2-port SRAM 26 corresponding to the interpolation position, outputs a control signal for aligning the outputs from the 2-port SRAM 26, outputs the write control signal WE_N in synchronization with the output image data, and outputs the data string control signal indicating where D0 shown in FIG. 38 and [Equation 3] is located on the 2-port SRAM.
- the write address generation circuit 28 generates the address (ADDRESS) of the 2-port SRAM 26, which is the internal memory, in accordance with the write control signal WE (counting up).
- the data transmission availability determination circuit 30 determines, from the BLC value, the operation state of this circuit, the next UL coordinates, and the request (REQ) state from the post-stage circuit of the distortion correction processing unit 8, whether a grant (GRANT_N) can be transmitted in response to the REQ signal from the post-stage circuit.
- the interpolation circuit 27 performs 16-point interpolation for each image data corresponding to the interpolation position.
- the buffer release amount calculation circuit 31 calculates, as the buffer release amount, the difference between the integer part of the currently processed UL start coordinate and that of the next UL start coordinate to be processed (see FIG. 50).
- the buffer free space monitoring circuit 29 sends a request (REQ) as a data request to the preceding circuit.
- the buffer free space monitoring circuit 29 receives a grant (GRANT) as request acceptance from the preceding circuit and, at the same time, decrements by one the count (held in circuit 29) of the number of ULs that can still be stored in the 2-port SRAM 26.
- one UL is transferred as one operation unit per request/grant pair. When the count reaches 0, the request is withdrawn. Data then flows from the preceding circuit into the write address generation circuit 28, and writing to the 2-port SRAM 26 is performed. Every time 1UL is input, the internal count (BLC) of the write address generation circuit 28 goes up.
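The REQ/GRANT handshake with the free-UL count behaves like a credit counter, which can be sketched behaviourally as follows. Class and method names are assumptions; the actual mechanism is a circuit, not software.

```python
# Behavioural sketch of the REQ/GRANT flow control between the buffer free
# space monitoring circuit (29) and the preceding circuit: a credit counter
# of free UL slots, decremented per GRANT, incremented per buffer release.

class UlFlowControl:
    def __init__(self, free_uls):
        self.free = free_uls      # UL slots available in the 2-port SRAM
        self.stored = 0           # ULs currently held (BLC-like count)

    def request_pending(self):
        """REQ stays asserted toward the preceding circuit while credit remains."""
        return self.free > 0

    def grant(self):
        """Preceding circuit accepts: one UL is transferred and stored."""
        assert self.free > 0
        self.free -= 1
        self.stored += 1

    def release(self, n_uls):
        """Buffer release after a UL is processed frees n_uls slots."""
        self.stored -= n_uls
        self.free += n_uls

fc = UlFlowControl(free_uls=16)
for _ in range(16):
    fc.grant()
# Credit exhausted: REQ is withdrawn until slots are released again.
```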
- the 2-port SRAM 26 is composed of a total of 16 (4 × 4) 2-port SRAMs that can perform reading and writing simultaneously, so that the interpolation circuit 27 can perform, for example, 16-point interpolation.
- the 2-port SRAM 26 will be described with reference to FIG. 39, although the number of memories and the size of each memory may differ.
- here, the block line width (UL length) is 96, but it may be longer or shorter; it should be determined by the balance between circuit scale and correction performance.
- the 4 × 4 (16) arrangement can be 2 × 2 (four in total) for 4-point interpolation; it should be decided according to the interpolation method (how many points to interpolate).
- FIG. 39 shows a memory space composed of 16 2-port SRAMs.
- the horizontal direction is the order of UL input, while the vertical direction is the address assigned to each 2-port SRAM.
- let N be an integer, and consider the state at some point in the process.
- the data stored in Nos. 0, 4, 8, and 12, which are the four vertically arranged 2-port SRAMs, are the 4N, 4N+4, 4N+8, and 4N+12th UL data.
- the data stored in Nos. 1, 5, 9, and 13 are the 4N+1, 4N+5, 4N+9, and 4N+13th UL data.
- the data stored in Nos. 2, 6, 10, and 14 are the 4N+2, 4N+6, 4N+10, and 4N+14th UL data, and the data stored in Nos. 3, 7, 11, and 15 are the 4N+3, 4N+7, 4N+11, and 4N+15th UL data.
- FIGS. 40 and 41 show a block line having a width of 8 lines (pixels); the data are expressed in units of UL as "1, 2, ..., i, m", "5, 6, ..., j, n", "9, 10, ..., k, o", ... in that order.
- FIG. 41 shows where the data in the writing order shown in FIG. 40 are written on the 16 (4 × 4) 2-port SRAMs Nos. 0 to 15.
- the image data of 1UL (the data indicated by order 1 to m) occupy the left vertical column in FIG. 40.
- the notations 0 to 92, 1 to 93, 2 to 94, and 3 to 95 added to the vertical lines of the 2-port SRAMs No. 0, No. 4, No. 8, and No. 12 shown in FIG. 39 differ from the unit line notation (1 to m, 5 to n, 9 to o, ...) shown in FIG. 40. This is because in FIGS. 40 and 41 the order of insertion is determined for the 16 (4 × 4) image data.
- in the vertical direction of No. 0 of the 2-port SRAM, the pixels of a UL whose numbers are multiples of 4 (0, 4, 8, ..., 92) are stored; in the vertical direction of No. 4, the (multiple of 4)+1 pixels (1, 5, 9, ..., 93) are stored; in the vertical direction of No. 8, the (multiple of 4)+2 pixels (2, 6, 10, ..., 94) are stored; and in the vertical direction of No. 12, the (multiple of 4)+3 pixels (3, 7, 11, ..., 95) are stored.
- data are similarly stored in the sets of Nos. 1, 2, 3; Nos. 5, 6, 7; Nos. 9, 10, 11; and Nos. 13, 14, 15 of the 2-port SRAM.
- that is, the data for one UL are stored pixel by pixel, in order, in the four 2-port SRAMs arranged in the vertical direction.
- the first UL data is written to Nos. 0, 4, 8, and 12 (4N line in Fig. 39).
- the next UL data is written to Nos. 1, 5, 9, and 13 (line 4N + 1 in Fig. 39).
- by virtue of this arrangement, the 16 points around arbitrary coordinates in the buffer can be extracted simultaneously in one access.
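The bank assignment implied by FIGS. 39 to 41 can be modelled as follows: the UL index modulo 4 selects one of four columns and the pixel index modulo 4 selects one of four rows, so any 4 × 4 neighbourhood touches each of the 16 SRAMs exactly once. This is a behavioural sketch with assumed names, not the circuit itself.

```python
# Sketch of the bank assignment of the 16 (4 x 4) two-port SRAMs: a 4x4
# pixel neighbourhood maps to all 16 banks exactly once, so it can be
# read out in a single access.

def bank_of(ul, pixel):
    """2-port SRAM number (0-15) holding `pixel` of unit line `ul`."""
    return (ul % 4) + 4 * (pixel % 4)

def banks_of_neighbourhood(ul0, pix0):
    """Banks touched by the 4x4 neighbourhood starting at (ul0, pix0)."""
    return {bank_of(ul0 + i, pix0 + j) for i in range(4) for j in range(4)}

# The 50th pixel of the 10th UL lives in bank No. 10 (cf. the FIG. 42
# example below), and every 4x4 neighbourhood covers all 16 banks:
assert bank_of(10, 50) == 10
assert banks_of_neighbourhood(9, 49) == set(range(16))
```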
- the data transmission availability determination circuit 30 receives the request (REQ_N) from the subsequent circuit and outputs a request acceptance (GRANT_N) if the next UL data can be transmitted.
- this request acceptance (GRANT_N) also serves as a trigger for the unit itself, and the interpolation position calculation circuit 22 starts operating.
- an operation trigger (trig) is sent from the data transmission availability determination circuit 30 to the distortion correction coefficient calculation circuit 21, so that it starts operating in synchronization with the interpolation position calculation circuit 22's output of the interpolation position (X1, Y1).
- the distortion correction coefficient calculation circuit 21 calculates the next UL start coordinate after operating for one UL, and then ends the processing.
- the read address generation circuit 25 issues a read address to each of the 16 2-port SRAMs 26 based on the input interpolation coordinates.
- FIG. 38 shows the interpolation method.
- FIG. 38 is an image diagram of the interpolation operation in the interpolation circuit 27.
- the coordinates (x', y') of the corrected coordinate position P have already been obtained by [Equation 1] above.
- the pixel value (luminance data) at these coordinates is obtained from the pixel data D0 to D15 at the 16 points around the coordinate P(x', y'). If it is known from which of the 16 memories D0 comes, D1, D2, ..., D15 can be determined from their positional relationship to D0. As described later, D0 is determined from the coordinates of the interpolation position.
- Figure 42 shows an example of reading from a buffer consisting of 16 two-port SRAMs.
- since X' = 10.… and 10 = 4 × 2 + 2, in the horizontal direction the interpolation position lies slightly to the right of the UL stored in the (4N+2)th column of the 2-port SRAMs. Since Y' = 50.… and 50 = 4 × 12 + 2, the memory in which the 50th pixel of the 10th UL is stored (No. 10) outputs the pixel D5 of FIG. 38; therefore, the memory that outputs D0 is No. 5 at the upper left. Since the pixel data corresponding to D0 to D15 in FIG. 38 form a part of FIG. 42, addresses are generated so that they are output.
- that is, the image data output from No. 0 does not correspond to D0; here, the output from No. 5 corresponds to D0. Therefore, in order to identify which data is output from which memory, a read control signal is output from the read address generation circuit 25, and the interpolation circuit performs 16-point interpolation by recognizing from where in the 2-port SRAM 26 each data comes.
- once the pixel data D0 to D15 are known, the pixel data at the corrected coordinate position can be obtained as Dout by performing an interpolation process using the interpolation formula of [Equation 4].
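Since [Equation 4] itself is not reproduced in this text, the following sketch uses Catmull-Rom cubic weights purely as a stand-in for a 16-point (4 × 4) interpolation producing Dout; with zero fractional offsets it returns the sample at position (1, 1) of the neighbourhood, consistent with D5 being the pixel at the integer position in the FIG. 42 example.

```python
# Sketch of 16-point (4x4) interpolation producing Dout from D0..D15.
# Catmull-Rom cubic weights are an illustrative stand-in for [Equation 4].

def cubic_weights(t):
    """Catmull-Rom weights for the 4 samples around fractional offset t."""
    return (
        ((-t + 2.0) * t - 1.0) * t / 2.0,
        ((3.0 * t - 5.0) * t * t + 2.0) / 2.0,
        ((-3.0 * t + 4.0) * t + 1.0) * t / 2.0,
        (t - 1.0) * t * t / 2.0,
    )

def interpolate16(d, fx, fy):
    """d is the 4x4 neighbourhood d[row][col] (= D0..D15 read from the 16
    banks); (fx, fy) are the fractional parts of (x', y')."""
    wx, wy = cubic_weights(fx), cubic_weights(fy)
    return sum(wy[r] * sum(wx[c] * d[r][c] for c in range(4))
               for r in range(4))

# A flat neighbourhood is reproduced at any fractional position:
flat = [[100.0] * 4 for _ in range(4)]
dout = interpolate16(flat, 0.3, 0.7)  # ≈ 100.0
```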
- the difference between the current UL start coordinate and the next UL start coordinate is calculated (see FIG. 50), and the buffer release amount is output to the buffer free space monitoring circuit 29 in order to release the buffer in which unnecessary data are stored. However, as shown in FIG. 49, it is preferable to release the buffer taking into account the changes in the values of postULB1 and postULB2 when straddling the center of distortion.
- the preULB and postULB shown in FIG. 48 are areas of predetermined width provided on the front and rear sides in the row direction with respect to the coordinate position of the first pixel among the pixels forming the UL; they are called preULB and postULB, respectively.
- the buffer release amount is adjusted by referring to the amount of change of postULB (ULB: Unit Line Buffer), whose reference value changes, instead of the simple difference between UL start coordinates (the adjustment amount is postULB1 − postULB2; see FIG. 49).
- the buffer release amount calculation circuit 31 informs the data transmission availability determination circuit 30 how much data is required from the preceding circuit for the next UL processing.
- the buffer free space monitoring circuit 29 performs buffering by calculating the buffer release amount described above; if there is free space in the buffer, it issues a request to the preceding circuit.
- the data transmission availability determination circuit 30 determines whether the next UL data can be transmitted based on the internal count (BLC) of the write address generation circuit 28, the input from the buffer release amount calculation circuit 31, and the preULB value.
- the data transmission availability determination circuit 30 returns a request acceptance (GRANT_N) in response to a request from the subsequent circuit.
- the error detection circuit 32 outputs an error if the coordinates input to the read address generation circuit 25 deviate from the left end (see FIG. 43) or the right end of the block line (BL), deviate from the upper or lower end of the block line (BL) (see FIG. 44), or if the amount of distortion deviates from the set values of preULB and postULB (see FIG. 45).
- when an image on the input side is interpolated with respect to an image on the output side, the input image may be out of range; in that case, interpolation data would have to be generated in a portion where no data exists in the input range, so an error indicating that interpolation cannot be performed is output from the error detection circuit 32.
- a predetermined area preULB is set forward (right side in the figure) with respect to the first coordinate to be interpolated (the top coordinate indicated by the × mark at the top of the image data in the block line BL in the figure), and a predetermined area postULB is provided rearward (left side in the figure).
- the support function corresponds to the “distortion correction range calculator” in Fig. 1.
- the area of attention becomes BL, but it is deformed as shown in FIG. 13 by the coordinate transformation for distortion correction.
- preULB1 / postULB1 is determined from X'TL, Y, TL, XLmin, and XLmax (see FIG. 48).
- preULB2 / pos t ULB2 is determined from X, TR, Y, TR, XRmin, and XRmax.
- the optical distortion bends in the opposite direction after passing through the center of the distortion.
- therefore, the values of preULB and postULB differ between the left and right sides of the distortion center.
- if a single fixed value were used for processing on both the left and right sides, one side would have to reserve a large amount of data (that is, a large buffer), which wastes buffer space.
- therefore, the values are set in the preULB and postULB as variables; after passing the center coordinates of the distortion, these values are changed to change the amount of data reserved. In other words, by changing preULB and postULB before and after the center of distortion, waste of the internal buffer is eliminated, and a relatively large distortion correction can be performed with a small buffer.
- the preULB and postULB are set in the register 85; in order to generate those values, a support function that can calculate the input range of image data in consideration of the distortion deformation is required.
- This support function may be provided by adding to the distortion correction processing function of the distortion correction processing unit 8 or the like.
- the memory control unit 83 controls the areas preULB and postULB so that they are not overwritten by another process during the 1UL processing.
- the setting of the areas preULB and postULB may be made from the CPU 4 into the register 85 (see FIG. 2), or may be calculated and set automatically by the CPU. As described above, 16-point interpolation is performed in the distortion correction processing. However, as shown in the figure, the interpolation position can move by a (where a > 0) during the 1UL processing due to the distortion correction. Because of the 16-point interpolation, the input image range must be the sum of the interval a above and the intervals bL and bR of the pixels required for interpolation on the pre and post sides. When the distortion center is crossed, the preULB and postULB values are changed to reduce the number of buffers to the minimum necessary. The above-mentioned "distortion correction range calculator" may also be made to output a result that takes the interpolation into account.
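One way the support function could size these margins can be sketched as follows: given the mapped x' coordinates of a UL at the block-line edge, the pre side must cover how far the curve bows before the first pixel's coordinate, and the post side must cover the movement a after it plus the interpolation margin. The function, the margin constants for 4 × 4 interpolation, and the example curve are all illustrative assumptions, not the patent's formulas.

```python
# Sketch: sizing preULB/postULB from the x-extent of a distortion-mapped
# UL. The margins (+1 before, +2 after, for 4x4 interpolation) and the
# example curve are illustrative assumptions.
import math

def ulb_for_edge(x_mapped):
    """Given the mapped x' coordinates of one UL, return (pre, post): how
    far the needed input extends before the first coordinate and after
    it, including an assumed 4x4 interpolation margin."""
    first = x_mapped[0]
    pre = math.floor(first) - math.floor(min(x_mapped)) + 1   # b_L side
    post = math.floor(max(x_mapped)) - math.floor(first) + 2  # a + b_R side
    return pre, post

# Barrel distortion bows a left-edge UL outward, so the needed range
# extends well past the first pixel's column:
left_edge = [10.0 - 0.002 * (y - 48) ** 2 for y in range(96)]
pre, post = ulb_for_edge(left_edge)
```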
- the internal memory section (internal buffer) 82 of the distortion correction processing section 8 has a size of at most 96 pixels vertically and 16 lines (pixels) horizontally; this is used for distortion correction.
- the data of the block line processing is swept at least 1UL at a time in the right direction in the figure. It is not always necessary to sweep 1UL at a time, as several ULs may be released at once. Since the internal memory unit 82 starts from an empty state, 16 lines are first input and the distortion correction processing is performed. As these unit lines are processed, unnecessary data (one to several ULs' worth) are generated on the left side. Data are not always unnecessary immediately after processing: when the enlargement ratio is large, it may take several ULs before they become unnecessary.
- Unnecessary data is released (ie, overwriting is allowed) and new data is inserted.
- in this way, the sweep proceeds to the right. Since the buffer only has a size that can hold a maximum of 16 lines, the data areas that are no longer needed are released and sequentially overwritten with new data. All unnecessary buffer areas are released at once.
- the released area may be one line or five lines. For example, if the first three lines of data stored in the buffer are unnecessary for further processing, they are released, the next data is received, and that area is overwritten with the next data.
- the amount of buffer that can be released after 1UL processing is completed cannot be known until the next UL start coordinate is determined (see FIG. 50). Since the internal buffer is small, in order to release the portion storing unneeded data as soon as possible, the next UL start coordinate is calculated, the release amount is obtained, and the buffer is released so that it can be reused. However, a request is received from the subsequent circuit block for distortion correction, a grant is returned for it, and one UL of data is output to the subsequent stage; if the next start coordinate is obtained only after returning the grant for the next request, the release of the buffer is delayed, which in turn delays the acquisition of new data and can result in long idle periods in the pipeline operation.
- therefore, as shown in FIG. 51, there is a method of determining the next coordinate as early as possible and obtaining the buffer release amount, as shown in FIG. 53.
- in FIG. 51, the next UL start coordinate is acquired earlier than in the case where it is obtained at the end; if it is determined that the necessary data are complete, the next UL processing is performed continuously to reduce the idle time. That is, (1) before the UL processing is completed, the UL data is held in a register or the like and the start coordinate position of the next UL is calculated, so that the release amount for the next UL processing is obtained in advance.
- in FIG. 53, the buffer release amount is obtained at an early stage by obtaining the start coordinate of the next UL at the second pixel of the distortion correction processing (the processing of the coordinates 1 to ... shown in the figure).
- as a result, the buffer can be released quickly, and as shown in FIG. 54, the idle time in the pipeline can be reduced considerably compared with the case before the improvement.
- Figure 55 is a further improvement of Figure 53.
- the amount to be released when the currently processed UL finishes (release amount 1) is already known during the previous UL processing.
- therefore, the start coordinate of the UL two ahead is calculated, and the amount that can be released when the next UL processing is completed (release amount 2) is obtained in advance. In this way, there is no need to perform the exceptional coordinate generation (generation of coordinate 2) during UL processing as in the case of FIG. 53. Thus, if the start coordinate of the UL two ahead is obtained instead of that of the next UL, the exceptional processing shown in FIG. 53 is eliminated, and the circuit becomes simple.
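The release-amount bookkeeping of FIGS. 50 and 55 can be sketched as follows: each release amount is the integer-part step between consecutive UL start x-coordinates, and computing the start coordinate two ULs ahead keeps the next release amount always known in advance. Start coordinates and names here are illustrative.

```python
# Sketch of the release amounts of FIG. 50: the buffer columns freed when
# each UL finishes equal the integer-part step from its start coordinate
# to the next UL's start coordinate. With the FIG. 55 look-ahead (start of
# UL i+2 computed while UL i is processed), one of these values is always
# known one UL in advance.
import math

def release_amounts(ul_start_x):
    """Buffer columns freed when each UL finishes processing."""
    return [math.floor(b) - math.floor(a)
            for a, b in zip(ul_start_x, ul_start_x[1:])]

starts = [0.0, 1.2, 2.9, 4.1, 5.6]   # illustrative UL start x-coordinates
amounts = release_amounts(starts)    # freed columns per finished UL
```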
- with the configuration described above, distortion correction can be realized without greatly increasing the bus transfer amount or the memory capacity. That is, a relatively large distortion correction process can be performed with a small buffer capacity.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04771117.1A EP1650705B1 (en) | 2003-07-28 | 2004-07-27 | Image processing apparatus, image processing method, and distortion correcting method |
US10/566,408 US7813585B2 (en) | 2003-07-28 | 2004-07-27 | Image processing apparatus, image processing method, and distortion correcting method |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003202493A JP4772281B2 (ja) | 2003-07-28 | 2003-07-28 | 画像処理装置及び画像処理方法 |
JP2003202663A JP2005045513A (ja) | 2003-07-28 | 2003-07-28 | 画像処理装置及び歪補正方法 |
JP2003202664A JP4334932B2 (ja) | 2003-07-28 | 2003-07-28 | 画像処理装置及び画像処理方法 |
JP2003-202663 | 2003-07-28 | ||
JP2003-202664 | 2003-07-28 | ||
JP2003-202493 | 2003-07-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005010818A1 true WO2005010818A1 (ja) | 2005-02-03 |
Family
ID=34108564
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/011010 WO2005010818A1 (ja) | 2003-07-28 | 2004-07-27 | 画像処理装置、画像処理方法及び歪補正方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US7813585B2 (ja) |
EP (2) | EP1650705B1 (ja) |
WO (1) | WO2005010818A1 (ja) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06153065A (ja) * | 1992-11-11 | 1994-05-31 | Olympus Optical Co Ltd | 撮像装置 |
JPH06165024A (ja) * | 1992-11-18 | 1994-06-10 | Canon Inc | 撮像装置及び画像再生装置及び映像システム |
JPH06237412A (ja) * | 1993-10-15 | 1994-08-23 | Olympus Optical Co Ltd | 映像処理装置 |
JPH0998340A (ja) * | 1995-09-29 | 1997-04-08 | Sanyo Electric Co Ltd | 撮像装置 |
US5796426A (en) | 1994-05-27 | 1998-08-18 | Warp, Ltd. | Wide-angle image dewarping method and apparatus |
JPH11243508A (ja) * | 1998-02-25 | 1999-09-07 | Matsushita Electric Ind Co Ltd | 画像表示装置 |
JPH11250239A (ja) * | 1998-02-27 | 1999-09-17 | Kyocera Corp | Yuvデータによりディストーション補正を行うディジタル撮像装置 |
JPH11275391A (ja) * | 1998-03-20 | 1999-10-08 | Kyocera Corp | ディストーション補正を選択できるディジタル撮像装置 |
JP2000132673A (ja) * | 1998-10-28 | 2000-05-12 | Sharp Corp | 画像システム |
JP2000312327A (ja) * | 1999-04-28 | 2000-11-07 | Olympus Optical Co Ltd | 画像処理装置 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02252375A (ja) | 1989-03-27 | 1990-10-11 | Canon Inc | Solid-state imaging camera |
JP2925871B2 (ja) | 1992-12-11 | 1999-07-28 | Canon Inc | Solid-state imaging camera |
JP3686695B2 (ja) | 1994-10-20 | 2005-08-24 | Olympus Corp | Image processing apparatus |
JPH08272964A (ja) | 1995-03-30 | 1996-10-18 | Fujitsu Denso Ltd | Palm print imprinting apparatus |
JPH09252391A (ja) | 1996-01-09 | 1997-09-22 | Fuji Photo Film Co Ltd | Image reading apparatus and image receiving apparatus |
JP3631333B2 (ja) * | 1996-08-23 | 2005-03-23 | Sharp Corp | Image processing apparatus |
JPH10224695A (ja) | 1997-02-05 | 1998-08-21 | Sony Corp | Aberration correction apparatus and method |
US5966678A (en) * | 1998-05-18 | 1999-10-12 | The United States Of America As Represented By The Secretary Of The Navy | Method for filtering laser range data |
JP2965030B1 (ja) | 1998-07-27 | 1999-10-18 | Nikon Corp | Scroll display system and recording medium storing a scroll display program |
US6747702B1 (en) * | 1998-12-23 | 2004-06-08 | Eastman Kodak Company | Apparatus and method for producing images without distortion and lateral color aberration |
JP2001101396A (ja) | 1999-09-30 | 2001-04-13 | Toshiba Corp | Image distortion correction processing apparatus and method, and medium storing a program for performing image distortion correction processing |
US6801671B1 (en) * | 1999-11-18 | 2004-10-05 | Minolta Co., Ltd. | Controlled image deterioration correction device with reduction/enlargement |
KR100414083B1 (ko) * | 1999-12-18 | 2004-01-07 | LG Electronics Inc | Image distortion correction method and image display device using the same |
JP3677188B2 (ja) * | 2000-02-17 | 2005-07-27 | Seiko Epson Corp | Image display apparatus and method, and image processing apparatus and method |
US6816625B2 (en) * | 2000-08-16 | 2004-11-09 | Lewis Jr Clarence A | Distortion free image capture system and method |
US7262799B2 (en) * | 2000-10-25 | 2007-08-28 | Canon Kabushiki Kaisha | Image sensing apparatus and its control method, control program, and storage medium |
JP4827213B2 (ja) | 2001-03-12 | 2011-11-30 | MegaChips Corp | Image correction apparatus and image correction method |
JP3853617B2 (ja) | 2001-07-16 | 2006-12-06 | Matsushita Electric Industrial Co Ltd | Iris authentication apparatus |
US6796426B2 (en) * | 2001-11-29 | 2004-09-28 | Ultra Pro Lp | Sleeves and album pages for flat items |
US6707998B2 (en) * | 2002-05-14 | 2004-03-16 | Eastman Kodak Company | Method and system for correcting non-symmetric distortion in an image |
JP2006014016A (ja) * | 2004-06-28 | 2006-01-12 | Seiko Epson Corp | Automatic image correction circuit |
- 2004-07-27 EP EP04771117.1A patent/EP1650705B1/en not_active Expired - Lifetime
- 2004-07-27 US US10/566,408 patent/US7813585B2/en active Active
- 2004-07-27 EP EP12006314.4A patent/EP2533192B1/en not_active Expired - Lifetime
- 2004-07-27 WO PCT/JP2004/011010 patent/WO2005010818A1/ja active Application Filing
Non-Patent Citations (2)
Title |
---|
See also references of EP1650705A4 |
Wolberg, G. et al., "Separable image warping with spatial lookup tables", Computer Graphics, vol. 23, ACM, 1 July 1989, pages 368-378
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010032260A (ja) * | 2008-07-25 | 2010-02-12 | Jfe Steel Corp | Optical system distortion correction method and optical system distortion correction apparatus |
CN103116878A (zh) * | 2013-02-25 | 2013-05-22 | Xu Yuan | Method and device for correcting barrel distortion of an image, and image processing device |
CN103116878B (zh) * | 2013-02-25 | 2015-06-03 | Xu Yuan | Method and device for correcting barrel distortion of an image, and image processing device |
CN109817139A (zh) * | 2017-11-21 | 2019-05-28 | Samsung Electronics Co Ltd | Display driver and electronic device |
Also Published As
Publication number | Publication date |
---|---|
EP2533192A3 (en) | 2015-09-02 |
US7813585B2 (en) | 2010-10-12 |
EP2533192A2 (en) | 2012-12-12 |
US20060188172A1 (en) | 2006-08-24 |
EP1650705A4 (en) | 2010-07-28 |
EP2533192B1 (en) | 2017-06-28 |
EP1650705B1 (en) | 2013-05-01 |
EP1650705A1 (en) | 2006-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005010818A1 (ja) | Image processing apparatus, image processing method, and distortion correction method | |
JP4772281B2 (ja) | Image processing apparatus and image processing method | |
US8369632B2 (en) | Image processing apparatus and imaging apparatus | |
WO2004109597A1 (ja) | Image processing apparatus | |
US8350927B2 (en) | Image-data processing apparatus and image-data processing method | |
EP1549052A1 (en) | Image processing device, image processing system, and image processing method | |
JP4255345B2 (ja) | Imaging apparatus | |
US20070230827A1 (en) | Method and Apparatus for Downscaling a Digital Colour Matrix Image | |
JP2007228019A (ja) | Imaging apparatus | |
JP2004064334A (ja) | Imaging apparatus | |
JP2004362069A (ja) | Image processing apparatus | |
JP5602532B2 (ja) | Image processing apparatus and image processing method | |
US7212237B2 (en) | Digital camera with electronic zooming function | |
JP4334932B2 (ja) | Image processing apparatus and image processing method | |
JP2005045513A (ja) | Image processing apparatus and distortion correction method | |
US7808539B2 (en) | Image processor that controls transfer of pixel signals between an image sensor and a memory | |
US6906748B1 (en) | Electronic camera | |
JP4286124B2 (ja) | Image signal processing apparatus | |
US6348950B1 (en) | Video signal processing circuit and image pickup apparatus using the circuit | |
JP4403409B2 (ja) | Image data processing method and image data processing apparatus | |
JP2007156795A (ja) | Image conversion apparatus | |
JP4424097B2 (ja) | Electronic zoom apparatus | |
JP2000059800A (ja) | Image signal processing circuit | |
WO2018220794A1 (ja) | Data transfer apparatus and data transfer method | |
JP2005011268A (ja) | Image processing apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200480021876.9 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004771117 Country of ref document: EP Ref document number: 2006188172 Country of ref document: US Ref document number: 10566408 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2004771117 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10566408 Country of ref document: US |