WO2022024516A1 - Image correction device, image correction method, program, and recording medium - Google Patents
- Publication number
- WO2022024516A1 (application PCT/JP2021/019281)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- correction
- image
- images
- chart
- representative
- Prior art date
Classifications
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/80; G06T5/92
- G06T7/90 — Determination of colour characteristics
- G06T2207/10016 — Video; image sequence
- G06T2207/10024 — Color image
- G06T2207/30204 — Marker; G06T2207/30208 — Marker matrix
- H04N1/387 — Composing, repositioning or otherwise geometrically modifying originals
- H04N1/60 — Colour correction or control
- H04N1/6027 — Correction or control of colour gradation or colour contrast
- H04N1/6033 — Colour correction or control using test pattern analysis
- H04N1/6083 — Colour correction or control controlled by factors external to the apparatus
- H04N17/002 — Diagnosis, testing or measuring for television cameras
- H04N23/60 — Control of cameras or camera modules
- H04N23/80 — Camera processing pipelines; components thereof
- H04N25/61 — Noise processing of noise originating from the lens unit, e.g. flare, shading, vignetting or "cos4"
- H04N25/611 — Correction of chromatic aberration
Definitions
- The present invention relates to an image correction device, an image correction method, a program, and a recording medium with which an object is photographed by an imaging device and the resulting image showing the object is corrected.
- Conventionally, when an object is photographed by a photographing device such as a camera, image properties such as graphic distortion, brightness, and hue are corrected as follows: a correction chart for correction is photographed before or after the object, a correction parameter is created from the resulting correction chart image, and the correction parameter is used to correct the object image in which the object is shown.
- However, when the user holds the photographing device in hand and shoots, it is difficult to photograph the correction chart and the object while keeping the shooting environment, such as the angle of view and the lighting conditions, constant. There is therefore a problem that it is very difficult to obtain a properly corrected object image.
- Patent Documents 1 to 3 are prior art documents that can be used as a reference for the present invention.
- Patent Document 1 describes an image processing system that generates image processing parameters, such as gray-balance correction and color correction, using a scanned image obtained by reading a calibration image, a captured image of the calibration image taken by the camera of a portable communication terminal, and the imaging conditions of that captured image, and that uses the image processing parameters to correct an image of a subject captured by the camera of the portable communication terminal.
- Patent Document 2 describes a photographing device in which a pedestal, bearing a color bar indicating preset pixel values of each color and the vertices of a polygon of known shape, is photographed; based on the vertex coordinates of the polygon in the captured image, the coordinates of the image are projectively transformed into an output coordinate system, color correction parameters are acquired from the pixel values of the color bar image in the output coordinate system, and the color correction parameters are used to perform color correction and brightness correction of the captured image.
- Patent Document 3 describes an imaging device that displays a calibration test chart for adjusting the reproducibility of the color, brightness, or shape of captured images, images the displayed calibration test chart, calculates correction parameters for captured images based on the color, brightness, or shape of the test chart, and uses the correction parameters to perform correction processing, such as color correction, brightness correction, and shape correction, of captured images.
- An object of the present invention is to provide an image correction device, an image correction method, a program, and a recording medium that can solve the above-mentioned problems of the prior art and can perform high-quality image correction of an object photographed in any shooting environment.
- To achieve this object, the present invention provides an image correction method comprising: a first acquisition step of acquiring a moving image in which a correction chart is captured by a photographing device;
- a second acquisition step of continuously acquiring a moving image in which a sheet-shaped object is photographed at least either before or after the correction chart is photographed by the photographing device;
- an extraction step of extracting, from a plurality of frame images included in the moving images, a plurality of correction chart images showing the correction chart and a plurality of object images showing the object; a generation step of generating, based on the plurality of correction chart images, a correction parameter for correcting a representative object image among the plurality of object images; and a correction step of correcting the representative object image using the correction parameter.
- Preferably, the method includes a discrimination step of determining, for each of the plurality of frame images included in the moving image, whether the frame image is an object image or a correction chart image based on the correlation between adjacent frame images, and the plurality of object images and the plurality of correction chart images are extracted from the frame images based on the result of the discrimination step.
- Preferably, a lens aberration correction parameter that corrects graphic distortion caused by lens aberration of the photographing device is generated, and the lens aberration correction parameter is used to correct the lens-aberration distortion in each of the plurality of correction chart images and each of the plurality of object images.
- Preferably, a tilt correction parameter that corrects graphic distortion caused by tilt at the time of shooting is generated, and the tilt correction parameter is used to correct the tilt distortion in each of the plurality of object images after the lens-aberration distortion has been corrected.
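Tilt (perspective) distortion of a planar chart is conventionally removed with a planar homography estimated from four reference points. The patent does not specify an algorithm, so the following NumPy sketch is only one plausible reading: it estimates a homography by direct linear transformation (DLT) from four corner correspondences; the coordinate values are illustrative, not taken from the document.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from four
    point correspondences by direct linear transformation (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography coefficients span the null space of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map an (N, 2) array of image coordinates through H."""
    pts = np.asarray(pts, dtype=float)
    hom = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return hom[:, :2] / hom[:, 2:3]

# Illustrative values: corners of a tilted chart and their ideal positions.
src = [(0, 0), (100, 0), (110, 90), (-5, 95)]
dst = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography_from_points(src, dst)
```

With the four corners of the photographed chart as `src` and their known ideal positions as `dst`, every pixel coordinate of the object image can be remapped through `H` to remove the tilt distortion.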
- Preferably, the correction chart includes a plurality of achromatic patches having different densities, and, for each of the plurality of achromatic patches, a plurality of density patches of the same density are arranged spatially discretely.
- Preferably, a plurality of achromatic patch images corresponding to the plurality of achromatic patches are extracted from the representative correction chart image; for each density, an interpolated image having the size of the representative correction chart image is generated by interpolating, over every pixel, the signal values at the positions of the density patch images of that density; and the spatial mean value, i.e. the average of the signal values at all pixel positions of each interpolated image, is calculated.
- Preferably, a gradation correction parameter is generated that performs gradation correction so that there is no difference between the previously acquired reflectances of the plurality of achromatic patches and the spatial mean values of the plurality of interpolated images, and the gradation correction parameter is used to perform gradation correction of the plurality of interpolated images, the representative correction chart image, and the representative object image after the tilt-distortion correction.
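One hedged reading of the gradation correction step is a one-dimensional tone curve that maps the measured signal value of each achromatic gradation onto its known target value, interpolating between gradations. The four patch values below are illustrative, not taken from the patent.

```python
import numpy as np

def gradation_correction(image, measured, target):
    """Map measured achromatic-patch signal values onto their known
    target values by piecewise-linear interpolation (1-D tone curve)."""
    order = np.argsort(measured)
    xp = np.asarray(measured, dtype=float)[order]
    fp = np.asarray(target, dtype=float)[order]
    flat = np.interp(image.astype(float).ravel(), xp, fp)
    return flat.reshape(image.shape)

# Illustrative values: four gradations measured in the chart image
# versus target values derived from the known patch reflectances.
img = np.array([[50, 120], [180, 230]])
corrected = gradation_correction(img, [50, 120, 180, 230], [10, 80, 160, 240])
```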
- Preferably, a two-dimensional shading correction parameter is generated that flattens the distribution of two-dimensional signal values of a representative interpolated image among the gradation-corrected interpolated images, such that the spatial mean values of the interpolated images do not change before and after the shading correction, and the shading correction parameter is used to perform shading correction of the representative correction chart image and the representative object image.
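A minimal interpretation of this two-dimensional shading correction is a gain map derived from the representative interpolated image: dividing its spatial mean by the local signal value flattens the distribution while, by construction, leaving the spatial mean unchanged. The NumPy sketch below assumes single-channel 2-D arrays and illustrative data; it is not the patent's stated implementation.

```python
import numpy as np

def shading_correction(target_image, interpolated):
    """Flatten the 2-D signal distribution of the representative
    interpolated image via a gain map that, by construction, leaves
    its spatial mean value unchanged; apply the same gain elsewhere."""
    gain = interpolated.mean() / interpolated
    return target_image * gain

# Illustrative data: a horizontal illumination fall-off (mean 100).
chart_plane = np.tile(np.linspace(80.0, 120.0, 5), (4, 1))
flattened = shading_correction(chart_plane, chart_plane)
```

Applying the same `gain` to the representative correction chart image and the representative object image removes the illumination unevenness from both.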
- Preferably, the correction chart further includes a plurality of chromatic patches having different colors.
- Preferably, a plurality of chromatic patch images corresponding to the plurality of chromatic patches are extracted from the representative correction chart image and their signal values are acquired;
- a color correction parameter is generated so that there is no difference between the previously acquired signal value of each chromatic patch and the signal value of the corresponding chromatic patch image, and the color correction parameter is used to perform color correction of the representative object image after the shading correction.
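Color correction from chromatic patches is commonly implemented as a 3x3 matrix fitted by least squares between measured and reference patch signal values. The patent does not state the model, so the following is an assumption-laden sketch: the linear no-offset model, the patch RGB values, and the cross-talk matrix `D` are all illustrative.

```python
import numpy as np

def color_correction_matrix(measured, reference):
    """Fit a 3x3 matrix M (least squares) with measured @ M ≈ reference
    over the chromatic patches; a linear model without offset term."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

# Illustrative values: reference patch RGBs, and the same patches as
# "measured" through a made-up cross-talk matrix D.
reference = np.array([[200., 30., 30.], [30., 200., 30.],
                      [30., 30., 200.], [120., 120., 120.]])
D = np.array([[0.90, 0.05, 0.02], [0.03, 0.85, 0.04], [0.01, 0.02, 0.95]])
measured = reference @ D
M = color_correction_matrix(measured, reference)
```

Every pixel of the shading-corrected representative object image would then be multiplied by `M` to bring its colors into agreement with the reference patch values.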
- Preferably, the object is a measurement sheet that develops color at a density corresponding to an applied external energy.
- The present invention also provides an image correction device comprising a processor, wherein the processor: acquires a moving image in which a correction chart is captured by a photographing device;
- continuously acquires a moving image in which a sheet-shaped object is photographed at least either before or after the correction chart is photographed; extracts, from a plurality of frame images included in the moving images, a plurality of correction chart images showing the correction chart and a plurality of object images showing the object; generates, based on the plurality of correction chart images, a correction parameter for correcting a representative object image among the plurality of object images;
- and corrects the representative object image using the correction parameter.
- the present invention also provides a program for causing a computer to execute each step of any of the above image correction methods.
- the present invention also provides a computer-readable recording medium in which a program for causing a computer to execute each step of any of the above image correction methods is recorded.
- According to the present invention, the correction chart images and the object images can be extracted from those frame images of the moving image whose shooting states are good and close to one another, and the object image can be corrected based on the correction chart images. Therefore, even when the user photographs the object in an arbitrary shooting environment, a high-quality corrected image of the object can be obtained easily.
- FIG. 1 is a conceptual diagram of an embodiment showing the configuration of the image correction device of the present invention.
- the image correction device 10 shown in FIG. 1 includes a document stand 12 and a photographing device 14.
- The document stand 12 is a stand on which the correction chart 18 and the object 16 are arranged (placed) for photographing.
- A correction chart 18 is formed on one surface (the upper surface in FIG. 1) of the document stand 12 by printing or by laminating color charts.
- At the time of shooting, the object 16 is placed on the correction chart 18 formed on that surface of the document stand 12.
- The correction chart 18 is used to create correction parameters for correcting the correction chart image showing the correction chart 18 and the object image showing the object 16, both captured by the photographing device 14.
- the correction chart 18 is not particularly limited as long as it can be used for image correction of the correction chart image and the object image, and examples thereof include a grid pattern, an achromatic color pattern, and a chromatic color pattern.
- In the grid pattern, a plurality of rectangular figures are arranged two-dimensionally, formed by a plurality of lines extending in one direction and a plurality of lines extending in the direction orthogonal to them.
- The grid pattern 18a shown in FIG. 4 is a black-and-white grid formed by a plurality of black lines on a white background, used for correcting white balance.
- the grid pattern is used for correction of graphic distortion, correction of white balance, and the like.
- the achromatic color pattern has a configuration in which a plurality of achromatic color patches having different densities are arranged two-dimensionally.
- The achromatic pattern 18b shown in FIG. 5 includes four types (four gradations) of rectangular achromatic patches having different densities, and, for each gradation, a plurality of rectangular density patches of the same density are arranged spatially discretely.
- the achromatic color pattern is used for correction of graphic distortion, gradation correction, shading correction, and the like.
- the number of achromatic patches (number of gradations) included in the achromatic pattern is not particularly limited.
- the chromatic color pattern has a configuration in which a plurality of chromatic color patches having different colors are arranged two-dimensionally.
- The chromatic patterns 18c and 18d shown in FIGS. 6 and 7 include a plurality of rectangular chromatic patches having different colors and a plurality of rectangular achromatic patches having different densities, and these chromatic and achromatic patches are arranged two-dimensionally.
- the chromatic color pattern is used for graphic distortion correction, gradation correction, shading correction, color correction, and the like.
- The number of chromatic patches included in the chromatic pattern is not particularly limited, but is preferably 10 or more in order to perform color correction and the like. The colors of the chromatic patches are also not limited; for example, standard colors such as RGB (red, green, blue) and CMY (cyan, magenta, yellow), their intermediate colors, or any colors differing in hue, saturation, and lightness may be used.
- The correction chart 18 may be composed of only one of the grid pattern 18a, the achromatic pattern 18b, or the chromatic patterns 18c and 18d, or, as shown in FIG. 8, may be configured by combining two or more patterns.
- The correction chart 18 shown in FIG. 8 combines the achromatic pattern 18b shown in FIG. 5 with the chromatic pattern 18c shown in FIG. 6.
- The object 16 is a sheet-like object to be imaged, such as paper, a sheet, or a film, on at least one surface of which a figure of arbitrary shape and density is formed.
- At the time of photographing, the object 16 is placed on the correction chart 18 with the figure-bearing surface facing upward.
- The object 16 is not particularly limited; examples include measurement sheets such as pressure-sensitive sheets, heat-sensitive sheets, and ultraviolet (or near-infrared or mid-infrared) sheets.
- A pressure-sensitive sheet, heat-sensitive sheet, ultraviolet sheet, or the like develops color at a density corresponding to the external energy applied to it, such as pressure, temperature, or ultraviolet rays (or near-infrared or mid-infrared rays).
- The distribution and magnitude of the external energy applied to a measurement sheet can therefore be measured from the shape and color density of the figure formed on its surface.
- The photographing device 14 continuously photographs the correction chart 18 and the object 16 in sequence, acquires a moving image of them, and corrects the object image based on the correction chart images among the plurality of frame images included in the moving image, converting it into a corrected image (digital data).
- the photographing device 14 of the present embodiment is a smartphone having a camera function, but is not limited to this, and may be a digital camera or a digital video camera having a moving image photographing function.
- FIG. 2 is a block diagram of an embodiment showing the internal configuration of the photographing apparatus.
- the photographing device 14 includes a photographing unit 20, a display unit 22, an operation unit 24, a storage unit 26, and a processor 28.
- the display unit 22, the operation unit 24, the photographing unit 20, the storage unit 26, and the processor 28 are bidirectionally connected via the internal bus 42, and data can be transmitted to and received from each other.
- the photographing unit 20 photographs a subject under the control of the processor 28 and outputs the image (still image and moving image). In the case of the present embodiment, the photographing unit 20 continuously photographs the correction chart 18 and the object 16 in sequence, and outputs a moving image thereof.
- the shooting unit 20 corresponds to a camera function of a smartphone.
- the display unit 22 displays various images, information, and the like under the control of the processor 28.
- the display unit 22 is composed of, for example, an LCD (Liquid Crystal Display), an organic EL (Organic Electroluminescence) display, an LED (Light Emitting Diode) display, a display such as electronic paper, or the like. Further, the display unit 22 may be provided with a touch panel that accepts a touch operation by the user of the photographing device 14.
- the operation unit 24 receives the user's operation under the control of the processor 28.
- the operation unit 24 includes a plurality of buttons provided on the outer surface of the housing of the photographing device 14, a graphical user interface of the touch panel included in the display unit 22, and the like. When shooting a subject or setting various setting items, the user performs a corresponding operation through the operation unit 24.
- the storage unit 26 stores data and the like of an image (still image and moving image) of a subject photographed by the photographing unit 20 under the control of the processor 28.
- the storage unit 26 stores the data of the moving image in which the correction chart 18 and the object 16 are captured. Further, the storage unit 26 stores a program executed by the processor 28, various data, and the like.
- the processor 28 controls each part of the photographing device 14 and executes various processes including photographing of moving images, storage of moving image data, display of moving images, and the like.
- The processor 28 includes an acquisition processing unit 30, a discrimination processing unit 32, an extraction processing unit 34, a generation processing unit 36, a correction processing unit 38, and a display processing unit 40.
- By executing the program stored in the storage unit 26, the processor 28 functions as each of these processing units.
- the acquisition processing unit 30, the discrimination processing unit 32, the extraction processing unit 34, the generation processing unit 36, the correction processing unit 38, and the display processing unit 40 are bidirectionally connected via the internal bus 42, and data can be transmitted and received to each other. It is possible.
- The acquisition processing unit 30 controls the operation of the photographing unit 20 in response to the user's operation (shooting instruction) through the operation unit 24, and executes a first acquisition process (first acquisition step) of acquiring a moving image in which the photographing unit 20 (photographing device 14) captures the correction chart 18,
- and a second acquisition process (second acquisition step) of continuously acquiring a moving image in which the object 16 is photographed at least either before or after the correction chart 18 is photographed.
- That is, the acquisition processing unit 30 may continuously acquire the moving image of the object 16 after the correction chart 18 is photographed, or the moving image of the correction chart 18 after the object 16 is photographed.
- Alternatively, as shown in FIG. 9, the acquisition processing unit 30 may continuously acquire a moving image in which the correction chart 18 is photographed, then the object 16, and then the correction chart 18 again.
- The discrimination processing unit 32 executes a discrimination process that determines, for each of the plurality of frame images included in the moving image acquired by the acquisition processes, whether the frame image is an object image or a correction chart image, based on the correlation between adjacent frame images.
- For example, the feature amount of each frame image included in the moving image is calculated by image analysis, and the correlation coefficient between adjacent frame images is calculated from their feature amounts.
- A representative frame image f is stored for each range in which the correlation coefficient between adjacent frame images is at or above a threshold value.
- The range of frame image f0 and the range of frame image f1 can then be determined to be correction chart images and object images, respectively, from the correlation coefficients between the representative frame images f0 and f1 and a correction chart image for correlation calculation acquired in advance.
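The adjacent-frame correlation logic might be sketched as follows; the threshold value and the use of the raw pixel correlation coefficient as the "feature amount" are assumptions for illustration, not details given by the patent.

```python
import numpy as np

def corrcoef(a, b):
    """Correlation coefficient between two (non-constant) frames."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def segment_frames(frames, threshold=0.9):
    """Split a frame sequence into runs of mutually similar frames:
    a new range starts whenever the adjacent-frame correlation
    coefficient falls below the threshold."""
    segments, start = [], 0
    for i in range(1, len(frames)):
        if corrcoef(frames[i - 1], frames[i]) < threshold:
            segments.append((start, i))
            start = i
    segments.append((start, len(frames)))
    return segments

# Illustrative frames: three similar "chart" frames, then two
# dissimilar "object" frames.
A = np.arange(16.0).reshape(4, 4)
B = A[::-1, ::-1]
ranges = segment_frames([A, A + 1, A + 2, B, B + 1])
```

Each resulting range would then be classified as chart or object by correlating its representative frame against the pre-acquired correction chart image.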
- Based on the result of the discrimination process, the extraction processing unit 34 executes an extraction process (extraction step) of extracting, from among the plurality of frame images included in the moving image, a plurality of correction chart images showing the correction chart 18 and a plurality of object images showing the object 16.
- The extraction processing unit 34 may extract the plurality of object images from the frame images whose shooting state is best among the plurality of frame images included in the moving image. For example, for each frame image in the moving image, one or more items such as the degree of camera shake, the amount of angle-of-view deviation, and the degree of lighting unevenness (variation) are analyzed, a score is calculated for each item, and the total of all item scores is used as the score of the frame image. The plurality of object images are then extracted starting from the frame images with the highest scores, i.e. those with the best shooting state.
- The extraction processing unit 34 may likewise extract the plurality of correction chart images from the frame images whose shooting state is closest to that of the plurality of object images, for example the frames whose shooting time is closest to the object images, or those with the highest scores.
- The score may additionally be weighted by the correlation coefficient with the correction chart image for correlation calculation acquired in advance, so that a higher correlation coefficient yields a higher score.
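The per-frame scoring can be illustrated with a single item, the degree of camera shake, estimated here as the variance of a discrete Laplacian (a common sharpness proxy; the patent does not prescribe a specific measure, so both the metric and the sample frames are assumptions).

```python
import numpy as np

def sharpness_score(frame):
    """Score one item, the degree of camera shake, as the variance of
    a discrete Laplacian (higher = sharper; an illustrative proxy)."""
    f = frame.astype(float)
    lap = (-4.0 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(lap.var())

def best_frames(frames, k):
    """Indices of the k frames with the highest score."""
    scores = [sharpness_score(f) for f in frames]
    return sorted(np.argsort(scores)[::-1][:k].tolist())

# Illustrative frames: a featureless frame, a high-contrast striped
# frame, and a smooth gradient; the striped frame scores highest.
flat = np.full((8, 8), 128.0)
stripes = np.zeros((8, 8)); stripes[::2] = 255.0
ramp = np.tile(np.arange(8.0), (8, 1))
picked = best_frames([flat, stripes, ramp], 1)
```

In the full scheme described above, several such item scores would be summed per frame before ranking.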
- The generation processing unit 36 executes a generation process (generation step) of generating, based on the plurality of correction chart images extracted by the extraction process, correction parameters for correcting a representative object image that represents the plurality of object images extracted by the extraction process.
- The correction parameters generated by the generation processing unit 36 are not particularly limited; examples include parameters for image corrections such as graphic distortion correction, gradation correction, shading correction, and color correction.
- the correction processing unit 38 executes a correction process (correction step) for correcting the representative object image using the correction parameters generated by the generation process.
- the display processing unit 40 executes a display process (display process) for displaying various images, information, and the like on the display unit 22.
- the user holds the photographing device 14, which is a smartphone, in an arbitrary shooting environment in which the angle of view and the lighting conditions (external light, natural light, ambient light) at the time of shooting change from moment to moment and cannot be kept constant.
- the start of shooting is instructed by the user's operation through the operation unit 24.
- in response, the acquisition processing unit 30 controls the operation of the photographing unit 20, and the photographing unit 20 executes a first acquisition process of acquiring a moving image in which the correction chart 18 is photographed (step S1).
- next, the user places the object 16 on the correction chart 18 while holding the photographing device 14 in the same arbitrary shooting environment.
- a second acquisition process of continuously acquiring a moving image in which the object 16 is photographed is then executed by the photographing unit 20 (step S2).
- since the correction chart 18 is formed on the platen 12, that is, the correction chart is fixed to the platen 12, there is an advantage that the relative positional relationship between the correction chart 18 and the object 16 is fixed when the object 16 is photographed.
- in this way, one moving image in which the correction chart 18 and the object 16 are photographed in sequence is acquired.
- alternatively, the moving image of the correction chart 18 and the moving image of the object 16 may be acquired as separate moving images.
- because the user photographs the correction chart 18 and the object 16 in an arbitrary shooting environment, the moving image includes image blurring due to camera shake, graphic distortion due to tilting during shooting, and image unevenness (unevenness in brightness, hue, and the like) due to uneven lighting during shooting.
- the discrimination processing unit 32 executes a discrimination process that determines, based on the correlation between adjacent frame images among the plurality of frame images included in the moving image, whether each of the plurality of frame images is an object image or a correction chart image (step S3).
- subsequently, the extraction processing unit 34 executes an extraction process that extracts a plurality of correction chart images and a plurality of object images from the plurality of frame images included in the moving image, based on the discrimination result of the discrimination process (step S4).
- the extraction processing unit 34 extracts a frame image as an object image when the frame image is determined to be an object image, and extracts a frame image as a correction chart image when the frame image is determined to be a correction chart image.
- the discrimination process and the extraction process may be executed collectively after the photographing of the correction chart 18 and the object 16 is completed, or may be executed sequentially while photographing.
- when the discrimination process and the extraction process are executed sequentially while shooting, each time a part of the moving image of the correction chart 18 and the object 16 is acquired, the discrimination process and the extraction process are executed sequentially on the frame images included in that part of the moving image.
- in the extraction process, the correction chart images and the object images may be discriminated automatically by the extraction processing unit 34, or may be discriminated manually (visually) by the user. When the user performs the discrimination, the extraction processing unit 34 extracts the plurality of correction chart images and the plurality of object images from the plurality of frame images included in the moving image in accordance with the user's operation (input of the discrimination result) through the operation unit 24.
- subsequently, the generation processing unit 36 executes the generation process that generates correction parameters for correcting the representative object image of the plurality of object images extracted by the extraction process, based on the plurality of correction chart images extracted by the extraction process (step S5).
- the generation processing unit 36 generates correction parameters for performing graphic distortion correction, gradation correction, shading correction, color correction, and the like.
- subsequently, the correction processing unit 38 executes the correction process that corrects the representative object image using the correction parameters generated by the generation process (step S6).
- the correction processing unit 38 performs graphic distortion correction, gradation correction, shading correction, color correction, and the like, based on the corresponding correction parameters.
- in this way, correction chart images whose shooting state is good and close to that of the object images, and object images, can be extracted from the plurality of frame images included in the moving image, and the object image can be corrected based on the correction chart images. Therefore, even when the user photographs the object 16 in an arbitrary shooting environment, the object 16 can be image-corrected and digitized more easily and with higher quality.
- the distribution of color density and the like of the measurement sheet changes with the passage of time. There is therefore a demand to simply photograph and digitize the measurement sheet at the site of use. In this case, after the measurement sheet is digitized, the distribution and magnitude of the external energy applied to the measurement sheet can be analyzed using the resulting digital data.
- next, correction processes such as graphic distortion correction, gradation correction, shading correction, and color correction will be described.
- the frame images included in the moving image contain graphic distortion due to lens aberration, graphic distortion due to tilting at the time of shooting, and the like.
- first, a lens aberration correction parameter for correcting graphic distortion due to lens aberration is generated based on the plurality of correction chart images.
- the lens aberration correction parameter is not particularly limited, but can be generated, for example, using the black-and-white grid pattern 18a shown in FIG. 4 and the known Zhang method (Zhengyou Zhang, "A Flexible New Technique for Camera Calibration", Microsoft Research Technical Report, MSR-TR-98-71, December 2, 1998).
- using the lens aberration correction parameter, the graphic distortion due to lens aberration is corrected for each of the plurality of correction chart images and each of the plurality of object images.
- the method of determining the representative correction chart image is not particularly limited. For example, among the plurality of correction chart images after correction of graphic distortion due to lens aberration, the one correction chart image whose shooting state is best, in other words, whose score is highest, may be selected as the representative correction chart image.
- alternatively, the representative correction chart image may be created by calculating, at each pixel position, the average of the signal values at or below a threshold value among the plurality of signal values at the corresponding pixel positions of the plurality of correction chart images after correction of graphic distortion due to lens aberration. That is, the average of the plurality of signal values at the corresponding pixel positions of the plurality of correction chart images becomes the signal value at the corresponding pixel position of the representative correction chart image.
- a frame image may contain signal values that exceed the threshold value due to strong light caused by specular reflection of the illumination. That is, the threshold value is used to identify signal values affected by strong specularly reflected light. By excluding signal values exceeding the threshold value, the representative correction chart image can be generated using only signal values at or below the threshold value, which are hardly affected by specular reflection.
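The threshold-excluded per-pixel average described above can be sketched as follows. This is an illustrative implementation over small nested-list "images", not the patent's own code; the pixel values and threshold are made up, and the fallback for a pixel where every value exceeds the threshold is an assumption.

```python
# Sketch: build a representative image by averaging, at each pixel
# position, only the signal values at or below a threshold, so pixels
# blown out by specular reflection are excluded from the average.

def representative(images, threshold):
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y][x] for img in images if img[y][x] <= threshold]
            # Assumed fallback: plain average if every value exceeds the threshold.
            if not vals:
                vals = [img[y][x] for img in images]
            out[y][x] = sum(vals) / len(vals)
    return out

imgs = [
    [[100, 110], [120, 130]],
    [[102, 250], [118, 132]],   # 250: specular highlight, excluded at threshold 200
]
rep = representative(imgs, threshold=200)
print(rep)  # [[101.0, 110.0], [119.0, 131.0]]
```

The same routine serves for the representative correction chart image, the representative object image, and the representative interpolated image, which the text builds identically.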
- there need only be four or more reference points, and they preferably include four reference points forming a square or a rectangle. The region formed by the four reference points is preferably larger than the figure formed on the object 16 in order to perform tilt correction with high accuracy.
- the tilt correction of an object image may be performed using tilt correction parameters generated based on the representative correction chart image, or using tilt correction parameters generated based on the object image itself.
- tilt correction parameters based on an object image can be generated from the portion of the correction chart that appears within the object image.
- next, the representative object image of the plurality of object images after correction of graphic distortion due to tilt is determined.
- the one object image whose shooting state is best, in other words, whose score is highest, may be selected as the representative object image.
- alternatively, the representative object image may be created by calculating, at each pixel position, the average of the signal values at or below the threshold value among the plurality of signal values at the corresponding pixel positions of the plurality of object images after correction of graphic distortion due to tilt. That is, the average of the plurality of signal values at the corresponding pixel positions of the plurality of object images becomes the signal value at the corresponding pixel position of the representative object image. As before, by excluding signal values exceeding the threshold value, the representative object image can be generated using only signal values at or below the threshold value, which are hardly affected by specular reflection.
- gradation correction: when performing gradation correction, first, a plurality of achromatic patch images corresponding to the plurality of achromatic patches included in, for example, the achromatic pattern 18b shown in FIG. 5 are extracted from the representative correction chart image, and, for each of the achromatic patch images, the signal values at the positions of the plurality of density patch images corresponding to the positions of the plurality of spatially separated density patches of the same density are acquired.
- for each of the achromatic patch images, a signal value at each pixel position of an interpolated image having the size of the representative correction chart image is calculated by interpolation based on the signal values at the positions of the density patch images of the same density, thereby creating a plurality of interpolated images corresponding to the plurality of achromatic patch images.
- for each interpolated image, a spatial mean value, which is the average of the signal values at all pixel positions, is calculated.
- a gradation correction parameter is generated so that the difference between the reflectances (target values) of the plurality of achromatic patches acquired in advance and the spatial mean values (actual measured values) of the plurality of interpolated images is eliminated, that is, so that the gradation of density becomes linear.
- the reflectance of an achromatic patch is the spectral reflectance, in the wavelength range to which the imaging system (image sensor) is sensitive, for the density of that achromatic patch when the patch is photographed by the imaging system.
- the method of acquiring the reflectance of an achromatic patch is not particularly limited; for example, it can be acquired by photographing the achromatic patch with the imaging system and measuring the reflectance (signal value) corresponding to the density of the patch. If high accuracy is not required for gradation correction, the target values (set values) used when the achromatic patches were created may be used instead.
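One common way to realize the mapping above is a piecewise-linear tone curve from measured values to target values. The sketch below is an assumption about how such a gradation correction parameter could be implemented, not the patent's own method, and the patch values are invented for illustration.

```python
# Sketch: a 1-D gradation correction built as piecewise-linear
# interpolation from measured spatial means (actual values) to the
# patch reflectances (target values), so the corrected gradation
# matches the targets (and is linear if the targets are linear).

def make_gradation_lut(measured, target):
    """measured/target: equal-length ascending lists of patch values."""
    def correct(v):
        if v <= measured[0]:
            return target[0]
        if v >= measured[-1]:
            return target[-1]
        for i in range(1, len(measured)):
            if v <= measured[i]:
                t = (v - measured[i - 1]) / (measured[i] - measured[i - 1])
                return target[i - 1] + t * (target[i] - target[i - 1])
    return correct

measured = [20.0, 80.0, 180.0]   # spatial means of the interpolated images
target = [10.0, 50.0, 90.0]      # reflectances acquired in advance
correct = make_gradation_lut(measured, target)
print(correct(80.0))   # 50.0 — a patch value maps exactly to its target
print(correct(130.0))  # 70.0 — halfway between the second and third patches
```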
- shading correction: when performing shading correction, first, based on the plurality of interpolated images after gradation correction, a two-dimensional shading correction parameter is created such that the spatial mean values of the interpolated images do not change before and after shading correction, and such that, after shading correction, the two-dimensional distribution of signal values of the representative interpolated image of the plurality of interpolated images becomes flat.
- the method of determining the representative interpolated image is not particularly limited; for example, one interpolated image may be selected as the representative interpolated image from the plurality of interpolated images after gradation correction.
- alternatively, the representative interpolated image may be created by calculating, at each pixel position, the average of the signal values at or below the threshold value among the plurality of signal values at the corresponding pixel positions of the plurality of interpolated images after gradation correction. That is, the average of the plurality of signal values at the corresponding pixel positions of the plurality of interpolated images becomes the signal value at the corresponding pixel position of the representative interpolated image.
- using the shading correction parameter, shading correction of the representative correction chart image and the representative object image is performed.
- let (x, y) be each pixel position of the image to be shading-corrected, A(x, y) the signal value at each pixel position before shading correction, B(x, y) the signal value after shading correction, and S(x, y) the two-dimensional shading correction parameter at each pixel position. The signal value at each pixel position of the image after shading correction is then calculated by:

B(x, y) = S(x, y) × A(x, y)
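A minimal sketch of B(x, y) = S(x, y) × A(x, y): the gain field S is derived here from a representative interpolated image so that the corrected image is flat while its spatial mean is preserved. The mean-divided construction of S is an assumption consistent with those two constraints, not a formula stated in the text, and the pixel values are invented.

```python
# Sketch: two-dimensional shading correction B(x, y) = S(x, y) * A(x, y).
# S is chosen as mean(R) / R(x, y) from a representative image R, so the
# corrected representative image is flat and its spatial mean unchanged.

def shading_params(rep):
    mean = sum(sum(row) for row in rep) / (len(rep) * len(rep[0]))
    return [[mean / v for v in row] for row in rep]

def apply_shading(a, s):
    return [[s[y][x] * a[y][x] for x in range(len(a[0]))] for y in range(len(a))]

rep = [[50.0, 200.0], [100.0, 100.0]]      # uneven representative image
s = shading_params(rep)                    # S(x, y) = mean / R(x, y)
flat = apply_shading(rep, s)
print(flat)  # [[112.5, 112.5], [112.5, 112.5]] — flat, mean preserved
```

The same gain field s is then applied to the representative correction chart image and the representative object image.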
- color correction: when performing color correction, first, a plurality of chromatic patch images corresponding to the plurality of chromatic patches included in, for example, the chromatic pattern 18c shown in FIG. 6 are extracted from the representative correction chart image, and the signal value of each chromatic patch image is acquired.
- a color correction parameter is generated so that the difference between the signal value (target value) of each of the plurality of chromatic patches acquired in advance and the signal value (actual measured value) of each of the corresponding chromatic patch images is eliminated.
- the method of acquiring the signal values of the chromatic patches is not particularly limited; as with the achromatic patches, they can be acquired by measuring the chromatic patches. If high accuracy is not required for color correction, the target values (set values) used when the chromatic patches were created may be used instead.
- the method of generating the color correction parameter is not particularly limited; for example, using known 3-by-3 matrix processing, the relationship between the signal values (target values) of the RGB chromatic patches and the signal values (actual measured values) of the RGB chromatic patch images can be expressed and converted by a 3-by-3 matrix calculation.
- the transformation matrix can be obtained by a known technique, for example, the least squares method.
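The 3-by-3 transformation can be sketched as below. This is an illustrative exactly-determined case with three hypothetical patch pairs, solved by Gaussian elimination; with more than three patches, the same normal-equation idea extends to a least-squares fit. The RGB values are invented.

```python
# Sketch: derive a 3x3 color correction matrix M so that
# M @ measured_patch ≈ target_patch for each chromatic patch
# (three-patch, exactly determined case).

def solve(a, b):
    """Solve a 3x3 linear system a.x = b by Gauss-Jordan elimination."""
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][k] - f * m[col][k] for k in range(n + 1)]
    return [m[i][n] / m[i][i] for i in range(n)]

def color_matrix(measured, target):
    """Row j of M satisfies sum_k M[j][k] * measured[i][k] = target[i][j]."""
    return [solve(measured, [target[i][j] for i in range(3)]) for j in range(3)]

measured = [[200.0, 20.0, 10.0], [15.0, 190.0, 25.0], [5.0, 30.0, 210.0]]
target = [[230.0, 10.0, 5.0], [10.0, 225.0, 15.0], [0.0, 20.0, 240.0]]
M = color_matrix(measured, target)
corrected = [sum(M[j][k] * measured[0][k] for k in range(3)) for j in range(3)]
# corrected is (up to rounding) target[0] = [230.0, 10.0, 5.0]
```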
- a photographing aid 44 may be used for photographing. That is, the image correction device shown in FIG. 11 is obtained by adding the photographing aid 44 to the image correction device described earlier.
- the photographing aid 44 includes an installation stand 46 and an auxiliary light 48.
- the installation stand 46 is a stand on which the photographing device 14 is placed at the time of shooting.
- the auxiliary light 48 illuminates the correction chart 18, the object 16, and the like at the time of shooting. Illuminating with the auxiliary light 48 reduces unevenness of lighting during shooting.
- by photographing the correction chart and the object 16 with the photographing aid 44, the graphic distortion and lighting unevenness contained in the correction chart images and object images included in the moving image can be reduced, so the image quality of the object image after image correction can be improved.
- the present invention can be realized as an application program that runs on a photographing device such as a smartphone.
- the image correction device may also include a server that operates in cooperation with the application program on the photographing device.
- in that case, the photographing device, acting as a client, executes the first acquisition process, the second acquisition process, the display process, and the like.
- the server receives the moving image from the photographing device, executes at least one of the discrimination process, the extraction process, the generation process, and the correction process, and transmits the result of the processing to the photographing device.
- the photographing device can then execute the subsequent processing according to the result received from the server.
- the processors include general-purpose processors that execute software (programs) and function as various processing units, such as a CPU (Central Processing Unit); processors whose circuit configuration can be changed after manufacture, such as an FPGA (Field Programmable Gate Array) or other programmable logic device (PLD); and dedicated electric circuits having a circuit configuration designed specifically for a particular process, such as an ASIC (Application Specific Integrated Circuit).
- one processing unit may be composed of one of these various processors, or of a combination of two or more processors of the same or different types, for example, a combination of a plurality of FPGAs or a combination of an FPGA and a CPU. A plurality of processing units may also be composed of one of the various processors, or two or more of the plurality of processing units may be composed collectively of one processor.
- for example, a processor may be configured as a combination of one or more CPUs and software, with this one processor functioning as a plurality of processing units.
- as another example, as typified by a system on chip (SoC), a processor that realizes the functions of an entire system including a plurality of processing units with a single IC (Integrated Circuit) chip may be used.
- the hardware configuration of these various processors is, more specifically, electric circuitry combining circuit elements such as semiconductor elements.
- the method of the present invention can be carried out, for example, by a program that causes a computer (processor) to execute each of its steps. A computer-readable recording medium on which this program is recorded can also be provided.
- 10 Image correction device
- 12 Platen (document stand)
- 14 Photographing device
- 16 Object
- 18 Correction chart
- 20 Photographing unit
- 22 Display unit
- 24 Operation unit
- 26 Storage unit
- 28 Processor
- 30 Acquisition processing unit
- 32 Discrimination processing unit
- 34 Extraction processing unit
- 36 Generation processing unit
- 38 Correction processing unit
- 40 Display processing unit
- 42 Internal bus
- 44 Photographing aid
- 46 Installation stand
- 48 Auxiliary light
Abstract
Description
A second acquisition step of continuously acquiring, with the photographing device, a moving image in which a sheet-like object is photographed at least either before or after the correction chart is photographed;
an extraction step of extracting, from a plurality of frame images included in the moving image, a plurality of correction chart images in which the correction chart appears and a plurality of object images in which the object appears;
a generation step of generating, based on the plurality of correction chart images, correction parameters for correcting a representative object image of the plurality of object images; and
a correction step of correcting the representative object image using the correction parameters: an image correction method including these steps is provided.
Preferably, the plurality of object images and the plurality of correction chart images are extracted from the plurality of frame images included in the moving image based on the discrimination result of a discrimination step.
Preferably, graphic distortion due to lens aberration is corrected for each of the plurality of correction chart images and each of the plurality of object images using a lens aberration correction parameter.
Preferably, graphic distortion due to tilt is corrected for each of the plurality of object images after correction of the graphic distortion due to lens aberration, using a tilt correction parameter.
Preferably, a plurality of achromatic patch images corresponding to a plurality of achromatic patches are extracted from the representative correction chart image, and, for each of the plurality of achromatic patch images, signal values at the positions of a plurality of density patch images corresponding to the positions of a plurality of density patches of the same density are acquired;
for each of the plurality of achromatic patch images, a signal value at each pixel position of an interpolated image having the size of the representative correction chart image is calculated by interpolation based on the signal values at the positions of the plurality of density patch images corresponding to the positions of the density patches of the same density, thereby creating a plurality of interpolated images corresponding to the plurality of achromatic patch images;
for each of the plurality of interpolated images, a spatial mean value, which is the average of the signal values at all pixel positions, is calculated;
a gradation correction parameter for performing gradation correction is generated so that the difference between the reflectances of the plurality of achromatic patches acquired in advance and the spatial mean values of the plurality of interpolated images is eliminated; and
gradation correction of the plurality of interpolated images, the representative correction chart image, and the representative object image of the plurality of object images after correction of the graphic distortion due to tilt is preferably performed using the gradation correction parameter.
Preferably, shading correction of the representative correction chart image and the representative object image is performed using the shading correction parameter.
Preferably, a plurality of chromatic patch images corresponding to a plurality of chromatic patches are extracted from the representative correction chart image, and the signal value of each of the plurality of chromatic patch images is acquired;
a color correction parameter for performing color correction is generated so that the difference between the signal value of each of the plurality of chromatic patches acquired in advance and the signal value of each of the corresponding plurality of chromatic patch images is eliminated; and
color correction of the representative object image after the shading correction is preferably performed using the color correction parameter.
There is also provided an image correction device in which a processor acquires, with a photographing device, a moving image in which a correction chart is photographed; continuously acquires, with the photographing device, the moving image in which a sheet-like object is photographed at least either before or after the correction chart is photographed; extracts, from a plurality of frame images included in the moving image, a plurality of correction chart images in which the correction chart appears and a plurality of object images in which the object appears; generates, based on the plurality of correction chart images, correction parameters for correcting a representative object image of the plurality of object images; and corrects the representative object image using the correction parameters.
The number (number of gradations) of achromatic patches included in the achromatic pattern is not particularly limited.
The number of chromatic patches included in the chromatic pattern is not particularly limited either, but is preferably 10 or more in order to perform color correction and the like. The types of colors of the chromatic patches are also not limited; for example, reference colors such as RGB (red, green, blue) and CMY (cyan, magenta, yellow), their intermediate colors, and any other colors differing in hue, saturation, and lightness may be used.
That is, the acquisition processing unit 30 may continuously acquire a moving image in which the object 16 is photographed after the correction chart 18 is photographed, or may continuously acquire a moving image in which the correction chart 18 is photographed after the object 16 is photographed. Alternatively, as shown in FIG. 9, the acquisition processing unit 30 may continuously acquire a moving image in which the correction chart 18 is photographed, then the object 16 is photographed, and then the correction chart 18 is photographed again.
For example, a feature amount of each frame image included in the moving image is calculated by image analysis, and the correlation coefficient between adjacent frame images is calculated from their feature amounts. For each range in which the correlation coefficient between adjacent frame images is at or above a threshold value, a representative frame image f of that range is stored. From the correlation coefficients between the representative frame images f0 and f1 of the respective ranges and a correction chart image for correlation calculation acquired in advance, the range of frame image f0 and the range of frame image f1 can be determined to contain the correction chart images and the object images, respectively.
B(x, y) = S(x, y) × A(x, y)
When the correction chart 18 for shading correction contains a plurality of achromatic patches of different densities, that is, when there are a plurality of two-dimensional shading correction parameters corresponding to a plurality of densities, let S1(x, y), S2(x, y), …, SN(x, y) be the shading correction parameters corresponding to the respective densities, and let C1, C2, …, CN (C1 > C2 > … > CN) be the signal values of the images of the respective densities after gradation correction so that the gradation is linear. The signal value at each pixel position of the image after shading correction is then calculated as follows.
When A(x, y) > C1:
B(x, y) = S1(x, y) × A(x, y)
When C1 ≥ A(x, y) > C2:
B(x, y) = (α × S1(x, y) + β × S2(x, y)) × A(x, y)
where α = (A(x, y) − C2) / (C1 − C2) and β = 1.0 − α.
The same applies to the subsequent intervals (C2 ≥ A(x, y) > C3, and so on).
When CN ≥ A(x, y):
B(x, y) = SN(x, y) × A(x, y)
When the image is a color image whose pixels consist of RGB, the above shading correction is performed for each of the R, G, and B channels.
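The piecewise rule for combining the per-density shading parameters can be sketched at a single pixel as follows. This is an illustrative reading of the formulas, with made-up gains and densities; the interpolation weight α is taken between the neighboring linearized densities C.

```python
# Sketch: at one pixel, the shading gain is S1 above C1, SN at or below
# CN, and linearly interpolated between neighboring parameters when the
# signal A falls between two linearized densities C[i-1] >= A > C[i].

def shaded_value(a, s_list, c_list):
    """s_list[i] is the gain for density signal c_list[i]; c_list descending."""
    if a > c_list[0]:
        return s_list[0] * a
    if a <= c_list[-1]:
        return s_list[-1] * a
    for i in range(1, len(c_list)):
        if a > c_list[i]:  # c_list[i-1] >= a > c_list[i]
            alpha = (a - c_list[i]) / (c_list[i - 1] - c_list[i])
            s = alpha * s_list[i - 1] + (1.0 - alpha) * s_list[i]
            return s * a

c = [200.0, 100.0, 50.0]   # C1 > C2 > C3
s = [1.0, 1.5, 2.0]        # per-density gains at this pixel
print(shaded_value(150.0, s, c))  # alpha = 0.5 -> gain 1.25 -> 187.5
print(shaded_value(220.0, s, c))  # above C1 -> gain S1 -> 220.0
```

For an RGB image, this per-pixel rule would simply be applied to each channel independently, as the text notes.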
Claims (14)
- An image correction method comprising: a first acquisition step of acquiring, with a photographing device, a moving image in which a correction chart is photographed;
a second acquisition step of continuously acquiring, with the photographing device, the moving image in which a sheet-like object is photographed at least either before or after the correction chart is photographed;
an extraction step of extracting, from a plurality of frame images included in the moving image, a plurality of correction chart images in which the correction chart appears and a plurality of object images in which the object appears;
a generation step of generating, based on the plurality of correction chart images, correction parameters for correcting a representative object image of the plurality of object images; and
a correction step of correcting the representative object image using the correction parameters.
- The image correction method according to claim 1, comprising a discrimination step of determining, based on the correlation between adjacent frame images among the plurality of frame images included in the moving image, whether each of the plurality of frame images is an object image or a correction chart image,
wherein the plurality of object images and the plurality of correction chart images are extracted from the plurality of frame images included in the moving image based on the discrimination result of the discrimination step.
- The image correction method according to claim 1 or 2, wherein a lens aberration correction parameter for correcting graphic distortion due to aberration of a lens of the photographing device is generated based on the plurality of correction chart images, and
the graphic distortion due to the lens aberration is corrected for each of the plurality of correction chart images and each of the plurality of object images using the lens aberration correction parameter.
- The image correction method according to claim 3, wherein a tilt correction parameter for correcting graphic distortion due to tilt at the time of shooting is generated based on a plurality of reference points on a representative correction chart image of the plurality of correction chart images after correction of the graphic distortion due to the lens aberration, and
the graphic distortion due to the tilt is corrected for each of the plurality of object images after correction of the graphic distortion due to the lens aberration, using the tilt correction parameter.
- The image correction method according to claim 4, wherein the correction chart includes a plurality of achromatic patches of different densities, and, for each of the plurality of achromatic patches, a plurality of density patches of the same density are arranged spatially apart from one another,
a plurality of achromatic patch images corresponding to the plurality of achromatic patches are extracted from the representative correction chart image, and, for each of the plurality of achromatic patch images, signal values at the positions of the plurality of density patch images corresponding to the positions of the plurality of density patches of the same density are acquired,
for each of the plurality of achromatic patch images, a signal value at each pixel position of an interpolated image having the size of the representative correction chart image is calculated by interpolation based on the signal values at the positions of the plurality of density patch images, thereby creating a plurality of interpolated images corresponding to the plurality of achromatic patch images,
for each of the plurality of interpolated images, a spatial mean value, which is the average of the signal values at all pixel positions, is calculated,
a gradation correction parameter for performing gradation correction is generated so that the difference between the reflectances of the plurality of achromatic patches acquired in advance and the spatial mean values of the plurality of interpolated images is eliminated, and
gradation correction of the plurality of interpolated images, the representative correction chart image, and the representative object image of the plurality of object images after correction of the graphic distortion due to the tilt is performed using the gradation correction parameter.
- The image correction method according to claim 5, wherein a two-dimensional shading correction parameter is created based on the plurality of interpolated images after gradation correction so that the spatial mean values of the plurality of interpolated images after gradation correction do not change before and after shading correction and, after shading correction, the two-dimensional distribution of signal values of a representative interpolated image of the plurality of interpolated images after gradation correction becomes flat, and
shading correction of the representative correction chart image and the representative object image is performed using the shading correction parameter.
- The image correction method according to claim 6, wherein the correction chart includes a plurality of chromatic patches of different colors,
a plurality of chromatic patch images corresponding to the plurality of chromatic patches are extracted from the representative correction chart image, and the signal value of each of the plurality of chromatic patch images is acquired,
a color correction parameter for performing color correction is generated so that the difference between the signal value of each of the plurality of chromatic patches acquired in advance and the signal value of each of the corresponding plurality of chromatic patch images is eliminated, and
color correction of the representative object image after the shading correction is performed using the color correction parameter.
- The image correction method according to any one of claims 1 to 7, wherein the plurality of object images are extracted from the frame images whose shooting state is best among the plurality of frame images included in the moving image.
- The image correction method according to claim 8, wherein the plurality of correction chart images are extracted from the frame images whose shooting state is closest to that of the plurality of object images among the plurality of frame images included in the moving image.
- The image correction method according to any one of claims 1 to 9, wherein a moving image of the object placed on the correction chart is photographed.
- The image correction method according to any one of claims 1 to 10, wherein the object is a measurement sheet that, when external energy is applied, develops color at a density corresponding to the external energy.
- An image correction device comprising a processor, wherein the processor
acquires, with a photographing device, a moving image in which a correction chart is photographed,
continuously acquires, with the photographing device, the moving image in which a sheet-like object is photographed at least either before or after the correction chart is photographed,
extracts, from a plurality of frame images included in the moving image, a plurality of correction chart images in which the correction chart appears and a plurality of object images in which the object appears,
generates, based on the plurality of correction chart images, correction parameters for correcting a representative object image of the plurality of object images, and
corrects the representative object image using the correction parameters.
- A program for causing a computer to execute each step of the image correction method according to any one of claims 1 to 11.
- A computer-readable recording medium on which is recorded a program for causing a computer to execute each step of the image correction method according to any one of claims 1 to 11.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21850995.8A EP4191993A4 (en) | 2020-07-29 | 2021-05-21 | IMAGE CORRECTION DEVICE, IMAGE CORRECTION METHOD, PROGRAM, AND RECORDING MEDIUM |
JP2022540031A JP7402992B2 (ja) | 2020-07-29 | 2021-05-21 | 画像補正装置、画像補正方法、プログラムおよび記録媒体 |
CN202180058899.0A CN116194947A (zh) | 2020-07-29 | 2021-05-21 | 图像校正装置、图像校正方法、程序及记录介质 |
US18/159,521 US20230169688A1 (en) | 2020-07-29 | 2023-01-25 | Image correction apparatus, image correction method, program, and recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020128244 | 2020-07-29 | ||
JP2020-128244 | 2020-07-29 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/159,521 Continuation US20230169688A1 (en) | 2020-07-29 | 2023-01-25 | Image correction apparatus, image correction method, program, and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022024516A1 true WO2022024516A1 (ja) | 2022-02-03 |
Family
ID=80037954
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/019281 WO2022024516A1 (ja) | 2020-07-29 | 2021-05-21 | 画像補正装置、画像補正方法、プログラムおよび記録媒体 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230169688A1 (ja) |
EP (1) | EP4191993A4 (ja) |
JP (1) | JP7402992B2 (ja) |
CN (1) | CN116194947A (ja) |
TW (1) | TW202205212A (ja) |
WO (1) | WO2022024516A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004007300A (ja) | 2002-06-03 | 2004-01-08 | Fuji Photo Film Co Ltd | 撮像装置 |
JP2007194794A (ja) | 2006-01-18 | 2007-08-02 | Casio Comput Co Ltd | 撮影装置、撮影画像の画像処理方法及びプログラム |
JP2012238932A (ja) * | 2011-05-09 | 2012-12-06 | For-A Co Ltd | 3d自動色補正装置とその色補正方法と色補正プログラム |
US20160088266A1 (en) * | 2013-06-28 | 2016-03-24 | Thomson Licensing | Automatic image color correciton using an extended imager |
JP2016171475A (ja) | 2015-03-13 | 2016-09-23 | 株式会社リコー | 画像処理システム、方法およびプログラム |
US20190124232A1 (en) * | 2017-10-19 | 2019-04-25 | Ford Global Technologies, Llc | Video calibration |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2013175550A1 (ja) * | 2012-05-21 | 2016-01-12 | ピタフォー モバイル エルエルシー | 撮像システム、撮像方法、撮像用プログラム及び情報記録媒体 |
-
2021
- 2021-05-21 CN CN202180058899.0A patent/CN116194947A/zh active Pending
- 2021-05-21 JP JP2022540031A patent/JP7402992B2/ja active Active
- 2021-05-21 WO PCT/JP2021/019281 patent/WO2022024516A1/ja active Application Filing
- 2021-05-21 EP EP21850995.8A patent/EP4191993A4/en active Pending
- 2021-06-17 TW TW110122173A patent/TW202205212A/zh unknown
-
2023
- 2023-01-25 US US18/159,521 patent/US20230169688A1/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004007300A (ja) | 2002-06-03 | 2004-01-08 | Fuji Photo Film Co Ltd | 撮像装置 |
JP2007194794A (ja) | 2006-01-18 | 2007-08-02 | Casio Comput Co Ltd | 撮影装置、撮影画像の画像処理方法及びプログラム |
JP2012238932A (ja) * | 2011-05-09 | 2012-12-06 | For-A Co Ltd | 3d自動色補正装置とその色補正方法と色補正プログラム |
US20160088266A1 (en) * | 2013-06-28 | 2016-03-24 | Thomson Licensing | Automatic image color correciton using an extended imager |
JP2016171475A (ja) | 2015-03-13 | 2016-09-23 | 株式会社リコー | 画像処理システム、方法およびプログラム |
US20190124232A1 (en) * | 2017-10-19 | 2019-04-25 | Ford Global Technologies, Llc | Video calibration |
Non-Patent Citations (2)
Title |
---|
See also references of EP4191993A4 |
ZHENGYOU ZHANG: "A Flexible New Technique for Camera Calibration", MICROSOFT RESEARCH TECHNICAL REPORT, MSR-TR-98-71, 2 December 1998 (1998-12-02) |
Also Published As
Publication number | Publication date |
---|---|
TW202205212A (zh) | 2022-02-01 |
JPWO2022024516A1 (ja) | 2022-02-03 |
EP4191993A4 (en) | 2024-02-07 |
EP4191993A1 (en) | 2023-06-07 |
US20230169688A1 (en) | 2023-06-01 |
CN116194947A (zh) | 2023-05-30 |
JP7402992B2 (ja) | 2023-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10735627B2 (en) | Color conversion table creation apparatus and method, color conversion apparatus, and program | |
US9654670B2 (en) | Color conversion table creation device and method, program, and recording medium | |
US20110032380A1 (en) | Printing system and method | |
JP5967441B2 (ja) | Color processing method, color processing apparatus, and color processing system | |
TWI279735B (en) | Method and device of color correction for projector, and projector | |
US8767232B2 (en) | Image processing apparatus, image processing method, and computer-readable storage medium | |
TWI394428B (zh) | Image reading method and image reading device | |
EP1117070A2 (en) | Constructing profiles to compensate for non-linearities in image capture | |
US11740132B2 (en) | Method and apparatus for color lookup using a mobile device | |
JP2000050318A (ja) | Method, program product, and system for characterizing the response function of an output device | |
JP2010139324A (ja) | Color unevenness measurement method and color unevenness measurement apparatus | |
WO2022024516A1 (ja) | Image correction device, image correction method, program, and recording medium | |
JP2019041204A (ja) | Reference image data generation method, printed matter inspection method, and reference image data generation system | |
JP2004202968A (ja) | Ink supply amount control method and ink supply amount control device | |
JP2012165271A (ja) | Image processing apparatus, image processing method, and program | |
JP6627356B2 (ja) | Color patch generation device, image forming device, and program | |
US20060139479A1 (en) | Image processing methods and systems for fine art reproduction | |
WO2008108761A1 (en) | True color communication | |
Pedersen et al. | Framework for the evaluation of color prints using image quality metrics | |
CN110300291B (zh) | Apparatus and method for determining color values, digital camera, application, and computer device | |
JP2003016443A (ja) | Image quality evaluation method, image quality evaluation device, and chart image for image quality evaluation | |
WO2022059342A1 (ja) | Image processing device, image processing method, program, and recording medium | |
JP6922967B2 (ja) | Color patch generation device, image forming device, and program | |
JP7339794B2 (ja) | Image processing apparatus, image processing method, and program | |
Lei et al. | Composite Target for Camera-Based Document/Object Capture System | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21850995; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2022540031; Country of ref document: JP; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 2021850995; Country of ref document: EP |
ENP | Entry into the national phase | Ref document number: 2021850995; Country of ref document: EP; Effective date: 20230228 |
NENP | Non-entry into the national phase | Ref country code: DE |