WO2013105381A1 - Image processing method, image processing apparatus, and image processing program - Google Patents
Image processing method, image processing apparatus, and image processing program
- Publication number
- WO2013105381A1 (PCT application PCT/JP2012/082016)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- density value
- histogram
- density
- image processing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
- H04N1/4072—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original
- H04N1/4074—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original using histograms
Definitions
- the present invention relates to an image processing method, an image processing apparatus, and an image processing program for generating a conversion function for correcting a density value between at least two images.
- The technique of Patent Document 1 uses the captured image, represented in the RGB color space, of one of two cameras A and B as a reference image Sa.
- a photographed image represented in the RGB color space by the other camera B is defined as an adjustment target image Sb.
- The adjustment target image Sb is subjected to color space conversion by a color adjustment unit in the color adjustment apparatus, and a cumulative histogram matching process is performed for each component after the color space conversion, with the same component Ha(k) of the reference image Sa as the reference. After this cumulative histogram matching process, the image is converted back to the original RGB space, thereby generating an adjusted image Sb′ having the same hue as the reference image Sa.
- With the technique disclosed in Shintaro Inamura and Ryo Taguchi, “Color adjustment method between different cameras” (Non-Patent Document 1), two cameras are arranged side by side to capture the same subject, and the histogram of one camera's image is converted so that the cumulative histograms of the two images match. Specifically, after color space conversion, histogram conversion is performed for each component, and the result is then returned to the RGB color space by inverse conversion.
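To make this prior-art behavior concrete, here is a minimal single-channel sketch of cumulative-histogram matching; the function name, the 8-bit assumption, and the use of NumPy are our illustration, not part of the cited documents.

```python
import numpy as np

def match_by_cumulative_histogram(target, reference, levels=256):
    """Prior-art style matching: map each density value of `target` to the
    reference density value whose normalized cumulative frequency is closest.
    Only the frequency is compared -- the weakness discussed below."""
    c_t = np.cumsum(np.bincount(target.ravel(), minlength=levels)) / target.size
    c_r = np.cumsum(np.bincount(reference.ravel(), minlength=levels)) / reference.size
    # Build a lookup table: for each target level m, pick argmin_n |c_r[n] - c_t[m]|.
    lut = np.array([int(np.argmin(np.abs(c_r - c))) for c in c_t], dtype=np.uint8)
    return lut[target]
```

Applying this per component after a color space conversion, then inverting the conversion, corresponds to the flow described above.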
- With two such cameras, however, the angle at which the subject is captured differs, so the subject images are not exactly the same.
- The images of the subject appearing in the right-eye image and the left-eye image differ from each other, and thus the shapes of the histograms generated from the images are not the same.
- Consequently, a method using only the histogram, as disclosed in the prior art described above, may not be able to appropriately determine the color correspondence between images.
- In a conversion function such as a density value conversion table, there may then be portions where the correspondence changes abruptly. In this case, density values (gradations) in the corrected image may be skipped, and artifacts such as false textures can occur.
- The present invention has been made to solve such problems, and an object of the present invention is to provide an image processing method, an image processing apparatus, and an image processing program that can appropriately generate a conversion function for correcting density values even when the subject images differ among a plurality of images.
- An image processing method according to one aspect includes a generation step of generating cumulative histograms for the density values of pixels included in at least a first image and a second image, and a determination step of determining the correspondence between density values based on distances in a space defined to include the histogram frequency and the density value of the cumulative histograms.
- According to the present invention, it is possible to more appropriately generate a conversion function for correcting density values even when the image of the subject differs among a plurality of images.
- the embodiment of the present invention is directed to an image processing method for determining a conversion function for correcting a density value at least between a first image and a second image.
- These images are obtained by capturing the same subject; three or more images may be used.
- Such a plurality of images are typically obtained by imaging the same subject from different viewpoints using a plurality of imaging devices (cameras). More specifically, each image is obtained by imaging the same subject with a plurality of cameras (typically, stereo cameras) arranged apart by a predetermined distance.
- An “image” in the present embodiment may be an image in which each pixel is defined by the density value of a single channel (i.e., a monochrome image) or an image in which each pixel is defined by the density values of a plurality of channels (i.e., a color image).
- For a monochrome image, a density value corresponding to a gray value or a gradation value is used as the single channel.
- For a color image, the gradation value of each component of a color space such as the RGB color space or the CMY color space is used as the density value of each channel.
- the present invention can be applied to images expressed using various color spaces such as a YUV color space, an XYZ color space, an xyY color space, an L * u * v * color space, and an L * a * b * color space.
- The image processing method according to the present embodiment is typically directed to a process of generating a conversion function for correcting density values between stereo-captured images, but it can also be applied to density value correction (color matching) between other kinds of images, such as those constituting a panoramic image.
- FIG. 1 is a diagram for describing an overview of a conversion function generation process according to an embodiment of the present invention.
- a pair of images (image 1 and image 2) is acquired by capturing an image of a subject OBJ with a stereo camera (cameras 2 and 4).
- a conversion function for correcting the density value between the images is determined by executing the following processing on the image 1 and the image 2.
- a density value conversion table is used as a typical example of the conversion function.
- the density value conversion table is not necessarily a table format, and may be a function format or a mapping format.
- When determining the correspondence between the cumulative histogram 1 of image 1 and the cumulative histogram 2 of image 2, the density value conversion table is not generated by simply associating density values whose histogram frequencies are equal. Instead, corresponding density values are searched for based on distances in a space defined to include the histogram frequency and the density value of the cumulative histogram (typically by DP (Dynamic Programming) matching), and the density value conversion table is determined from the correspondence of density values obtained from the search result.
- FIG. 2 is a block diagram showing a configuration when the conversion function generation processing according to the embodiment of the present invention is realized by a personal computer.
- The image processing apparatus 100 realized by a personal computer is mainly implemented on a computer having a general-purpose architecture.
- The image processing apparatus 100 includes, as main components, a CPU (Central Processing Unit) 102, a RAM (Random Access Memory) 104, a ROM (Read Only Memory) 106, a network interface (I/F) 108, an auxiliary storage device 110, a display unit 120, an input unit 122, and a memory card interface (I/F) 124. The components are communicably connected to each other via a bus 130.
- the CPU 102 controls the entire image processing apparatus 100 by executing various programs such as an operating system (OS: Operating System) and a conversion function generation processing program stored in the ROM 106, the auxiliary storage device 110, and the like.
- the RAM 104 functions as a working memory for executing a program by the CPU 102, and temporarily stores various data necessary for executing the program.
- the ROM 106 stores an initial program (boot program) that is executed when the image processing apparatus 100 is started.
- The network interface 108 exchanges data with other devices (such as server devices) via various communication media. More specifically, the network interface 108 performs data communication via a wired line such as Ethernet (registered trademark) (LAN (Local Area Network), WAN (Wide Area Network), etc.) and/or a wireless line such as a wireless LAN.
- The auxiliary storage device 110 typically includes a large-capacity magnetic storage medium such as a hard disk, and stores an image processing program 112 for realizing the various processing according to the present embodiment, processing target images 114, and the like. The auxiliary storage device 110 may further store a program such as an operating system.
- the processing target image 114 includes at least two images to be processed.
- The main body of the image processing apparatus 100 need not itself have a function of capturing images of a subject.
- At least two images may be acquired using a mechanism similar to a digital camera, as will be described later, and these images may be input to the image processing apparatus 100 by an arbitrary method. More specifically, images are input to the image processing apparatus 100 via the network interface 108 or the memory card interface 124 described above.
- the display unit 120 displays a GUI (Graphical User Interface) screen provided by the operating system, an image generated by executing the image processing program 112, and the like.
- the display unit 120 is preferably configured by an arbitrary display device that supports a three-dimensional display method.
- As such a three-dimensional display method, a parallax barrier method or the like can be employed.
- In this parallax barrier method, a parallax barrier is provided on the liquid crystal display surface so that the right-eye image is viewed by the user's right eye and the left-eye image is viewed by the user's left eye.
- Alternatively, a shutter glasses method may be adopted.
- In this method, the left-eye image and the right-eye image are alternately displayed at high speed, and the user wears special glasses equipped with shutters that open and close in synchronization with the switching of the images, thereby enjoying stereoscopic display.
- the input unit 122 typically includes a keyboard, a mouse, a touch panel, and the like, and outputs the content of the instruction received from the user to the CPU 102 or the like.
- the memory card interface 124 reads / writes data from / to various memory cards (nonvolatile storage media) 126 such as an SD (Secure Digital) card and a CF (Compact Flash (registered trademark)) card.
- For example, the memory card interface 124 is loaded with a memory card 126 storing processing target images acquired by some device, and the processing target images read from the memory card 126 are stored (copied) in the auxiliary storage device 110.
- the image processing program 112 stored in the auxiliary storage device 110 is stored and distributed in a storage medium such as a CD-ROM (Compact Disk-Read Only Memory) or distributed from a server device or the like via a network.
- The image processing program 112 may realize its processing by calling necessary modules, among the program modules provided as part of the operating system executed by the image processing apparatus 100 (personal computer), at predetermined timings and in a predetermined order.
- the image processing program 112 itself does not include a module provided by the operating system, and image processing is realized in cooperation with the operating system.
- the image processing program 112 may be provided by being incorporated in a part of some program instead of a single program.
- the image processing program 112 itself does not include a module that is commonly used in the program, and image processing is realized in cooperation with the program. Even such an image processing program 112 that does not include some modules does not depart from the spirit of the image processing apparatus 100 according to the present embodiment.
- Alternatively, some or all of the functions of the image processing program 112 may be realized by dedicated hardware.
- FIG. 3 is a block diagram showing a configuration when the conversion function generation processing according to the embodiment of the present invention is realized by a configuration similar to a digital camera.
- the image processing apparatus 200 acquires at least two processing target images by actually capturing a subject, and executes a conversion function generation process on the acquired processing target images.
- The image processing apparatus 200 includes, as main components, an image processing engine 202, an input unit 204, a display unit 206, a pair of lenses 212 and 222, and a pair of CCD (Charge Coupled Device) image sensors 214 and 224.
- the image processing engine 202 executes various digital processes including a conversion function generation process according to the present embodiment.
- The image processing engine 202 includes a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an LSI (Large Scale Integration), an FPGA (Field-Programmable Gate Array), and the like.
- the input unit 204 typically includes various key buttons, a touch panel, and the like, and outputs the contents of instructions received from the user to the image processing engine 202.
- the display unit 206 displays a user interface screen relating to imaging of a subject.
- Like the display unit 120 (FIG. 2), the display unit 206 is preferably configured by an arbitrary display device that supports a three-dimensional display method.
- the pair of lenses 212 and 222 are provided at different positions of the main body of the image processing apparatus 200, and can image the subject from different viewpoints. That is, different reflected lights from the subject are incident on the pair of lenses 212 and 222, respectively.
- The pair of CCDs 214 and 224 are associated with the pair of lenses 212 and 222, respectively; each receives the light (image) from the subject condensed by its lens and outputs an electrical signal representing that image to the image processing engine 202.
- FIG. 4 is a diagram illustrating an example of an image that is a target of processing for generating a conversion function for correcting density values.
- FIG. 5 is a diagram showing an example of a cumulative histogram generated from each image shown in FIG.
- FIG. 6 is a diagram for explaining a case where the density values are not properly associated.
- FIG. 7 is a diagram illustrating an example of the density value conversion table generated when the situation illustrated in FIG. 6 occurs.
- FIG. 8 is a diagram showing an example of a result of density value correction using the density value conversion table shown in FIG.
- In the density value correction (color correction) between images of Non-Patent Document 1, the correspondence between density values is determined, between the cumulative histograms generated from the images, based only on the histogram frequency.
- That is, density values are associated based only on the histogram frequency. More specifically, for a given density value of image 1, the density value of image 2 whose cumulative histogram frequency is closest to that of the given density value is determined to be the corresponding density value. In other words, as shown in FIG. 6, coordinates having the same histogram frequency are associated with each other.
- Such a related technique is based on the premise that the images of the subject in the image 1 and the image 2 are substantially the same, that is, the shapes of the histograms of both are substantially the same. That is, it is assumed that the density is shifted as a whole due to camera differences. However, in reality, for example, in a stereo camera, there is a parallax between the cameras, so that the image of the subject is not the same in the two generated images, and as a result, the histograms have different shapes. In such a case, a correct correspondence relationship may not be obtained.
- In the images shown in FIG. 4, the proportion of medium density values is relatively small. Therefore, the cumulative histogram shown in FIG. 5 has a flat portion where the histogram frequency increases only slightly even as the density value increases.
- In the conversion function generation processing according to the present embodiment, therefore, not only the histogram frequency but also the density value of the pixel is used to define the distance (proximity) between density values. Based on this proximity between the density values in each image, corresponding density values are searched for by a technique such as DP matching to determine the density value correspondence.
- FIG. 9 is a flowchart showing an overall procedure of the conversion function generation process according to the embodiment of the present invention. Each step shown in FIG. 9 is typically realized by the CPU 102 (FIG. 2) executing the image processing program 112.
- the CPU 102 acquires at least two images.
- That is, the CPU 102 acquires image 1 and image 2 (step S2).
- a pair of stereo images acquired by imaging a subject with a pair of stereo cameras or the like is input.
- Next, the CPU 102 generates cumulative histograms for the density values of the pixels included in each image. That is, the CPU 102 generates a simple histogram for the density values of the pixels included in the input image 1 (step S4) and a simple histogram for the density values of the pixels included in the input image 2 (step S6), and then generates a cumulative histogram from each simple histogram. The processing of steps S4 and S6 may be performed in parallel or serially; when performed serially, the execution order does not matter.
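A minimal sketch of steps S4/S6 and the subsequent accumulation, assuming 8-bit single-channel NumPy images (a per-channel loop would cover RGB):

```python
import numpy as np

def simple_histogram(image, levels=256):
    # Steps S4/S6: frequency of each density value in the image.
    return np.bincount(image.ravel(), minlength=levels)

def cumulative_histogram(image, levels=256):
    # Accumulate the simple histogram in ascending order of density value.
    return np.cumsum(simple_histogram(image, levels))
```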
- Next, in a space defined to include the histogram frequency and the density value of the cumulative histogram, the CPU 102 calculates the distance between each density value on the histogram generated from image 1 and each density value on the histogram generated from image 2 (step S12). That is, distances between density values are calculated for the possible combinations of density values on the two histograms.
- the CPU 102 determines a correspondence relationship between the density value included in the image 1 and the density value included in the image 2 based on the distance between the density values calculated in step S12 (step S14). Then, the CPU 102 determines a conversion function (density value conversion table 22) for correcting the density value between the image 1 and the image 2 from the determined correspondence (step S16).
- Through the above processing, the conversion function (density value conversion table 22) necessary for color correction between image 1 and image 2 is determined. If necessary, the CPU 102 also executes color correction on image 1 and/or image 2. That is, the CPU 102 performs color correction (density value conversion) on at least one of image 1 and image 2 based on the conversion function (density value conversion table 22) determined in step S16 (step S18), generating two color-matched images. The process then ends.
- In principle, the conversion function (density value conversion table 22) determined in step S16 can be applied to any set of images captured under the same imaging conditions.
- The processing described above may therefore be repeated once per set of image 1 and image 2.
- FIG. 10 is a block diagram showing a functional configuration of conversion function generation processing according to the embodiment of the present invention.
- The image processing apparatus has, as its main functional configuration, an image management unit 10 including a data storage unit 12, a histogram generation unit 14, a determination processing unit 16, a distance calculation unit 18, a data storage unit 20, and a density value conversion unit 24.
- the image management unit 10 receives an image input from a camera or the like and stores the image in the data storage unit 12.
- the image stored in the data storage unit 12 is output to the histogram generation unit 14 and / or the density value conversion unit 24 as required.
- the image management unit 10 receives the image after the density value conversion (color correction) in the density value conversion unit 24 and stores the image in the data storage unit 12.
- The image management unit 10 outputs the images stored in the data storage unit 12 to a display unit or the like in response to a request.
- The data storage unit 12 is typically realized using a storage area provided by the RAM 104 of the image processing apparatus 100.
- the histogram generation unit 14 reads a plurality of images stored in the data storage unit 12 of the image management unit 10 and generates a histogram for the density values of the pixels included in the read image.
- the histogram generation unit 14 generates a simple histogram and / or a cumulative histogram.
- The distance calculation unit 18 uses the histograms generated by the histogram generation unit 14 to calculate the distance between density values in a space defined to include the histogram frequency and the density value of the cumulative histogram. The distance calculation unit 18 outputs the calculated distance for each combination of density values to the determination processing unit 16.
- The determination processing unit 16 determines the correspondence between density values across the images based on the distances calculated by the distance calculation unit 18, and, based on the determined correspondence, determines a conversion function (density value conversion table 22) for correcting density values between the images.
- the data storage unit 20 stores the density value conversion table 22 determined by the determination processing unit 16, and outputs the density value conversion table 22 in response to a request from the density value conversion unit 24 or the like.
- the density value conversion unit 24 performs density value conversion (color correction) on the input image based on the density value conversion table 22.
- the image after the density value conversion is stored in the data storage unit 12 of the image management unit 10.
- Embodiment 1 First, as a first embodiment, a method for determining a conversion function (density value conversion table 22) using a cumulative histogram of density values and a DP (Dynamic Programming) matching method will be described.
- FIG. 11 is a diagram showing an example of a simple histogram generated in the first embodiment of the present invention.
- FIG. 12 is a diagram showing an example of a cumulative histogram generated from the simple histogram shown in FIG.
- In the following, image 1 and image 2, in which each pixel has a color defined by the density values of three channels (R, G, B) in the RGB color space, are processed; similar processing can be performed even for a monochrome image having the density value of only one channel.
- simple histograms are generated for the density values of the channels of the pixels included in image 1 and image 2, respectively. That is, a total of three simple histograms for the R, G, and B channels are generated for the image 1, and a total of three simple histograms for the R, G, and B channels are generated for the image 2. That is, a simple histogram as shown in FIG. 11 is generated for each channel.
- Cumulative histograms are then generated from these simple histograms, and each cumulative histogram is normalized by its maximum frequency so that the maximum frequency of the normalized histogram becomes Hmax. That is, a cumulative histogram as shown in FIG. 12 is generated for each channel.
- The cumulative histogram shown in FIG. 12 can be regarded as lying in a space defined to include the (normalized) cumulative histogram frequency and the density value of each channel. In this space, the distance dist(m, n) between the density value m of image 1 (an arbitrary coordinate on the cumulative histogram generated from image 1) and the density value n of image 2 (an arbitrary coordinate on the cumulative histogram generated from image 2) is defined as shown in equation (1).
- Here, c_hist1(m) represents the normalized cumulative histogram frequency of the density value m of image 1, and c_hist2(n) represents the normalized cumulative histogram frequency of the density value n of image 2.
- This distance dist(m, n) corresponds to the proximity between density values in the space: the smaller dist(m, n), the more similar the density value m of image 1 and the density value n of image 2.
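Equation (1) itself is not reproduced in this text. Given that the distance lives in a plane spanned by the cumulative frequency and the density value, and given the axis weights introduced in Embodiment 2, a Euclidean form is the natural reading; the sketch below assumes it.

```python
import numpy as np

def dist(m, n, c_hist1, c_hist2):
    """Assumed form of equation (1): Euclidean distance between the points
    (m, c_hist1(m)) and (n, c_hist2(n)) in the frequency/density space.
    c_hist1 and c_hist2 are the normalized cumulative histograms (max Hmax).
    The smaller the distance, the more similar the two density values."""
    return float(np.hypot(c_hist1[m] - c_hist2[n], m - n))
```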
- The correspondence between the density values of image 1 and the density values of image 2 is evaluated as optimized when the sum of the distances between corresponding density values is minimized. That is, the determination processing unit 16 (FIG. 10) determines the correspondence so that the sum of the distances between every density value included in the first image and the corresponding density value included in the second image is minimized. In other words, the correspondence between the density value m of image 1 and the density value n of image 2 is determined so that equation (2) is satisfied.
- FIG. 13 and FIG. 14 are diagrams for describing processing for searching for a correspondence relationship according to the embodiment of the present invention. As shown in FIG. 13, the distance from the density on one cumulative histogram to the density on the other cumulative histogram is sequentially calculated, and the combination of density that minimizes the sum of the distances is determined.
- The cumulative histogram is generated by accumulating the histogram frequencies in ascending order of density value. Therefore, the correspondence between density values preserves the order of the density values: when the density value m of image 1 corresponds to the density value n of image 2, the next density value (m + 1) of image 1 never corresponds to a density value of image 2 smaller than n.
- Therefore, the determination processing unit 16 determines the correspondence using the order of the density values included in image 1 and the order of the corresponding density values included in image 2. More specifically, as shown in FIG. 14, once the density value m on the cumulative histogram of image 1 has been determined to correspond to the density value n on the cumulative histogram of image 2, then for the next density value (m + 1) on the cumulative histogram of image 1, only the density value n of image 2 and density values larger than n need be set as search targets. This follows from the rule for generating the cumulative histogram described above.
- In other words, when the density value m included in image 1 corresponds to the density value n included in image 2, then for density values of image 1 greater than m, the determination processing unit 16 sets only density values (≥ n) of image 2 as search targets.
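The following is a sketch of a DP search that minimizes the sum of distances (equation (2)) under the monotonic constraint just described. The recurrence and backtracking details are our assumptions; the text fixes only the cost (equation (1)), the objective (equation (2)), and the monotonicity rule.

```python
import numpy as np

def dp_match(c_hist1, c_hist2):
    """Find a non-decreasing assignment n = conv(m) minimizing the sum of
    dist(m, conv(m)) -- a sketch of the DP (Dynamic Programming) matching."""
    L1, L2 = len(c_hist1), len(c_hist2)
    d = lambda m, n: np.hypot(c_hist1[m] - c_hist2[n], m - n)
    cost = np.empty((L1, L2))           # cost[m, n]: best total when m -> n
    back = np.zeros((L1, L2), dtype=int)
    cost[0] = [d(0, n) for n in range(L2)]
    for m in range(1, L1):
        # Monotonic constraint: conv(m-1) <= conv(m), so the predecessor of
        # (m, n) is the cheapest cell (m-1, n') with n' <= n.
        best = 0
        for n in range(L2):
            if cost[m - 1, n] < cost[m - 1, best]:
                best = n
            back[m, n] = best
            cost[m, n] = cost[m - 1, best] + d(m, n)
    conv = np.zeros(L1, dtype=int)
    conv[L1 - 1] = int(np.argmin(cost[L1 - 1]))
    for m in range(L1 - 1, 0, -1):      # backtrack to recover the table
        conv[m - 1] = back[m, conv[m]]
    return conv                          # conv[m]: density of image 2 for m
```

The running minimum over the previous row is what makes the monotonicity rule also a speed-up: each cell only ever considers predecessors at or below its own column.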
- FIG. 15 is a diagram showing an example of the density value conversion table 22 generated by the conversion function generation processing according to the embodiment of the present invention. As shown in FIG. 15, for each density value (gradation value) of image 1, the corresponding density value (gradation value) of image 2 is stored. FIG. 15 shows the density value conversion table 22 for the case where image 1 is the target of density value conversion; a density value conversion table 22 in which image 2 is the target may be adopted instead.
- The density value conversion unit 24 (FIG. 10) converts the density values of the image (in this example, image 1) using the density value conversion table 22, thereby bringing the density values of image 1 close to those of image 2. The difference in color between image 1 and image 2 can thereby be reduced.
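Step S18 then reduces to a lookup-table application; a sketch (NumPy, 8-bit, with one table per channel assumed for color images):

```python
import numpy as np

def apply_conversion(image, conv):
    # Replace each density value of image 1 with the corresponding density
    # value of image 2 taken from the density value conversion table.
    lut = np.asarray(conv, dtype=np.uint8)
    return lut[image]
```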
- Even when the density values of the pixels included in the images are expressed in another color space, for example, the CMY color space, YUV color space, XYZ color space, xyY color space, L*u*v* color space, or L*a*b* color space, the present invention can be applied in the same way.
- the present invention can be applied not only to correction of density values between images captured in stereo, but also to correction of density values (color matching) between panoramic images.
- a conversion function (density value conversion table) is generated by adopting a cumulative histogram of density values as a histogram for density values.
- According to the present embodiment, color correction can be performed with relatively simple processing while suppressing the generation of false textures and the like. Furthermore, the calculation time and calculation cost can be reduced by using the order of the density values included in the images.
- Embodiment 2 Next, as a second embodiment, a configuration that uses weights when calculating the distance between the density values of the images in the space will be described.
- the second embodiment is different from the first embodiment only in the distance calculation method in the distance calculation unit 18 (FIG. 10), and the other processes and configurations are the same as those in the first embodiment. Therefore, detailed description of the common parts will not be repeated.
- In the first embodiment, the distance between density values is calculated in a space defined to include the histogram frequency and the density value of the cumulative histogram. In the second embodiment, the scale of each axis direction (histogram frequency and density value) of this space is adjusted using weights.
- FIG. 16 is a diagram for describing weight setting according to the second embodiment of the present invention.
- The weights wh and wc in equation (3) are set for the axis directions of the cumulative histogram frequency and the density value, respectively. The greater a weight, the greater the calculated distance along that axis; in other words, the histogram is effectively enlarged or reduced along that axis.
- The weights wh and wc are typically fixed values set in advance; however, as will be described later, they may also be changed dynamically. Only one of the weights wh and wc may be used.
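Equation (3) is likewise not reproduced here; a natural reading, consistent with "the greater the weight, the greater the calculated distance", is that each axis difference of equation (1) is scaled by its weight, as in this sketch:

```python
import numpy as np

def weighted_dist(m, n, c_hist1, c_hist2, wh=1.0, wc=1.0):
    """Assumed form of equation (3): wh scales the histogram-frequency axis
    and wc scales the density-value axis before the distance is taken."""
    return float(np.hypot(wh * (c_hist1[m] - c_hist2[n]), wc * (m - n)))
```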
- That is, the process of calculating the distance between density values (step S12) shown in FIG. 9 includes a process of setting a weight for at least one of the distance in the axis direction corresponding to the histogram frequency and the distance in the axis direction corresponding to the density value in the space.
- By using such weights, the distance (proximity) between density values can be set more appropriately.
- FIG. 17 is a diagram for describing processing for setting a weight based on the cumulative histogram frequency according to the second embodiment of the present invention.
- The range where the histogram frequency is large is close to the region where the density saturates, and the shape of the cumulative histogram is likely to be flat there; in that range it is therefore preferable to search in the axis direction of the density value.
- In the range where the histogram frequency is not so large, there is a margin before the density saturates, so it is preferable to search in the axis direction of the histogram frequency.
- That is, in the former range the weights are set so that, as a resulting effect, the search proceeds in the axis direction of the density value; in the latter range the distance in the axis direction of the density value is evaluated as relatively large, so that, as a resulting effect, the search proceeds in the axis direction of the histogram frequency.
- Note that both of the weights wh and wc may be changed, or only one of them may be changed.
- That is, the process of calculating the distance between density values shown in FIG. 9 includes a process of setting a weight according to the histogram frequency for the distance in the corresponding axis direction of the space.
- Since the search direction is thus set dynamically according to the histogram frequency, the correspondence between density values can be searched more appropriately.
- FIG. 18 is a diagram for describing processing for setting weights based on density values according to the second embodiment of the present invention.
- Here, the weight wh is decreased as the difference between the density value of interest and the density value to be searched increases, so that the weight wc is, in effect, evaluated as relatively large. That is, the smaller the difference from the density value of interest, the smaller the calculated distance; moves along the axis direction of the density value yield relatively large distances, so that the search preferentially proceeds along the axis direction of the histogram frequency.
- For example, the weight wh1 used when the density value difference is relatively small is made smaller than the weight wh2 used when the difference is relatively large, so that a search emphasizing small density differences can be performed.
- That is, the process of calculating the distance between density values shown in FIG. 9 includes a process of setting a weight according to the density value for the distance in the corresponding axis direction of the space.
- Since the search favors smaller density differences, correspondences with a large difference between density values can be avoided, and the correspondence between density values can be searched more appropriately.
- FIG. 19 is a diagram for describing processing for setting a weight based on the histogram frequency and the density value according to the second embodiment of the present invention.
- the search is mainly performed in the axial direction of the histogram frequency in the range where the slope is gentle, and the search is performed mainly in the axial direction of the density value in the range where the slope is steep.
- That is, the process of calculating the distance between density values shown in FIG. 9 includes a process of setting the weight according to both the histogram frequency and the density value.
- In this way, when the slope of the cumulative histogram is flat the search proceeds in the axis direction of the histogram frequency, and when the slope is steep it proceeds in the axis direction of the density value, so a search adapted to the shape of the histogram can be performed.
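The three dynamic weighting strategies above can be sketched as small weight functions plugged into weighted_dist; the thresholds and step shapes below are placeholders, since the text does not fix the functional forms:

```python
def wh_by_frequency(c_freq, hmax):
    # Near saturation the cumulative histogram flattens: make frequency-axis
    # moves expensive so the search proceeds along the density-value axis.
    return 2.0 if c_freq > 0.9 * hmax else 0.5

def wc_by_density_gap(m, n, gap_th=16):
    # Penalize large |m - n| so correspondences with a big density
    # difference are avoided (the wh1 < wh2 idea, expressed on the wc axis).
    return 2.0 if abs(m - n) > gap_th else 0.5

def wh_by_slope(slope, slope_th=1.0):
    # Gentle cumulative-histogram slope -> cheap frequency-axis moves;
    # steep slope -> expensive frequency-axis moves (favor the density axis).
    return 0.5 if slope < slope_th else 2.0
```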
- Embodiment 3 When searching for the correspondence between density values, if the histograms intersect in the space, the correspondence may be determined erroneously. As a third embodiment, a configuration that can prevent the correspondence from being erroneously determined due to such histogram crossing will therefore be described.
- the third embodiment is different from the first embodiment only in the distance calculation method in the determination processing unit 16 and the distance calculation unit 18 (FIG. 10), and the other processes and configurations are the same as those in the first embodiment. Since it is the same as 1, detailed description of the common part will not be repeated.
- FIG. 20 is a diagram for explaining histogram intersections handled by the conversion function generation processing according to the third embodiment of the present invention.
- When the histogram of image 1 (drawn as a broken line) intersects the histogram of image 2, density values near the intersection may be associated with each other by mistake. In this embodiment, therefore, one histogram is translated in the space to eliminate the crossing state with the other histogram, and the corresponding-point search is then performed. Such parallel translation of a histogram avoids erroneous associations caused by histogram intersections.
- More specifically, the distance dist(m, n) between the density value m of image 1 (an arbitrary coordinate on the cumulative histogram generated from image 1 and then shifted) and the density value n of image 2 (an arbitrary coordinate on the cumulative histogram generated from image 2) can be calculated according to equation (7).
- That is, the process of determining the correspondence between the density values included in image 1 and those included in image 2 (step S14) shown in FIG. 9 includes a process of calculating the distance after translating at least one of the histogram of image 1 and the histogram of image 2 in the space.
- The movement amount of the parallel translation needed to eliminate such histogram crossing can be determined in advance by the following methods.
- (1) Determination of the movement amount based on the histogram frequency
- The movement amount d_h1 is set so that the histograms do not intersect. That is, by shifting one of the histograms by at least the minimum value Hdist_min, the crossing of the histograms can be eliminated.
- For example, the movement amount d_h1 is set as shown in equation (9).
- For a color image, the movement amount could be determined independently for each channel so that the histograms do not intersect in that channel; however, it is preferable to average the histogram movement amounts across the channels. A method for determining the movement amount of the histogram in each channel in this way is described next.
- First, the maximum distance Hdist_ch,max between the histograms of image 1 and image 2 is calculated for each channel according to equation (10). Subsequently, the maximum distances Hdist_ch,max calculated for the channels are averaged according to equation (11); that is, the average value Hdist_ave,max of the maximum distances over the plurality of channels is calculated.
- Then, the movement amount d_h1 in each channel is determined so that the maximum distance between the histograms becomes Hdist_ave,max in every channel.
- As a result, the maximum distance between the histograms becomes almost the same in every channel, so similar density value association processing is performed in each channel, and density value conversion tables with mutually similar shapes are generated. The density balance among the channels (RGB) can therefore be maintained after the density value conversion.
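A sketch of the Embodiment 3 computations; the bodies of equations (8)-(11) are not reproduced in this text, so the signed-gap shift and the averaging below are our reading:

```python
import numpy as np

def crossing_shift(c1, c2):
    # If the histograms cross, the signed gap c1 - c2 changes sign; raising
    # c1 by at least |min gap| keeps it above c2 everywhere (eqs. (8)-(9)).
    gap_min = float(np.min(c1 - c2))
    return -gap_min if gap_min < 0 else 0.0

def per_channel_shifts(c1, c2):
    """Equations (10)-(11) style: equalize the maximum histogram separation
    across channels so the per-channel conversion tables keep similar shapes.
    c1, c2: dicts mapping channel name -> normalized cumulative histogram."""
    hdist_max = {ch: float(np.max(np.abs(c1[ch] - c2[ch]))) for ch in c1}
    hdist_ave_max = sum(hdist_max.values()) / len(hdist_max)
    # Additional shift per channel so every channel's maximum separation
    # becomes the channel-averaged value (exact form assumed).
    return {ch: hdist_ave_max - hdist_max[ch] for ch in hdist_max}
```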
- Embodiment 4 When searching for the correspondence between density values, targeting only the effective density values among all the density values included in the images reduces the time and calculation cost of the correspondence search. In the present embodiment, therefore, a configuration that calculates the distance after limiting the range of target density values will be described.
- the fourth embodiment is different from the first embodiment only in the correspondence search process in the determination processing unit 16 (FIG. 10) and the distance calculation method in the distance calculation unit 18 (FIG. 10). Since other processes and configurations are the same as those in the first embodiment, detailed description of common parts will not be repeated.
- FIG. 21 and FIG. 22 are diagrams for explaining the correspondence search process according to the fourth embodiment of the present invention.
- Since the correspondence for the density values lying at both ends of the cumulative histogram is not determined, the conversion function (density value conversion table) for density values outside the search range is generated by interpolating (typically, linearly interpolating) the correspondence obtained within the search range.
- When the search ranges of image 1 and image 2 differ, it is preferable to enlarge the histograms so that the start points and end points of image 1 and image 2 coincide. More specifically, it is preferable to enlarge or reduce each cumulative histogram along the axis direction of the density value while maintaining its shape.
- As a result, the cumulative histograms of images 1 and 2 come to have mutually similar shapes, and the correspondence search for density values can be performed with higher accuracy.
- Assume that the search range for the cumulative histogram of image 1 is m1_st ≤ m ≤ m1_ed, and the search range for the cumulative histogram of image 2 is m2_st ≤ n ≤ m2_ed.
- The cumulative histogram of image 1 is enlarged/reduced along the axis direction of the density value so that the start value m1_st corresponds to density value 0 and the end value m1_ed corresponds to the maximum density value Cmax.
- Similarly, the cumulative histogram of image 2 is enlarged/reduced along the axis direction of the density value so that the start value m2_st becomes density value 0 and the end value m2_ed becomes the maximum density value Cmax.
- After this enlargement/reduction along the axis direction of the density value, the cumulative histograms of image 1 and image 2 are both distributed over the density value range 0 to Cmax, and the correspondence search for density values is executed in this state.
- That is, the process of determining the correspondence between the density values included in image 1 and those included in image 2 (step S14) shown in FIG. 9 includes a process of determining the correspondence after limiting the range of density values to be processed. It may also include a process of calculating the distance after enlarging the limited density value ranges of the histogram of image 1 and the histogram of image 2.
- This search range limiting method can be applied to a simple (normal) histogram in the same way. More specifically, only the range where the histogram frequency is not 0 (that is, only the pixel values corresponding to colors actually used in the image) is targeted for density value association, thereby speeding up the processing.
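A sketch of the Embodiment 4 range limiting and density-axis rescaling (NumPy; the interpolation used for the stretch is our choice):

```python
import numpy as np

def effective_range(simple_hist):
    # Keep only density values actually used: first/last nonzero frequency.
    nz = np.nonzero(simple_hist)[0]
    return int(nz[0]), int(nz[-1])               # (m_st, m_ed)

def stretch_density_axis(c_hist, m_st, m_ed, cmax=255):
    # Enlarge/reduce along the density-value axis so m_st -> 0 and
    # m_ed -> cmax while keeping the cumulative histogram's shape.
    src = np.linspace(m_st, m_ed, cmax + 1)      # sampling positions
    return np.interp(src, np.arange(len(c_hist)), c_hist)
```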
- Next, a method that uses a simple histogram instead of a cumulative histogram will be described. Here again, image 1 and image 2, in which each pixel has a color defined by the density values of three channels (R, G, B) in the RGB color space, are processed; similar processing can be performed even for a monochrome image having the density value of only one channel.
- simple histograms are generated for the density values of the respective channels of the pixels included in image 1 and image 2. That is, a total of three simple histograms for the R, G, and B channels are generated for the image 1, and a total of three simple histograms for the R, G, and B channels are generated for the image 2.
- Next, the maximum frequency is extracted from each simple histogram, and the corresponding simple histogram is normalized using the extracted maximum frequency.
- The normalized simple histogram can be regarded as lying in a space defined to include the (normalized) simple histogram frequency and the density value of each channel (R, G, B). In this space, the proximity between the density value m of image 1 (an arbitrary coordinate on the simple histogram generated from image 1) and the density value n of image 2 (an arbitrary coordinate on the simple histogram generated from image 2) is defined as shown in equation (13).
- Here, the inclinations θ1 and θ2 are calculated according to equation (14) using the normalized histogram frequency and the density value.
- Here, hist1(m) indicates the normalized histogram frequency of the density value m of image 1, and hist2(n) indicates the normalized histogram frequency of the density value n of image 2.
- The proximity dist(m, n) corresponds to the similarity of the slopes (amounts of change) in the space of the simple histograms, and is defined using the histogram slope at a given density value. That is, the larger the proximity dist(m, n), the more similar the density value m of image 1 and the density value n of image 2.
- The relationship between the density values of image 1 and the density values of image 2 can be expressed as equation (15) using a matrix A and a vector b.
- The total sum of the proximities between the density values of image 1 and the density values of image 2 after density value conversion is then calculated, and when this sum takes its maximum value, the correspondence between the density values of image 1 and image 2 is evaluated as optimized. That is, the correspondence for the density values can be determined to be optimal when S(Cmax, Cmax) calculated according to equation (16) is maximized. Note that when determining (searching for) this correspondence, it is assumed as a constraint condition of equation (15) that the order of the density values is not changed.
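Equations (13)-(16) are not reproduced here. As a sketch of the idea, namely that proximity grows as the slopes of the two normalized simple histograms become more alike, one might write the following; the similarity function is a placeholder:

```python
import numpy as np

def slope_proximity(m, n, hist1, hist2):
    """Slope-based proximity for the simple-histogram variant: the closer
    the inclinations at densities m and n, the larger dist(m, n).
    hist1/hist2: simple histograms normalized by their maximum frequency."""
    t1 = np.gradient(hist1.astype(float))[m]     # inclination on histogram 1
    t2 = np.gradient(hist2.astype(float))[n]     # inclination on histogram 2
    return 1.0 / (1.0 + abs(t1 - t2))            # placeholder similarity
```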
- In this method, a conversion function (density value conversion table) is generated by adopting a simple histogram of density values instead of a cumulative histogram.
- However, because the proximity between the density values of image 1 and image 2 is determined using the slope of the simple histogram, errors are likely to occur depending on the shape of the simple histogram. In this regard, determining the proximity using the distance on the cumulative histogram, as in the first to fourth embodiments, is better in terms of accuracy.
- Embodiment 5 A conversion function (density value conversion table) can be generated by the methods described in the first to fourth embodiments. However, depending on the histogram shape of the density values in the input images, a correct correspondence between density values is not always obtained. Therefore, as a fifth embodiment, a process for correcting the generated conversion function (density value conversion table) afterwards will be described.
- FIG. 23 is a diagram showing an example of a defect in the density value conversion table that is corrected in the fifth embodiment of the present invention.
- FIG. 24 is a diagram illustrating a correction example of the density value conversion table illustrated in FIG.
- In the present embodiment, the generated density value conversion table is checked (for example, before the density value conversion processing of the images is executed) and corrected if necessary. More specifically, when the conversion function (density value conversion table 22) contains a change exceeding a predetermined limit, the content of the density value conversion table 22 is changed.
- Let conv(n) be the density value of image 2 corresponding to the density value n of image 1 (see FIG. 23). When an abrupt change is detected, the conversion-destination density value is updated: according to equation (18), the value in the conversion table is replaced with conv′(n), obtained by correcting the corresponding conv(n).
- The correction reference value corr_th mentioned above can be set based on the following values.
- (1) Average gradient of the density value conversion table: As shown in equation (19), the average gradient of the density value conversion table may be adopted as the correction reference value corr_th. That is, equation (19) calculates the inclination between the two ends of the density value conversion table and determines the correction reference value corr_th from this inclination.
- (2) Inclination of the density value conversion table near the current density value: As shown in equation (20), the inclination of the density value conversion table near the current density value may be adopted as the correction reference value corr_th. That is, equation (20) calculates the gradient of the density value conversion table in the vicinity of the density value m of interest and determines the correction reference value corr_th from this gradient.
- The correction value corr_repl can be set based on the following values.
- (1) Average gradient of the density value conversion table: As shown in equation (22) or (23), the average gradient of the density value conversion table may be adopted as the correction value corr_repl. That is, equations (22) and (23) calculate the slopes at the two ends of the density value conversion table and determine the correction value corr_repl from these slopes.
- (2) Inclination of the density value conversion table from the center density value: As shown in equation (25), the slope of the density value conversion table from the center density value may be adopted as the correction value corr_repl. That is, equation (25) calculates the inclination between the center and the end of the density value conversion table and determines the correction value corr_repl from this inclination.
- In the above description, the density value conversion table is corrected when the density value change amount Δconv(u) exceeds a predetermined value. However, even when Δconv(u) does not exceed the predetermined value, the density value conversion table may be corrected by the same method if some other condition is satisfied.
- That is, the process of determining the correspondence between the density values included in image 1 and those included in image 2 includes a process of changing the conversion function (density value conversion table 22) when the conversion function contains a change exceeding a predetermined limit.
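A sketch of the Embodiment 5 check-and-repair pass. The precise forms of equations (18)-(25) are not reproduced in this text, so the end-to-end average slope is used below for both corr_th and corr_repl, as one of the options the text names:

```python
import numpy as np

def correct_table(conv):
    """Scan the table for steps exceeding corr_th and rebuild them with a
    step of slope corr_repl (option (1): end-to-end average slope)."""
    conv = np.asarray(conv, dtype=float).copy()
    cmax = len(conv) - 1
    avg_slope = (conv[cmax] - conv[0]) / cmax
    corr_th = avg_slope          # correction reference value (eq. (19) style)
    corr_repl = avg_slope        # replacement slope (eqs. (22)/(23) style)
    for u in range(1, cmax + 1):
        if conv[u] - conv[u - 1] > corr_th:
            # Abrupt jump: replace the destination value (eq. (18) style).
            conv[u] = conv[u - 1] + corr_repl
    return np.clip(np.rint(conv), 0, cmax).astype(int)
```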
- Embodiment 6 When the input image 1 and image 2 have different dynamic ranges, correction is necessary. Thus, as a sixth embodiment, a process for correcting the conversion function (density value conversion table) afterwards when color saturation or the like occurs will be described.
- FIG. 25 is a diagram for explaining the correction process of the density value conversion table according to the sixth embodiment of the present invention.
- Assume that image 1 has a narrower dynamic range (a smaller number of density value gradations) than image 2.
- In this case, because the number of density value gradations is small, a false texture may occur in areas with color gradation.
- In addition, color correction may make the gradation conspicuous around areas where image 1 is color saturated and overexposed.
- In the present embodiment, therefore, the occurrence of color saturation (so-called “overexposure”) is determined in the cumulative histograms generated from image 1 and image 2, using logic for detecting color-saturated portions.
- When color saturation has occurred, density value conversion is performed using the color-saturated density value as an upper limit.
- More specifically, when the normalized histogram frequency in the cumulative histogram of image 1 satisfies c_hist1(Cmax − 1) ≥ sat_th1, image 1 is determined to be color saturated. If it is determined to be color saturated, the maximum value msat satisfying c_hist1(Cmax − 1) > c_hist2(m) is then searched for in the cumulative histogram of image 2 over the density value range 0 ≤ m < Cmax.
- This means that the density value (Cmax − 1) in image 1 corresponds to the density value msat in image 2.
- Therefore, the values in the density value conversion table are linearly converted so that the density value msat of image 2 becomes the density value Cmax.
- When conv′(n) > Cmax, the value in the density value conversion table is clipped at Cmax. That is, the density value conv(n) is corrected according to equation (26).
- By this correction, the degree of overexposure can be matched between image 1 and image 2.
- That is, the process of determining the correspondence between the density values included in image 1 and those included in image 2 includes a process of changing the conversion function (density value conversion table 22) when the occurrence of color saturation is detected in at least one of the histogram of image 1 and the histogram of image 2.
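A sketch of the Embodiment 6 saturation handling, written from the description above; the value of sat_th1 and the exact rescale/clip of equation (26) are assumptions:

```python
import numpy as np

def adjust_for_saturation(conv, c_hist1, c_hist2, sat_th1=0.99, cmax=255):
    """If image 1 is color saturated, rescale the table so the density msat
    of image 2 maps to cmax, then clip at cmax (equation (26) style)."""
    conv = np.asarray(conv, dtype=float)
    if c_hist1[cmax - 1] < sat_th1:
        return conv.astype(int)                  # no saturation detected
    below = np.nonzero(c_hist2[:cmax] < c_hist1[cmax - 1])[0]
    if below.size == 0:
        return conv.astype(int)
    msat = int(below[-1])        # largest m with c_hist2[m] < c_hist1[cmax-1]
    conv = conv * (cmax / max(msat, 1))          # linear rescale: msat -> cmax
    return np.clip(np.rint(conv), 0, cmax).astype(int)
```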
- Embodiment 7: The methods described in the first to sixth embodiments above mainly assume that the conversion function (density value conversion table) is generated using the entire input image. Between images captured from different viewpoints, however, the images of the subject are not exactly identical. Therefore, depending on the amount of parallax between the images, it may be preferable to generate the conversion function (density value conversion table) using partial areas set in the input images. Thus, as Embodiment 7, a process of generating the conversion function (density value conversion table) using cumulative histograms generated from partial areas set in each of a plurality of images will be described. That is, after a common area between image 1 and image 2 is identified, density value correction (color correction) between the images is performed.
- That is, in the present embodiment, the process of generating cumulative histograms from simple histograms of density values includes generating the cumulative histograms from partial areas set in each of image 1 and image 2.
- FIG. 26 is a flowchart showing an overall procedure of conversion function generation processing according to the seventh embodiment of the present invention.
- In the flowchart shown in FIG. 26, step S3 is newly added compared to the flowchart shown in FIG. 9.
- Among the steps shown in FIG. 26, those performing the same processing as the steps shown in FIG. 9 are given the same reference numerals.
- Each step shown in FIG. 26 is typically realized by the CPU 102 (FIG. 2) executing the image processing program 112.
- First, the CPU 102 searches for a partial area common to image 1 and image 2 (step S3).
- This partial area search process will be described later.
- Image 1 and image 2 are typically captured using a stereo camera, and these images include the same object. Then, the partial area of image 1 and the partial area of image 2 are set based on the degree of coincidence between them.
- Subsequently, the CPU 102 generates a cumulative histogram of the density values of the pixels included in the common area set for each image. That is, the CPU 102 generates a simple histogram of the density values of the pixels included in the partial area set in the input image 1 (step S4#), and generates a simple histogram of the density values of the pixels included in the partial area set in the input image 2 (step S6#). Steps S4# and S6# may be performed in parallel or serially; when performed serially, the execution order does not matter.
- Then, the CPU 102 generates a cumulative histogram from the simple histogram generated in step S4# (step S8), and likewise generates a cumulative histogram from the simple histogram generated in step S6# (step S10). Thereafter, processing similar to step S12 and subsequent steps in FIG. 9 is executed.
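- As a sketch of steps S4#/S6# and S8/S10, the following assumes an 8-bit single-channel image and illustrative area bounds; it builds a simple histogram over the partial area and then accumulates it.

```python
import numpy as np

def cumulative_histogram_of_area(img, y0, y1, x0, x1, Cmax=256):
    """Simple histogram of the density values inside a partial area
    (steps S4#/S6#), then its normalized cumulative histogram
    (steps S8/S10). The bounds and Cmax are illustrative assumptions."""
    area = img[y0:y1, x0:x1]
    simple_hist, _ = np.histogram(area, bins=Cmax, range=(0, Cmax))
    c_hist = np.cumsum(simple_hist) / area.size   # normalized cumulative histogram
    return simple_hist, c_hist
```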
- Next, a method of searching for and setting the common partial area in step S3 will be described.
- Here, a method using pattern matching and a method using stereo calibration are described as examples.
- Pattern matching: In the process of searching for and setting the common area using pattern matching, partial areas are sequentially set in image 1 and image 2, and the degree of coincidence (similarity) between the set partial areas is evaluated. The partial areas that maximize the degree of coincidence are then set as the common area. That is, pattern matching searches for the position where the pixels best match.
- As described above, the common area corresponds to a range in which a common part of the same subject appears. In principle, the partial area corresponding to the common area set in image 1 and the partial area corresponding to the common area set in image 2 are substantially identical.
- Such a degree of coincidence R_NCC can be calculated using, for example, a correlation value as shown in equation (27).
- Here, the pixel size of the partial area is assumed to be N pixels × M pixels.
- That is, the position that maximizes the degree of coincidence R_NCC calculated according to equation (27) is searched for.
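- Equation (27) itself is not reproduced in this text; the sketch below assumes the standard normalized cross-correlation over an N × M area, which is consistent with the description of R_NCC as a correlation value.

```python
import numpy as np

def r_ncc(area1, area2):
    """Degree of coincidence between two equally sized partial areas,
    assumed here to be plain normalized cross-correlation."""
    a = np.asarray(area1, dtype=float).ravel()
    b = np.asarray(area2, dtype=float).ravel()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```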
- FIG. 27 is a diagram for explaining processing for searching for a common area using pattern matching in the seventh embodiment of the present invention.
- FIG. 28 is a flowchart showing a processing procedure for searching for a common area using pattern matching in the seventh embodiment of the present invention.
- FIG. 29 is a diagram showing an example of a common area set using pattern matching in the seventh embodiment of the present invention.
- As shown in FIG. 27, partial areas are sequentially set in image 1 and image 2.
- The position of a partial area is defined by its center position (X0, Y0). The center position (X0, Y0) is then changed sequentially while the size of the partial area is kept fixed, and the degree of coincidence R_NCC is calculated at each position.
- FIG. 28 shows a processing example in which a partial area is fixed in image 1 and the partial area set in image 2 is moved sequentially.
- Each step shown in FIG. 28 is typically realized by the CPU 102 (FIG. 2) executing the image processing program 112.
- First, the CPU 102 sets a partial area at a predetermined position in image 1 (step S300). Subsequently, the CPU 102 sets a partial area at the reference initial position in image 2 (step S302). Then, the CPU 102 calculates the degree of coincidence between the partial areas currently set in image 1 and image 2 (step S304). This degree of coincidence may be calculated according to equation (27) above. The calculated degree of coincidence is temporarily stored in a storage area together with the corresponding center position.
- Then, the CPU 102 determines whether the X coordinate of the center position defining the partial area set in image 2 has reached its upper limit (step S306). If the X coordinate has not reached the upper limit (NO in step S306), the CPU 102 increments the X coordinate of the center position by 1 (step S308) and repeats the processing from step S304.
- If the X coordinate of the center position has reached the upper limit (YES in step S306), the CPU 102 determines whether the Y coordinate of the center position defining the partial area set in image 2 has reached its upper limit (step S310).
- If the Y coordinate of the center position has not reached the upper limit (NO in step S310), the CPU 102 resets the X coordinate of the center position to its initial value, increments the Y coordinate of the center position by 1 (step S312), and repeats the processing from step S304.
- If the Y coordinate of the center position has reached the upper limit (YES in step S310), the CPU 102 extracts the center position corresponding to the largest of the degrees of coincidence stored so far, and determines the partial area defined by the extracted center position as the common area (step S314). Then, the process ends.
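- The loop of steps S300 to S314 can be sketched as follows, reusing the r_ncc sketch above; the window size N × M and the unit stride follow the incrementing of steps S308/S312, while everything else is an illustrative assumption.

```python
def search_common_area(img1, img2, center1, n, m):
    """Sketch of FIG. 28: fix an N x M partial area in image 1
    (step S300), sweep the center position over image 2 (steps
    S302 to S312), and return the center maximizing R_NCC (step S314)."""
    hn, hm = n // 2, m // 2
    y1, x1 = center1
    ref = img1[y1 - hn:y1 + hn, x1 - hm:x1 + hm]
    best_score, best_center = -1.0, None
    for y in range(hn, img2.shape[0] - hn):          # Y loop (steps S310/S312)
        for x in range(hm, img2.shape[1] - hm):      # X loop (steps S306/S308)
            cand = img2[y - hn:y + hn, x - hm:x + hm]
            score = r_ncc(ref, cand)                 # step S304
            if score > best_score:
                best_score, best_center = score, (y, x)
    return best_center, best_score
```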
- In this way, by searching for the partial area with the highest degree of coincidence, a common area as shown in FIG. 29 can be set in each image.
- Note that color correction (density value conversion) may be performed on the entire input image using the density value conversion table generated from the common area (partial area).
- Stereo calibration: Instead of the pattern matching described above, stereo calibration, which performs optical correction, may be employed. Specifically, for example, when images 1 and 2 are acquired using a stereo camera, a common area may be set by performing processing such as distortion correction and rectification on images 1 and 2 and then performing camera calibration using a pinhole camera model or the like.
- FIG. 30 is a diagram for explaining the process of excluding occlusion areas according to the seventh embodiment of the present invention.
- In the example shown in FIG. 30, part of the rear subject (the standing tree) that is visible in image 1 is not visible in image 2.
- The region of image 1 that is not visible in image 2 (the occlusion area indicated in black in FIG. 30) is therefore removed from image 1, and similarly the region not visible in image 1 (the occlusion area indicated in black in FIG. 30) is removed from image 2.
- The bottom of FIG. 30 shows the pair of images ("removed") from which the occlusion areas have been excluded. That is, the partial area of image 1 and the partial area of image 2 are set so as to exclude the occlusion areas.
- By excluding the occlusion areas in this way, the accuracy of the corresponding point search can be increased, and a more appropriate density value conversion table can be generated.
- According to the present embodiment, a conversion function for correcting density values can be generated more appropriately even when the subject images differ among the plurality of images.
- Embodiments of the present invention include the following aspects.
- An image processing method according to one aspect includes a histogram generation step of generating a cumulative histogram of density values from each of two or more images, and a distance calculation step of calculating distances between density values in a space defined by histogram frequency and density value.
- In a correspondence determination step, the correspondence between the density values of the images is determined and stored.
- Preferably, in the correspondence determination step, the correspondence between density values is determined so as to minimize the sum of the distances between the density values of the images (a sketch of such a minimal-sum matching is given after this list).
- Preferably, in the distance calculation step, the distance is calculated by setting weights in the axis directions of the space. More preferably, in the distance calculation step, a weight is set based on the histogram frequency.
- Preferably, in the distance calculation step, a weight is set based on the density value.
- Preferably, in the correspondence determination step, the correspondence is determined while limiting the range of density values for which the distance calculation is performed.
- Preferably, in the correspondence determination step, the correspondence is determined after deforming the histogram over the limited range of density values.
- Preferably, the correspondence is determined after the histogram is translated.
- Preferably, the method further includes a table correction step of detecting and correcting changes in the density value conversion table that exceed or fall below a threshold.
- Preferably, the method further includes a dynamic range adjustment step of detecting the presence or absence of color saturation from the histogram frequency and correcting the density value conversion table.
- Preferably, the series of processes is performed on the density value of each channel.
- Preferably, in the histogram generation step, a histogram is generated from a partial region of the image.
- Preferably, the image includes the same subject as at least one other image, and the partial region is an area common between the images.
- Preferably, the image includes the same subject as at least one other image, and the partial region is a region excluding the occlusion areas between the images.
- Preferably, the process for specifying the common area is pattern matching or stereo calibration.
- Preferably, the process for specifying the occlusion area is a corresponding point search (pattern matching).
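- The minimal-sum correspondence referenced above can be sketched as a monotone dynamic-programming matching over the two cumulative histograms; the Euclidean distance with axis weights w_h and w_c is an assumed reading of equations (1) and (3), not the exact formulas of this disclosure.

```python
import numpy as np

def dp_correspondence(c_hist1, c_hist2, w_h=1.0, w_c=1.0):
    """Monotone DP matching in the (histogram frequency, density value)
    space: minimize the total distance while only letting the density
    value correspondence move forward, then read off a conversion
    table conv[m] = n (density m in image 1 -> density n in image 2)."""
    Cmax = len(c_hist1)
    d = np.empty((Cmax, Cmax))
    for m in range(Cmax):
        for n in range(Cmax):
            d[m, n] = np.hypot(w_h * (c_hist1[m] - c_hist2[n]), w_c * (m - n))
    cost = np.full((Cmax, Cmax), np.inf)
    cost[0, 0] = d[0, 0]
    for m in range(Cmax):
        for n in range(Cmax):
            if m or n:
                prev = min(cost[m - 1, n] if m else np.inf,
                           cost[m, n - 1] if n else np.inf,
                           cost[m - 1, n - 1] if m and n else np.inf)
                cost[m, n] = d[m, n] + prev
    conv = np.arange(Cmax)          # identity mapping as default
    m, n = Cmax - 1, Cmax - 1
    while m or n:                   # backtrack along the cheapest path
        conv[m] = n
        steps = []
        if m and n:
            steps.append((cost[m - 1, n - 1], m - 1, n - 1))
        if m:
            steps.append((cost[m - 1, n], m - 1, n))
        if n:
            steps.append((cost[m, n - 1], m, n - 1))
        _, m, n = min(steps)
    conv[0] = 0
    return conv
```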
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
Abstract
Description
For example, in the color adjustment device disclosed in Japanese Laid-Open Patent Publication No. 2010-16803 (Patent Document 1), the captured image represented in the RGB color space from one camera A of two cameras A and B is taken as a reference image Sa, and the captured image represented in the RGB color space from the other camera B is taken as an adjustment target image Sb. The adjustment target image Sb is subjected to color space conversion by a color adjustment unit in the color adjustment device, and each component after this color space conversion is subjected to cumulative histogram matching with the corresponding component Ha(k) of the reference image Sa as the reference. After this cumulative histogram matching, the result is converted back to the original RGB space, thereby generating an adjusted image Sb' with the same color tone as the reference image Sa.
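As a point of reference only, the cumulative histogram matching step of this related art can be sketched as follows; the per-component application and the nearest-cumulative-level lookup are assumptions consistent with the description above, not code from the cited publication.

```python
import numpy as np

def cumulative_histogram_match(sb, sa, Cmax=256):
    """Map each density value of the adjustment target image Sb to the
    density value of the reference image Sa whose normalized cumulative
    frequency is closest (applied per component after the color space
    conversion). Integer-valued images in [0, Cmax) are assumed."""
    h_sb, _ = np.histogram(sb, bins=Cmax, range=(0, Cmax))
    h_sa, _ = np.histogram(sa, bins=Cmax, range=(0, Cmax))
    c_sb = np.cumsum(h_sb) / sb.size
    c_sa = np.cumsum(h_sa) / sa.size
    lut = np.clip(np.searchsorted(c_sa, c_sb), 0, Cmax - 1)
    return lut[sb]                 # adjusted image Sb'
```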
Embodiments of the present invention are directed to an image processing method for determining a conversion function for correcting density values between at least a first image and a second image. These images are images of the same subject, and there may be three or more of them. Such a plurality of images is typically obtained by capturing the same subject from different viewpoints using a plurality of imaging devices (cameras). More specifically, each image is obtained by capturing the same subject with a plurality of cameras (typically a stereo camera) arranged a predetermined distance apart.
<B. System Configuration>
First, implementation examples of an image processing apparatus that realizes the conversion function generation processing according to the embodiments of the present invention will be described.
FIG. 2 is a block diagram showing a configuration in which the conversion function generation processing according to the embodiment of the present invention is implemented on a personal computer.
FIG. 3 is a block diagram showing a configuration in which the conversion function generation processing according to the embodiment of the present invention is implemented by a configuration similar to a digital camera.
In addition to the implementation on a personal computer and the implementation by a digital-camera-like configuration described above, the processing may also be implemented on a mobile phone. Furthermore, it may take the form of a so-called cloud service in which at least one server device realizes the processing according to the present embodiment. In this case, a configuration is assumed in which the user transmits at least two images to be processed from his or her own terminal (a personal computer, a smartphone, or the like) to the server device (cloud side), and the server device performs the image processing according to the present embodiment on the transmitted images. Moreover, the server device need not perform all of the functions (processing); the user terminal and the server device may cooperate to realize the image processing according to the present embodiment.
Before describing the conversion function generation processing according to the present embodiment, related art will first be described.
First, the overall procedure of the image processing method according to the present embodiment will be described.
Next, the functional configuration of the image processing apparatus and/or the image processing program according to the present embodiment will be described.
First, as Embodiment 1, a method of determining the conversion function (density value conversion table 22) using cumulative histograms of density values and DP (Dynamic Programming) matching will be described.
Next, as Embodiment 2, a configuration that uses weights when calculating the distance between density values of images in the space will be described. Embodiment 2 differs from Embodiment 1 only in the distance calculation method in the distance calculation unit 18 (FIG. 10); the other processing and configuration are the same as in Embodiment 1, and a detailed description of the common parts will not be repeated.
(f1: Weights assigned to the respective axis directions)
First, consider setting weights w_h and w_c for the respective axis components of the distance dist(m, n) defined in equation (1) above. In this case, equation (1) can be rewritten as equation (3).
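For illustration, a weighted distance of the kind equation (3) describes might look as follows; the Euclidean base form is an assumption, since equation (1) is not reproduced here.

```python
def weighted_dist(m, n, c_hist1, c_hist2, w_h, w_c):
    """Distance between density value m on cumulative histogram 1 and
    density value n on cumulative histogram 2, with weight w_h on the
    histogram-frequency axis and w_c on the density-value axis."""
    dh = w_h * (c_hist1[m] - c_hist2[n])
    dc = w_c * (m - n)
    return (dh * dh + dc * dc) ** 0.5
```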
Next, processing for varying the weight based on the histogram frequency of the density value of interest will be described. As shown in FIG. 13 above, the direction in which the correspondence between density values should be searched changes with the magnitude of the histogram frequency. Therefore, an appropriate search direction can be set dynamically by varying the weight coefficient according to the histogram frequency at the density value being searched.
Next, processing for varying the weight based on the density value of interest will be described. Even when the distance between density values in the space is relatively small, a large difference between the density values before and after the conversion can cause image quality problems. It is therefore preferable to search preferentially along the histogram frequency axis. Thus, an appropriate search direction can be set dynamically by varying the weight coefficient according to the density value of interest.
Next, processing for varying the weight based on both the histogram frequency and the density value will be described. More specifically, the slope of the cumulative histogram at the density value of interest is calculated from the histogram frequency and the magnitude of that density value, and the weight is determined from the calculated slope. By using the slope of the cumulative histogram in this way, the search proceeds along the density value axis in ranges where the slope of the cumulative histogram is steep, and along the histogram frequency axis in ranges where the slope is gentle. Varying the weight coefficient in this way makes it possible to set an appropriate search direction dynamically.
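One possible realization of this slope-driven weighting is sketched below; the specific mapping from the slope to the pair (w_h, w_c) is purely an assumption for illustration.

```python
def slope_based_weights(c_hist, n, eps=1e-6):
    """Where the cumulative histogram is steep, make density-axis moves
    cheap (search along the density value axis); where it is gentle,
    make frequency-axis moves cheap (search along the frequency axis)."""
    lo, hi = max(n - 1, 0), min(n + 1, len(c_hist) - 1)
    slope = (c_hist[hi] - c_hist[lo]) / max(hi - lo, 1)
    w_c = 1.0 / (abs(slope) + eps)   # steep slope -> cheap density-axis moves
    w_h = abs(slope) + eps           # steep slope -> costly frequency-axis moves
    return w_h, w_c
```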
Next, Embodiment 3: when searching for the correspondence between density values, the correspondence may be determined erroneously if the histograms cross each other in the space. A configuration that can prevent erroneous determination of the correspondence due to such histogram crossing will therefore be described. Embodiment 3 differs from Embodiment 1 only in the distance calculation method in the determination processing unit 16 and the distance calculation unit 18 (FIG. 10); the other processing and configuration are the same as in Embodiment 1, and a detailed description of the common parts will not be repeated.
As a simple method, the presence or absence of histogram crossing can be determined from the difference between the histogram frequencies of the two cumulative histograms at the same density value. That is, the difference between the cumulative histogram frequencies at each density value n is calculated in turn, and the minimum value Hdist_min of these differences over all density values is obtained. That is, the minimum value Hdist_min can be calculated according to equation (8) below.
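Equation (8) is not reproduced in this text; assuming it takes the difference as c_hist1(n) − c_hist2(n), the minimum can be computed as follows, with a negative result indicating that the two cumulative histograms cross.

```python
def hdist_min(c_hist1, c_hist2):
    """Minimum difference between the two cumulative histogram
    frequencies over all density values n (assumed form of Eq. (8))."""
    return min(c1 - c2 for c1, c2 in zip(c_hist1, c_hist2))
```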
In the method described above, the shift amount is determined so that histogram crossing does not occur in each channel. However, when each pixel in the image is defined by the density values of a plurality of channels, it is preferable to average the histogram shift amounts across the channels. A method of determining the histogram shift amount of each channel in this way is described below.
Next, Embodiment 4: when searching for the correspondence between density values, the time and computational cost required for the correspondence search can be reduced by targeting only some valid density values among all the density values in the image. In the present embodiment, therefore, a configuration for calculating the distance after limiting the range of target density values will be described. Embodiment 4 differs from Embodiment 1 only in the correspondence search processing in the determination processing unit 16 (FIG. 10) and the distance calculation method in the distance calculation unit 18 (FIG. 10); the other processing and configuration are the same as in Embodiment 1, and a detailed description of the common parts will not be repeated.
Next, a reference example in which the conversion function (density value conversion table 22) is determined using simple histograms of density values and the least-squares method will be described.
A conversion function (density value conversion table) can be generated by the methods shown in Embodiments 1 to 4 above. However, depending on the histogram shape of the density values in the input images, a correct correspondence between density values is not always obtained. Therefore, as Embodiment 5, a process for correcting the generated conversion function (density value conversion table) after the fact will be described.
(1) Average slope of the density value conversion table
As shown in equation (19), the average slope of the density value conversion table may be adopted as the correction reference value corr_th. That is, equation (19) calculates the slope between both ends of the density value conversion table and determines the correction reference value corr_th from this slope.
As shown in equation (20), the slope of the density value conversion table in the vicinity of the current density value may be adopted as the correction reference value corr_th. That is, equation (20) calculates the slope of the density value conversion table in the vicinity of the density value m of interest and determines the correction reference value corr_th from this slope.
As shown in equation (21), the slope of the density value conversion table from the center density value may be adopted as the correction reference value corr_th. That is, equation (21) calculates the slope between the center and the end of the density value conversion table and determines the correction reference value corr_th from this slope.
(1) Average slope of the density value conversion table
As shown in equation (22) or (23), the average slope of the density value conversion table may be adopted as the correction value corr_repl. That is, equations (22) and (23) calculate the slopes at both ends of the density value conversion table and determine the correction value corr_repl from these slopes.
As shown in equation (24), the slope of the density value conversion table in the vicinity of the current density value may be adopted as the correction value corr_repl. That is, equation (24) calculates the slope of the density value conversion table in the vicinity of the density value m of interest and determines the correction value corr_repl from this slope.
As shown in equation (25), the slope of the density value conversion table from the center density value may be adopted as the correction value corr_repl. That is, equation (25) calculates the slope between the center and the end of the density value conversion table and determines the correction value corr_repl from this slope.
When the dynamic ranges of the input image 1 and image 2 differ, correction of that difference is necessary. Therefore, as Embodiment 6, a process for correcting the conversion function (density value conversion table) after the fact when color saturation or the like occurs will be described.
The methods shown in Embodiments 1 to 6 above mainly assume the case where the conversion function (density value conversion table) is generated using the entire input image. Between images captured from different viewpoints, however, the images of the subject are not exactly identical. Therefore, depending on the amount of parallax between the images, it may be preferable to generate the conversion function (density value conversion table) using partial regions set in each input image. Thus, as Embodiment 7, a process of generating the conversion function (density value conversion table) using cumulative histograms generated from partial regions set in each of a plurality of images will be described. That is, after a common region between image 1 and image 2 is identified, density value correction (color correction) between the images is performed.
In the process of searching for and setting the common region using pattern matching, partial regions are sequentially set in image 1 and image 2, and the degree of coincidence (similarity) between the set partial regions is evaluated. The partial regions that maximize the degree of coincidence are then set as the common region. That is, pattern matching searches for the position where the pixels best match. As described above, the common region corresponds to the range in which a common part of the same subject appears; in principle, the partial region corresponding to the common region set in image 1 and the partial region corresponding to the common region set in image 2 substantially coincide.
Instead of the pattern matching described above, stereo calibration, which performs optical correction, may be adopted. Specifically, for example, when images 1 and 2 are acquired using a stereo camera, a common region may be set by performing processing such as distortion correction and rectification on images 1 and 2 and then performing camera calibration using a pinhole camera model or the like.
When a subject is captured from different viewpoints using a stereo camera, parallax exists between the images, so that, due to occlusion, there may be regions for which no corresponding image part exists in the other image. In this case, it is preferable to identify the occlusion regions using a corresponding point search such as pattern matching, exclude the identified occlusion regions, and then create the density value conversion table and the like.
According to the present embodiment, a conversion function for correcting density values can be generated more appropriately even when the subject images differ between the plurality of images.
Embodiments of the present invention include the following aspects.
More preferably, in the distance calculation step, a weight is set based on the histogram frequency.
Preferably, in the correspondence determination step, the correspondence is determined while limiting the range of density values for which the distance calculation is performed.
Preferably, in the histogram generation step, a histogram is generated from a partial region of the image.
Claims (16)
- An image processing method comprising: a generation step of generating a cumulative histogram of the density values of the pixels included in each of at least a first image and a second image;
a calculation step of calculating, in a space defined by the histogram frequency and the density value of the cumulative histograms, a distance between a density value on a first histogram generated from the first image and a density value on a second histogram generated from the second image; and
a determination step of determining, based on the calculated distances between density values, a correspondence between the density values included in the first image and the density values included in the second image, and determining, from the determined correspondence, a conversion function for correcting density values between the first image and the second image. - The image processing method according to claim 1, wherein the determination step includes a step of, when a first density value included in the first image corresponds to a second density value included in the second image, determining the correspondence for a third density value included in the first image that is larger than the first density value, taking as search targets the density values included in the second image that are equal to or larger than the second density value.
- The image processing method according to claim 1 or 2, wherein the determination step includes a step of determining the correspondence between density values such that the sum of the distances between all density values included in the first image and the corresponding density values included in the second image is minimized.
- The image processing method according to any one of claims 1 to 3, wherein the calculation step includes a step of setting, in calculating the distance, a weight according to the distance along the axis corresponding to the histogram frequency in the space.
- The image processing method according to any one of claims 1 to 4, wherein the calculation step includes a step of setting, in calculating the distance, a weight according to the distance along the axis corresponding to the density value in the space.
- The image processing method according to any one of claims 1 to 5, wherein the calculation step includes a step of calculating the distance after translating at least one of the first histogram and the second histogram in the space.
- The image processing method according to any one of claims 1 to 6, wherein the calculation step includes a step of calculating the distance after limiting the range of target density values.
- The image processing method according to claim 7, wherein the calculation step includes a step of calculating the distance after expanding, of the first histogram and the second histogram, the histogram over the limited range of density values.
- The image processing method according to any one of claims 1 to 8, wherein the determination step includes a step of changing the conversion function when the conversion function contains a change exceeding a predetermined limit range.
- The image processing method according to any one of claims 1 to 9, wherein the determination step includes a step of changing the conversion function when occurrence of color saturation is detected in at least one of the first histogram and the second histogram.
- The image processing method according to any one of claims 1 to 10, wherein each of the pixels included in the first image and the second image is defined by density values of a plurality of channels, and
the calculation step includes a step of calculating the distance between density values for each channel. - The image processing method according to any one of claims 1 to 11, wherein the generation step includes a step of generating the cumulative histograms from partial regions set in each of the first image and the second image.
- The image processing method according to claim 12, wherein the partial region of the first image and the partial region of the second image are set based on the degree of coincidence between them.
- The image processing method according to claim 12, wherein the partial region of the first image and the partial region of the second image are set so as to exclude an occlusion region.
- An image processing apparatus comprising:
generation means for generating a cumulative histogram of the density values of the pixels included in each of at least a first image and a second image;
calculation means for calculating, in a space defined by the histogram frequency and the density value of the cumulative histograms, a distance between a density value on a first histogram generated from the first image and a density value on a second histogram generated from the second image; and
determination means for determining, based on the calculated distances between density values, a correspondence between the density values included in the first image and the density values included in the second image, and determining, from the determined correspondence, a conversion function for correcting density values between the first image and the second image. - An image processing program causing a computer to execute:
a generation step of generating a cumulative histogram of the density values of the pixels included in each of at least a first image and a second image;
a calculation step of calculating, in a space defined by the histogram frequency and the density value of the cumulative histograms, a distance between a density value on a first histogram generated from the first image and a density value on a second histogram generated from the second image; and
a determination step of determining, based on the calculated distances between density values, a correspondence between the density values included in the first image and the density values included in the second image, and determining, from the determined correspondence, a conversion function for correcting density values between the first image and the second image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP12864866.4A EP2804369A4 (en) | 2012-01-10 | 2012-12-11 | IMAGE PROCESSING, PICTURE PROCESSING DEVICE AND PICTURE PROCESSING PROGRAM |
US14/370,944 US9542733B2 (en) | 2012-01-10 | 2012-12-11 | Image processing method, imaging processing apparatus and image processing program for correcting density values between at least two images |
JP2013553216A JP6020471B2 (ja) | 2012-01-10 | 2012-12-11 | Image processing method, image processing device, and image processing program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012002417 | 2012-01-10 | ||
JP2012-002417 | 2012-01-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013105381A1 true WO2013105381A1 (ja) | 2013-07-18 |
Family
ID=48781341
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/082016 WO2013105381A1 (ja) | Image processing method, image processing device, and image processing program | 2012-01-10 | 2012-12-11 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9542733B2 (ja) |
EP (1) | EP2804369A4 (ja) |
JP (1) | JP6020471B2 (ja) |
WO (1) | WO2013105381A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019012400A (ja) * | 2017-06-30 | 2019-01-24 | Mitsubishi Space Software Co., Ltd. | Image comparison device, image comparison program, and image comparison method |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015158728A (ja) * | 2014-02-21 | 2015-09-03 | Toshiba Tec Corporation | Information browsing device and information browsing program |
TWI552600B (zh) * | 2014-12-25 | 2016-10-01 | Vivotek Inc. | Image correction method for image stitching, and related camera and image processing system with image correction function |
JP2016189946A (ja) * | 2015-03-31 | 2016-11-10 | Fujifilm Corporation | Medical image registration apparatus, method, and program |
CN109565577B (zh) * | 2016-07-27 | 2022-02-15 | Toppan Printing Co., Ltd. | Color correction device, color correction system, and color correction method |
JP6740177B2 (ja) * | 2017-06-14 | 2020-08-12 | Canon Inc. | Image processing apparatus, image processing method, and program |
US10579880B2 (en) * | 2017-08-31 | 2020-03-03 | Konica Minolta Laboratory U.S.A., Inc. | Real-time object re-identification in a multi-camera system using edge computing |
CN109314773A (zh) * | 2018-03-06 | 2019-02-05 | Hong Kong Applied Science and Technology Research Institute Co., Ltd. | Method for generating a high-quality panorama with color, luminance, and sharpness balancing |
US20190281215A1 (en) * | 2018-03-06 | 2019-09-12 | Hong Kong Applied Science and Technology Research Institute Company, Limited | Method for High-Quality Panorama Generation with Color, Luminance, and Sharpness Balancing |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004343483A (ja) * | 2003-05-16 | 2004-12-02 | Acutelogic Corp | Camera shake correction device and method, and camera shake detection device |
JP2008524673A (ja) * | 2004-12-06 | 2008-07-10 | Electronics and Telecommunications Research Institute | Apparatus and method for correcting image distortion of a stereo camera |
JP2009049759A (ja) * | 2007-08-21 | 2009-03-05 | KDDI Corp | Color correction device, method, and program |
JP2010016803A (ja) | 2008-06-04 | 2010-01-21 | TOA Corp | Color adjustment device and method between a plurality of color cameras |
JP2011095131A (ja) * | 2009-10-30 | 2011-05-12 | Dainippon Screen Mfg Co Ltd | Image processing method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7328111B2 (en) | 2003-11-07 | 2008-02-05 | Mitsubishi Electric Research Laboratories, Inc. | Method for determining similarities between data sequences using cross-correlation matrices and deformation functions |
US7986351B2 (en) | 2005-01-27 | 2011-07-26 | Qualcomm Incorporated | Luma adaptation for digital image processing |
DE602006019481D1 (de) * | 2005-11-29 | 2011-02-17 | Nec Corp | Mustererkennungsvorrichtung, mustererkennungsverfahren und mustererkennungsprogramm |
US7796812B2 (en) * | 2006-10-17 | 2010-09-14 | Greenparrotpictures, Limited | Method for matching color in images |
EP2320378A1 (en) * | 2009-11-06 | 2011-05-11 | Nxp B.V. | Colour image enhancement |
2012
- 2012-12-11 US US14/370,944 patent/US9542733B2/en not_active Expired - Fee Related
- 2012-12-11 JP JP2013553216A patent/JP6020471B2/ja not_active Expired - Fee Related
- 2012-12-11 WO PCT/JP2012/082016 patent/WO2013105381A1/ja active Application Filing
- 2012-12-11 EP EP12864866.4A patent/EP2804369A4/en not_active Withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004343483A (ja) * | 2003-05-16 | 2004-12-02 | Acutelogic Corp | Camera shake correction device and method, and camera shake detection device |
JP2008524673A (ja) * | 2004-12-06 | 2008-07-10 | Electronics and Telecommunications Research Institute | Apparatus and method for correcting image distortion of a stereo camera |
JP2009049759A (ja) * | 2007-08-21 | 2009-03-05 | KDDI Corp | Color correction device, method, and program |
JP2010016803A (ja) | 2008-06-04 | 2010-01-21 | TOA Corp | Color adjustment device and method between a plurality of color cameras |
JP2011095131A (ja) * | 2009-10-30 | 2011-05-12 | Dainippon Screen Mfg Co Ltd | Image processing method |
Non-Patent Citations (2)
Title |
---|
See also references of EP2804369A4 |
SHINTARO INAMURA; AKIRA TAGUCHI: "Color Calibration between Two Different Cameras", IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, vol. 107, no. 374, 4 December 2007 (2007-12-04), pages 13-18
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019012400A (ja) * | 2017-06-30 | 2019-01-24 | Mitsubishi Space Software Co., Ltd. | Image comparison device, image comparison program, and image comparison method |
JP2022000829A (ja) * | 2017-06-30 | 2022-01-04 | Mitsubishi Space Software Co., Ltd. | Image comparison device, image comparison program, and image comparison method |
JP7198896B2 (ja) | 2017-06-30 | 2023-01-04 | Mitsubishi Electric Software Corporation | Image comparison device, image comparison program, and image comparison method |
Also Published As
Publication number | Publication date |
---|---|
US9542733B2 (en) | 2017-01-10 |
JP6020471B2 (ja) | 2016-11-02 |
US20150043817A1 (en) | 2015-02-12 |
EP2804369A1 (en) | 2014-11-19 |
JPWO2013105381A1 (ja) | 2015-05-11 |
EP2804369A4 (en) | 2016-03-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6020471B2 (ja) | Image processing method, image processing device, and image processing program | |
US11948282B2 (en) | Image processing apparatus, image processing method, and storage medium for lighting processing on image using model data | |
US8224069B2 (en) | Image processing apparatus, image matching method, and computer-readable recording medium | |
US8928736B2 (en) | Three-dimensional modeling apparatus, three-dimensional modeling method and computer-readable recording medium storing three-dimensional modeling program | |
US9251589B2 (en) | Depth measurement apparatus, image pickup apparatus, and depth measurement program | |
US9378583B2 (en) | Apparatus and method for bidirectionally inpainting occlusion area based on predicted volume | |
WO2015146230A1 (ja) | Video display device and video display system | |
WO2012086120A1 (ja) | Image processing device, imaging device, image processing method, and program | |
JP6452360B2 (ja) | Image processing device, imaging device, image processing method, and program | |
JP2015197745A (ja) | Image processing device, imaging device, image processing method, and program | |
WO2013038833A1 (ja) | Image processing system, image processing method, and image processing program | |
WO2014030630A1 (ja) | Device for generating a disparity map and method therefor | |
JP5747797B2 (ja) | Image processing device, image processing method, and image processing program | |
WO2018147059A1 (ja) | Image processing device, image processing method, and program | |
EP3189493B1 (en) | Depth map based perspective correction in digital photos | |
KR101875532B1 (ko) | Apparatus and method for hierarchical stereo matching | |
KR101281003B1 (ko) | Image system and image processing method using multi-view images | |
JP5478533B2 (ja) | Omnidirectional image generation method, image generation device, and program | |
CN111630569A (zh) | Binocular matching method, visual imaging device, and device with storage function | |
JP6579764B2 (ja) | Image processing device, image processing method, and program | |
JP6320165B2 (ja) | Image processing device, control method therefor, and program | |
JP2010154323A (ja) | Image processing device, image extraction method, and program | |
JP2016062447A (ja) | Image processing device, image processing method, and image processing program | |
KR101804157B1 (ko) | Disparity map generation method based on improved SGM | |
JP5751117B2 (ja) | Image generation device, image generation method, and program for image generation device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12864866 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013553216 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14370944 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2012864866 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012864866 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |