WO2016063452A1 - Image processing apparatus, image capturing apparatus, image processing system, and image processing method - Google Patents

Image processing apparatus, image capturing apparatus, image processing system, and image processing method

Info

Publication number
WO2016063452A1
WO2016063452A1 (application PCT/JP2015/004697, JP2015004697W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
data
resolution
acquirer
target pixel
Prior art date
Application number
PCT/JP2015/004697
Other languages
English (en)
Inventor
Norihito Hiasa
Original Assignee
Canon Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Kabushiki Kaisha filed Critical Canon Kabushiki Kaisha
Publication of WO2016063452A1 publication Critical patent/WO2016063452A1/fr

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H04N23/81 - Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N23/815 - Camera processing pipelines; Components thereof for controlling the resolution by using a single image

Definitions

  • The present invention relates to an image processing apparatus, an image pickup apparatus, an image processing system, and an image processing method that reduce harmful effects caused by a resolution enhancement process of an image.
  • A resolution enhancement process is performed by, for example, a simple sharpening process, a process using an inverse filter such as a Wiener filter, or a super-resolution process performing an estimation based on a least-squares method.
  • These processes generate harmful effects such as noise amplification and ringing, deteriorating image quality even as they improve the sense of resolution of images.
  • Patent Literature 1 discloses a generation method that changes the intensity of a resolution enhancement process using an inverse filter between edge regions and other regions, generating high-resolution images without increasing noise amplification.
  • Patent Literature 2 discloses a noise reduction method, called an NLM (Non-local Means) filter, that utilizes the self-similarity of the object space.
  • The NLM filter replaces the signal value of a target pixel with a weighted average of the signal values of a plurality of pixels around the target pixel to reduce noise. The weight used in the weighted averaging is determined on the basis of the distance between a vector whose components are the signal values of a partial region near the target pixel and a vector similarly generated from a pixel around the target pixel.
  • The above method enables image denoising while maintaining the sense of resolution of edges.
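The NLM replacement just described can be sketched as follows. This is a minimal single-image version, not the two-image variant the patent proposes later; the parameter names (`patch`, `search`, `h`) and the Gaussian weighting are assumptions chosen for illustration.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.5):
    """Minimal Non-local Means sketch: each pixel becomes a weighted
    average of search-window pixels whose surrounding patches resemble
    the patch around the target pixel."""
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            cy, cx = y + pad, x + pad
            target = padded[cy - half_p:cy + half_p + 1, cx - half_p:cx + half_p + 1]
            weights, values = [], []
            for dy in range(-half_s, half_s + 1):
                for dx in range(-half_s, half_s + 1):
                    ry, rx = cy + dy, cx + dx
                    ref = padded[ry - half_p:ry + half_p + 1, rx - half_p:rx + half_p + 1]
                    d2 = np.mean((target - ref) ** 2)     # patch distance
                    weights.append(np.exp(-d2 / h ** 2))  # similarity weight
                    values.append(padded[ry, rx])
            w = np.array(weights)
            out[y, x] = np.dot(w, values) / w.sum()       # weighted average
    return out
```

Because the weights depend on patch similarity rather than spatial distance alone, flat regions are averaged strongly while dissimilar structures (edges) contribute little.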
  • The method disclosed in PTL1 merely decreases the intensity of the resolution enhancement process in non-edge regions, and noise amplification and ringing still occur in edge regions as in conventional methods. Additionally, if the method disclosed in PTL2 is applied to high-resolution images, amplified noise is reduced, but strongly generated ringing is judged to be an edge and fails to be removed.
  • the present invention provides an image processing apparatus, an image pickup apparatus, an image processing system, and an image processing method that are capable of reducing harmful effects generated according to the resolution enhancement process of an image.
  • An image processing apparatus includes a first acquirer that acquires a first image generated by enhancing resolution of an input image and a second image having a resolution degree smaller than a resolution degree of the first image, a selector that selects a target pixel from the first image, a second acquirer that acquires first data including a pixel corresponding to the target pixel and a plurality of second data from the second image, a determiner that determines a weight for each of the plurality of the second data on the basis of correlation between the first and second data, a third acquirer that acquires signal values of a plurality of reference pixels corresponding to the plurality of the second data from the first image, and a generator that generates an output pixel corresponding to the target pixel on the basis of a signal value calculated from the signal values of the plurality of the reference pixels and the weights.
  • An image pickup apparatus as another aspect of the present invention includes an image pickup unit that images an object image formed by an imaging optical system to output an input image, a first acquirer that acquires a first image generated by enhancing resolution of the input image and a second image having a resolution degree smaller than a resolution degree of the first image, a selector that selects a target pixel from the first image, a second acquirer that acquires first data including a pixel corresponding to the target pixel and a plurality of second data from the second image, a determiner that determines a weight for each of the plurality of the second data on the basis of correlation between the first and second data, a third acquirer that acquires signal values of a plurality of reference pixels corresponding to the plurality of the second data from the first image, and a generator that generates an output pixel corresponding to the target pixel on the basis of a signal value calculated from the signal values of the plurality of the reference pixels and the weights.
  • An image processing system includes an image pickup unit that images an object image formed by an imaging optical system to output an input image, an image generator that generates a first image generated by enhancing resolution of the input image and a second image having a resolution degree smaller than a resolution degree of the first image, a first acquirer that acquires the first and second images, a selector that selects a target pixel from the first image, a second acquirer that acquires first data including a pixel corresponding to the target pixel and a plurality of second data from the second image, a determiner that determines a weight for each of the plurality of the second data on the basis of correlation between the first and second data, a third acquirer that acquires signal values of a plurality of reference pixels corresponding to the plurality of the second data from the first image, and a generator that generates an output pixel corresponding to the target pixel on the basis of a signal value calculated from the signal values of the plurality of the reference pixels and the weights.
  • An image processing method as another aspect of the present invention includes the steps of acquiring a first image generated by enhancing resolution of an input image and a second image having a resolution degree smaller than a resolution degree of the first image, selecting a target pixel from the first image, acquiring first data including a pixel corresponding to the target pixel from the second image, acquiring a plurality of second data from the second image, determining a weight for each of the plurality of the second data on the basis of correlation between the first and second data, acquiring signal values of a plurality of reference pixels corresponding to the plurality of the second data from the first image, and generating an output pixel corresponding to the target pixel on the basis of a signal value calculated from the signal values of the plurality of the reference pixels and the weights.
  • the present invention can provide an image processing apparatus, image pickup apparatus, image processing system, and image processing method that are capable of reducing harmful effects generated according to a resolution enhancement process of an image.
  • FIG. 1 is an appearance view of an image pickup apparatus according to a first embodiment.
  • FIG. 2 is a block view of the image pickup apparatus according to the first embodiment.
  • FIG. 3 is a flowchart of an image pickup processing method according to the first embodiment.
  • FIG. 4 is a configuration diagram of an image processor according to the first embodiment.
  • FIG. 5 is an explanatory view regarding a resolution degree of an image.
  • FIG. 6 is a relationship diagram between a first image and a second image according to the first embodiment.
  • FIG. 7 is an explanatory view regarding a harmful effect reduction process.
  • FIG. 8 is a flowchart of an image processing method according to a second embodiment.
  • FIG. 9 is a relationship diagram between a first image and a second image according to the second embodiment.
  • FIG. 10 is an appearance view of an image processing system according to a third embodiment.
  • FIG. 11 is a block view of the image processing system according to the third embodiment.
  • FIG. 12 is an appearance view of an image pickup system according to a fourth embodiment.
  • FIG. 13 is a block view of the image pickup system according to the fourth embodiment.
  • A first image is formed by enhancing the resolution of an input image using an inverse filter or a super-resolution process. Harmful effects (for example, noise amplification and ringing) caused by the resolution enhancement process then occur in the first image.
  • a target pixel as a target for reducing harmful effects is selected from the first image.
  • A partial region including a pixel (target corresponding pixel) corresponding to the target pixel is acquired as target corresponding data (first data) from a second image.
  • The second image is formed by imaging the same object space as the first image and has a resolution degree smaller than that of the first image.
  • The second image has smaller harmful effects from the resolution enhancement process than the first image, or no harmful effects at all.
  • For example, the second image is the input image itself, or an image formed by applying a weaker resolution enhancement process to the input image than the one used for the first image.
  • A resolution degree denotes the contrast and frequency distribution of an image; the details will be described later.
  • A plurality of reference corresponding data (second data), each of which is a partial region, are acquired from the second image to calculate correlation values relative to the target corresponding data.
  • A weight for each reference corresponding data is determined according to the correlation values: the larger the correlation value, in other words, the more similar the reference corresponding data is to the target corresponding data, the larger the weight.
  • Signal values of reference pixels corresponding to the reference corresponding data are acquired from the first image, and a weighted average of those signal values is calculated on the basis of the weights obtained from the reference corresponding data. Replacing the signal value of the target pixel with the weighted average completes the harmful effect reduction method.
  • Calculating correlation values from the second image, which has small harmful effects, yields high-precision weights that suppress the influence of harmful effects. Averaging using these weights can suppress harmful effects such as ringing in addition to the noise reduction of a conventional NLM filter.
  • FIG.1 is an appearance view of the image pickup apparatus 100.
  • FIG.2 is a block view of the image pickup apparatus 100.
  • An image acquirer 101 includes an imaging optical system and an image pickup element.
  • The image pickup element is, for example, a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor) sensor.
  • The imaging optical system condenses light incident on the image acquirer 101, and the image pickup element converts the light into analogue electric signals.
  • An A/D convertor 102 converts the analogue electric signals into digital signals, and the digital signals are input to an image processor 103.
  • the image processor (image processing apparatus) 103 performs a resolution enhancement process and a reduction process of harmful effects due to the resolution enhancement process in addition to predetermined processes. These processes will be described later.
  • these processes use optical characteristics of the image acquirer 101 stored in a memory 104 and information regarding an image pickup condition of the image acquirer 101 detected by a state detector 109.
  • The optical characteristics denote distortion and diffraction of the imaging optical system in the image acquirer 101, or information regarding defocus (for example, an optical transfer function and a point image intensity distribution).
  • the image pickup condition denotes a state of the image acquirer 101 while imaging, such as a state of diaphragm, a focus position, a blurred trail of the image pickup apparatus 100 during exposure, or a focal length of a zoom lens.
  • the state detector 109 may acquire information regarding the image pickup condition from a system controller 107 or from a controller 108.
  • a processed image is stored in an image recording medium 106 in a predetermined format.
  • information regarding the image pickup condition may be also stored at the same time. Additionally, an image already stored in the image recording medium 106 may be retrieved so that the image processor 103 enhances resolution and reduces harmful effects.
  • the image is output to a display 105, such as a liquid crystal display.
  • The system controller 107 performs the series of controls described above, and the controller 108 performs mechanical drives of the image acquirer 101 according to instructions of the system controller 107.
  • FIG.4 is a configuration diagram of the image processor 103.
  • a first acquirer 103a acquires an input image acquired by the image acquirer 101.
  • The input image loses information of the object space due to various factors during imaging, such as distortion and diffraction of the imaging optical system in the image acquirer 101, sampling by the image pickup element, or blur of the image pickup apparatus 100 during exposure.
  • The input image further includes noise generated in the image pickup element.
  • The first acquirer 103a generates, from the input image, two resolution-enhanced images having different resolution degrees.
  • the image having higher resolution is the first image
  • the image having lower resolution is the second image.
  • The second image may be the input image itself or an image formed by applying a low-resolution process, such as a smoothing filter, to the input image. However, it is preferable that the second image have a resolution degree larger than that of the input image, as described later.
  • The resolution degree denotes the amount of information of the object space, such as the contrast of an image, the maximum resolution frequency, and the frequency distribution carried by the pixels.
  • The maximum resolution frequency denotes the absolute value of the maximum spatial frequency at which the spectrum intensity is larger than a threshold value.
  • The threshold value may be determined by the image quality required of the image.
  • A larger contrast and a larger maximum resolution frequency mean a larger resolution degree.
  • In this embodiment, an integrated value of the frequency distribution of an image is used to determine the resolution degree with high precision.
  • FIG.5 illustrates the outline. For simplification, FIG.5 shows only the spectrum intensity along a one-dimensional spatial frequency direction of an image.
  • The area of the horizontally hatched region denotes the integrated value of the spectrum intensity 211 of the first image 200, and the area of the vertically hatched region denotes the integrated value of the spectrum intensity 311 of the second image 300. Since the spatial frequency is actually two-dimensional, the integrated value corresponds to a volume. A larger integrated value means a larger resolution degree.
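The integrated spectrum intensity described above can be sketched numerically. The normalisation (DC term equal to the image mean) and the `threshold` value are assumptions; frequencies are in cycles per pixel.

```python
import numpy as np

def resolution_degree(img, threshold=1e-3):
    """Return (integrated spectrum value, maximum resolution frequency)."""
    # Spectrum intensity, normalised so the DC term equals the mean value.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) / img.size
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # |spatial frequency|
    mask = spec > threshold
    integrated = spec[mask].sum()                 # the "volume" above the threshold
    max_freq = radius[mask].max() if mask.any() else 0.0
    return integrated, max_freq
```

An image containing high-frequency structure yields both a larger integrated value and a larger maximum resolution frequency than a flat image, matching the ordering described in the text.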
  • Parameters of the resolution enhancement process may be used as another way to express the resolution degree. In this case, the resolution of both the first image 200 and the second image 300 needs to be enhanced by the same method so that the magnitude of the resolution degree can be compared on the basis of the parameters.
  • Examples of resolution enhancement processes are processes using an inverse filter, such as a Wiener filter, and super-resolution processes, such as the RL (Richardson-Lucy) method and posterior probability maximizing methods.
  • The degradation information necessary for the above processes is acquired from design values and measured values of the imaging optical system when the degradation is distortion or diffraction, and from a gyro sensor mounted in the image pickup apparatus 100 when the degradation is blur during exposure.
  • A method that estimates the degradation information from the input image itself, called blind deconvolution, may also be used to perform the resolution enhancement process.
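The inverse-filter style of enhancement can be sketched in the frequency domain. This is a minimal Wiener-type version assuming a PSF array of the same shape as the image and centred in it, with `gamma` playing the role of the intensity parameter; it is an illustration, not the patent's exact implementation.

```python
import numpy as np

def wiener_enhance(img, psf, gamma=0.01):
    """Frequency-domain Wiener-type resolution enhancement (sketch)."""
    # psf is assumed to be the same shape as img and centred; ifftshift
    # moves its centre to the origin before taking the transfer function H.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(img)
    # F = H* / (|H|^2 + gamma) * G: smaller gamma -> stronger enhancement,
    # but also stronger noise amplification and ringing.
    F = np.conj(H) / (np.abs(H) ** 2 + gamma) * G
    return np.real(np.fft.ifft2(F))
```

As `gamma` approaches 0 the filter approaches the pure inverse filter 1/H, which is exactly the regime where the harmful effects discussed below become strongest.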
  • The resolution-enhanced image includes harmful effects such as noise amplification and ringing. Larger intensity of the resolution enhancement process increases the harmful effects; consequently, the harmful effects appear easily in the resolution-enhanced first image 200.
  • An object of the present invention is to suppress these harmful effects while maintaining the sense of resolution of the first image 200.
  • The resolution enhancement process using a Wiener filter is expressed by the following Expression (1):
      F = H* / (|H|^2 + Γ) · G
    where G is the frequency distribution of the input image, H is the optical transfer function representing the degradation occurring in the input image, H* is the complex conjugate of H, and F is the frequency distribution of the resolution-enhanced image.
  • Γ is a parameter representing the intensity of the resolution enhancement process, and bringing Γ closer to 0 increases the resolution degree.
  • A super-resolution process based on a least-squares estimation obtains the resolution-enhanced image by minimizing an evaluation function of the form of the following Expression (2):
      ||X - K * Y||^2 + ξ A(Y)
    where X is the input image, K is the point image intensity distribution representing the degradation occurring in the input image, Y is the resolution-enhanced image, and * denotes convolution.
  • A(Y), called a regularization term, suppresses harmful effects of the resolution enhancement process but fails to suppress them entirely. Examples of the regularization term are the first-order mean (L1) norm and the TV (Total Variation) norm.
  • ξ is a parameter representing how strong the effect of the regularization term is, and bringing ξ closer to 0 generally increases the resolution degree.
  • In an iterative super-resolution method, the number of iterations is also a parameter representing the resolution degree.
  • A resolution-enhanced image obtained partway through the iterative operation that generates the first image 200 may be used as the second image 300. In this case, the calculation load for generating the second image 300 is reduced.
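Reusing an intermediate iterate as the second image can be sketched with a plain Richardson-Lucy loop. The update rule is the standard RL multiplicative update; the iteration counts, function name, and the centred image-sized PSF are assumptions for illustration.

```python
import numpy as np

def richardson_lucy_pair(img, psf, n_first=30, n_second=10):
    """Run RL deconvolution, keeping the iterate after n_second steps as a
    lower-resolution-degree 'second image' at no extra cost, and the
    iterate after n_first steps as the 'first image'."""
    # Transfer function of the (centred, image-sized) PSF.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda x, F: np.real(np.fft.ifft2(np.fft.fft2(x) * F))
    estimate, second = img.astype(float).copy(), None
    for i in range(1, n_first + 1):
        blurred = conv(estimate, H)
        ratio = img / np.maximum(blurred, 1e-12)       # RL multiplicative update
        estimate = estimate * conv(ratio, np.conj(H))
        if i == n_second:
            second = estimate.copy()  # lower-resolution-degree second image
    return estimate, second           # (first image, second image)
```

Since the second image is simply an earlier iterate of the same optimization, its resolution degree sits between the input image and the first image, which is exactly the relationship the embodiment calls for.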
  • A selector 103b selects a target pixel 201, for which harmful effects are to be removed, from the first image 200 as illustrated in FIG.6.
  • The position of the target pixel 201 is not limited to the position in FIG.6, and the selector 103b may select a plurality of pixels at a time (for example, a 2×2 pixel group).
  • a second acquirer 103c acquires target corresponding data (first data) 303 which is a partial region including a pixel (referred to as a “target corresponding pixel”) corresponding to the target pixel 201 of the first image 200 from the second image 300.
  • the target pixel 201 and the target corresponding pixel 301 are respectively arranged at the same position in each image, and if the target pixel 201 in the first image 200 is selected, the target corresponding pixel 301 and the target corresponding data 303 in the second image 300 are determined.
  • The size and shape of the target corresponding data 303 are not limited to those of FIG.6, but the target corresponding data 303 needs to have a plurality of pixels because it must carry information on signal distribution. If the target corresponding pixel 301 consists of a plurality of pixels, the target corresponding pixel 301 may coincide with the target corresponding data 303.
  • In Step S105, first, a reference corresponding data acquiring region 304, which denotes the region from which the reference corresponding data (second data) 305a-305c are acquired, is set around the target corresponding pixel 301 as illustrated in FIG.6.
  • The size and shape of the reference corresponding data acquiring region 304 are not limited to those of FIG.6, and the region may be the entire second image 300.
  • In this embodiment, the reference corresponding data acquiring region 304 is limited to the vicinity of the target corresponding data 303 so as to reduce calculation load.
  • the second acquirer 103c acquires reference corresponding pixels 302a-302c and the reference corresponding data 305a-305c including the reference corresponding pixels 302a-302c from the reference corresponding data acquiring region 304.
  • The sizes and shapes of the reference corresponding pixels 302a-302c and the reference corresponding data 305a-305c need not match those of the target corresponding pixel 301 and the target corresponding data 303, respectively.
  • Their size and shape can be adjusted using a magnification or reduction conversion while calculating the correlation values in Step S106. In this case, the same conversion needs to be applied to the reference pixels used for the weighted average calculation in Step S109.
  • The reference corresponding data may be acquired from color components different from that of the target corresponding data. For example, when the target corresponding data is selected from the R component of an RGB (Red, Green, and Blue) image, the reference corresponding data may be acquired from the G and B components. This is because the harmful effects of the resolution enhancement process generally do not differ much among the RGB components.
  • In this embodiment, three reference corresponding pixels and three pieces of reference corresponding data are acquired, but more may be acquired.
  • In Step S106, correlation values between each of the reference corresponding data 305a-305c and the target corresponding data 303 are calculated.
  • a method of a feature base such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), and a method of a region base described later may be used to calculate the correlation values.
  • Feature-based methods focus on feature amounts, so correlation values can be calculated even if the numbers of pixels of the target corresponding data and the reference corresponding data differ.
  • Region-based methods focus on the differences between the signal values of the two data, so the numbers of pixels of both data need to match to calculate the correlation values exactly.
  • In this embodiment, a region-based method is used because similarity can be determined more precisely from region-based correlation values than from feature-based ones.
  • the following are examples of a correlation calculation expression but calculation methods are not limited to these examples.
  • As a first example, a root-mean-square of the signal differences between the target corresponding data and the reference corresponding data is used. The correlation calculation expression g_1 is expressed by the following Expression (3):
      g_1 = sqrt[ (1/(NM)) Σ_{i=1..N} Σ_{j=1..M} (T_ij - P_ij)^2 ]
    where T is a matrix whose component T_ij is the signal value of each pixel of the target corresponding data, N is the number of rows of T, M is the number of columns of T, and R_k is a matrix whose components are the signal values of the k-th reference corresponding data. P, with components P_ij, satisfies the following Expression (4):
      P = φ(R_k, N/N_Rk, M/M_Rk)
    where N_Rk is the number of rows of R_k, M_Rk is the number of columns of R_k, and φ(R_k, N/N_Rk, M/M_Rk) represents a conversion that multiplies the numbers of rows and columns of the matrix R_k by N/N_Rk and M/M_Rk, respectively (magnification or reduction on the image). Bilinear interpolation or bi-cubic interpolation may be used for the conversion φ.
  • As a second example, the same correlation can be written in vector form by the following Expression (5):
      g_2 = sqrt[ (1/(NM)) Σ_i (t_i - ρ_i)^2 ]
    where t is a vector whose component t_i is each signal value of the target corresponding data, r_k is a vector whose components are the signal values of the k-th reference corresponding data, ρ is a vector in which the components of the matrix P are rearranged one-dimensionally, and ρ_i is a component of ρ.
  • The correlation calculation expressions of Expressions (3) and (5) calculate differences between the target corresponding data and the reference corresponding data, so a value close to 0 means high similarity between the two data.
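The RMS correlation with resizing can be sketched compactly. Nearest-neighbour indexing stands in for the bilinear/bicubic interpolation φ purely for brevity; the function name is an assumption.

```python
import numpy as np

def g1_correlation(target, ref):
    """RMS signal-difference correlation: values near 0 mean high similarity.

    `ref` is first resized to the shape of `target` (a crude
    nearest-neighbour stand-in for the interpolation phi of Expression (4)).
    """
    N, M = target.shape
    # phi: map ref onto an N x M grid by index scaling (magnification/reduction).
    yi = (np.arange(N) * ref.shape[0] / N).astype(int)
    xi = (np.arange(M) * ref.shape[1] / M).astype(int)
    P = ref[np.ix_(yi, xi)]
    # Root-mean-square of the pixel differences.
    return np.sqrt(np.mean((target - P) ** 2))
```

Identical patches give 0; a patch that is a 2x upsampled copy of the target also gives 0 after the resize, which is the behaviour the conversion φ is meant to provide.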
  • In the correlation calculation, a direct-current component (the average value, corresponding to the brightness of the image) may be subtracted from the signals of the target corresponding data and the reference corresponding data. The correlation calculation determines the similarity between the structures of the target corresponding data and the reference corresponding data, and the brightness (direct-current component) is unrelated to that similarity.
  • Further, the contrast of the reference corresponding data may be adjusted so that the correlation of both data becomes highest. This adjustment corresponds to multiplying the alternating-current component of the reference corresponding data by a scalar c. Expression (3) is then rewritten as the following Expression (6):
      g_3 = sqrt[ (1/(NM)) Σ_{i,j} ( (T_ij - T_ave) - c (P_ij - P_ave) )^2 ]
    where T_ave and P_ave are the average signal values of the matrices T and P, respectively. The averages may be calculated with uniform weights or may be weighted.
  • c is a coefficient that adjusts the contrast and is obtained from the least-squares method as the following Expression (7):
      c = Σ_{i,j} (T_ij - T_ave)(P_ij - P_ave) / Σ_{i,j} (P_ij - P_ave)^2
  • As a third example, SSIM (Structural Similarity) may be used. The correlation calculation expression used by SSIM is expressed by the following Expression (8):
      g_4 = L(T, P)^α · C(T, P)^β · S(T, P)^γ
    where L, C, and S are evaluation functions regarding brightness, contrast, and other structure, respectively, each taking a value from 0 to 1; the closer each value is to 1, the nearer the two compared signals are. α, β, and γ are parameters adjusting the contribution of each term.
  • A correlation value may also be calculated by combining a plurality of correlation calculation expressions. Additionally, when performing a region-based correlation calculation, for example using the first or second example above, an isometric transformation may be applied to the reference corresponding data so that the correlation value relative to the target corresponding data becomes highest.
  • The isometric transformation is, for example, an identity transformation, a rotation, or an inversion. The transformation giving the highest correlation value must then also be applied when calculating the weighted average in Step S109. Finding reference corresponding data of higher similarity can further reduce harmful effects; however, the calculation amount increases, so whether to perform the isometric transformation should be decided by weighing the harmful effect reduction against the calculation amount.
  • In Step S107, a determiner 103d determines the weight of each of the reference corresponding data 305a-305c from the correlation values calculated in Step S106. Higher correlation means the reference corresponding data is more similar to the target corresponding data, and thus the weight is set larger. For example, the weights are determined using the following Expression (9):
      w_k = (1/Z) exp(-g_k^2 / h^2)
    where w_k is the weight corresponding to the k-th reference corresponding data, g_k is the correlation value of the k-th reference corresponding data, and h is the intensity of the filter.
  • Z is a normalization factor of the weights w_k and satisfies the following Expression (10):
      Σ_k w_k = 1
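The weighting of Expressions (9) and (10) can be sketched directly; the Gaussian-exponential form is the common NLM choice and the exact form used here is an assumption consistent with the variables defined above.

```python
import numpy as np

def nlm_weights(correlations, h=0.2):
    """w_k = exp(-g_k^2 / h^2) / Z, with Z chosen so the weights sum to 1.

    Smaller correlation value g_k (higher similarity) -> larger weight;
    h controls the filter intensity."""
    g = np.asarray(correlations, dtype=float)
    w = np.exp(-(g ** 2) / h ** 2)
    return w / w.sum()  # Z = sum of the unnormalised weights
```

Decreasing `h` sharpens the weighting toward only the most similar data, which is the parameter adjustment the text later recommends when the resolution degrees of the two images differ strongly.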
  • The method for determining the weights is not limited to the above. For example, a table of weights corresponding to correlation values may be held, and the weights may be determined by referring to the table.
  • In Step S108, a third acquirer 103e acquires the signal values of a plurality of reference pixels 202a-202c, respectively corresponding to the plurality of reference corresponding pixels 302a-302c in the second image 300, from the first image 200.
  • The reference corresponding pixels 302a-302c and the reference pixels 202a-202c are located at the same positions in their respective images. This step may be performed at any time between Step S105 and Step S109. Additionally, in this embodiment, three reference pixels are acquired, but more pixels may be acquired.
  • In Step S109, a weighted average of the signal values of the reference pixels is calculated using the weights determined in Step S107.
  • A generator 103f replaces the signal value of the target pixel with the calculated weighted average, generating an output pixel in which the harmful effects have been removed from the target pixel 201. Since signal values of structures similar to those of the target pixel in its vicinity are weighted and averaged, noise amplification caused by the resolution enhancement process can be reduced while maintaining the structures of the target pixel (in other words, the sense of resolution of an edge region). Further, because the weights are calculated from the second image, which has little ringing, ringing can be distinguished from original structures in the object space and is therefore also reduced by the averaging, like noise.
  • FIG.7 is an explanatory view regarding a harmful effect reduction process.
  • A weighted-average signal value s_ave is calculated using the following Expression (11):
      s_ave = Σ_k w_k s_k
    where s_k is the signal value of the k-th reference pixel. s_k and s_ave are vector quantities when a pixel has a plurality of signal components.
  • The weighted average calculation is not limited to the above method; for example, a nonlinear combination may be used.
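Steps S104-S109 for a single target pixel can then be sketched end to end. The key point, per the embodiment, is that the patch correlations and weights come from the second image while the averaged signal values come from the first image, so ringing in the first image does not distort the weights. Parameter names are assumptions.

```python
import numpy as np

def reduce_artifact_at(first, second, y, x, patch=3, search=7, h=0.2):
    """Output pixel at (y, x): weights from `second`, values from `first`."""
    hp, hs = patch // 2, search // 2
    pad = hp + hs
    f = np.pad(first, pad, mode="reflect")
    s = np.pad(second, pad, mode="reflect")
    cy, cx = y + pad, x + pad
    # Target corresponding data (first data) from the second image.
    target_data = s[cy - hp:cy + hp + 1, cx - hp:cx + hp + 1]
    weights, values = [], []
    for dy in range(-hs, hs + 1):
        for dx in range(-hs, hs + 1):
            ry, rx = cy + dy, cx + dx
            # Reference corresponding data (second data) from the second image.
            ref_data = s[ry - hp:ry + hp + 1, rx - hp:rx + hp + 1]
            g = np.sqrt(np.mean((target_data - ref_data) ** 2))  # cf. Expression (3)
            weights.append(np.exp(-g ** 2 / h ** 2))             # cf. Expression (9)
            values.append(f[ry, rx])  # reference pixel from the first image
    w = np.array(weights)
    return float(np.dot(w, values) / w.sum())                    # cf. Expression (11)
```

Looping this over every pixel of the first image (Step S110) yields the output image; in practice the loop would be vectorised, but the scalar form mirrors the step-by-step description above.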
  • When subtraction of a direct-current component and contrast adjustment are performed in the correlation calculation of Step S106, the weighted average needs to be calculated after adjusting the brightness and contrast of the corresponding reference pixels. The same applies to the size conversion of Expression (4) and to an isometric transformation.
  • A replacement process is used for reducing harmful effects in this embodiment, but a learning-type harmful effect reduction process referring to the weighted-average signal value may also be used.
  • In Step S110, whether or not the process is completed for a predetermined region of the input image is determined. If the process is completed for all pixels from which harmful effects need to be removed, the process ends; otherwise, the flow returns to Step S103 to select a new target pixel.
  • The above harmful effect reduction process can simultaneously reduce a plurality of harmful effects caused by the resolution enhancement process of the image.
  • Preferably, the resolution degree of the second image is larger than the resolution degree of the input image. If the resolution degree of the second image is much smaller than that of the first image, the sense of resolution of the output image generated by the weighted averaging decreases compared to that of the first image. This is because the weights are calculated from the second image, whose resolution degree is smaller than that of the first image, and as a result the weights of structures having low similarity increase. Thus, it is desirable that the resolution degree of the second image not be smaller than the resolution degree of the input image.
  • Preferably, the resolution degree of the second image is determined according to the resolution degree of the first image. If the resolution degrees of the first and second images are too close, the harmful effects of both images become nearly equal, and the harmful effect reduction hardly works. Conversely, if the resolution degrees are too far apart, the sense of resolution of the output image decreases as described above.
  • The resolution degree of the second image may also be determined from the image pickup conditions used when capturing the input image.
  • The image pickup conditions may include information on the ISO sensitivity and the brightness level, because the noise generated in the input image varies with the ISO sensitivity at capture time and with the brightness level of the input image.
  • Using an image with smaller noise amplification increases the noise reduction effect.
  • The weights determined in Step S107 may be varied according to the resolution degree of the second image. More preferably, as the difference between the resolution degrees of the first and second images increases, the ratio of the weights of the reference corresponding data strongly correlated with the target corresponding data to the weights of the weakly correlated reference corresponding data is increased.
  • That is, the parameters are adjusted to make this weight ratio larger. Consequently, structures with higher similarity are emphasized and deterioration of the sense of resolution can be prevented. For example, this parameter adjustment corresponds to bringing h close to 0 in Expression (9).
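The role of the parameter h can be sketched with Gaussian-type weights that decay with patch dissimilarity (a common choice; the exact form of Expression (9) is not reproduced here, and the function name and formulation below are assumptions). Bringing h close to 0 concentrates the weights on the reference data most strongly correlated with the target corresponding data:

```python
import numpy as np

def patch_weights(target_patch, ref_patches, h):
    """Normalized weights that decay with the squared difference between
    the target patch and each reference patch; a smaller h concentrates
    weight on the most similar (most strongly correlated) patches."""
    t = np.asarray(target_patch, dtype=float).ravel()
    # Squared Euclidean distance of each reference patch to the target.
    d2 = np.array([np.sum((t - np.asarray(p, dtype=float).ravel()) ** 2)
                   for p in ref_patches])
    w = np.exp(-d2 / (h * h))
    return w / w.sum()  # normalize so the weights sum to 1
```

With two reference patches, one identical to the target and one not, shrinking h pushes nearly all of the weight onto the identical patch, which is the behavior described above.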
  • The above configuration provides an image pickup apparatus capable of simultaneously reducing a plurality of harmful effects generated by the resolution enhancement process of an image.
  • FIG. 8 is a flowchart of the image processing method according to this embodiment.
  • FIG. 9 is a relationship diagram of a first image 500 and a second image 600 according to this embodiment. The following description omits the parts common to the processes of Embodiment 1. The method illustrated in FIG. 8 can also be realized as an image processing program that causes a computer to execute the function of each step.
  • Steps S201-S204 are the same as Steps S101-S104 of Embodiment 1, and thus their descriptions are omitted.
  • In Step S205, the second acquirer 103c acquires target data (third data) 503 including a target pixel 501 in the first image 500. Subsequently, the second acquirer 103c calculates a correlation value between the target data 503 and target corresponding data 603 including a target corresponding pixel 601 in the second image 600.
  • The size, shape and position of the target pixel 501 and the target data 503 are not limited to those of FIG. 9. The method explained in Embodiment 1 may be used to calculate the correlation values.
  • The shape and size of the target data 503 are not necessarily required to accord with those of the target corresponding data 603.
  • In Step S206, the second acquirer 103c determines whether or not the correlation value calculated in Step S205 satisfies a predetermined condition.
  • In this embodiment, the second acquirer 103c calculates the correlation value between the target data 503 and the target corresponding data 603 and determines whether the calculated value satisfies the predetermined condition, but another component may perform these operations.
  • Steps S210-S213 are the same as Steps S105-S108 of Embodiment 1.
  • In Step S207, a plurality of reference pixels 502a-502c and reference data 505a-505c, each of which is a partial region including one of the reference pixels 502a-502c, are acquired from the first image 500.
  • The size and shape of the reference data 505a-505c are not necessarily required to accord with those of the target data 503.
  • A reference data acquiring region 504 may be set around the target pixel 501 to acquire the plurality of pieces of reference data. In this embodiment, three reference pixels and three pieces of reference data are acquired, but more may be acquired.
  • In Step S208, correlation values between the target data 503 and the reference data 505a-505c are calculated using the same method as explained in Step S106 of Embodiment 1.
  • In Step S209, the weight of each piece of reference data is determined from the correlation values calculated in Step S208, using the same method as explained in Step S107 of Embodiment 1.
  • In Step S214, a weighted average of the signal values of the reference pixels 502a-502c is calculated using the determined weights, and an output image in which harmful effects are removed from the target pixel 501 is generated using the calculated weighted average, similarly to Step S109 of Embodiment 1.
  • In Step S215, similarly to Step S110 of Embodiment 1, when pixels from which harmful effects need to be removed remain, the flow returns to Step S203.
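Steps S207-S214 can be sketched for a single target pixel as follows (the function, its parameters, and the square search window are illustrative assumptions; the correlation test of Step S206 and image-boundary handling are omitted for brevity):

```python
import numpy as np

def replace_target_pixel(first, y, x, patch=1, search=2, h=0.3):
    """Weighted-average replacement of the target pixel at (y, x) using
    reference pixels taken from a square region around it.

    first : 2-D array, the resolution-enhanced first image
    patch : half-size of the target/reference data (region around a pixel)
    search: half-size of the reference data acquiring region
    h     : weight-decay parameter (a smaller h means sharper selection)
    """
    img = np.asarray(first, dtype=float)

    def data_at(cy, cx):
        # Partial region (data) including the pixel (cy, cx).
        return img[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1].ravel()

    target = data_at(y, x)                         # target data around (y, x)
    weights, signals = [], []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ref = data_at(y + dy, x + dx)          # reference data
            d2 = np.sum((target - ref) ** 2)       # dissimilarity to target data
            weights.append(np.exp(-d2 / (h * h)))  # weight for this reference data
            signals.append(img[y + dy, x + dx])    # reference pixel signal value
    weights = np.asarray(weights)
    # Weighted average of the reference-pixel signal values (Step S214 analogue).
    return float(np.dot(weights, signals) / weights.sum())
```

On a uniform image the replacement leaves the pixel value unchanged, since every reference patch matches the target exactly and all weights are equal.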
  • The above configuration provides an image pickup apparatus capable of simultaneously reducing a plurality of harmful effects generated by the resolution enhancement process of an image.
  • FIG. 10 is an appearance view of the image processing system according to this embodiment.
  • FIG. 11 is a block diagram of the image processing system.
  • The image pickup apparatus 401 captures an object image formed by an imaging optical system to acquire an input image.
  • The input image acquired by the image pickup apparatus 401 is input to an image processing apparatus 402 through a communicator 403. Optical characteristics (an optical transfer function and a point spread function) of the image pickup apparatus 401 and information on the image pickup conditions at capture time (focal length, aperture and ISO sensitivity) are stored in a memory 404 as necessary.
  • A resolution enhancing unit 405 generates a first image and a second image having resolution degrees different from each other.
  • Alternatively, the image pickup apparatus 401 may perform the resolution enhancement process and input the already generated first and second images to the image processing apparatus 402.
  • The second image may be the input image.
  • The first and second images are input to a harmful effect reducer 406, where the image process (harmful effect reduction process) is performed.
  • The output image after processing is output to at least one of a display apparatus 407, a recording medium 408 and an output apparatus 409 through the communicator 403.
  • The display apparatus 407 is, for example, a liquid crystal display or a projector. Users can work while checking the image in the course of processing through the display apparatus 407.
  • The recording medium 408 is, for example, a semiconductor memory, a hard disk, or a server on a network.
  • The output apparatus 409 is, for example, a printer.
  • The image processing apparatus 402 may have functions for a developing process and other image processes as necessary.
  • The above configuration provides an image processing system capable of simultaneously reducing a plurality of harmful effects generated by the resolution enhancement process of an image.
  • FIG. 12 is an appearance view of an image pickup system according to this embodiment.
  • FIG. 13 is a block diagram of the image pickup system.
  • In this embodiment, an image pickup apparatus transfers images to a server wirelessly connected to the image pickup apparatus, and the server performs the resolution enhancement process and the associated harmful-effect reduction process.
  • A server 703 includes a communicator 704 and is connected to an image pickup apparatus 701 through a network 702.
  • An input image is automatically or manually input to the server 703.
  • Information on the optical characteristics of the image pickup apparatus 701 and the image pickup conditions is also input as necessary.
  • The images input to the server 703 may be the first and second images instead of the input image.
  • The images input to the server 703 are stored in the memory 705.
  • The image processor 706 performs the resolution enhancement process and the harmful-effect reduction process on the input to generate an output image.
  • The output image is output to the image pickup apparatus 701 or stored in the memory 705.
  • The above configuration provides an image pickup system capable of simultaneously reducing a plurality of harmful effects generated by the resolution enhancement process of an image.
  • 103 image processor (image processing apparatus)
  • 103a first acquirer
  • 103b selector
  • 103c second acquirer
  • 103d determiner
  • 103e third acquirer
  • 103f generator
  • 200 first image
  • 201 target pixel
  • 202a-202c reference pixels
  • 203 average signal value
  • 300 second image
  • 301 target corresponding pixel
  • 303 target corresponding data (first data)
  • 304 reference corresponding data acquiring region
  • 305a-305c reference corresponding data (second data)

Abstract

An image processing apparatus includes a first acquirer that acquires a first image obtained by enhancing the resolution of an input image and a second image having a resolution degree lower than that of the first image, a selector that selects a target pixel in the first image, a second acquirer that acquires first data including a pixel corresponding to the target pixel and a plurality of pieces of second data from the second image, a determiner that determines a weight for each of the plurality of pieces of second data based on the correlation between the first data and the second data, a third acquirer that acquires signal values of a plurality of reference pixels corresponding to the plurality of pieces of second data from the first image, and a generator that generates an output pixel corresponding to the target pixel based on a signal value calculated from the signal values of the plurality of reference pixels and the weights.
PCT/JP2015/004697 2014-10-21 2015-09-15 Image processing apparatus, image pickup apparatus, image processing system, and image processing method WO2016063452A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014214367A JP6468791B2 (ja) 2014-10-21 2014-10-21 Image processing apparatus, image pickup apparatus, image processing system, image processing method, and image processing program
JP2014-214367 2014-10-21

Publications (1)

Publication Number Publication Date
WO2016063452A1 true WO2016063452A1 (fr) 2016-04-28

Family

ID=55760515

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/004697 WO2016063452A1 (fr) 2014-10-21 2015-09-15 Image processing apparatus, image pickup apparatus, image processing system, and image processing method

Country Status (2)

Country Link
JP (1) JP6468791B2 (fr)
WO (1) WO2016063452A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000209432A (ja) * 1999-01-18 2000-07-28 Dainippon Screen Mfg Co Ltd 画像処理方法
JP2006222493A (ja) * 2005-02-08 2006-08-24 Seiko Epson Corp 複数の低解像度画像を用いた高解像度画像の生成
WO2014077376A1 (fr) * 2012-11-15 2014-05-22 株式会社 東芝 Dispositif de diagnostic par rayons x

Also Published As

Publication number Publication date
JP6468791B2 (ja) 2019-02-13
JP2016082496A (ja) 2016-05-16

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15852359

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15852359

Country of ref document: EP

Kind code of ref document: A1