WO2010092835A1 - Image processing device, image capturing device, image processing method, image processing program, and recording medium - Google Patents

Image processing device, image capturing device, image processing method, image processing program, and recording medium

Info

Publication number
WO2010092835A1
Authority
WO
WIPO (PCT)
Prior art keywords
luminance value
image
pixel
region
normalized
Prior art date
Application number
PCT/JP2010/000917
Other languages
English (en)
Japanese (ja)
Inventor
中井博之
山本修平
Original Assignee
シャープ株式会社
Priority date
Filing date
Publication date
Application filed by シャープ株式会社
Publication of WO2010092835A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/48Increasing resolution by shifting the sensor relative to the scene

Definitions

  • The present invention relates to an image processing apparatus, an imaging apparatus, an image processing method, and an image processing program that generate one piece of high-resolution image data from a plurality of pieces of low-resolution image data.
  • The resolution of an image captured by a camera depends on the density of the imaging elements (CCD (Charge-Coupled Device), CMOS (Complementary Metal-Oxide-Semiconductor), etc.).
  • Camera manufacturers have released high-resolution cameras with ever higher element densities, one after another, in order to meet user demand for resolution.
  • There is also a high-resolution image processing technique in which imaging is performed while slightly displacing the imaging element or the camera position, and the plurality of captured images are combined to generate an image having a higher resolution than the original images.
  • This high-resolution image processing is generally divided into two types: image shift processing and super-resolution processing.
  • Image shift processing is a method of mapping the luminance values of the low-resolution images onto the pixels of the high-resolution image based on the positional correspondence between each pixel of the plurality of low-resolution images and the target high-resolution image.
  • Super-resolution processing is a method of estimating the target high-resolution image from the plurality of low-resolution images.
  • Various super-resolution processing methods have been proposed, such as the ML (Maximum Likelihood) method, the MAP (Maximum A Posteriori) method, and POCS (Projection Onto Convex Sets).
  • However, an image sensor has a narrow range of light that can be captured at one time (dynamic range) compared to the human eye, so when capturing a scene with a wide dynamic range, the image may be overexposed or underexposed.
  • To address this, methods have been proposed that expand the dynamic range by combining images captured with different exposure amounts (see Patent Documents 2 and 3).
  • These methods detect whiteout and blackout pixels in the pre-combination image and supplement those areas with a short-exposure image or a long-exposure image.
  • In one such method, images having different exposure amounts are output alternately; when a short-exposure image is output, a pixel whose luminance value is a predetermined value or less is determined to be blacked out, and the pixel value of that pixel is switched to the pixel value of the long-exposure image stored in memory.
  • Patent Document 2 discloses a method for realizing highly reliable dynamic range expansion by performing a combining process using a plurality of image data having different exposure times. There, whiteout and blackout pixels in the image data are detected and replaced with the luminance values of pixels having different exposure times. Because the positional relationship of the imaging target must match at the pixel level between the images, the imaging element position and the camera position must be fixed. Therefore, although the dynamic range can be expanded compared to the original images, the resolution cannot be improved.
  • There is also a method in which a plurality of pieces of low-resolution image data having different exposure times are prepared, and a plurality of pieces of high-resolution image data are generated using the low-resolution image data having the same exposure time.
  • The present invention has been made to solve the above problems, and an object of the present invention is to provide an image processing apparatus and an image processing method capable of expanding the dynamic range of a captured image and improving its resolution.
  • In order to solve the above problems, an image processing apparatus according to the present invention is an image processing apparatus that acquires a group of low-resolution images captured at different imaging conditions and imaging positions, the group including low-resolution images having different luminance ranges, and that generates a high-resolution image from the group of low-resolution images. The apparatus includes: normalization means for generating a group of normalized images corresponding to the group of low-resolution images by correcting the luminance value of each pixel included in at least some of the low-resolution images so that all of the group of low-resolution images can be regarded as having been captured under the same imaging conditions; region specifying means for specifying, for each of the group of low-resolution images, a first region composed of pixels having luminance values outside a predetermined range and a second region composed of pixels having luminance values within the predetermined range; luminance value interpolation means for extracting, from each of the normalized images generated by the normalization means, a pixel region corresponding to a pixel of the high-resolution image to be generated, arranging the extracted pixel regions in correspondence with the pixel array of the high-resolution image to form a composite image, and generating a corrected normalized image by correcting the luminance value of each pixel of the normalized images corresponding to a pixel of the first region specified by the region specifying means, using an interpolated luminance value calculated from the luminance values of pixel regions that correspond to pixels of the second region and are in a predetermined positional relationship with the pixel region in the composite image corresponding to that pixel; weighting means for weighting the pixels of the first region of the group of normalized images including the corrected normalized image generated by the luminance value interpolation means, based on the interpolated luminance values calculated by the luminance value interpolation means; and high-resolution image generation means for generating a high-resolution image from the group of normalized images including the corrected normalized image generated by the luminance value interpolation means. When generating the high-resolution image, the high-resolution image generation means reduces the degree to which the luminance values of the pixels of the first region contribute to the generation of the high-resolution image, according to the weighting by the weighting means.
  • Similarly, an image processing method according to the present invention is an image processing method in an image processing apparatus that acquires a group of low-resolution images captured at different imaging conditions and imaging positions, the group including low-resolution images having different luminance ranges, and that generates a high-resolution image from the group of low-resolution images. The method includes: a normalization step of generating a group of normalized images corresponding to the group of low-resolution images by correcting the luminance value of each pixel included in at least some of the low-resolution images so that all of the group of low-resolution images can be regarded as having been captured under the same imaging conditions; a region specifying step of specifying, for each of the group of low-resolution images, a first region composed of pixels having luminance values outside a predetermined range and a second region composed of pixels having luminance values within the predetermined range; a luminance value interpolation step of extracting, from each of the normalized images generated in the normalization step, a pixel region corresponding to a pixel of the high-resolution image to be generated, arranging the extracted pixel regions in correspondence with the pixel array of the high-resolution image to form a composite image, and generating a corrected normalized image by correcting the luminance value of each pixel of the normalized images corresponding to a pixel of the first region specified in the region specifying step, using an interpolated luminance value calculated from the luminance values of pixel regions that correspond to pixels of the second region and are in a predetermined positional relationship with the pixel region in the composite image corresponding to that pixel; a weighting step of weighting the pixels of the first region of the group of normalized images including the corrected normalized image generated in the luminance value interpolation step, based on the interpolated luminance values calculated in the luminance value interpolation step; and a high-resolution image generation step of generating a high-resolution image from the group of normalized images including the corrected normalized image generated in the luminance value interpolation step. When the high-resolution image is generated in the high-resolution image generation step, the degree to which the luminance values of the pixels of the first region contribute to the generation of the high-resolution image is reduced according to the weighting performed in the weighting step.
  • According to the above configuration, the normalization means corrects the luminance value of each pixel included in at least a part of the group of low-resolution images so that all of the group of low-resolution images can be regarded as having been captured under the same imaging conditions, thereby generating a group of normalized images. This group of normalized images corresponds to the group of low-resolution images.
  • The region specifying means specifies the first region and the second region for each of the group of low-resolution images.
  • The first region is a region composed of pixels having luminance values outside the predetermined range, and the second region is a region composed of pixels having luminance values within the predetermined range.
  • The predetermined range is a range of luminance values considered to appropriately represent the intensity of light emitted from the imaging target (or light reflected by the imaging target), and is set appropriately by those skilled in the art.
  • The luminance value interpolation means corrects the luminance value of each pixel of the normalized images corresponding to a pixel of the first region (more precisely, a pixel corresponding to a pixel of the first region included in the low-resolution image), using an interpolated luminance value calculated from the luminance values of pixel regions that correspond to pixels of the second region and are in a predetermined positional relationship with the pixel region in the composite image corresponding to that pixel, thereby generating a corrected normalized image.
  • The luminance value interpolation means only needs to generate a corrected normalized image from at least a part of the normalized images generated by the normalization means; it is not necessary to correct all of the normalized images.
  • The weighting means weights the pixels of the first region included in the group of normalized images including the corrected normalized image generated by the luminance value interpolation means (more precisely, pixels corresponding to pixels of the first region included in the low-resolution images), based on the interpolated luminance values calculated by the luminance value interpolation means.
  • Weighting based on the interpolated luminance value means, for example, weighting based on the result of determining the suitability of the calculated interpolated luminance value, or weighting based on the accuracy of the interpolated luminance value calculation, such as weighting according to the number of pixel regions corresponding to pixels of the second region used in the calculation.
  • The high-resolution image generation means generates a high-resolution image from the group of normalized images including the corrected normalized image generated by the luminance value interpolation means. When generating the high-resolution image, the high-resolution image generation means reduces the degree to which the luminance values of the pixels of the first region contribute to the generation, according to the weighting by the weighting means.
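To make the flow of these means concrete, the following is a minimal Python/NumPy sketch of the pipeline described above, under simplifying assumptions: scalar exposure-ratio normalization, the 8-bit thresholds 1 and 254 used in the example later in this document, and a simple neighbour-mean interpolation. All function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def normalize(img, exposure, ref_exposure):
    # Scale luminance so every image can be regarded as taken at ref_exposure.
    return img.astype(np.float64) * (ref_exposure / exposure)

def first_region_mask(img):
    # First region: blackout/whiteout pixels outside the normal range.
    return (img <= 1) | (img >= 254)

rng = np.random.default_rng(0)
exposures = [10.0, 40.0]                                # short / long [msec]
images = [rng.integers(0, 256, (8, 8), dtype=np.uint8) for _ in exposures]

masks = [first_region_mask(im) for im in images]        # region specification
normed = [normalize(im, t, max(exposures)) for im, t in zip(images, exposures)]

# Luminance value interpolation: replace each first-region pixel with the
# mean of its normal (second-region) neighbours, giving the corrected
# normalized images.
for im, mask in zip(normed, masks):
    for y, x in zip(*np.where(mask)):
        win = (slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2))
        ok = ~mask[win]
        if ok.any():
            im[y, x] = im[win][ok].mean()

# Weighting: first-region pixels contribute less (here 0.0) when the
# high-resolution image is generated from the normalized images.
weights = [np.where(m, 0.0, 1.0) for m in masks]
```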
  • As described above, the image processing apparatus according to the present invention acquires a group of low-resolution images captured at different imaging conditions and imaging positions, the group including low-resolution images having different luminance ranges, and generates a high-resolution image from the group of low-resolution images. It is configured to include: normalization means for generating a group of normalized images corresponding to the group of low-resolution images by correcting the luminance value of each pixel included in at least some of the low-resolution images; region specifying means for specifying, for each of the group of low-resolution images, a first region composed of pixels having luminance values outside a predetermined range and a second region composed of pixels having luminance values within the predetermined range; luminance value interpolation means for extracting, from each of the normalized images generated by the normalization means, a pixel region corresponding to a pixel of the high-resolution image to be generated, arranging the extracted pixel regions in correspondence with the pixel array of the high-resolution image to form a composite image, and generating a corrected normalized image by correcting the luminance value of each pixel corresponding to a pixel of the first region specified by the region specifying means, using an interpolated luminance value calculated from the luminance values of pixel regions that correspond to pixels of the second region and are in a predetermined positional relationship with the pixel region in the composite image corresponding to that pixel; weighting means for weighting the pixels of the first region based on the interpolated luminance values calculated by the luminance value interpolation means; and high-resolution image generation means for generating a high-resolution image from the group of normalized images including the corrected normalized image, wherein the high-resolution image generation means, when generating the high-resolution image, reduces the degree to which the luminance values of the pixels of the first region contribute to the generation according to the weighting by the weighting means.
  • Similarly, the image processing method according to the present invention acquires a group of low-resolution images captured at different imaging conditions and imaging positions, the group including low-resolution images having different luminance ranges, and generates a high-resolution image from the group of low-resolution images. The method includes: a normalization step of generating a group of normalized images corresponding to the group of low-resolution images by correcting the luminance value of each pixel included in at least some of the low-resolution images; a region specifying step of specifying, for each of the group of low-resolution images, a first region composed of pixels having luminance values outside a predetermined range and a second region composed of pixels having luminance values within the predetermined range; a luminance value interpolation step of extracting, from each of the normalized images generated in the normalization step, a pixel region corresponding to a pixel of the high-resolution image to be generated, arranging the extracted pixel regions in correspondence with the pixel array of the high-resolution image to form a composite image, and generating a corrected normalized image by correcting the luminance value of each pixel corresponding to a pixel of the first region specified in the region specifying step, using an interpolated luminance value calculated from the luminance values of pixel regions that correspond to pixels of the second region and are in a predetermined positional relationship with the pixel region in the composite image corresponding to that pixel; a weighting step of weighting the pixels of the first region based on the interpolated luminance values calculated in the luminance value interpolation step; and a high-resolution image generation step of generating a high-resolution image from the group of normalized images including the corrected normalized image, wherein, when the high-resolution image is generated, the degree to which the luminance values of the pixels of the first region contribute to the generation is reduced according to the weighting performed in the weighting step.
  • FIG. 1 is a block diagram illustrating a configuration of an image processing apparatus according to an embodiment of the present invention. Further figures show: an example of the movement mode in which a solid-state image sensor is moved by an actuator; the configuration of the dynamic range expansion super-resolution processing unit provided in the image processing apparatus; an example of the flow of processing in a super-resolution processing method given as a comparative example; and an example of the flow of processing in the image processing apparatus.
  • (a) to (c) are diagrams showing the positional relationship of pixels when nine low-resolution captured images are captured with two types of exposure time, a short exposure time and a long exposure time, while shifting the imaging position by 1/3 pitch of the pixel size of the low-resolution image in the horizontal and vertical directions.
  • (a) and (b) are diagrams showing the positional relationship of pixels at the time of imaging for the long exposure time when nine captured images are captured with two types of exposure time, a short exposure time and a long exposure time.
  • With reference to FIGS. 1 to 8, an embodiment of the present invention will be described below.
  • FIG. 2 is a block diagram showing the configuration of the image processing apparatus 1 according to the present embodiment.
  • The image processing apparatus 1 includes an imaging device (imaging unit) 2 and a control device (image processing device) 3.
  • The imaging device 2 has a plurality of imaging elements forming a plurality of pixels; it spatially discretizes and samples an optically formed image of the target and converts it into an image signal. More specifically, the imaging device 2 includes a lens (optical imaging unit) 11, a solid-state imaging device 12, and a solid-state imaging device actuator (displacement unit) 13 that changes the position of the solid-state imaging device 12 (hereinafter simply referred to as the actuator 13).
  • For the solid-state imaging device 12, for example, a CCD image sensor or a CMOS image sensor can be used.
  • That is, the imaging device 2 is an area sensor using a sensor such as a CCD or CMOS.
  • The actuator 13 changes the relative position between the imaging target (hereinafter simply referred to as the target) and the solid-state imaging device 12. More specifically, the actuator 13 moves the solid-state imaging device 12 two-dimensionally in a plane perpendicular to the optical axis connecting the target and the lens 11. However, this movement is not limited to two dimensions and may be a three-dimensional movement including rotation.
  • The lens 11 is fixed to the casing, the actuator 13 is fixed to the inner wall of the casing, and the solid-state imaging device 12 is supported by the actuator 13. Therefore, when the actuator 13 displaces the solid-state imaging device 12, the solid-state imaging device 12 is displaced relative to the casing.
  • As the actuator 13, a piezo actuator or a stepping motor can be used; in the present embodiment, a piezo actuator is used.
  • The control device 3 controls the imaging device 2 and processes the captured image data of the target acquired by the imaging device 2, thereby generating high-resolution image data having a high resolution and a wide dynamic range.
  • The control device 3 includes an imaging control unit 21, an imaging condition storage unit 22, a low-resolution image storage unit 23, a high-resolution image storage unit 24, a dynamic range expansion super-resolution processing unit (high-resolution image generation means) 25, an interpolation processing unit 26, an interpolation condition storage unit 27, and a normalized image data storage unit 30.
  • The imaging control unit 21 includes an actuator control unit 28 and an imaging timing control unit 29.
  • The imaging control unit 21 controls the imaging timing of the imaging device 2 and the movement of the actuator 13 provided in the imaging device 2, and exchanges various control signals with the imaging device 2. More specifically, the imaging control unit 21 reads the imaging condition data and the imaging position data from the imaging condition storage unit 22, controls the exposure time of the solid-state imaging device 12, and moves the solid-state imaging device 12 in accordance with that control.
  • The actuator control unit 28 controls the position of the solid-state imaging device 12 by operating the actuator 13 in a predetermined pattern. More specifically, the actuator control unit 28 controls the movement start timing, movement speed, movement distance, and the like.
  • The imaging timing control unit 29 controls the timing of imaging by the solid-state imaging device 12 (imaging start time, exposure time, etc.) by sending control signals to the imaging device 2.
  • FIG. 3 shows an example of a movement mode in which the solid-state imaging device 12 is moved by the actuator 13.
  • The thick-line frame 51 in the figure indicates the reference position of the solid-state imaging device, and the broken-line frame 52 indicates the position of the solid-state imaging device during movement.
  • In this example, the solid-state imaging device 12 is moved in steps of 1/3 of the pixel size. The rectangle 53 indicates the movement of the center of an imaging pixel.
  • In other words, in order to generate the plurality of high-resolution image pixels corresponding to one low-resolution image pixel, the imaging device 2 moves by the width of one pixel of the high-resolution image and performs imaging a number of times equal to the number of high-resolution image pixels corresponding to one low-resolution image pixel.
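As a concrete illustration of this displacement pattern, the following sketch enumerates the 3 × 3 grid of imaging positions for a 3× resolution increase. The pitch value and the scan order are assumptions for illustration, not taken from the patent.

```python
LOW_RES_PITCH_UM = 6.0          # hypothetical low-resolution pixel pitch [um]
SCALE = 3                       # resolution-increase magnification

# One imaging position per high-resolution sub-pixel inside a low-res pixel:
# steps of one high-resolution pixel, i.e. 1/3 of the low-resolution pitch.
positions = [(dy * LOW_RES_PITCH_UM / SCALE, dx * LOW_RES_PITCH_UM / SCALE)
             for dy in range(SCALE) for dx in range(SCALE)]
print(len(positions))           # 9 imagings
```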
  • The control device 3 stores the captured image data captured by the imaging device 2 in the low-resolution image storage unit 23 in association with the imaging position (the position of the solid-state imaging device 12) and the exposure time at the time of imaging.
  • The interpolation processing unit 26 interpolates, for all of the low-resolution image data stored in the low-resolution image storage unit 23, the luminance value of each abnormal pixel having a luminance value equal to or greater than a first luminance value or equal to or less than a second luminance value (that is, a luminance value outside a predetermined range), using the luminance values of pixels having normal luminance values among the neighboring pixels of the abnormal pixel in the image containing it (or the neighboring pixels of the pixel corresponding to the abnormal pixel in a different low-resolution image).
  • The first luminance value and the second luminance value are recorded in advance in the interpolation condition data stored in the interpolation condition storage unit 27. Details of the interpolation processing unit 26 will be described later.
  • The dynamic range expansion super-resolution processing unit 25 generates high-resolution image data by performing predetermined processing on the normalized low-resolution captured image data that has been interpolated by the interpolation processing unit 26, and stores the generated high-resolution image data in the high-resolution image storage unit 24.
  • FIG. 1 is a block diagram showing the configuration of the interpolation processing unit 26.
  • The interpolation processing unit 26 includes a region extraction unit (region specifying means) 31, an interpolated luminance calculation unit (luminance value interpolation means) 32, an interpolated luminance determination unit (luminance value interpolation means) 33, a weighting data creation unit (weighting means) 34, and a normalization processing unit (normalization means) 35.
  • The region extraction unit 31 specifies, in the captured image data stored in the low-resolution image storage unit 23, abnormal pixels (first region) having luminance values equal to or greater than the first luminance value or equal to or less than the second luminance value indicated by the interpolation condition data stored in the interpolation condition storage unit 27, and extracts their luminance values.
  • Here, the range of luminance values smaller than the first luminance value and larger than the second luminance value is the normal luminance value range. It can therefore be said that the region extraction unit 31 specifies a first region having abnormal luminance values outside the normal luminance value range and a second region having luminance values within the normal range. For example, when the captured image data is 8-bit data, the second luminance value is 1 and the first luminance value is 254.
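With the 8-bit example values just given, the region specification reduces to two boolean masks, as in this minimal sketch (array names are illustrative):

```python
import numpy as np

SECOND_LUMINANCE = 1     # blackout threshold in the 8-bit example
FIRST_LUMINANCE = 254    # whiteout threshold in the 8-bit example

captured = np.array([[  0, 128, 255],
                     [200,   1,  64],
                     [254,  90,  10]], dtype=np.uint8)

first_region = (captured <= SECOND_LUMINANCE) | (captured >= FIRST_LUMINANCE)
second_region = ~first_region   # pixels within the normal luminance range
```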
  • The normalization processing unit 35 reads the exposure time corresponding to each piece of captured image data from the imaging condition data stored in the imaging condition storage unit 22, and normalizes the captured images according to the exposure time.
  • Normalization here is a process of correcting the luminance value of each pixel of each captured image so that all captured images in a set captured under a plurality of different imaging conditions can be regarded as having been captured under one of those imaging conditions.
  • The normalization processing unit 35 generates normalized image data by calculating a normalized luminance value for each pixel of each low-resolution captured image.
  • The normalized image data generated by the normalization processing unit 35 is stored in the normalized image data storage unit 30. Details of the normalization processing in the normalization processing unit 35 will be described later.
  • The interpolated luminance calculation unit 32 refers to the imaging position data stored in the imaging condition storage unit 22 and, for each piece of normalized image data generated by the normalization processing unit 35, specifies pixels (neighboring pixels) near the pixel in the high-resolution image corresponding to the first region extracted by the region extraction unit 31. The interpolated luminance calculation unit 32 then calculates an interpolated luminance value using the luminance values of those neighboring pixels having normal luminance values (second region), and generates interpolated normalized image (corrected normalized image) data in which the calculated interpolated luminance value is set as the luminance value of the first region.
  • Specifically, the interpolated luminance calculation unit 32 calculates the interpolated luminance value using the luminance values of pixel regions (pixels of the high-resolution image) that correspond to pixels of the second region and lie within a predetermined distance from the pixel region, in a composite image described later, corresponding to the pixel of the first region of the normalized image. Since a pixel region is a pixel of the high-resolution image, it is smaller than a pixel of the low-resolution image or the normalized image. The method of calculating the interpolated luminance value will be described later.
  • The interpolated luminance determination unit 33 determines whether the interpolated luminance calculation unit 32 has successfully calculated the interpolated luminance value of the first region in the interpolated normalized image, by comparing the interpolated luminance value with the normalized luminance value of the first region calculated by the normalization processing unit 35. Details of this determination method will be described later.
  • When it determines that the interpolation has succeeded, the interpolated luminance determination unit 33 keeps the interpolated luminance value as the luminance value of the first region; when it determines that the interpolation has failed, it returns the interpolated luminance value to the normalized luminance value that preceded the change made by the interpolated luminance calculation unit 32. For example, when the luminance value of a pixel of the first region is larger than the predetermined range (normal luminance value range), the interpolated luminance determination unit 33 compares the interpolated luminance value of that pixel in the corrected normalized image with the normalized luminance value of the corresponding pixel in the normalized image, and when the interpolated luminance value is smaller than the normalized luminance value, replaces the interpolated luminance value of the pixel in the corrected normalized image with the normalized luminance value.
  • Conversely, when the luminance value of a pixel of the first region is smaller than the predetermined range, the interpolated luminance determination unit 33 compares the interpolated luminance value of that pixel in the corrected normalized image with the normalized luminance value of the corresponding pixel in the normalized image, and when the interpolated luminance value is larger than the normalized luminance value, replaces the interpolated luminance value of the pixel in the corrected normalized image with the normalized luminance value.
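This success/failure rule can be sketched compactly as follows (Python/NumPy; function and array names are illustrative assumptions):

```python
import numpy as np

def judge_interpolation(interp, normed, over_mask, under_mask):
    """Determination rule described above.

    For a whiteout pixel (over_mask) the true value is at least the saturated
    normalized value, so an interpolated value below it is a failure; for a
    blackout pixel (under_mask) an interpolated value above it is a failure.
    Failed interpolations revert to the normalized luminance value.
    """
    failed = (over_mask & (interp < normed)) | (under_mask & (interp > normed))
    return np.where(failed, normed, interp), failed
```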
  • The interpolated luminance determination unit 33 outputs the final normalized image data resulting from this luminance value change processing to the difference image generation unit 42 of the dynamic range expansion super-resolution processing unit 25.
  • Based on the determination results of the interpolated luminance determination unit 33 and the weighting condition data stored in the interpolation condition storage unit 27, the weighting data creation unit 34 generates weighting data for reducing the degree to which the luminance values of the first region in the final normalized image contribute to high-resolution image generation, and outputs the weighting data to the weighted difference image generation unit 44 of the dynamic range expansion super-resolution processing unit 25.
  • For example, when the luminance value of a pixel of the first region is larger than the predetermined range, the weighting data creation unit 34 compares the interpolated luminance value of that pixel in the corrected normalized image with the normalized luminance value of the corresponding pixel in the normalized image, and when the interpolated luminance value is smaller than the normalized luminance value, performs weighting that reduces the degree of contribution of the pixel of the first region in generating the high-resolution image.
  • Conversely, when the luminance value of a pixel of the first region is smaller than the predetermined range, the weighting data creation unit 34 compares the interpolated luminance value of that pixel in the corrected normalized image with the normalized luminance value, and when the interpolated luminance value is larger than the normalized luminance value, performs weighting that reduces the degree of contribution of the pixel of the first region in generating the high-resolution image. Performing weighting that reduces the degree of contribution means performing negative weighting or decreasing positive weighting.
  • The weighting condition data is data in which a weight value is set for each determination result of the region extraction unit 31 and the interpolated luminance determination unit 33.
  • For example, the weighting condition data records that a weight of 0.0 is assigned to the first region specified by the region extraction unit 31, a weight of 1.0 is assigned to the second region specified by the region extraction unit 31, a weight of 1.0 (or a value greater than 0.0 and less than 1.0) is assigned to a first region determined by the interpolated luminance determination unit 33 to have been successfully interpolated, and a weight of 0.0 is assigned to a first region determined by the interpolated luminance determination unit 33 to have failed interpolation.
  • Here, the weight means a positive weight.
  • Details of the weighting data creation method in the weighting data creation unit 34 will be described later.
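As a concrete illustration of the example weight values above, here is a minimal sketch (Python/NumPy; the function name and the `failed` mask are hypothetical, the latter coming from the determination step):

```python
import numpy as np

def make_weighting_data(first_region, failed, success_weight=1.0):
    # Second region -> 1.0; successfully interpolated first region ->
    # success_weight (1.0, or a value between 0.0 and 1.0); failed -> 0.0.
    w = np.ones(first_region.shape)
    w[first_region] = success_weight
    w[first_region & failed] = 0.0
    return w
```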
  • FIG. 4 is a block diagram illustrating a configuration of the dynamic range expansion super-resolution processing unit 25.
  • The dynamic range expansion super-resolution processing unit 25 includes a difference image generation unit 42, a pseudo low-resolution image generation unit 43, a weighted difference image generation unit 44, and an image update unit 45.
  • The pseudo low-resolution image generation unit 43 converts the high-resolution image data stored in the high-resolution image storage unit 24 with an observation model and generates a plurality of pseudo low-resolution images.
  • The difference image generation unit 42 generates a plurality of pieces of difference image data using the final normalized image data output from the interpolated luminance determination unit 33 and the pseudo low-resolution images obtained by the pseudo low-resolution image generation unit 43.
  • The weighted difference image generation unit 44 weights the difference image data generated by the difference image generation unit 42 in units of pixels as indicated by the weighting data output from the weighting data creation unit 34 of the interpolation processing unit 26, and outputs the weighted difference image data to the image update unit 45.
  • The image update unit 45 updates the high-resolution image data based on the weighted difference images from the weighted difference image generation unit 44 and stores the updated high-resolution image data in the high-resolution image storage unit 24.
  • FIG. 5 is a flowchart illustrating an example of a processing flow in a super-resolution processing method as a comparative example.
  • FIG. 6 is a flowchart illustrating an example of a processing flow in the image processing apparatus 1.
  • Generally, super-resolution processing is formulated as an optimization problem of an evaluation function defined for the generated high-resolution image data; that is, the evaluation function is optimized based on the squared error between the estimated captured image data and the observed captured image data.
  • An evaluation function (constraint term) based on the characteristics of the high-resolution image data may be added to the evaluation function (error term). Specifically, when it is known that the edge components in the high-resolution image data are small, an evaluation function (constraint term) based on the image data obtained by applying a Laplacian filter to the high-resolution image data is added to the evaluation function (error term). Details of the evaluation function will be described later. Since super-resolution processing is an optimization calculation with a very large number of unknowns, an iterative calculation method such as the steepest descent method is used to optimize the evaluation function.
  • In step 1 shown in FIG. 5, m pieces of captured image data y_m, from the first to the m-th, are acquired.
  • Here, y_m represents the captured image data, from the first to the m-th, as vectors.
  • In step 2, high-resolution image data h is set as an initial value.
  • The initial high-resolution image data h is image data having a higher resolution than the captured image data (that is, composed of a larger number of pixels).
  • Here, h represents the high-resolution image data as a vector.
  • For the initial high-resolution image, for example, a uniformly gray, black, or white image may be used; alternatively, any one of the plurality of pieces of captured image data may be selected, and an image obtained by enlarging the image indicated by that captured image data to the size of the high-resolution image by interpolation processing may be used as the initial high-resolution image.
  • As an interpolation method at this time, for example, the 3 × 3 pixels of the high-resolution image located at the position corresponding to a target pixel of the captured image may be set to the luminance value of that pixel of the captured image.
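The 3 × 3-block initialization just mentioned is a nearest-neighbour expansion, which `np.kron` expresses directly. A minimal sketch (function name illustrative):

```python
import numpy as np

def initial_high_res(low_res, scale=3):
    # Each scale x scale block of high-resolution pixels takes the luminance
    # of the corresponding low-resolution pixel.
    return np.kron(low_res.astype(np.float64), np.ones((scale, scale)))
```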
  • In step 3, m pieces of pseudo low-resolution image data B_m h are created from the high-resolution image data h, taking into account factors such as optical degradation and downsampling.
  • The pseudo low-resolution image data B_m h is obtained by reducing the resolution of the high-resolution image data h so as to align with the captured image data y_m. Details of the method of creating the pseudo low-resolution image data B_m h will be described later.
  • In step 4, the difference between the pseudo low-resolution image data B_m h calculated in step 3 and the captured image data y_m is calculated to obtain difference image data X_m.
  • This difference image data X_m is the error between the "estimated captured image data" (derived from the high-resolution image data) and the "observed captured image data" (the captured image data).
  • In step 5, the evaluation value E is calculated based on the difference image data X_m calculated in step 4.
  • The evaluation value E is a value indicating the accuracy of the high-resolution result (the degree to which the high-resolution image data and the captured image data differ). The evaluation function for calculating the evaluation value E will be described later.
  • In step 6, the evaluation value E is compared with a preset threshold value. If the evaluation value E is smaller than the preset value (YES in S6), the high-resolution image data h used for calculating the evaluation value E is adopted as the final result of the super-resolution processing, the high-resolution image data h is stored in the storage unit, and the super-resolution processing ends.
  • The evaluation function used for the super-resolution processing is expressed by the following equation (1), and the iterative optimization is expressed as the process of minimizing this evaluation function:

        E(h) = Σ_m || y_m − B_m h ||² + w || L h ||²   … (1)

  • The above equation (1) is the evaluation function used in the super-resolution processing method called the MAP method. The error term, the first term on the right side, is based on the squared error between the estimated captured image data and the observed captured image data; the constraint term, the second term on the right side, is based on the characteristics of the high-resolution image data.
  • Here, h is the vector representation of the high-resolution image data, y_m is the vector representation of the m-th piece of low-resolution captured image data, L is a matrix representing prior probability information reflecting the characteristics of the high-resolution image data, w represents the strength of the constraint term, and 1 to m are the numbers of the pieces of captured image data.
  • y_m corresponds to the captured image data and is represented by the following equation (2):

        y_m = ( y_m(1), y_m(2), …, y_m(W × H) )^T   … (2)

  • The vector representation of image data means that, when the element size of the image data is W horizontally by H vertically, all elements are arranged in a line as in equation (2) to form a vector of size (W × H) × 1.
  • Here, C is a matrix representation of the convolution (processing that blurs an image) with the degradation function (Point Spread Function, PSF) that causes blur in the observation model, and M_m is a matrix representing the process of aligning the high-resolution image with the imaging position of the captured image data and then sampling the luminance values of the pixels corresponding to the positions of the imaging elements of the captured image data; B_m is the combination of these matrices, as in equation (3):

        B_m = M_m C   … (3)

  • By this, the plurality of pseudo low-resolution image data B_m h shown in step 3 of FIG. 5 is obtained. Further, by subtracting the captured image data y_m from the pseudo low-resolution image data B_m h, the difference image data X_m shown in step 4 of FIG. 5 is obtained. The processing of obtaining (updating) the high-resolution image data h from the difference image data X_m in step 7 of FIG. 5 corresponds to the operation indicated by the following equation (4); that is, the difference image data X_m is multiplied by a predetermined weight α and subtracted from the high-resolution image data h of the previous iteration:

        h ← h − α Σ_m B_m^T X_m   … (4)
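A minimal, runnable sketch of this iteration follows, under simplifying assumptions: B_m is implemented as a circular 3 × 3 box blur (standing in for the PSF matrix C) followed by 3× downsampling at a per-image offset, its transpose as the matching upsampling, and the constraint term is omitted (w = 0). All names and parameter values are illustrative, not taken from the patent.

```python
import numpy as np

SCALE = 3        # resolution-increase magnification
ALPHA = 0.1      # step size: the "predetermined weight" of equation (4)

def blur(img):
    # Crude circular 3x3 box blur standing in for the PSF matrix C.
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

def B(h, offset):
    # B_m = M_m C: blur, then sample at this image's sub-pixel offset.
    oy, ox = offset
    return blur(h)[oy::SCALE, ox::SCALE]

def B_T(x, offset, shape):
    # Transpose of B_m: scatter the low-resolution values back onto the
    # high-resolution grid, then apply the (symmetric) blur again.
    hi = np.zeros(shape)
    oy, ox = offset
    hi[oy::SCALE, ox::SCALE] = x
    return blur(hi)

offsets = [(dy, dx) for dy in range(SCALE) for dx in range(SCALE)]
rng = np.random.default_rng(0)
truth = rng.random((12, 12))
y = [B(truth, o) for o in offsets]     # stand-ins for the observed y_m

h = np.zeros_like(truth)               # initial high-resolution estimate
for _ in range(200):
    grad = np.zeros_like(h)
    for y_m, o in zip(y, offsets):
        X_m = B(h, o) - y_m            # difference image of step 4
        grad += B_T(X_m, o, h.shape)   # back-projection through B_m^T
    h -= ALPHA * grad                  # update of equation (4), with w = 0
print(float(np.abs(B(h, offsets[0]) - y[0]).max()))  # residual shrinks
```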
  • FIG. 6 is a flowchart illustrating an example of a processing flow in the image processing apparatus 1.
  • First, the image processing apparatus 1 acquires a plurality of low-resolution captured images and the imaging conditions.
  • Specifically, the imaging control unit 21 of the control device 3 instructs the imaging device 2 to image the imaging target.
  • The imaging control unit 21 reads the imaging condition data from the imaging condition storage unit 22 and performs imaging of the imaging target according to the imaging conditions indicated by the read imaging condition data (S11). More specifically, the imaging control unit 21 performs imaging at m different imaging positions while shifting the imaging position little by little for the same imaging target. The imaging control unit 21 then changes the exposure time (n types) and again executes imaging at the m different imaging positions under the changed exposure time.
  • The m × n pieces of captured image data y_mn captured in this way are stored in the low-resolution image storage unit 23.
  • The imaging conditions include the imaging position of each captured image, the exposure time, and the number of captured images. The dynamic range expansion super-resolution processing unit 25 also acquires the parameters used in the dynamic range expansion super-resolution processing; these parameters are stored in advance in a storage unit (not shown) that the dynamic range expansion super-resolution processing unit 25 can use.
  • Next, the image processing apparatus 1 extracts the first region and the second region for each of the plurality of pieces of captured image data y_mn before normalization, and creates the weighting data used during the super-resolution processing.
  • Specifically, the region extraction unit 31 of the interpolation processing unit 26 acquires the m × n pieces of captured image data y_mn stored in the low-resolution image storage unit 23 and, for each piece of captured image data y_mn, specifies, based on its luminance value distribution, the regions of pixels whose luminance values fall outside the normal luminance value range, such as blackout regions and whiteout regions, and extracts these as the first region.
  • The first region is a region in which the pixel luminance value is equal to or lower than one predetermined luminance value, or equal to or higher than another; for example, a region in which the luminance value is 1 or lower, or 254 or higher.
  • The region extraction unit 31 also extracts the region other than the first region in the captured image data y_mn (the region within the dynamic range of the camera, with neither blackout nor whiteout) as the second region. The region extraction unit 31 then outputs region specifying information specifying the extracted first and second regions to the interpolated luminance calculation unit 32.
  • In step 21, region data Z_m may be created by using the plurality of pieces of low-resolution captured image data as reference images, determining for each pixel whether it belongs to the first region or the second region, and assigning a weight to the pixel.
  • Here, the region of the captured image is divided into two regions, and the weight corresponding to a region is a binary value of 1 or 0. However, the weight is not necessarily binary; a ternary (three-level) or finer weight may be used. The setting of the weights will be described later.
  • Next, the normalization processing unit 35 normalizes, according to exposure time, the plurality of pieces of low-resolution captured image data captured at a plurality of exposure times and stored in the low-resolution image storage unit 23. In this normalization process, for example, when there are imaging conditions with an exposure time A [msec] and an exposure time B [msec] (A > B), the luminance values of the captured image data captured at exposure time B are multiplied by A / B in order to match the luminance level of the captured image data captured at exposure time A.
  • At this time, the normalization processing unit 35 obtains the exposure time information of each piece of captured image data by referring to the imaging condition data.
  • Note that the normalization process may yield values larger than 255, so the normalized captured image data is preferably held in a format that is not clipped at 255 and can hold larger values.
  • The normalization processing unit 35 stores the generated normalized image data in the normalized image data storage unit 30.
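A minimal sketch of this exposure-time normalization (Python/NumPy; names and the example exposure times are illustrative). A float array is used so values above 255 are not clipped, as noted above:

```python
import numpy as np

def normalize_by_exposure(img_8bit, exposure_msec, longest_msec):
    # Image taken at exposure B is multiplied by A / B (A = longest exposure).
    return img_8bit.astype(np.float64) * (longest_msec / exposure_msec)

short = np.array([[10, 60], [200, 254]], dtype=np.uint8)
normed = normalize_by_exposure(short, exposure_msec=10.0, longest_msec=40.0)
# 60 -> 240.0, 200 -> 800.0: normalized values may exceed 255
```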
  • Next, the interpolated luminance calculation unit 32 interpolates the luminance values of the regions in the normalized images corresponding to the first region indicated by the region specifying information output from the region extraction unit 31.
  • Specifically, the interpolated luminance calculation unit 32 acquires the normalized image data from the normalized image data storage unit 30 and interpolates the luminance value of the first region in the normalized image indicated by the acquired normalized image data.
  • Since the first region is a region in which blackout, halation, or the like has occurred and whose luminance values are not normal, the interpolated luminance calculation unit 32 calculates an interpolated luminance value using the luminance values of the pixels neighboring the first region, and replaces the luminance value of the first region with the interpolated luminance value.
  • The interpolated luminance calculation unit 32 sets the neighboring pixels according to the following two criteria.
  • Here, the target pixel means a pixel of the high-resolution image when the low-resolution image is mapped onto the high-resolution image, and a neighboring pixel means a pixel of another low-resolution image located near the target pixel under that mapping. That is, "pixel" in the following description means a virtual pixel in the high-resolution image unless otherwise specified.
  • The first criterion is to select pixels that are as close as possible to the target pixel of the first region in the high-resolution image.
  • Accordingly, neighboring pixels are selected not only from the normalized image data including the target pixel but also from normalized image data generated from captured image data having different exposure times (and imaging positions).
  • In this case, based on the imaging positions of the captured image including the target pixel and of the selection-target captured image containing the candidate neighboring pixels, the pixel of the selection-target captured image corresponding to the target pixel may be specified, and pixels located in the vicinity of the specified pixel region may be selected as the neighboring pixels.
  • For the setting of the neighboring pixels, there are (1) a method of setting as neighboring pixels the four pixels above, below, left, and right of the target pixel, or the eight surrounding pixels including the diagonal directions (first method), and (2) a method of setting as neighboring pixels all pixels within a predetermined range centered on the target pixel (second method).
  • The first method is particularly effective when the moving distance of the actuator 13 is controlled at a constant, equally spaced pitch.
  • (a) to (c) of FIG. 7 show the positional relationship of pixels when nine low-resolution captured images are captured with two types of exposure time, a short exposure time and a long exposure time, while shifting by 1/3 pitch of the pixel size of the low-resolution image in the horizontal and vertical directions.
  • In the figure, a pixel of the low-resolution image is indicated by reference numeral 55, and the pixels of the high-resolution image (indicated by reference numeral 56) are arranged in a matrix within the low-resolution pixel 55.
  • The low-resolution image pixel 55 corresponds to one imaging element of the solid-state imaging device 12.
  • That is, FIG. 7 shows the positional relationship between the imaging positions (pixel positions of the captured images) and the pixels of the high-resolution image when the resolution of the captured image is tripled.
  • In this example, the imaging position of the imaging device 2 (in other words, the position of the solid-state imaging device 12) is moved by the actuator 13 by the width of one pixel of the high-resolution image. Then, from each captured image captured at each imaging position, the luminance value of the region corresponding to the high-resolution pixel located at a predetermined position (for example, the upper left; "a5" in the pixel 55 in (a) of FIG. 7) is reflected in the high-resolution image.
  • For example, the imaging device 2 performs long-exposure imaging to obtain the region a5, shifts the imaging position by 1/3 pitch of the pixel size of the low-resolution image, performs short-exposure imaging to obtain the region a1, and repeats such operations, such as performing long-exposure imaging to obtain the region a8. That is, the imaging device 2 moves the imaging position by the width of one pixel of the high-resolution image while using an exposure time for some imagings that differs from that of the others, and captures a number of low-resolution images corresponding to the resolution-increase magnification.
  • The captured images shown in (a) to (c) of FIG. 7 are obtained by extracting, from each of the nine captured images captured as described above, the region corresponding to the high-resolution pixel at the predetermined position, and mapping them as a composite image.
  • The interpolated luminance calculation unit 32 may actually create such a composite image as shown in (a) to (c) of FIG. 7, or it may not; when it does not, it may perform the interpolation processing as if the composite image had been created (that is, using a virtual composite image). In the following description, it is assumed that the interpolated luminance calculation unit 32 generates the composite images shown in (a) to (c) of FIG. 7.
  • That is, the interpolated luminance calculation unit 32 extracts, from each of a plurality of normalized images whose imaging positions differ by the width of one pixel of the high-resolution image, the pixel region corresponding to a pixel (target pixel) of the high-resolution image to be generated; the pixel region is extracted from the predetermined position of the normalized image captured at the imaging position corresponding to the position of the target pixel, and this processing is performed for each pixel of the high-resolution image.
  • The interpolated luminance calculation unit 32 then generates a composite image by arranging the extracted pixel regions so as to correspond to the imaging positions of the normalized images from which they were extracted.
  • Here, the plurality of normalized images is the number of images corresponding to the number of high-resolution pixels corresponding to one low-resolution pixel.
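This arrangement can be sketched as an interleaving of the nine normalized images into a 3×-resolution grid (Python/NumPy; a 3× magnification is assumed, and the offset order is illustrative):

```python
import numpy as np

SCALE = 3   # one low-resolution pixel covers SCALE x SCALE high-res pixels

def compose(normalized_images, offsets):
    # Each normalized image contributes the pixel region at its own
    # sub-pixel imaging offset within the composite image.
    h, w = normalized_images[0].shape
    composite = np.zeros((h * SCALE, w * SCALE))
    for img, (oy, ox) in zip(normalized_images, offsets):
        composite[oy::SCALE, ox::SCALE] = img
    return composite

offsets = [(dy, dx) for dy in range(SCALE) for dx in range(SCALE)]
images = [np.full((4, 4), float(i)) for i in range(9)]   # stand-in images
composite = compose(images, offsets)                     # shape (12, 12)
```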
  • As shown in (a) of FIG. 7, when a 4-neighborhood is set with respect to the position of the target pixel A, neighboring pixels a1 to a4 can be set; as shown in (b) of FIG. 7, when an 8-neighborhood is set, neighboring pixels a1 to a8 can be set. Furthermore, neighboring pixels may be set using the pixel regions on the outer periphery.
  • For faster processing, the neighboring pixels may also be selected only from the normalized image data including the target pixel, without selecting neighboring pixels from every piece of captured image data.
  • In the second method, the neighboring pixels are set as shown in FIG. 8. This method is particularly effective when the moving distance of the actuator 13 is not controlled at a constant, equally spaced pitch, or when the actuator 13 is not used.
  • (a) and (b) of FIG. 8 show the positional relationship when nine captured images are captured with two types of exposure time, a short exposure time and a long exposure time: the long-exposure images (indicated by rectangles 57) are captured while shifting by 1/3 pitch of the low-resolution pixel size in the horizontal and vertical directions, and the short-exposure images (indicated by rectangles 58) are shifted by 1/12 pitch with respect to the image positions of the long exposures.
  • In this case, a neighborhood range can be set around the position of the target pixel A, and four or eight of the pixels included in that range can be set as the neighboring pixels. Depending on the target pixel, it may not be possible to set all eight neighbors; therefore, when performing such neighborhood setting, a minimum number of pixels to be selected as neighboring pixels (minimum neighbor count) is set in advance.
  • In the second method as well, for faster processing, the neighboring pixels may be selected only from the normalized image data including the target pixel, without selecting neighboring pixels from every piece of normalized image data.
  • Incidentally, the neighboring pixels used for the interpolation include pixels derived from captured images obtained by long exposure. Therefore, even if the eight neighbors of the target pixel are used as neighboring pixels, eight second-region pixels are not always present in the vicinity of the target pixel. The minimum number of second-region pixels required in the vicinity of the target pixel is therefore set in advance: for example, half of the neighborhood, i.e., 2 in the case of a 4-neighborhood and 4 in the case of an 8-neighborhood. If this minimum number of neighboring pixels is not obtained, the interpolated luminance calculation unit 32 resets the neighboring pixels by one of the following two methods.
  • In the first method, the interpolation luminance calculation unit 32 widens the range of pixels set as neighboring pixels: for example, a 4-neighborhood is enlarged to an 8-neighborhood, and an 8-neighborhood to a 16-neighborhood.
  • In the second method, the interpolation luminance calculation unit 32 performs the interpolation processing using whatever neighboring pixels are available at that time, and repeats the interpolation processing until the minimum number of neighboring pixels is reached.
  • Because the luminance value of a first-region pixel is updated by interpolation, the pixel changes from the first region to the second region as its luminance value is updated. Therefore, even if the minimum number of neighboring pixels is not reached in one round of neighborhood setting, it may be satisfied from the second round onward.
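  • As a concrete illustration of this neighbor search, here is a minimal Python sketch; the function name collect_neighbors, the boolean region map, and the window-widening policy are assumptions made for illustration, not details fixed by the embodiment.

```python
import numpy as np

def collect_neighbors(region_map, y, x, min_neighbors, max_radius=2):
    """Collect second-region (in-range) pixels around target (y, x).

    region_map: 2D bool array over the composite image; True marks the
    second region (pixels whose normalized luminance is within range).
    Starts with the 8-neighborhood (radius 1) and, if fewer than
    min_neighbors are found, widens the search window (the first method
    above). The result may still hold fewer than min_neighbors; the
    caller may then interpolate with what is available and repeat
    (the second method above).
    """
    h, w = region_map.shape
    found = []
    for radius in range(1, max_radius + 1):       # widen 8 -> 16 -> ...
        found = []
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and region_map[ny, nx]:
                    found.append((ny, nx))
        if len(found) >= min_neighbors:
            break
    return found
```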
  • Next, the interpolation luminance calculation unit 32 calculates the interpolated luminance value.
  • The simplest method is to average the luminance values of the neighboring pixels; when averaging, the luminance value of each neighboring pixel can also be weighted according to its distance from the target pixel.
  • The method of calculating the interpolated luminance value is not limited to averaging the luminance values of the neighboring pixels: the median of the neighboring luminance values may be taken, or the interpolated luminance value may be calculated by statistically processing the neighboring luminance values in some other way.
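  • A minimal Python sketch of these alternatives (simple average, distance-weighted average, median) follows; the function name and signature are illustrative assumptions.

```python
import numpy as np

def interpolated_luminance(target_yx, neighbors, luminance, mode="mean"):
    """Compute an interpolated luminance value for a first-region pixel.

    neighbors: list of (y, x) coordinates of second-region pixels.
    luminance: 2D array of normalized luminance values.
    mode "mean" averages the neighbors, "weighted" weights them by the
    inverse of their distance from the target, and "median" takes the
    median, mirroring the alternatives described in the text.
    """
    vals = np.array([luminance[y, x] for (y, x) in neighbors], dtype=float)
    if mode == "median":
        return float(np.median(vals))
    if mode == "weighted":
        ty, tx = target_yx
        d = np.array([np.hypot(y - ty, x - tx) for (y, x) in neighbors])
        w = 1.0 / np.maximum(d, 1e-6)             # closer pixels weigh more
        return float(np.sum(w * vals) / np.sum(w))
    return float(vals.mean())                     # simple average
```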
  • The interpolation luminance calculation unit 32 outputs interpolation information, which includes the calculated interpolated luminance value and information specifying the pixel to which it applies (a pixel in the normalized image data), to the interpolation luminance determination unit 33.
  • The setting information indicating the neighboring-pixel setting method and the minimum number described above is stored in a storage unit that the interpolation luminance calculation unit 32 can use (for example, the interpolation condition storage unit 27), and the interpolation luminance calculation unit 32 may set the neighboring pixels by referring to this setting information.
  • In step S15, the interpolation luminance determination unit 33 determines whether the interpolation of the luminance value of the pixel indicated by the interpolation information output from the interpolation luminance calculation unit 32 has succeeded.
  • If the luminance value of the pixel after interpolation (the post-interpolation luminance value) indicated by the interpolation information is equal to or greater than the luminance value of the corresponding pixel in the normalized image data stored in the normalized image data storage unit 30, the interpolation luminance determination unit 33 regards the interpolation as successful and adopts the value.
  • Conversely, if the post-interpolation luminance value of the pixel indicated by the interpolation information is smaller than the luminance value of the corresponding pixel in the normalized image data, the interpolation luminance determination unit 33 determines that the interpolation has failed; the interpolated luminance value is therefore discarded, and the luminance value in the normalized image data is adopted again (S16).
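  • The determination of steps S15/S16 for an over-range (saturated) first-region pixel can be sketched as follows; the function name is hypothetical.

```python
def accept_interpolation(interp_val, normalized_val):
    """Decide whether an interpolated luminance value is kept (S15/S16).

    For a saturated (over-range) first-region pixel the true luminance
    must be at least the clipped, normalized value, so an interpolated
    value below it is treated as a failed interpolation and the original
    normalized value is restored. For under-range pixels the comparison
    would be inverted, as discussed later in the text.
    """
    if interp_val >= normalized_val:
        return interp_val        # interpolation succeeded: adopt it
    return normalized_val        # interpolation failed: revert (S16)
```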
  • The interpolation luminance determination unit 33 outputs the final normalized image data, generated by correcting the post-interpolation normalized image data in this way, to the difference image generation unit 42 of the dynamic range expansion super-resolution processing unit 25.
  • In other words, the interpolation luminance determination unit 33 acquires the normalized image data produced by the normalization processing unit 35 from the normalized image data storage unit 30, compares the post-interpolation luminance value of the target pixel indicated by the interpolation information output from the interpolation luminance calculation unit 32 with the luminance value of that target pixel in the normalized image data (the normalized luminance value), and determines whether the post-interpolation luminance value is equal to or greater than the normalized luminance value.
  • The interpolation luminance determination unit 33 outputs, to the weighting data creation unit 34, determination information indicating, for each pixel identified as the first region by the region specifying information, whether the post-interpolation luminance value calculated by the interpolation luminance calculation unit 32 was adopted or the pre-interpolation luminance value (the luminance value of the corresponding pixel in the normalized image data) was adopted.
  • the weighting data creation unit 34 creates weighting data based on the determination information output from the interpolation luminance determination unit 33 and the weighting condition data stored in the interpolation condition storage unit 27.
  • Creating the weighting data means updating the region specifying information generated by the region extraction unit 31.
  • At the stage of step S12, the first region is set to 0 and the second region is set to 1.
  • The weight set here is used to multiply the pixel values during the iterative calculation of the super-resolution processing in the dynamic range expansion super-resolution processing unit 25.
  • When the weight is 1 (second region), the pixel value is used directly in the iterative calculation. When the weight is 0 (first region), the pixel value is not used at all in the iterative calculation. For intermediate weights, a corresponding fraction of the pixel value is used; for example, when the weight is 0.5, half the pixel value is used.
  • The weighting data creation unit 34 updates Zm, the region specifying information, according to the success or failure of the interpolation in step S15 and/or the interpolation status (whether interpolation was performed and whether the minimum number of neighbors was satisfied).
  • When the interpolation succeeds (that is, when the post-interpolation luminance value calculated by the interpolation luminance calculation unit 32 is adopted), the weighting data creation unit 34 sets the weight of the interpolated first-region pixel to 1, treating it as a normal second-region pixel. If interpolation was not performed because the minimum number of neighbors was insufficient, or if the interpolation was deemed to have failed in the determination of step S15, the pixel is regarded as not having a normal luminance value, and its weight remains 0.
  • The weighting data creation unit 34 may also update the weight to a value larger than 0 and smaller than 1.
  • The weight values are not limited to those set by the above method.
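  • A minimal sketch of this weighting-data update, assuming the illustrative weight values 1, 0, and 0.5 discussed above; the function name and the mask arguments are assumptions.

```python
import numpy as np

def create_weighting_data(region_map, interpolated_ok, partial_mask=None):
    """Update region-specifying data Zm into weighting data (S17).

    region_map: True for second-region (in-range) pixels, weight 1.
    interpolated_ok: True where a first-region pixel was successfully
    interpolated; it is then treated as a normal second-region pixel.
    partial_mask (optional): first-region pixels interpolated with fewer
    than the minimum neighbors, given an intermediate weight; the value
    0.5 here is an illustrative choice, not one fixed by the text.
    """
    weights = np.where(region_map, 1.0, 0.0)      # second region -> 1
    weights[interpolated_ok] = 1.0                # successful interpolation
    if partial_mask is not None:
        weights[partial_mask & ~region_map & ~interpolated_ok] = 0.5
    return weights
```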
  • The weighting data creation unit 34 outputs the created weighting data to the weighted difference image generation unit 44 of the dynamic range expansion super-resolution processing unit 25.
  • the super-resolution processing unit 25 acquires or generates high-resolution image data as an initial value and stores it in the high-resolution image storage unit 24.
  • the initial high-resolution image data is not particularly limited as long as it is an image having a resolution desired to be generated by the super-resolution processing.
  • For example, an interpolation process may be performed on an arbitrary image selected from the captured image data, and the result, with its increased number of pixels, may be used as the initial high-resolution image data.
  • In this way, the captured image data is reflected in the high-resolution image data before the super-resolution processing is performed. This makes it possible to reduce the number of iterative calculations in the super-resolution processing and to generate highly accurate high-resolution image data in a short time.
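  • For example, a one-function sketch of building the initial high-resolution image by interpolating a single captured frame, assuming scipy is available (the spline order is an arbitrary illustrative choice):

```python
from scipy.ndimage import zoom

def initial_high_res(captured, scale):
    """Build the initial high-resolution image by upsampling one captured
    frame (order=3 gives cubic spline interpolation); any captured frame
    and any standard interpolation method would serve. Sketch only."""
    return zoom(captured, scale, order=3)
```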
  • Next, the pseudo low-resolution image generation unit 43 creates pseudo low-resolution image data from the high-resolution image data, taking into account factors such as optical degradation and downsampling. That is, the pseudo low-resolution image generation unit 43 reads the high-resolution image data stored in the high-resolution image storage unit 24 and reduces its resolution so that it matches the resolution of the captured image data stored in the low-resolution image storage unit 23. The pseudo low-resolution image generation unit 43 also applies a PSF-based conversion to the high-resolution image data and generates a plurality (m) of pseudo low-resolution image data sets, each aligned with one of the captured image data sets.
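  • A minimal sketch of this simulation, assuming a Gaussian approximation of the PSF and simple decimation for the downsampling; the function name and parameters are illustrative.

```python
from scipy.ndimage import gaussian_filter, shift

def pseudo_low_res(high_res, offset_yx, scale, psf_sigma=1.0):
    """Simulate one captured frame from the current high-resolution
    estimate: shift to the frame's imaging position (B), blur with a PSF
    (A, approximated here by a Gaussian), then downsample by decimation.
    The Gaussian PSF and the parameter names are assumptions."""
    moved = shift(high_res, offset_yx, order=1)   # align to imaging position
    blurred = gaussian_filter(moved, psf_sigma)   # optical degradation (PSF)
    return blurred[::scale, ::scale]              # downsampling
```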
  • The difference image generation unit 42 calculates, as difference image data, the differences between the group of final normalized images output from the interpolation luminance determination unit 33 and the pseudo low-resolution images generated by the pseudo low-resolution image generation unit 43, producing a plurality of difference image data sets. The difference image generation unit 42 outputs the generated difference image data to the weighted difference image generation unit 44.
  • The weighted difference image generation unit 44 weights each pixel of the difference image data generated by the difference image generation unit 42, using the weighting data created by the weighting data creation unit 34 in step S17.
  • The weighted difference image generation unit 44 outputs the weighted difference image data to the image update unit 45.
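  • The weighting of the difference image reduces, per frame, to an element-wise product; a one-line sketch:

```python
def weighted_difference(normalized, pseudo, weights):
    """Per-frame weighted difference image: the residual between the
    final normalized image and its pseudo low-resolution counterpart,
    multiplied pixel-wise by the weighting data so that first-region
    pixels contribute less (or not at all). Sketch only."""
    return weights * (normalized - pseudo)
```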
  • The image update unit 45 updates the high-resolution image data stored in the high-resolution image storage unit 24 based on the weighting performed by the weighted difference image generation unit 44, and stores the updated high-resolution image data in the high-resolution image storage unit 24.
  • Specifically, the image update unit 45 substitutes the weighted difference image data output from the weighted difference image generation unit 44 into an evaluation function and obtains an evaluation value E that indicates the accuracy of the resolution enhancement (the degree of difference between the high-resolution image data and the normalized captured image data) (S22).
  • Various known functions can be applied as the evaluation function.
  • the evaluation value E can be obtained by using the evaluation function shown in the following equation (5).
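  • The formula image for equation (5) is not reproduced in this text; based on the symbol definitions given below, a plausible reconstruction of the MAP evaluation function is:

```latex
E(h) = \left\| W \left( A B h - y \right) \right\|^{2} + \alpha \left\| C h \right\|^{2} \tag{5}
```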
  • Equation (5) is an evaluation function used in the super-resolution processing method known as the MAP method.
  • Equation (5) consists of an error term and a constraint term: W(ABh − y), i.e., the difference image data on the right side of equation (5), is the error term.
  • The term containing α, C, and h on the right side of equation (5) is the constraint term.
  • h is a vector representing the high-resolution image data,
  • A is the PSF (a matrix representing optical blur),
  • B is a matrix representing the movement to the imaging position,
  • W is a matrix representing the weights for the halation part,
  • y is a vector representing the low-resolution image data,
  • α is the weight for the constraint term, and
  • C is a matrix representing the constraint on the high-resolution image data.
  • the value obtained by substituting the difference image data into this equation (5) is the evaluation value E.
  • the super-resolution processing is processing for searching for high-resolution image data h that minimizes the evaluation value E.
  • The image update unit 45 compares the calculated evaluation value E with a preset threshold value (S23). If the evaluation value E is smaller than the threshold value (YES in S23), the high-resolution image data h used to calculate the evaluation value E is stored in the high-resolution image storage unit 24 as the final result of the super-resolution processing, and the super-resolution processing ends.
  • Otherwise, the image update unit 45 repeats the calculation so that the evaluation value E becomes smaller, updating the high-resolution image data h (S24). For example, the high-resolution image data h can be updated using the following equation (6) (the same as equation (4) above).
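  • The formula of equation (6) is likewise not reproduced here; a steepest-descent update consistent with the description that follows is (a reconstruction, where the gradient term is computed from the weighted difference image data):

```latex
h_{k+1} = h_{k} - \beta \, \nabla E(h_{k}) \tag{6}
```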
  • Equation (6) is the formula used in the steepest descent method. According to equation (6), new high-resolution image data h_{k+1} is generated by subtracting from the high-resolution image data h_k a value obtained by multiplying the difference image data term by the weight β. By repeating this calculation, high-resolution image data h with a smaller evaluation value E is generated.
  • The iterative calculation is not limited to the steepest descent method; any known iterative calculation method can be applied.
  • The high-resolution image data h updated in this way is again converted into pseudo low-resolution images in step S19 and used to calculate the evaluation value E in step S22.
  • The evaluation value is then compared with the threshold value again. That is, in the super-resolution processing, the processes of S19 to S24 are repeated until the evaluation value E becomes smaller than the threshold value.
  • The threshold value can be changed as appropriate according to the required resolution accuracy. For example, when a higher-accuracy, higher-resolution image is required, the threshold value may be set small; when high accuracy is not required but fast processing is, the threshold value may be set large. In other words, the threshold value determines the accuracy of the super-resolution processing and can be set as appropriate according to the various conditions required of it.
  • For regions where the value of the weighting data for the difference image data is 1, the image update unit 45 performs the above-described update processing. For first regions where the value of the weighting data is 0, the image update unit 45 does not take the luminance values of the first region into account when updating the high-resolution image. For intermediate weights, the update is performed in accordance with the weight.
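  • Pulling steps S19 to S24 together, a schematic Python loop follows. It reuses the pseudo_low_res and weighted_difference sketches above, approximates the back-projection (the transpose of AB) by nearest-neighbor upsampling, and omits the constraint term of equation (5); all names and parameter values are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def super_resolve(normalized_frames, offsets, weights, h0, scale,
                  beta=0.1, threshold=1e-3, max_iter=200):
    """Iterate S19-S24: simulate pseudo low-resolution frames, form
    weighted difference images, evaluate E (error term only), and move h
    in the descent direction until E falls below the threshold."""
    h = h0.astype(float)
    for _ in range(max_iter):
        grad = np.zeros_like(h)
        E = 0.0
        for frame, off, w in zip(normalized_frames, offsets, weights):
            diff = weighted_difference(frame, pseudo_low_res(h, off, scale), w)
            E += float(np.sum(diff ** 2))         # error term of E (S22)
            # crude back-projection of the residual to the high-res grid
            grad += np.kron(diff, np.ones((scale, scale)))
        if E < threshold:                         # S23: accuracy reached
            break
        # S24: adding the back-projected residual decreases E
        h += beta * grad / len(normalized_frames)
    return h
```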
  • As described above, the image processing apparatus 1 is an image processing apparatus that acquires a group of low-resolution images, captured under different imaging conditions and at different imaging positions and therefore having different luminance ranges, and generates a high-resolution image from the plurality of low-resolution images.
  • The normalization processing unit 35 corrects the luminance values of the pixels of each low-resolution image included in at least a part of the group of low-resolution images so that the entire group can be regarded as having been captured under the same imaging condition, thereby generating a group of normalized images corresponding to the group of low-resolution images.
  • The region extraction unit 31 identifies, for each image in the group of low-resolution images, a first region consisting of pixels whose luminance values are outside a predetermined range and a second region consisting of pixels whose luminance values are within the predetermined range.
  • the interpolated luminance calculation unit 32 selects a pixel region that is a region corresponding to a pixel of a high-resolution image to be generated from each of a plurality of normalized images generated by the normalization processing unit 35 and having different imaging positions. For each pixel of, extract a region from a predetermined position of each normalized image, and extract a region when a composite image is generated by arranging the extracted pixel regions so as to correspond to the pixel array of the high-resolution image.
  • the luminance value of the pixel of the normalized image corresponding to the pixel of the first region specified by the unit 31 is set to the pixel region and the predetermined position in the composite image corresponding to the pixel (that is, including a part of the pixel).
  • the pixel region of the composite image corresponding to the pixel of the normalized image corresponding to the pixel of the first region in the low resolution image is referred to as the first corresponding pixel region, and the pixel of the composite image corresponding to the pixel of the second region in the low resolution image.
  • the interpolation luminance calculation unit 32 determines the luminance value of the pixel of the normalized image corresponding to the pixel in the first region in the first corresponding pixel region corresponding to the pixel and the composite image. It can be expressed that a corrected normalized image is generated by performing correction using the interpolated luminance value calculated from the luminance value of the second corresponding pixel region having a predetermined positional relationship.
  • Each pixel region is smaller in size than the pixels of the normalized image and contains a part of a pixel of the normalized image.
  • The weighting data creation unit 34 weights the first-region pixels included in the group of normalized images, which includes the corrected normalized image generated by the interpolation luminance calculation unit 32, based on the interpolated luminance values calculated by the interpolation luminance calculation unit 32.
  • The dynamic range expansion super-resolution processing unit 25 generates a high-resolution image from the group of normalized images including the corrected normalized image generated by the interpolation luminance calculation unit 32. In doing so, it reduces the degree to which the luminance values of first-region pixels contribute to the generation of the high-resolution image, in accordance with the weighting applied by the weighting data creation unit 34.
  • The present invention may also be regarded as an imaging apparatus that includes the imaging device (imaging unit) 2 and the control device (image processing device) 3.
  • Each block of the image processing apparatus 1 described above, in particular the interpolation processing unit 26 and the dynamic range expansion super-resolution processing unit 25, may be implemented in hardware logic, or may be realized in software using a CPU as follows.
  • That is, the image processing apparatus 1 includes a CPU (central processing unit) that executes the instructions of a control program realizing each function, a ROM (read-only memory) that stores the program, a RAM (random access memory) into which the program is expanded, and a storage device (recording medium) such as a memory that stores the program and various data.
  • The object of the present invention can also be achieved by supplying the image processing apparatus 1 with a recording medium on which the program code (executable program, intermediate code program, or source program) of the control program (image processing program) of the image processing apparatus 1, i.e., the software that realizes the functions described above, is recorded in a computer-readable manner, and having the computer (or a CPU or MPU) read and execute the program code recorded on the recording medium.
  • Examples of the recording medium include tape systems such as magnetic tapes and cassette tapes; disk systems including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical disks such as CD-ROM/MO/MD/DVD/CD-R; card systems such as IC cards (including memory cards) and optical cards; and semiconductor memory systems such as mask ROM/EPROM/EEPROM/flash ROM.
  • the image processing apparatus 1 may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
  • the communication network is not particularly limited.
  • For example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, a satellite communication network, or the like can be used.
  • the transmission medium constituting the communication network is not particularly limited.
  • For example, infrared such as IrDA or remote control, Bluetooth (registered trademark), 802.11 wireless, HDR (high data rate), mobile phone networks, satellite links, terrestrial digital networks, and the like can also be used.
  • The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
  • the present invention can also be expressed as follows.
  • That is, the image processing apparatus of the present invention is an image processing apparatus that generates a high-resolution image from a plurality of low-resolution images, and comprises: image data acquisition means for acquiring the plurality of low-resolution images; region separation means for separating each of the plurality of image data sets into a first region and a second region other than the first region; luminance value interpolation means for calculating, for the luminance values of the first region in the plurality of normalized image data sets, an interpolated luminance value from the normalized luminance values of the second region in the vicinity of the first region, and setting the obtained interpolated luminance value as the luminance value of the first region; weighting coefficient setting means for calculating and setting, for the plurality of image data sets, a weighting coefficient in accordance with the interpolated luminance value in the first region, and setting a predetermined value as the weighting coefficient in the second region; and high-resolution image generation means for generating a high-resolution image from the image data obtained by interpolating the luminance values of the plurality of low-resolution images and from the weighting coefficients for each low-resolution image.
  • It is preferable that, for each pixel in the first region, the luminance value interpolation means calculates an interpolated value using pixels that are within a predetermined distance of that pixel and belong to the second region, both in the low-resolution image containing the pixel and in a plurality of other nearby low-resolution images.
  • It is also preferable that the luminance value interpolation means separates the first region into a first region where the interpolated value is equal to or greater than a predetermined value and a first region where the interpolated value is equal to or smaller than the predetermined value, and returns the pixel values of the latter to their original values.
  • It is preferable that the weighting coefficient setting means sets predetermined values in the first region and the second region (setting value of the first region < setting value of the second region), and sets in the 1.5th region an intermediate value (setting value of the first region ≤ setting value of the 1.5th region ≤ setting value of the second region).
  • When the luminance values of the pixels in the first region are larger than the predetermined range, it is preferable that the weighting means compares the interpolated luminance value of a first-region pixel in the corrected normalized image generated by the luminance value interpolation means with the normalized luminance value, i.e., the luminance value of that pixel in the normalized image, and performs the weighting on the first-region pixel when the interpolated luminance value is smaller than the normalized luminance value.
  • In this case, the interpolated luminance value should be at least as large as the normalized luminance value if the interpolation has been performed appropriately. Therefore, when the interpolated luminance value is smaller than the normalized luminance value, it is highly likely that the interpolation by the luminance value interpolation means has failed (that the interpolated luminance value has not been calculated appropriately).
  • With the configuration above, the weighting means determines that the interpolation by the luminance value interpolation means has failed and, in view of this, performs weighting that reduces the degree to which the luminance value of the interpolated pixel contributes to the generation of the high-resolution image.
  • Conversely, it is preferable that the weighting means compares the interpolated luminance value of a first-region pixel in the corrected normalized image generated by the luminance value interpolation means with the normalized luminance value, i.e., the luminance value of that pixel in the normalized image, and performs the weighting on the first-region pixel when the interpolated luminance value is larger than the normalized luminance value.
  • In this case, the interpolated luminance value should be no larger than the normalized luminance value if the interpolation has been performed appropriately. Therefore, when the interpolated luminance value is larger than the normalized luminance value, it is highly likely that the interpolation by the luminance value interpolation means has failed (that the interpolated luminance value has not been calculated appropriately).
  • With the configuration above, the weighting means determines that the interpolation by the luminance value interpolation means has failed and, in view of this, performs weighting that reduces the degree to which the luminance value of the interpolated pixel contributes to the generation of the high-resolution image.
  • It is preferable that the luminance value interpolation means calculates the interpolated luminance value using the luminance value of at least one pixel area corresponding to a second-region pixel that lies within a predetermined distance of the pixel area corresponding to the first-region pixel in the composite image.
  • With the configuration above, a pixel of the normalized image to be interpolated is interpolated using the luminance values of pixel areas corresponding to the second region that are located within a predetermined distance of the pixel area corresponding to that pixel in the composite image (high-resolution image). Interpolating a pixel using pixels that lie close to it makes appropriate interpolation highly likely. The normalized image can therefore be interpolated more appropriately and, as a result, a more appropriate high-resolution image can be generated.
  • It is preferable that the luminance value interpolation means compares the interpolated luminance value of a first-region pixel in the corrected normalized image it has generated with the normalized luminance value, i.e., the luminance value of that pixel in the normalized image, and, when the interpolated luminance value is smaller than the normalized luminance value, replaces the luminance value of the interpolated pixel in the corrected normalized image with the normalized luminance value. When the luminance values of the first-region pixels are larger than the predetermined range, an interpolated luminance value that is smaller than the normalized luminance value indicates that the interpolation has probably not been performed appropriately (the interpolated luminance value is unlikely to have been calculated appropriately). With the configuration above, the post-interpolation luminance value in the corrected normalized image can be returned to the pre-interpolation luminance value, cancelling the inappropriate interpolation.
  • Similarly, it is preferable that the luminance value interpolation means compares the interpolated luminance value of a first-region pixel in the corrected normalized image it has generated with the normalized luminance value, i.e., the luminance value of that pixel in the normalized image, and, when the interpolated luminance value is larger than the normalized luminance value, replaces the luminance value of the interpolated pixel in the corrected normalized image with the normalized luminance value. When the luminance values of the first-region pixels are smaller than the predetermined range, an interpolated luminance value that is larger than the normalized luminance value indicates that the interpolation has probably not been performed appropriately (the interpolated luminance value is unlikely to have been calculated appropriately). With the configuration above, the post-interpolation luminance value in the corrected normalized image can be returned to the pre-interpolation luminance value, cancelling the inappropriate interpolation.
  • An image processing program for operating the image processing apparatus, namely an image processing program for causing a computer to function as each of the means described above, and a computer-readable recording medium on which the image processing program is recorded, are also included in the technical scope of the present invention.
  • An imaging apparatus that includes the image processing apparatus and an imaging unit that captures the group of low-resolution images, in which the image processing apparatus acquires the group of low-resolution images from the imaging unit, is also included in the technical scope of the present invention.
  • According to the present invention, high-resolution image data with an expanded dynamic range can be generated; the present invention can therefore be suitably applied to apparatuses that increase the resolution of still images or moving images.
  • 1 Image processing device
  • 2 Imaging device (imaging unit)
  • 3 Control device (image processing device)
  • 25 Dynamic range expansion super-resolution processing unit (high-resolution image generation means)
  • 31 Region extraction unit (region specifying means)
  • 32 Interpolation luminance calculation unit (luminance value interpolation means)
  • 33 Interpolation luminance determination unit (luminance value interpolation means)
  • 34 Weighting data creation unit (weighting means)
  • 35 Normalization processing unit (normalization means)
  • 42 Difference image generation unit (high-resolution image generation means)
  • 43 Pseudo low-resolution image generation unit (high-resolution image generation means)
  • 44 Weighted difference image generation unit (high-resolution image generation means)
  • 45 Image update unit (high-resolution image generation means)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides an image processing device comprising: a normalization processing unit (35) that produces a normalized image from a low-resolution image; a region extraction unit (31) that, for each low-resolution image, identifies a first region outside a predetermined luminance range and a second region within the predetermined luminance range; an interpolation luminance calculation unit (32) that corrects the luminance values of the pixels in the first region of the normalized image using interpolated luminance values calculated from the luminance values of pixels in the second region, those second-region pixels being in a predetermined positional relationship with pixels of a high-resolution image that correspond to the pixels in the first region of the normalized image; a weighting data creation unit (34) that weights the pixels of the first region of each normalized image according to the interpolated luminance values; and a dynamic range expansion super-resolution processing unit (25) that, when creating a high-resolution image from a group of normalized images, reduces the degree to which the luminance values of the pixels in the first region contribute to the creation of the high-resolution image, in response to the weighting. The image processing device can thereby expand the dynamic range of the captured image and improve resolution.
PCT/JP2010/000917 2009-02-13 2010-02-15 Dispositif de traitement d'image, dispositif de capture d'image, procédé de traitement d'image, programme de traitement d'image et support d'enregistrement WO2010092835A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-031785 2009-02-13
JP2009031785A JP4494505B1 (ja) 2009-02-13 2009-02-13 画像処理装置、撮像装置、画像処理方法、画像処理プログラムおよび記録媒体

Publications (1)

Publication Number Publication Date
WO2010092835A1 true WO2010092835A1 (fr) 2010-08-19

Family

ID=42351980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/000917 WO2010092835A1 (fr) 2009-02-13 2010-02-15 Dispositif de traitement d'image, dispositif de capture d'image, procédé de traitement d'image, programme de traitement d'image et support d'enregistrement

Country Status (2)

Country Link
JP (1) JP4494505B1 (fr)
WO (1) WO2010092835A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103585A (zh) * 2017-04-28 2017-08-29 广东工业大学 一种图像超分辨率系统
CN112419146A (zh) * 2019-08-20 2021-02-26 武汉Tcl集团工业研究院有限公司 一种图像处理方法、装置及终端设备

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5699480B2 (ja) * 2010-08-17 2015-04-08 株式会社ニコン 焦点検出装置およびカメラ
JP5658189B2 (ja) * 2011-03-21 2015-01-21 ゲイリー・エドウィン・サットン 曲面状センサーカメラを有する移動通信装置、移動型光学部を有する曲面状センサーカメラ、及びシリコン繊維で製作される曲面状センサー
JP6075295B2 (ja) 2011-12-12 2017-02-08 日本電気株式会社 辞書作成装置、画像処理装置、画像処理システム、辞書作成方法、画像処理方法及びプログラム
IL219773A (en) * 2012-05-13 2015-09-24 Elbit Sys Electro Optics Elop Device and method for increasing the resolution of vacuum-based infrared imaging detectors and cryogenic coolers
JP6029430B2 (ja) * 2012-11-20 2016-11-24 キヤノン株式会社 画像処理装置及び画像処理方法
CN111507902B (zh) * 2020-04-15 2023-09-26 京东城市(北京)数字科技有限公司 一种高分辨率图像获取方法及装置

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0837628A (ja) * 1994-07-22 1996-02-06 Canon Inc 撮像装置
JP2007019641A (ja) * 2005-07-05 2007-01-25 Tokyo Institute Of Technology 固体撮像素子の信号読み出し方法及び画像信号処理方法

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0837628A (ja) * 1994-07-22 1996-02-06 Canon Inc 撮像装置
JP2007019641A (ja) * 2005-07-05 2007-01-25 Tokyo Institute Of Technology 固体撮像素子の信号読み出し方法及び画像信号処理方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103585A (zh) * 2017-04-28 2017-08-29 广东工业大学 一种图像超分辨率系统
CN112419146A (zh) * 2019-08-20 2021-02-26 武汉Tcl集团工业研究院有限公司 一种图像处理方法、装置及终端设备
CN112419146B (zh) * 2019-08-20 2023-12-29 武汉Tcl集团工业研究院有限公司 一种图像处理方法、装置及终端设备

Also Published As

Publication number Publication date
JP2010187341A (ja) 2010-08-26
JP4494505B1 (ja) 2010-06-30

Similar Documents

Publication Publication Date Title
JP4494505B1 (ja) 画像処理装置、撮像装置、画像処理方法、画像処理プログラムおよび記録媒体
JP4355744B2 (ja) 画像処理装置
JP4879261B2 (ja) 撮像装置、高解像度化処理方法、高解像度化処理プログラム、及び記録媒体
JP4524717B2 (ja) 画像処理装置、撮像装置、画像処理方法及びプログラム
KR101643122B1 (ko) 화상 처리 장치, 화상 처리 방법 및 기록 매체
JP4646146B2 (ja) 画像処理装置、画像処理方法、およびプログラム
US8581992B2 (en) Image capturing apparatus and camera shake correction method, and computer-readable medium
JP4775700B2 (ja) 画像処理装置及び画像処理方法
JP5756099B2 (ja) 撮像装置、画像処理装置、画像処理方法、および画像処理プログラム
US8036481B2 (en) Image processing apparatus and image restoration method and program
JP5408053B2 (ja) 画像処理装置、画像処理方法
JP6308748B2 (ja) 画像処理装置、撮像装置及び画像処理方法
JP5672796B2 (ja) 画像処理装置、画像処理方法
WO2005025235A1 (fr) Procede de traitement d'images, appareil de traitement d'images et programme informatique
JP2011004353A (ja) 画像処理装置、画像処理方法
JP2009037460A (ja) 画像処理方法、画像処理装置、及びこの画像処理装置を備えた電子機器
JP2009194896A (ja) 画像処理装置及び方法並びに撮像装置
JP2018107526A (ja) 画像処理装置、撮像装置、画像処理方法およびコンピュータのプログラム
JP2002064754A (ja) 撮像装置の有効ダイナミックレンジを拡張する方法及び装置
JP2013162347A (ja) 画像処理装置、画像処理方法、プログラム、および装置
JP2008077501A (ja) 画像処理装置及び画像処理制御プログラム
JP5184574B2 (ja) 撮像装置、画像処理装置、および画像処理方法
JP5882702B2 (ja) 撮像装置
JP6802848B2 (ja) 画像処理装置、撮像システム、画像処理方法および画像処理プログラム
JP5042954B2 (ja) 画像生成装置、画像生成方法、画像生成プログラム、および該プログラムを記録したコンピュータ読み取り可能な記録媒体

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10741115

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10741115

Country of ref document: EP

Kind code of ref document: A1