WO2023077426A1 - Image processing device, imaging device, and program - Google Patents

Image processing device, imaging device, and program Download PDF

Info

Publication number
WO2023077426A1
WO2023077426A1 PCT/CN2021/128995
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
defect
focus position
image
evaluation
Prior art date
Application number
PCT/CN2021/128995
Other languages
French (fr)
Inventor
Toshihiko Arai
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. filed Critical Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2021/128995 priority Critical patent/WO2023077426A1/en
Publication of WO2023077426A1 publication Critical patent/WO2023077426A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/68 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects
    • H04N25/683 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects by defect estimation performed on the scene signal, e.g. real time or on the fly detection
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/67 - Focus control based on electronic image sensor signals

Definitions

  • Embodiments of the present disclosure relate to an image processing device, an imaging device, and a program.
  • A defect pixel that does not output a correct output value may exist on an image sensor. A defect pixel appears, for example, as a white spot, a high-brightness point on an image caused by a bias voltage added to the output according to the amount of incident light, or as a black spot, a low-brightness point caused by low photoelectric sensitivity, and therefore decreases image quality. For this reason, technologies are known that detect a defect pixel by inspection at the factory or by dynamic defect pixel detection during use, and that correct the output of the detected defect pixel. For example, a technology is known that detects a defect pixel based on a correlation with peripheral pixels, such as brightness differences between the defect pixel and the peripheral pixels.
  • However, a pixel corresponding to an image position of a focused subject also has brightness differences from its peripheral pixels. For this reason, when a defect-pixel-like pattern having brightness differences from peripheral pixels is included in an image of the focused subject, this defect-pixel-like pattern may be mis-detected as a defect pixel.
  • An object of embodiments of the present disclosure is to appropriately detect a defect pixel on an image sensor.
  • An image processing device includes an acquiring module, a detecting module, and an output module.
  • the acquiring module is configured to acquire at least two evaluation images captured at focus positions different from one another including an infinite focus position and a closest focus position.
  • the detecting module is configured to detect a defect pixel of which a pixel defect degree is larger than a predetermined threshold in common between at least the two evaluation images, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels.
  • the output module is configured to output the pixel position of the detected defect pixel.
  • An image processing device includes an acquiring module, a detecting module, and an output module.
  • the acquiring module is configured to acquire an evaluation image captured while moving a focus position during exposure.
  • the detecting module is configured to detect a defect pixel of which a pixel defect degree is larger than a predetermined threshold, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels.
  • the output module is configured to output the pixel position of the detected defect pixel.
  • a program causes a computer to execute: acquiring at least two evaluation images captured at focus positions different from one another including an infinite focus position and a closest focus position; detecting a defect pixel of which a pixel defect degree is larger than a predetermined threshold in common between at least the two evaluation images, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels; and outputting the pixel position of the detected defect pixel.
  • a program causes a computer to execute: acquiring an evaluation image captured while moving a focus position during exposure; detecting a defect pixel of which a pixel defect degree is larger than a predetermined threshold, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels; and outputting the pixel position of the detected defect pixel.
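The acquiring/detecting/outputting steps recited above can be sketched in code. This is a minimal illustration, not the patent's implementation: the defect-degree formula (absolute difference from the mean of the four nearest neighbours) and the intersection over all evaluation images are assumed forms.

```python
import numpy as np

def pixel_defect_degree(img):
    """Per-pixel defect degree: absolute difference between a pixel and the
    mean of its four neighbours (an assumed form; border pixels stay zero)."""
    d = np.zeros(img.shape, dtype=float)
    neigh = (img[:-2, 1:-1] + img[2:, 1:-1] +
             img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    d[1:-1, 1:-1] = np.abs(img[1:-1, 1:-1] - neigh)
    return d

def detect_defect_pixels(eval_images, threshold):
    """Detect pixel positions whose defect degree exceeds `threshold` in
    common between ALL evaluation images (e.g. the infinite-focus and
    closest-focus frames)."""
    common = None
    for img in eval_images:
        mask = pixel_defect_degree(np.asarray(img, dtype=float)) > threshold
        common = mask if common is None else (common & mask)
    return np.argwhere(common)
```

A white spot biases the sensor output in every frame, so it exceeds the threshold in both evaluation images; a subject detail that is sharp in only one focus position does not survive the intersection.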
  • According to the image processing device, the imaging device, and the program of the embodiments of the present disclosure, it is possible to appropriately detect a defect pixel on an image sensor.
  • FIG. 1 is a diagram illustrating an example of a configuration of an imaging device according to a first embodiment
  • FIG. 2 is a diagram illustrating an example of a functional configuration of a controller according to the first embodiment
  • FIG. 3 is a diagram explaining evaluation images acquired in detection processing according to the first embodiment
  • FIG. 4 is a diagram explaining the evaluation images acquired in the detection processing according to the first embodiment
  • FIG. 5 is a diagram explaining defect pixel estimation in the detection processing according to the first embodiment
  • FIG. 6 is a flowchart illustrating an example of a flow of the detection processing according to the first embodiment
  • FIG. 7 is a diagram explaining an evaluation image acquired in detection processing according to a second embodiment
  • FIG. 8 is a flowchart illustrating an example of a flow of the detection processing according to the second embodiment
  • FIG. 9 is a diagram explaining blinking defect pixel detection performed by the detection processing according to the second embodiment.
  • FIG. 10 is a diagram explaining a movement of a focus position during exposure in detection processing according to a third embodiment
  • FIG. 11 is a diagram explaining a movement of a focus position during exposure in detection processing according to a fourth embodiment.
  • FIG. 12 is a diagram illustrating an example of a functional configuration of an image processing device according to a fifth embodiment.
  • A defect pixel that does not output a correct output value may exist on an image sensor. A defect pixel appears, for example, as a white spot, a high-brightness point on an image caused by a bias voltage added to the output according to the amount of incident light, or as a black spot, a low-brightness point caused by low photoelectric sensitivity, and therefore decreases image quality.
  • Pixel positions of defect pixels detected by inspection at the factory are registered and stored in a memory or the like, for example.
  • Pixel positions of defect pixels detected by dynamic defect pixel detection during use are additionally registered, for example.
  • the defect-pixel-like pattern means an image, or the whole or a part of a shape of the image, which has brightness differences between itself and peripheral pixels.
  • the defect-pixel-like pattern means an image, or the whole or a part of a shape of the image, which has a high contrast value.
  • As a technology of detecting a defect pixel, there is a technology of detecting, as a defect pixel, a pixel corresponding to a defect-pixel-like pattern whose position on the image does not vary while the imaging device is moving due to a camera shake or the like.
  • With this technology, however, a defect pixel cannot be detected when the imaging device does not move, for example, when the imaging device is fixed to a tripod or the like.
  • Moreover, when both the imaging device and a subject are moving, there is a problem that the position of the subject on the image does not vary even though the subject is moving, and thus a part or the whole of the subject image is mis-detected as a defect-pixel-like pattern.
  • FIG. 1 is a diagram illustrating an example of a configuration of an imaging device 1 according to the first embodiment.
  • the imaging device 1 equipped with an image processing device 20 is exemplified as illustrated in FIG. 1.
  • the imaging device 1 includes an imaging unit 10, a controller 21, an image processing circuitry 23, and a memory 25.
  • the imaging unit 10, the controller 21, the image processing circuitry 23, and the memory 25 are connected to be able to communicate with one another via a signal line such as a bus 31.
  • the imaging unit 10 images a subject field to generate image data. As illustrated in FIG. 1, the imaging unit 10 includes an optical system 11, an image sensor 13, an analog processing circuitry 15, and an A/D converter circuitry 17.
  • the optical system 11 includes an optical element configured to form an image of a light beam from a subject on an imaging surface 131 of the image sensor 13. It should be noted that FIG. 1 exemplifies a single lens as the optical element of the optical system 11 but the present embodiment is not limited to the above.
  • the optical system 11 may have desired imaging performance by at least one optical element having power.
  • the optical system 11 may be composed of a compound lens that includes at least one single lens, or may be composed of a combination of a lens system and a reflection system.
  • the image sensor 13 images a subject field to generate an image signal.
  • the image sensor 13 is arranged on an optical axis of the optical system 11.
  • the image sensor 13 is arranged at a position at which the image of the light beam from the subject is formed by the optical system 11.
  • The image sensor 13 can appropriately employ a solid-state imaging device such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor) sensor.
  • The image sensor 13 has a configuration in which a plurality of light receiving units, constituting a plurality of pixels, are arrayed in a two-dimensional manner.
  • the image sensor 13 is a solid-state imaging device having a color filter with a Bayer array structure.
  • the analog processing circuitry 15 performs analog processing such as amplification processing with respect to an image signal read from the image sensor 13.
  • the A/D converter circuitry 17 converts an image signal output from the analog processing circuitry 15 into digital-format image data.
  • the imaging unit 10 is configured to be able to change a focus position.
  • "to be able to change a focus position” means that an image formed on the imaging surface 131 can be made smaller than a diameter of a permissible circle of confusion for each of at least two object points that exist at different positions in an optical axis direction of the optical system 11.
  • a diameter of a permissible circle of confusion is defined depending on a pixel pitch of the image sensor 13 or imaging performance of the optical system 11, for example.
  • The imaging unit 10 is configured to be able to focus on, or blur (bokeh), an arbitrary subject.
  • the imaging unit 10 is configured to be able to move at least one, of an image-side focus position of the optical system 11, an object-side focus position of the optical system 11, and the imaging surface 131 of the image sensor 13, in the optical axis direction of the optical system 11.
  • the controller 21 controls each component of the imaging device 1 in accordance with a program stored in the memory 25.
  • the controller 21 includes a processor and a memory as hardware resources.
  • The processor can appropriately employ various processors such as a CPU (Central Processing Unit), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), and an FPGA (Field-Programmable Gate Array).
  • The memory can appropriately employ various memories such as a ROM (Read Only Memory), a flash memory, and a RAM (Random Access Memory). It should be noted that the controller 21 may employ a microcomputer.
  • the image processing circuitry 23 performs various image processing required for displaying and recording an image with respect to the image data.
  • the image processing includes, for example, an optical black (OB) subtraction process, a white balance (WB) correction process, a demosaic process, a color conversion process, a gamma conversion process, a noise reduction process, an enlargement/reduction process, a compression process, and the like.
  • the image processing circuitry 23 performs a defect correction process for correcting an output value from a defect pixel with respect to the image data.
  • the image processing circuitry 23 performs a defect correction process for correcting an output from a defect pixel registered in a defect pixel list by using output values of peripheral pixels, for example.
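Such a correction can be sketched as follows. The disclosure only states that output values of peripheral pixels are used; replacing each registered defect pixel with the mean of its in-bounds four neighbours is an assumed, common interpolation.

```python
import numpy as np

def correct_defects(img, defect_list):
    """Defect correction sketch: for each (row, col) in the registered
    defect pixel list, replace the pixel value with the mean of its
    in-bounds four neighbours (assumed interpolation)."""
    out = np.asarray(img, dtype=float).copy()
    h, w = out.shape
    for r, c in defect_list:
        neigh = [out[rr, cc]
                 for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                 if 0 <= rr < h and 0 <= cc < w]
        out[r, c] = sum(neigh) / len(neigh)
    return out
```

In a real Bayer pipeline the interpolation would use same-colour neighbours; the plain four-neighbour mean above is kept only for brevity.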
  • the memory 25 stores therein a program required for operations of the imaging device 1. Moreover, the memory 25 stores therein information required for various processes of the imaging device 1. This information includes information on pixel positions of defect pixels and information on parameters of image processing, for example. Moreover, the memory 25 temporarily stores therein the image data output from the A/D converter circuitry 17 or the image processing circuitry 23 and various data such as processing data in the controller 21.
  • The memory 25 includes a nonvolatile memory such as a ROM and a flash memory, and a volatile memory such as a DRAM (Dynamic RAM), an SDRAM (Synchronous DRAM), and an SRAM (Static RAM).
  • FIG. 2 is a diagram illustrating an example of a functional configuration of the controller 21 according to the first embodiment.
  • the controller 21 realizes functions of an imaging control module 211, a focus control module 213, and a defect pixel detection module 215 by the processor executing a program developed on the memory.
  • the imaging control module 211 and the focus control module 213 are an example of an acquiring module.
  • the defect pixel detection module 215 is an example of a detecting module and an output module.
  • The modules 211, 213, and 215 may be realized by a single processor or by a combination of a plurality of independent processors. Moreover, each of the modules 211, 213, and 215 may be realized by being distributed to or integrated into a plurality of processors.
  • Based on an AE evaluation value indicating a subject brightness in the image data, the controller 21 performs, as the imaging control module 211, automatic exposure (AE) processing for setting imaging conditions that include an aperture value and a shutter speed value.
  • the controller 21 performs the AE processing by using a first release operation of a user as a trigger, for example.
  • The first release operation includes an operation of tapping an arbitrary subject on a touch panel display during live view display.
  • the controller 21 performs imaging processing for controlling the imaging unit 10 to acquire image data.
  • the controller 21 performs the imaging processing by using a second release operation of the user as a trigger, for example.
  • the controller 21 performs imaging processing for defect pixel detection at a predetermined timing, for example.
  • The predetermined timing may be a timing of a user's operation instructing the controller to execute the defect pixel detection, such as selecting a check mode, may be a predetermined periodic timing, or may be a timing reached every predetermined number of power-ups or imaging operations of the imaging device 1.
  • the predetermined timing is previously set at the time of shipment or by the setting of the user and is stored in the memory 25 etc.
  • the imaging processing for defect pixel detection is imaging processing for performing imaging at different focus positions including an infinite focus position and a closest focus position to acquire image data of at least two evaluation images regarding at least the two focus positions.
  • Based on at least two evaluation images acquired in the imaging processing for defect pixel detection, the controller 21 performs, as the defect pixel detection module 215, detection processing for detecting a defect pixel of the image sensor 13. Optionally, the controller 21 detects a defect-pixel-like pattern from each of at least the two evaluation images. The controller 21 then detects, as a defect pixel, a pixel at a pixel position at which the defect-pixel-like pattern is commonly detected between at least the two evaluation images. The details of the detection processing will be described later.
  • FIGS. 3 and 4 are diagrams explaining evaluation images 301 and 303 acquired in the detection processing according to the first embodiment.
  • the imaging device 1 actively changes the focus position of the imaging device 1 to acquire at least the two evaluation images 301 and 303 having different focus positions.
  • the evaluation image 301 is an image captured at the infinite focus position.
  • the evaluation image 303 is an image captured at the closest focus position.
  • At the infinite focus position, an image point IP1 corresponding to a long-distance object point OP1 is in a focused state, but an image point IP2 corresponding to a short-distance object point OP2, which is located at a position different from the object point OP1 in the optical axis direction of the optical system 11, is not in a focused state.
  • At the closest focus position, the image point IP1 is not in a focused state, but the image point IP2 is. In this way, by imaging the same object points at different focus positions, a state where each image point is blurred, that is, a state where the contrast value of the corresponding image point is low, can be obtained at one of the focus positions.
  • When a subject is focused, blurring does not occur, that is, the contrast value of the subject image is high.
  • In that state, the subject image cannot be distinguished from a defect DP caused by a defect pixel, which also has a high contrast value.
  • By capturing frames at both the infinite and closest focus positions, each of the subjects can be blurred in the corresponding frame. Therefore, by comparing the evaluation images 301 and 303, for example, a pixel having a high contrast value in common between both frames captured at the infinite and closest focus positions, that is, a pixel at an image position detected as a defect-pixel-like pattern in both frames, can be detected as a defect pixel.
  • FIG. 5 is a diagram explaining defect pixel estimation in the detection processing according to the first embodiment.
  • it can be estimated whether an arbitrary pixel is a defect-pixel-like pattern based on a relationship between the arbitrary pixel and peripheral pixels.
  • the present estimation method for a defect pixel is only an example and thus it does not matter what an estimation method for a defect-pixel-like pattern is.
  • the pixel defect degree D and the determination value K for a pixel of each color can be respectively expressed by the following relational expressions. For example, together with the defect pixel list, the threshold Th for each color is assumed to be stored in the memory 25 or the like.
  • a defect pixel can be estimated without distinguishing between the Gr pixel and Gb pixel.
  • a pixel defect degree D for the Gb t pixel can be expressed by the following relational expression.
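Since the relational expressions themselves are not reproduced in this text, the following is a hedged sketch of a Bayer-aware pixel defect degree: comparing the target pixel against same-colour neighbours two pixels away keeps the comparison within one colour plane of the Bayer array. The stride-2 form and the simple threshold test are assumptions, not the patent's exact expressions.

```python
import numpy as np

def defect_degree_bayer(raw, r, c):
    """Assumed pixel defect degree D for a Bayer raw image: the absolute
    difference between the target pixel and the mean of its same-colour
    neighbours two pixels away (stride 2 stays on the same colour plane)."""
    h, w = raw.shape
    neigh = [float(raw[rr, cc])
             for rr, cc in ((r - 2, c), (r + 2, c), (r, c - 2), (r, c + 2))
             if 0 <= rr < h and 0 <= cc < w]
    return abs(float(raw[r, c]) - sum(neigh) / len(neigh))

def is_defect_candidate(raw, r, c, threshold):
    """D > Th marks the pixel as a defect-pixel-like candidate."""
    return defect_degree_bayer(raw, r, c) > threshold
```

When green is estimated without distinguishing Gr and Gb, the four diagonal green neighbours one pixel away could be used instead of the stride-2 neighbours.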
  • FIG. 6 is a flowchart illustrating an example of a flow of the detection processing according to the first embodiment. As described above, the flow of FIG. 6 is assumed to be executed for each pixel at a predetermined timing.
  • the controller 21 sets a focus position to an infinite distance (S101a) , and captures an image at the infinite focus position to acquire the first evaluation image 301 (S102a) . Similarly, the controller 21 changes the focus position to set the focus position to a closest distance (S103a) , and captures an image at the closest focus position to acquire the second evaluation image 303 (S104a) .
  • The controller 21 detects, as a defect pixel, a pixel that is detected as a defect-pixel-like pattern in common between the first evaluation image 301 and the second evaluation image 303, namely, in both the infinite and closest frames (S105a). After that, the controller 21 outputs the pixel position of the detected defect pixel to the memory 25, for example, and registers the pixel position in the defect pixel list (S106). In subsequent imaging, the image processing circuitry 23 performs a defect correction process that corrects the output of each defect pixel registered in the defect pixel list by using the output values of peripheral pixels, for example.
  • The flow of imaging at the infinite focus position first and then at the closest focus position has been exemplified with reference to FIG. 6, but the present embodiment is not limited to the above.
  • Imaging at the closest focus position first and then at the infinite focus position may also be performed.
  • As described above, the imaging device 1 is configured to detect, as a defect pixel, a pixel at a pixel position at which a defect-pixel-like pattern is commonly detected between the evaluation images 301 and 303 captured at the infinite and closest focus positions. According to this configuration, a defect-pixel-like pattern in a subject image can be blurred by shifting the focus, so that it is not detected as a defect-pixel-like pattern in at least one of the evaluation images. According to the technology of the present embodiment, it is possible to prevent the defect-pixel-like pattern included in the subject image from being mis-detected as a defect pixel and to appropriately detect a defect pixel on the image sensor 13. Moreover, because the imaging device is configured to change the focus position to obtain the evaluation images, it is possible to detect a defect pixel regardless of the presence or absence of movement of the camera.
  • Moreover, the imaging device 1 is configured to use the evaluation images 301 and 303 captured at the infinite and closest focus positions as the evaluation images having different focus positions. According to this configuration, the blur amount of a subject image can be maximized between the two evaluation images, that is, between the two frames whose focus positions differ. Maximizing the blur amount contributes to reducing mis-detection of a defect-pixel-like pattern in a subject image.
  • The detection of a defect pixel based on the two evaluation images 301 and 303 captured at the two focus positions, infinite and closest, has been exemplified, but the present embodiment is not limited to the above.
  • the detection of the defect pixel may be performed based on three or more evaluation images captured at different focus positions. According to this configuration, it is possible to improve the detection accuracy of a defect pixel.
  • FIG. 7 is a diagram explaining an evaluation image 305 acquired in detection processing according to the second embodiment. As illustrated in FIG. 7, a defect pixel can be detected based on the evaluation image 305 that is obtained while moving a focus position during exposure.
  • Moving a focus position may be referred to as a "change of focus position", a "sweep of focus position", or a "focus sweep" in the present embodiment.
  • a moving range of a focus position may be referred to as a sweep range.
  • a focus position is assumed to be continuously moved, for example.
  • the evaluation image 305 is an image obtained by the focus sweep during exposure
  • the evaluation image 305 can be expressed as an image with multiple focal points superimposed.
  • The evaluation image 305 corresponds to an image obtained by accumulating, in one frame, the pixel values of images captured at a plurality of focus positions, including the evaluation images 301 and 303. That is to say, if the sweep range of the focus position is larger than the total width of the depth of field and the depth of focus at the focused position of a subject, this subject is in a blurred state on the evaluation image 305, as illustrated in FIG. 7.
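The accumulation described above can be modelled numerically. The mean over discrete focus positions, and injecting the defect identically into every frame (a sensor defect is unaffected by optical blur), are modelling assumptions for illustration:

```python
import numpy as np

def sweep_evaluation_image(frames):
    """The evaluation image obtained by a focus sweep during exposure
    corresponds to accumulating the frames of every focus position,
    modelled here as their per-pixel mean."""
    return np.mean(np.stack(frames).astype(float), axis=0)

# Model: 10 focus positions. A subject point is bright only in the single
# frame where it is in focus, while a white-spot defect pixel adds the
# same bias at every focus position.
frames = []
for i in range(10):
    f = np.full((5, 5), 100.0)
    if i == 0:
        f[1, 1] = 250.0   # subject sharp only at this focus position
    f[3, 3] += 150.0      # defect pixel: constant bias in every frame
    frames.append(f)

ev = sweep_evaluation_image(frames)
```

In the accumulated frame the defect keeps its full contrast, while the subject point's contrast is diluted by the number of focus positions, which is exactly why the sweep makes the subject blur while leaving the defect detectable.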
  • the sweep range is a range from the infinite focus position to the closest focus position. It should be noted that the sweep range may be an arbitrary range within a range from the infinite focus position to the closest focus position.
  • FIG. 8 is a flowchart illustrating an example of a flow of detection processing according to the second embodiment.
  • the controller 21 sets a focus position to an infinite side (S101b) , and then starts exposure (S102b) .
  • the controller 21 sweeps the focus position to a close side during the exposure (S103b) , and acquires the evaluation image 305 (S104b) .
  • the controller 21 detects a defect pixel based on the evaluation image 305 captured while sweeping the focus position during the exposure (S105b) .
  • the flow of a focus sweep from the infinite side to the close side of the sweep range during exposure for one frame has been exemplified with reference to FIG. 8 but the present embodiment is not limited to the above.
  • the evaluation image 305 may be acquired by performing a focus sweep from the close side to the infinite side of the sweep range during exposure for one frame.
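The S101b to S105b flow above can be sketched against a hypothetical camera interface. The method names `set_focus`, `start_exposure`, `sweep_focus`, and `end_exposure`, the stub camera, and the toy threshold detector are all assumptions for illustration, not an API from the disclosure:

```python
def sweep_and_detect(camera, detect, threshold):
    camera.set_focus("infinite")       # S101b: set focus to the infinite side
    camera.start_exposure()            # S102b: start exposure
    camera.sweep_focus("closest")      # S103b: sweep focus during exposure
    image = camera.end_exposure()      # S104b: acquire evaluation image 305
    return detect(image, threshold)    # S105b: detect defect pixels

class StubCamera:
    """Minimal stand-in for the imaging unit 10, for illustration only."""
    def __init__(self, image):
        self._image, self.log = image, []
    def set_focus(self, pos): self.log.append(("focus", pos))
    def start_exposure(self): self.log.append("start")
    def sweep_focus(self, pos): self.log.append(("sweep", pos))
    def end_exposure(self):
        self.log.append("end")
        return self._image

def threshold_detect(image, th):
    """Toy detector: report positions whose value exceeds th."""
    return [(r, c) for r, row in enumerate(image)
            for c, v in enumerate(row) if v > th]
```

Swapping the order of the `set_focus` and `sweep_focus` arguments gives the close-to-infinite variant mentioned above.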
  • The imaging device 1 according to the present embodiment is configured to perform imaging while sweeping the focus position during exposure, instead of performing imaging at the infinite and closest focus positions. Even with this configuration, on the same principle as in the above embodiment, a defect-pixel-like pattern in a subject image can be blurred. Furthermore, because the imaging device 1 according to the present embodiment performs imaging while sweeping the focus position during exposure, the present embodiment has the further effect that defect pixel estimation processing can be completed within one frame. Moreover, because the estimation processing is completed within one frame, the frame memory and the defect-pixel-like pattern position memory need only hold one frame. Therefore, the imaging device 1 according to the present embodiment can reduce the throughput and memory usage related to defect pixel detection. In other words, the imaging device 1 can speed up processing related to defect pixel detection. Moreover, unlike the case of using two evaluation images, defect-pixel-like pattern positions on subjects cannot accidentally coincide between two frames.
  • the imaging device 1 according to the present embodiment can detect a blinking defect pixel.
  • FIG. 9 is a diagram explaining blinking defect pixel detection performed by the detection processing according to the second embodiment.
  • the imaging device 1 according to the present embodiment is configured to perform imaging while sweeping a focus position during exposure.
  • In the imaging device 1, because a blinking defect pixel may transition to a bright state while the focus position is being swept during exposure, the blinking defect pixel can easily be recorded on the evaluation image 305. It should be noted that, although a blinking defect pixel is darker than a defect pixel that constantly lights up, its blurring does not spread to the peripheral area when the focus position is swept, so the blinking defect pixel can be identified as a defect-pixel-like pattern even inside a subject image.
  • FIG. 10 is a diagram explaining a movement of a focus position during exposure in detection processing according to the third embodiment.
  • The depth of field becomes deeper as the focused position becomes more distant.
  • Accordingly, a distant subject image becomes hard to blur in response to a change in the focus position. For this reason, because of the depth of the depth of field, a defect-pixel-like pattern in a distant view is hard to blur even if the focus position is swept during exposure.
  • Therefore, a range A2, up to a focus position at which a close-side subject image can be sufficiently blurred, is regarded as the sweep range of the focus position during exposure.
  • Optionally, the sweep range of the focus position during exposure according to the present embodiment is the range A2 between the closest focus position and a focus position closer than infinity by a distance according to the depth of field at infinity.
  • In other words, the sweep range of the focus position during exposure according to the present embodiment is the range A2 obtained by excluding the range A3 of focus positions according to the depth of field at infinity from the range A1 of focus positions between infinity and the closest focus position.
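One way to realize the range A2 is to cap the far end of the sweep at the hyperfocal distance, beyond which the depth of field already reaches infinity. Using the standard hyperfocal formula here is an assumption consistent with, but not stated in, the description:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Standard hyperfocal distance H = f^2 / (N * c) + f, where c is the
    permissible circle-of-confusion diameter."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def sweep_range_a2(focal_mm, f_number, coc_mm, closest_mm):
    """Range A2: from the closest focus position up to the hyperfocal
    distance, i.e. range A1 (closest..infinity) minus range A3 (focus
    positions whose depth of field already covers infinity)."""
    return closest_mm, hyperfocal_mm(focal_mm, f_number, coc_mm)
```

For a 50 mm f/2 lens with c = 0.03 mm, the far end of A2 lands a little beyond 41.7 m; focus positions farther than that belong to A3 and are skipped.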
  • FIG. 11 is a diagram explaining a movement of a focus position during exposure in detection processing according to the fourth embodiment.
  • In the present embodiment, the movement speed of the focus position, that is, the speed at which the focus position is swept, is changed in accordance with the focus position.
  • Optionally, the sweep speed of the focus position is changed in accordance with the width of the depth of field, that is, the depth of the depth of field.
  • For example, the sweep speed of the focus position is set larger as the width of the depth of field is larger, as the depth of field is deeper, or as the focus position is farther.
  • The change in the sweep speed with respect to the width of the depth of field, the depth of the depth of field, or the change in the focus position may be set as appropriate so that a subject is similarly blurred regardless of the focus position.
  • the change in the sweep speed may be linear or may be non-linear.
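A sketch of such a speed profile, assuming the thin-lens approximation DoF ~ 2*N*c*s^2/f^2 (valid well inside the hyperfocal distance) and a free proportionality gain `k`; neither the formula nor the gain appears in the disclosure:

```python
def sweep_speed(s_mm, focal_mm, f_number, coc_mm, k=1.0):
    """Sweep speed proportional to the depth of field at focus distance
    s_mm, so that each distance receives a comparable amount of blur:
    deeper depth of field (farther focus) -> faster sweep."""
    dof_mm = 2.0 * f_number * coc_mm * s_mm ** 2 / focal_mm ** 2
    return k * dof_mm
```

Under this approximation the profile is quadratic in the focus distance, illustrating the point above that the change in sweep speed may be non-linear.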
  • FIG. 12 is a diagram illustrating an example of a functional configuration of the image processing device 20 according to the fifth embodiment.
  • the image processing device 20 includes, for example, all or some of the controller 21, the image processing circuitry 23, and the memory 25 in the imaging device 1 according to the embodiments described above.
  • the image processing device 20 realizes functions of an image acquisition module 201, the defect pixel detection module 215, and a defect correction module 203 by a processor executing a program developed on a memory.
  • As the image acquisition module 201, the processor acquires evaluation images captured by the external imaging device 1 or evaluation images stored in the memory 25.
  • the processor performs, as the defect correction module 203, a defect correction process for correcting an output value from a defect pixel with respect to image data.
  • the image acquisition module 201 is an example of the acquiring module.
  • the defect pixel detection module 215 is an example of the detecting module and the output module.
  • The modules 201, 203, and 215 may be realized by a single processor or by a combination of a plurality of independent processors. Moreover, each of the modules 201, 203, and 215 may be realized by being distributed to or integrated into a plurality of processors.
  • a part or the whole of processing executed by the imaging device 1 according to the present embodiment may be realized by software.
  • a program executed by the imaging device 1 according to the present embodiment is recorded and provided in a computer-readable recording medium, such as a CD-ROM, a flexible disk (FD), a CD-R, or a DVD (Digital Versatile Disk), in a file with an installable or executable format.
  • a program executed by the imaging device 1 according to the present embodiment may be configured to be provided by being stored on a computer connected to a network such as the Internet and being downloaded by way of the network.
  • a program executed by the imaging device 1 according to the present embodiment may be configured to be provided or distributed by way of a network such as the Internet.
  • a program executed by the imaging device 1 according to the present embodiment may be configured to be incorporated in advance into a ROM or the like and provided.
  • Imaging control module (Acquiring module)
  • Focus control module (Acquiring module)
  • IP1, IP2: Image point
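The sweep-speed adjustment summarized in the points above (a larger sweep speed for a wider depth of field, with a linear or non-linear mapping) can be sketched as follows; the function name, base speed, and gain are illustrative assumptions, not values fixed by the embodiments.

```python
def sweep_speed(dof_width, base=1.0, gain=0.5):
    """Focus-sweep speed that grows with the width of the depth of
    field, so that subjects are blurred by a similar amount regardless
    of the focus position. The linear mapping, base speed, and gain
    are illustrative assumptions; the mapping may also be non-linear.
    """
    return base + gain * dof_width
```

With this sketch, a deeper depth of field (wider `dof_width`) yields a proportionally faster sweep, keeping the per-subject blur roughly constant across focus positions.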

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A defect pixel on an image sensor can be appropriately detected. An image processing device according to one embodiment includes an acquiring module, a detecting module, and an output module. The acquiring module is configured to acquire at least two evaluation images captured at focus positions different from one another including an infinite focus position and a closest focus position. The detecting module is configured to detect a defect pixel of which a pixel defect degree is larger than a predetermined threshold in common between at least the two evaluation images, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels. The output module is configured to output the pixel position of the detected defect pixel.

Description

IMAGE PROCESSING DEVICE, IMAGING DEVICE, AND PROGRAM TECHNICAL FIELD
Embodiments of the present disclosure relate to an image processing device, an imaging device, and a program.
BACKGROUND
A defect pixel that does not output a correct output value may exist on an image sensor. A defect pixel may have, for example, a white spot that appears as a high-brightness point on an image due to the addition of a bias voltage to an output according to an amount of incident light, or a black spot that appears as a low-brightness point on the image due to low photoelectric sensitivity, and thus degrades image quality. For this reason, there have been known technologies of detecting a defect pixel by inspection at the factory or by dynamic defect pixel detection during use, and of correcting the output of the detected defect pixel, for example. For example, there has been known a technology of detecting a defect pixel based on a correlation with peripheral pixels, such as brightness differences between the defect pixel and the peripheral pixels.
[Prior Art document (s) ]
[Patent literature 1]
Japanese Laid-Open Patent Application 2014-230121
SUMMARY
[Problem to be Solved by the Invention]
However, a pixel corresponding to an image position of a focused subject also has brightness differences between the pixel and peripheral pixels. For this reason, when a defect-pixel-like pattern having brightness differences from its peripheral pixels is included in an image of the focused subject, this defect-pixel-like pattern may be misdetected as a defect pixel.
An object of embodiments of the present disclosure is to appropriately detect a defect pixel on an image sensor.
[Means for Solving Problem]
An image processing device according to an embodiment includes an acquiring module, a detecting module, and an output module. The acquiring module is configured to acquire at least two evaluation images captured at focus positions different from one another including an infinite focus position and a closest focus position. The detecting module is configured to detect a defect pixel of which a pixel defect degree is larger than a predetermined threshold in common between at least the two evaluation images, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels. The output module is configured to output the pixel position of the detected defect pixel.
An image processing device according to another embodiment includes an acquiring module, a detecting module, and an output module. The acquiring module is configured to acquire an evaluation image captured while moving a focus position during exposure. The detecting module is configured to detect a defect pixel of which a pixel defect degree is larger than a predetermined threshold, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels. The output module is configured to output the pixel position of the detected defect pixel.
A program according to further another embodiment causes a computer to execute: acquiring at least two evaluation images captured at focus positions different from one another including an infinite focus position and a closest focus position; detecting a defect pixel of which a pixel defect degree is larger than a predetermined threshold in common between at least the  two evaluation images, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels; and outputting the pixel position of the detected defect pixel.
A program according to still further another embodiment causes a computer to execute: acquiring an evaluation image captured while moving a focus position during exposure; detecting a defect pixel of which a pixel defect degree is larger than a predetermined threshold, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels; and outputting the pixel position of the detected defect pixel.
[Effect of the Invention]
According to the image processing device, the imaging device, and the program according to the embodiments of the present disclosure, it is possible to appropriately detect a defect pixel on an image sensor.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram illustrating an example of a configuration of an imaging device according to a first embodiment;
FIG. 2 is a diagram illustrating an example of a functional configuration of a controller according to the first embodiment;
FIG. 3 is a diagram explaining evaluation images acquired in detection processing according to the first embodiment;
FIG. 4 is a diagram explaining the evaluation images acquired in the detection processing according to the first embodiment;
FIG. 5 is a diagram explaining defect pixel estimation in the detection processing according to the first embodiment;
FIG. 6 is a flowchart illustrating an example of a flow of the detection processing according to the first embodiment;
FIG. 7 is a diagram explaining an evaluation image acquired in detection processing according to a second embodiment;
FIG. 8 is a flowchart illustrating an example of a flow of the detection processing according to the second embodiment;
FIG. 9 is a diagram explaining blinking defect pixel detection performed by the detection processing according to the second embodiment;
FIG. 10 is a diagram explaining a movement of a focus position during exposure in detection processing according to a third embodiment;
FIG. 11 is a diagram explaining a movement of a focus position during exposure in detection processing according to a fourth embodiment; and
FIG. 12 is a diagram illustrating an example of a functional configuration of an image processing device according to a fifth embodiment.
DETAILED DESCRIPTION
Hereinafter, an image processing device, an imaging device, and a program according to embodiments will be described in detail with reference to the accompanying drawings. It should be noted that the present invention is not limited to the embodiments.
In the description of the present embodiments, components having the same or substantially the same functions as the previously described components for the previous drawing have the same reference numbers, and their descriptions may be omitted as appropriate. Moreover, even if the same or substantially the same part is illustrated, the dimension and ratio of the part may be different depending on a drawing. Moreover, from the viewpoint of ensuring visibility of drawings, for example, reference numbers may be assigned to only main components for description of each drawing, and components having the same or substantially the same functions as the previously described components for the previous drawing may not have reference numbers.
A defect pixel that does not output a correct output value may exist on an image sensor. Because a defect pixel has, for example, a white spot that appears as a high-brightness point on an image due to the addition of a bias voltage to an output according to an amount of incident light or a black spot that appears as a low-brightness point on the image due to low photoelectric sensitivity, this leads to decreasing an image quality.
To improve image quality by correcting output values of defect pixels, pixel positions of defect pixels detected by inspection at the factory are registered and stored in a memory or the like, for example. Moreover, to handle defect pixels newly generated after shipment, pixel positions of defect pixels detected by dynamic defect pixel detection during use are additionally registered, for example.
For example, there has been known a technology of detecting a defect pixel based on a correlation with peripheral pixels, such as brightness differences between the defect pixel and the peripheral pixels. However, a pixel corresponding to an image position of a focused subject also has brightness differences between the pixel and peripheral pixels. For this reason, when a defect-pixel-like pattern is included in an image of the focused subject, this defect-pixel-like pattern may be misdetected as a defect pixel. Herein, similar to a defect caused by a defect pixel, the defect-pixel-like pattern means an image, or the whole or a part of a shape of the image, which has brightness differences between itself and peripheral pixels. In other words, the defect-pixel-like pattern means an image, or the whole or a part of a shape of the image, which has a high contrast value.
Moreover, for example, there is a technology of performing exposure in a state where a mechanical optical shutter is closed and detecting a defect pixel that has a white spot. However, there is a problem that a defect pixel cannot be detected in an imaging device on which an optical shutter is not mounted.
Moreover, for example, there is a technology of detecting, as a defect pixel, a pixel corresponding to a defect-pixel-like pattern whose position on an image does not vary when an imaging device is moving due to a camera shake or the like. However, there is a problem that a defect pixel cannot be detected when the imaging device does not move, for example, when the imaging device is fixed to a tripod or the like. Moreover, when both the imaging device and a subject are moving, there is a problem that the position of the subject on an image may not vary even though the subject is moving, and thus a part or the whole of the subject image is misdetected as a defect-pixel-like pattern.
Moreover, for example, there is a technology of suppressing misdetection by registering as a defect pixel only a pixel corresponding to a defect-pixel-like pattern detected a predetermined number of times or more. However, there is a problem that it takes time to detect a defect pixel, so that a time lag occurs until the defect pixel is corrected.
Therefore, an image processing device, an imaging device, and a program, which can appropriately detect a defect pixel on an image sensor, will be described in the following embodiments.
First Embodiment
FIG. 1 is a diagram illustrating an example of a configuration of an imaging device 1 according to the first embodiment. In the present embodiment, the imaging device 1 equipped with an image processing device 20 is exemplified as illustrated in FIG. 1. As illustrated in FIG. 1, the imaging device 1 includes an imaging unit 10, a controller 21, an image processing circuitry 23, and a memory 25. The imaging unit 10, the controller 21, the image processing circuitry 23, and the memory 25 are connected to be able to communicate with one another via a signal line such as a bus 31.
The imaging unit 10 images a subject field to generate image data. As illustrated in FIG. 1, the imaging unit 10 includes an optical system 11, an image sensor 13, an analog processing circuitry 15, and an A/D converter circuitry 17.
The optical system 11 includes an optical element configured to form an image of a light beam from a subject on an imaging surface 131 of the image sensor 13. It should be noted that FIG. 1 exemplifies a single lens as the optical element of the optical system 11 but the present embodiment is not limited to the above. The optical system 11 may have desired imaging performance by at least one optical element having power. In other words, the optical system 11 may be composed of a compound lens that includes at least one single lens, or may be composed of a combination of a lens system and a reflection system.
The image sensor 13 images a subject field to generate an image signal. The image sensor 13 is arranged on an optical axis of the optical system 11. The image sensor 13 is arranged at a position at which the image of the light beam from the subject is formed by the optical system 11. The image sensor 13 can appropriately employ a solid-state imaging device such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor) sensor. The image sensor 13 has a configuration in which a plurality of light receiving units constituting a plurality of pixels are arrayed in a two-dimensional manner. As an example, the image sensor 13 is a solid-state imaging device having a color filter with a Bayer array structure. The Bayer array structure means an array structure of a color filter in which lines on which R pixels and Gr pixels are alternately arrayed in a horizontal direction and lines on which B pixels and Gb pixels are alternately arrayed in a horizontal direction are alternately arranged in a vertical direction.
The analog processing circuitry 15 performs analog processing such as amplification processing with respect to an image signal read from the image sensor 13.
The A/D converter circuitry 17 converts an image signal output from the analog processing circuitry 15 into digital-format image data.
It should be noted that the imaging unit 10 is configured to be able to change a focus position. Herein, "to be able to change a focus position" means that an image formed on the imaging surface 131 can be made smaller than a diameter of a permissible circle of confusion for each of at least two object points that exist at different positions in an optical axis direction of the optical system 11. The diameter of the permissible circle of confusion is defined depending on, for example, a pixel pitch of the image sensor 13 or imaging performance of the optical system 11. In other words, the imaging unit 10 is configured to be able to focus or blur (bokeh) an arbitrary subject. Specifically, the imaging unit 10 is configured to be able to move at least one of an image-side focus position of the optical system 11, an object-side focus position of the optical system 11, and the imaging surface 131 of the image sensor 13 in the optical axis direction of the optical system 11.
The controller 21 controls each component of the imaging device 1 in accordance with a program stored in the memory 25. The controller 21 includes a processor and a memory as hardware resources. The processor can appropriately employ various processors such as a CPU (Central Processing Unit), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), and an FPGA (Field-Programmable Gate Array). Moreover, the memory can appropriately employ various memories such as a ROM (Read Only Memory), a flash memory, and a RAM (Random Access Memory). It should be noted that the controller 21 may employ a microcomputer.
The image processing circuitry 23 performs various image processing required for displaying and recording an image with respect to the image data. The image processing includes, for example, an optical black (OB) subtraction process, a white balance (WB) correction process, a demosaic process, a color conversion process, a gamma conversion process, a noise reduction process, an enlargement/reduction process, a compression process, and the like. Moreover, the image processing circuitry 23 performs a defect correction process for correcting an output value from a defect pixel with respect to the image data. As an example, the image processing circuitry 23 performs a defect correction process for correcting an output from a defect pixel registered in a defect pixel list by using output values of peripheral pixels, for example.
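As a minimal sketch of such a defect correction process, the function below replaces each registered defect pixel with the mean of its same-color neighbors two photosites away on a Bayer mosaic; the averaging formula and the four-neighbor choice are illustrative assumptions, as the embodiment does not fix a specific interpolation.

```python
def correct_defects(raw, defect_list):
    """Return a copy of the raw mosaic (list of rows) in which each
    registered defect pixel (y, x) is replaced by the mean of its
    same-color neighbours two photosites away (period-2 Bayer grid).
    The averaging rule is an illustrative assumption."""
    h, w = len(raw), len(raw[0])
    out = [row[:] for row in raw]
    for y, x in defect_list:
        samples = [raw[ny][nx]
                   for ny, nx in ((y - 2, x), (y + 2, x), (y, x - 2), (y, x + 2))
                   if 0 <= ny < h and 0 <= nx < w]
        if samples:
            out[y][x] = sum(samples) / len(samples)
    return out
```

The input mosaic is left untouched; only the returned copy carries the interpolated values, matching the pipeline in which corrected image data is handed on to subsequent processing.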
The memory 25 stores therein a program required for operations of the imaging device 1. Moreover, the memory 25 stores therein information required for various processes of the imaging device 1. This information includes, for example, information on pixel positions of defect pixels and information on parameters of image processing. Moreover, the memory 25 temporarily stores therein the image data output from the A/D converter circuitry 17 or the image processing circuitry 23 and various data such as processing data in the controller 21. The memory 25 includes a nonvolatile memory such as a ROM and a flash memory, and a volatile memory such as a DRAM (Dynamic RAM), an SDRAM (Synchronous DRAM), and an SRAM (Static RAM).
FIG. 2 is a diagram illustrating an example of a functional configuration of the controller 21 according to the first embodiment. The controller 21 realizes functions of an imaging control module 211, a focus control module 213, and a defect pixel detection module 215 by the processor executing a program developed on the memory. Herein, the imaging control module 211 and the focus control module 213 are an example of an acquiring module. Moreover, the defect pixel detection module 215 is an example of a detecting module and an output module.
It should be noted that the modules 211, 213, and 215 may be realized by a single processor or may be realized by a combination of a plurality of independent processors. Moreover, each of the modules 211, 213, and 215 may be realized by being distributed to or integrated into a plurality of processors.
Based on an AE evaluation value indicating a subject brightness in the image data, the controller 21 performs, as the imaging control module 211, automatic exposure (AE) processing for setting imaging conditions that include an aperture value and a shutter speed value. The controller 21 performs the AE processing by using a first release operation of a user as a trigger, for example. Herein, the first release operation includes an operation of tapping an arbitrary subject on a touch panel display during live view display. Moreover, the controller 21 performs imaging processing for controlling the imaging unit 10 to acquire image data. The controller 21 performs the imaging processing by using a second release operation of the user as a trigger, for example. The controller 21 performs imaging processing for defect pixel detection at a predetermined timing, for example. The predetermined timing may be a timing of a user's operation instructing the controller to execute the defect pixel detection, such as a check mode selection, may be a predetermined periodic timing, or may be a timing at every predetermined number of power-ups or imaging operations of the imaging device 1. The predetermined timing is set in advance at the time of shipment or by a user setting and is stored in the memory 25 or the like. The imaging processing for defect pixel detection performs imaging at different focus positions, including an infinite focus position and a closest focus position, to acquire image data of at least two evaluation images for at least the two focus positions.
Based on focus information acquired from the image data and the like, the controller 21 performs, as the focus control module 213, automatic focus (AF) adjustment processing for controlling the drive of a focusing lens included in the optical system 11. The focus information is, for example, an AF evaluation value (contrast value) computed from the image data. Moreover, when the image sensor 13 is configured to have a focus detection pixel, the focus information may be a defocusing amount computed from the output of the focus detection pixel. In the imaging processing for defect pixel detection, the controller 21 drives at least one of the image sensor 13 and a part or the whole of the optical system 11 to set the focus position to the infinite or closest focus position.
Based on at least two evaluation images acquired in the imaging processing for defect pixel detection, the controller 21 performs, as the defect pixel detection module 215, detection processing for detecting a defect pixel of the image sensor 13. Specifically, the controller 21 detects a defect-pixel-like pattern from each of at least the two evaluation images. The controller 21 then detects, as a defect pixel, a pixel at a pixel position at which the defect-pixel-like pattern is commonly detected between at least the two evaluation images. The details of the detection processing will be described later.
Moreover, the controller 21 outputs, as the defect pixel detection module 215, the pixel position of the detected defect pixel. The pixel position of the defect pixel output from the controller 21 is used in the defect correction process performed by the image processing circuitry 23, for example. Moreover, when this pixel position is not registered in the defect pixel list stored in the memory 25, the pixel position is additionally registered in the defect pixel list.
FIGS. 3 and 4 are diagrams explaining evaluation images 301 and 303 acquired in the detection processing according to the first embodiment.
In the imaging processing for defect pixel detection according to the embodiment, the imaging device 1 actively changes its focus position to acquire at least the two evaluation images 301 and 303 having different focus positions. In the example illustrated in FIGS. 3 and 4, the evaluation image 301 is an image captured at the infinite focus position. The evaluation image 303 is an image captured at the closest focus position.
For example, as illustrated in (a1) and (a2) of FIG. 3, in the case of the infinite focus position, an image point IP1 corresponding to a long-distance object point OP1 is in a focused state, but an image point IP2 corresponding to a short-distance object point OP2, which is located at a position different from the object point OP1 in the optical axis direction of the optical system 11, is not in a focused state. Moreover, for example, as illustrated in (b1) and (b2) of FIG. 3, in the case of the closest focus position, the image point IP1 is not in a focused state, but the image point IP2 is in a focused state. In this way, by imaging the same object points at different focus positions, each image point can be brought into a blurred state, that is, a state where the contrast value of the corresponding image point is low, at one of the focus positions.
Because a focused subject is in a state where blurring does not occur, that is, a state where the contrast value of the subject image is high, the subject image cannot be distinguished from a defect DP caused by a defect pixel, which also has a high contrast value. In this situation, as illustrated in (a) and (b) of FIG. 4, by imaging subjects at both the infinite and closest focus positions, each subject can be blurred in the corresponding frame. Therefore, by comparing the evaluation images 301 and 303, a pixel at an image position having a high contrast value in common between both frames captured at the infinite and closest focus positions, that is, a pixel detected as a defect-pixel-like pattern in both frames, can be detected as a defect pixel.
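The frame comparison described above can be sketched as follows; the single-plane four-neighbor criterion is a simplified stand-in (an assumption) for the per-color estimation described with FIG. 5, and only pixels that look defect-like in both frames are reported.

```python
def defect_candidates(img, th):
    """Mark pixels whose absolute difference from the mean of their
    4 neighbours exceeds th (a simplified, single-plane stand-in for
    the per-colour defect pixel estimation)."""
    h, w = len(img), len(img[0])
    cand = set()
    for y in range(h):
        for x in range(w):
            neigh = [img[ny][nx]
                     for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w]
            if abs(img[y][x] - sum(neigh) / len(neigh)) > th:
                cand.add((y, x))
    return cand

def detect_common_defects(eval_inf, eval_near, th):
    """Report a pixel as a defect only if it is defect-pixel-like in
    BOTH the infinite-focus and closest-focus evaluation images."""
    return sorted(defect_candidates(eval_inf, th) & defect_candidates(eval_near, th))
```

A focused-subject pattern appears in at most one of the two frames (it is blurred in the other), so the set intersection removes it while a genuine defect pixel, which keeps its value at every focus position, survives.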
FIG. 5 is a diagram explaining defect pixel estimation in the detection processing according to the first embodiment. As an example, whether an arbitrary pixel corresponds to a defect-pixel-like pattern can be estimated based on a relationship between the arbitrary pixel and its peripheral pixels. It should be noted that the present estimation method for a defect pixel is only an example, and any estimation method for a defect-pixel-like pattern may be used.
For example, the controller 21 performs the defect pixel estimation on each pixel in accordance with pixel-value differences between the corresponding pixel and the same-color peripheral pixels. In the example illustrated in FIG. 5, an R pixel, a Gr pixel, a Gb pixel, and a B pixel that are targets of the defect pixel estimation are respectively regarded as an R_t pixel, a Gr_t pixel, a Gb_t pixel, and a B_t pixel.
A pixel of which a pixel defect degree D is larger than a predetermined threshold Th is estimated as a defect pixel (K = 1). On the other hand, a pixel of which a pixel defect degree D is not larger than the predetermined threshold Th is estimated as a non-defect pixel (K = 0). The pixel defect degree D and the determination value K for a pixel of each color can be respectively expressed by relational expressions; for each of the R_t, Gr_t, Gb_t, and B_t pixels, D is computed from pixel-value differences between the target pixel and the same-color peripheral pixels, and K = 1 when D > Th, K = 0 otherwise. For example, together with the defect pixel list, the threshold Th for each color is assumed to be stored in the memory 25 or the like. [The original relational expressions for the R_t, Gr_t, Gb_t, and B_t pixels are provided as equation images and are not reproduced here.]
It should be noted that a defect pixel can also be estimated without distinguishing between the Gr pixel and the Gb pixel. In that case, the pixel defect degree D for a G_t pixel is computed from all same-color (G) peripheral pixels, whether Gr or Gb. [The original relational expression is provided as an equation image and is not reproduced here.]
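The per-color relational expressions referenced above are given as equation images in the publication, so the sketch below implements one plausible form as an assumption: D is taken as the absolute difference between the target pixel and the mean of the same-color neighbors two photosites away on the Bayer mosaic, and the determination value K is 1 when D exceeds Th.

```python
def pixel_defect_degree(raw, y, x):
    """Pixel defect degree D for the pixel at (y, x) on a Bayer mosaic
    (list of rows): absolute difference from the mean of the up-to-8
    same-colour neighbours two photosites away. The exact expression in
    the publication is an equation image; this averaging form is an
    assumption."""
    h, w = len(raw), len(raw[0])
    samples = [raw[y + dy][x + dx]
               for dy in (-2, 0, 2) for dx in (-2, 0, 2)
               if (dy, dx) != (0, 0)
               and 0 <= y + dy < h and 0 <= x + dx < w]
    return abs(raw[y][x] - sum(samples) / len(samples))

def is_defect(raw, y, x, th):
    """Determination value K: 1 if D > Th, else 0."""
    return 1 if pixel_defect_degree(raw, y, x) > th else 0
```

Because the neighbors are sampled at a stride of two, the same function serves R, Gr, Gb, and B target pixels on the period-2 Bayer grid.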
Herein, an example of an operation of the imaging device 1 according to the embodiment will be explained. FIG. 6 is a flowchart illustrating an example of a flow of the detection processing according to the first embodiment. As described above, the flow of FIG. 6 is assumed to be executed for each pixel at a predetermined timing.
The controller 21 sets a focus position to an infinite distance (S101a) , and captures an image at the infinite focus position to acquire the first evaluation image 301 (S102a) . Similarly, the controller 21 changes the focus position to set the focus position to a closest distance (S103a) , and captures an image at the closest focus position to acquire the second evaluation image 303 (S104a) .
The controller 21 detects, as a defect pixel, a pixel that is detected as a defect-pixel-like pattern in common between the first evaluation image 301 and the second evaluation image 303, namely, in both the infinite and closest frames (S105a). After that, the controller 21 outputs the pixel position of the detected defect pixel to the memory 25, for example, and registers the pixel position in the defect pixel list (S106). In subsequent imaging, the image processing circuitry 23 performs a defect correction process for correcting the output of a defect pixel registered in the defect pixel list by using output values of peripheral pixels, for example.
It should be noted that the flow of performing imaging at the infinite focus position and then imaging at the closest focus position has been exemplified with reference to FIG. 6 but the present embodiment is not limited to the above. For example, imaging at the closest focus position and then imaging at the infinite focus position may be performed.
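The flow of steps S101a through S106 can be sketched against a hypothetical camera interface; `FakeCamera`, `set_focus`, and `capture` are illustrative assumptions rather than an actual API, and the candidate criterion is a simplified stand-in for the per-color estimation of FIG. 5.

```python
class FakeCamera:
    """Hypothetical stand-in for the imaging unit; set_focus/capture
    are assumed method names, not a real camera API."""
    def __init__(self, frames):
        self.frames, self.focus = frames, None
    def set_focus(self, position):
        self.focus = position
    def capture(self):
        return self.frames[self.focus]

def candidates(img, th):
    # defect-pixel-like: differs from the mean of its 4 neighbours by > th
    h, w = len(img), len(img[0])
    out = set()
    for y in range(h):
        for x in range(w):
            neigh = [img[ny][nx]
                     for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w]
            if abs(img[y][x] - sum(neigh) / len(neigh)) > th:
                out.add((y, x))
    return out

def run_detection(camera, defect_list, th):
    camera.set_focus("infinity")            # S101a
    eval_inf = camera.capture()             # S102a
    camera.set_focus("closest")             # S103a
    eval_near = camera.capture()            # S104a
    common = candidates(eval_inf, th) & candidates(eval_near, th)  # S105a
    for pos in sorted(common):              # S106: register new positions
        if pos not in defect_list:
            defect_list.append(pos)
    return defect_list
```

As noted above, the two captures may equally be taken in the reverse order; only the intersection of the two candidate sets matters.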
As described above, the imaging device 1 according to the present embodiment is configured to detect, as a defect pixel, a pixel at a pixel position at which a defect-pixel-like pattern is commonly detected between the evaluation images 301 and 303 captured at the infinite and closest focus positions. According to this configuration, it is possible to blur a defect-pixel-like pattern in a subject image by shifting the focus, so that the pattern does not appear as a defect-pixel-like pattern in at least one of the evaluation images. According to the technology of the present embodiment, it is possible to prevent a defect-pixel-like pattern included in a subject image from being misdetected as a defect pixel and to appropriately detect a defect pixel on the image sensor 13. Moreover, because the imaging device changes the focus position to obtain the evaluation images, a defect pixel can be detected regardless of the presence or absence of movement of the camera.
It should be noted that, depending on the focus position of one evaluation image, it may be difficult to obtain a sufficient blurred amount due to limitations on the optical system 11 and imaging conditions. In this situation, the imaging device 1 according to the present embodiment is configured to use the evaluation images 301 and 303 captured at the infinite and closest focus positions as the evaluation images having different focus positions. According to this configuration, it is possible to maximize the blurred amount of a subject image between the two evaluation images, that is, between the two frames whose focus positions are different. The maximization of the blurred amount contributes to reducing misdetection of a defect-pixel-like pattern in a subject image.
It should be noted that, in the present embodiment, the detection of a defect pixel based on the two evaluation images 301 and 303 captured at the two focus positions, namely the infinite and closest focus positions, has been exemplified, but the present embodiment is not limited to the above. The detection of the defect pixel may be performed based on three or more evaluation images captured at different focus positions. According to this configuration, it is possible to improve the detection accuracy of a defect pixel.
Second Embodiment
Herein, the differences from the first embodiment will be mainly explained, and the explanation of the duplicated contents will be omitted as appropriate.
In the first embodiment, the detection of a pixel defect using the evaluation images 301 and 303 captured at the infinite and closest focus positions has been exemplified, but the present embodiment is not limited to the above. FIG. 7 is a diagram explaining an evaluation image 305 acquired in detection processing according to the second embodiment. As illustrated in FIG. 7, a pixel defect can be detected based on the evaluation image 305 that is obtained while moving a focus position during exposure.
It should be noted that the movement of a focus position may be referred to as "change of focus position" , "sweep of focus position" , or "focus sweep" in the present embodiment. Similarly, a moving range of a focus position may be referred to as a sweep range. In the focus sweep during exposure, a focus position is assumed to be continuously moved, for example.
Because the evaluation image 305 is obtained by the focus sweep during exposure, it can be regarded as an image in which multiple focal points are superimposed. In other words, the evaluation image 305 corresponds to an image obtained by accumulating, in one frame, the pixel values of images captured at a plurality of focus positions, including the evaluation images 301 and 303. That is to say, if the sweep range of the focus position is larger than the total width of the depth of field and the depth of focus at a subject's in-focus position, the subject is in a blurred state on the evaluation image 305, as illustrated in FIG. 7. As an example, the sweep range is the range from the infinite focus position to the closest focus position. It should be noted that the sweep range may be an arbitrary range within the range from the infinite focus position to the closest focus position.
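The accumulation model described above can be sketched as follows: averaging a hypothetical focal stack stands in for accumulating charge while the focus position sweeps during a single exposure, so a defect pixel keeps its value at every focus position while a subject that is in focus at only one position loses contrast.

```python
def sweep_evaluation_image(focal_stack):
    """Model the focus-sweep exposure: average the frames that would
    be captured at each focus position during the sweep (a stand-in,
    under assumption, for charge accumulation in one exposure)."""
    n = len(focal_stack)
    h, w = len(focal_stack[0]), len(focal_stack[0][0])
    return [[sum(f[y][x] for f in focal_stack) / n for x in range(w)]
            for y in range(h)]
```

In the averaged frame, a stuck pixel remains at its full value, whereas a subject pixel that is bright in only one focus slice is diluted toward the background, which is why the single evaluation image 305 suffices for the estimation.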
FIG. 8 is a flowchart illustrating an example of a flow of detection processing according to the second embodiment. The controller 21 sets a focus position to the infinite side (S101b), and then starts exposure (S102b). The controller 21 sweeps the focus position to the close side during the exposure (S103b), and acquires the evaluation image 305 (S104b). The controller 21 detects a defect pixel based on the evaluation image 305 captured while sweeping the focus position during the exposure (S105b).
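Steps S101b-S105b can be sketched as an accumulation over the sweep. The callback `capture_at` and the step-wise discretisation of the continuous sweep are assumptions made for this illustration, not part of the publication:

```python
import numpy as np

def capture_with_focus_sweep(capture_at, focus_positions):
    # S101b/S102b: start at the infinite side and begin the exposure;
    # S103b: sweep toward the close side while light keeps accumulating;
    # S104b: the normalised accumulation is the evaluation image 305.
    acc = np.zeros_like(capture_at(focus_positions[0]), dtype=np.float64)
    for f in focus_positions:
        acc += capture_at(f)
    return acc / len(focus_positions)
```

Because a sensor defect contributes identically at every focus position while a subject point drifts out of focus, the defect survives the averaging undiminished and the subject detail is attenuated, which is what S105b exploits.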
It should be noted that the flow of a focus sweep from the infinite side to the close side of the sweep range during exposure for one frame has been exemplified with reference to FIG. 8, but the present embodiment is not limited to the above. For example, the evaluation image 305 may be acquired by performing a focus sweep from the close side to the infinite side of the sweep range during exposure for one frame.
As described above, the imaging device 1 according to the present embodiment is configured to perform imaging while sweeping the focus position during exposure, instead of performing imaging separately at the infinite and closest focus positions. Even with this configuration, on the same principle as in the above embodiment, a defect-pixel-like pattern in a subject image can be blurred. Furthermore, because the imaging device 1 according to the present embodiment performs imaging while sweeping the focus position during exposure, defect pixel estimation processing can be completed within one frame. Moreover, because the defect pixel estimation processing is completed within one frame, the frame memory and the defect-pixel-like pattern position memory need only hold one frame. Therefore, the imaging device 1 according to the present embodiment can reduce the processing load and the memory usage related to defect pixel detection; in other words, the imaging device 1 can speed up processing related to defect pixel detection. Moreover, unlike the case of using two evaluation images, defect-pixel-like pattern positions on a subject cannot accidentally coincide between two frames.
Moreover, the imaging device 1 according to the present embodiment can detect a blinking defect pixel. FIG. 9 is a diagram explaining blinking defect pixel detection performed by the detection processing according to the second embodiment. As described above, the imaging device 1 according to the present embodiment is configured to perform imaging while sweeping a focus position during exposure.
Commonly, to record defects DP1 and DP2 caused by a blinking defect pixel on an evaluation image, it is necessary to perform imaging at a timing at which the blinking defect pixel lights up and transitions to a bright state. However, in a state where the blinking defect pixel has not been detected as a defect pixel, its pixel position is uncertain and, naturally, its lighting-up timing is also uncertain, so it is necessary to acquire many evaluation images in order to obtain an evaluation image 302 on which the blinking defect pixel is recorded.
On the other hand, in the imaging device 1 according to the present embodiment, because a blinking defect pixel may transition to a bright state while the focus position is being swept during exposure, the blinking defect pixel can easily be recorded on the evaluation image 305. It should be noted that, although a blinking defect pixel is darker than a defect pixel that constantly lights up, its blur does not spread to the peripheral area when the focus position is swept, and therefore the blinking defect pixel can be identified as a defect-pixel-like pattern inside a subject image.
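A rough numerical sketch of why a sweep exposure records a blinking defect: if the defect is lit for only part of the exposure, it accumulates a proportionally smaller, but still sharp, excess over its neighbours. The defect position, the duty cycle, and the flat-scene simplification are illustrative assumptions only.

```python
import numpy as np

def sweep_exposure(num_steps, blink_duty, defect_pos=(4, 4)):
    # Flat scene (every real pattern already blurred by the sweep) plus a
    # blinking defect that is lit for `blink_duty` of the exposure. The
    # defect sits on the sensor, so the sweep cannot blur it; it merely
    # comes out dimmer than a constantly lit defect would.
    acc = np.zeros((9, 9))
    lit_steps = round(num_steps * blink_duty)
    for step in range(num_steps):
        frame = np.full((9, 9), 10.0)
        if step < lit_steps:            # the actual lighting-up timing is unknown;
            frame[defect_pos] += 200.0  # here it happens early in the sweep
        acc += frame
    return acc / num_steps
```

Even at a 30% duty cycle the defect remains far brighter than the flat surroundings, so a single sweep exposure suffices where the two-image method might need many frames.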
Third Embodiment
Herein, the differences from the second embodiment will be mainly explained, and the explanation of the duplicated contents will be omitted as appropriate.
In the second embodiment, the case where the range of the focus position from the infinite to the closest is regarded as the sweep range during exposure has been exemplified, but the present embodiment is not limited to the above. FIG. 10 is a diagram explaining a movement of a focus position during exposure in detection processing according to the third embodiment. Typically, the depth of field becomes deeper as the focused position becomes more distant. Moreover, as the depth of field becomes deeper, the image becomes harder to blur in response to a change in the focus position. For this reason, because of this deep depth of field, a defect-pixel-like pattern in a distant view is hard to blur even if the focus position is swept during exposure. In other words, at a focus position in a range A3 that is farther than the subject distance at which infinity falls within the depth of field, an infinite subject is imaged without blurring. Therefore, a defect-pixel-like pattern on an infinite subject is hard to blur even by a sweep over the range A1 of the focus position during exposure, and may be difficult to distinguish from a defect pixel.
Therefore, in the imaging device 1 according to the present embodiment, a range A2 up to a focus position at which the close-side subject image can be sufficiently blurred is regarded as the sweep range of the focus position during exposure. In other words, the sweep range of the focus position during exposure according to the present embodiment is the range A2 between the closest focus position and a focus position closer than infinity by a distance according to the depth of field at infinity. Alternatively, the sweep range of the focus position during exposure according to the present embodiment is the range A2 obtained by excluding the range A3 of the focus position according to the depth of field at infinity from the range A1 of the focus position between infinity and the closest focus position.
According to this configuration, because a defect-pixel-like pattern on a distant subject difficult to distinguish from a defect pixel can be excluded from the evaluation image 305, it is possible to suppress an accuracy decrease in defect pixel detection.
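The boundary between the ranges A2 and A3 corresponds to the hyperfocal distance of the optical system, H = f^2/(N*c) + f: focusing at or beyond H places infinity within the depth of field. This identification, the formula, and the example values below are assumptions made for illustration; the publication does not state them explicitly.

```python
def hyperfocal_distance_mm(focal_length_mm, f_number, coc_mm):
    # H = f^2 / (N * c) + f. Focusing at or beyond H puts infinity inside
    # the depth of field, i.e. the focus position enters range A3 and an
    # infinitely distant subject is imaged without blur.
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

def sweep_range_mm(closest_mm, focal_length_mm, f_number, coc_mm):
    # Range A2: sweep from the closest focus distance up to, but not
    # beyond, the hyperfocal distance, so that range A3 is excluded.
    return closest_mm, hyperfocal_distance_mm(focal_length_mm, f_number, coc_mm)
```

For a smartphone-like module (f = 4 mm, f/1.8, circle of confusion 2 um) this puts the far end of the sweep range at roughly 4.4 m rather than at infinity.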
Fourth Embodiment
Herein, the differences from the third embodiment will be mainly explained, and the explanation of the duplicated contents will be omitted as appropriate.
In the third embodiment, a configuration in which an accuracy decrease in defect pixel detection is suppressed by excluding, from the sweep range, a range of the focus position in which a distant subject is hard to blur has been exemplified, but the present embodiment is not limited to the above. FIG. 11 is a diagram explaining a movement of a focus position during exposure in detection processing according to the fourth embodiment.
In the imaging device 1 according to the present embodiment, the movement speed of the focus position, that is, the speed at which the focus position is swept, is changed in accordance with the focus position. In other words, in the imaging device 1 according to the present embodiment, the sweep speed of the focus position is changed in accordance with the width of the depth of field, that is, the depth of the depth of field.
It should be noted that, from the viewpoint of guaranteeing the accuracy of defect pixel detection, it is desirable that a subject be similarly blurred regardless of the focus position. As an example, as illustrated by an arrow A4 of FIG. 11, the sweep speed of the focus position is set higher as the width of the depth of field is larger, that is, as the depth of field is deeper or as the focus position is farther. It should be noted that the change in the sweep speed with respect to the width of the depth of field, the depth of the depth of field, or a change in the focus position may be set as appropriate so that a subject is similarly blurred regardless of the focus position. In other words, the change in the sweep speed may be linear or non-linear.
According to this configuration, because a defect-pixel-like pattern on a distant subject difficult to distinguish from a defect pixel can be easily made blurred, it is possible to suppress an accuracy decrease in defect pixel detection. It should be noted that the technology according to the present embodiment can also be applied to the imaging device 1 according to the second embodiment.
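One way to realise a sweep speed that tracks the depth of field is to make it proportional to an approximate DOF width, which for a focus distance s well below the hyperfocal distance is roughly 2*N*c*s^2/f^2. Both this approximation and the linear proportionality are assumptions for illustration; as noted above, the mapping may equally be non-linear.

```python
def dof_width_mm(s_mm, focal_length_mm, f_number, coc_mm):
    # Approximate total depth of field around focus distance s:
    # DOF ~= 2 * N * c * s^2 / f^2, valid while s is well below the
    # hyperfocal distance; it widens roughly quadratically with distance.
    return 2.0 * f_number * coc_mm * s_mm ** 2 / focal_length_mm ** 2

def sweep_speed(s_mm, focal_length_mm, f_number, coc_mm, base_speed=1.0):
    # Arrow A4: sweep faster where the depth of field is deeper, so that
    # every subject distance receives a comparable amount of blur.
    return base_speed * dof_width_mm(s_mm, focal_length_mm, f_number, coc_mm)
```

Doubling the focus distance quadruples the approximate DOF width, so the sweep spends proportionally less time in the far, hard-to-blur region.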
Fifth Embodiment
It should be noted that the case where the image processing device 20 according to the embodiments is mounted on the imaging device 1 has been exemplified in the embodiments described above but the present invention is not limited to the above. The image processing device 20 according to the present embodiment may be configured as a device independent from the imaging device 1. FIG. 12 is a diagram illustrating an example of a functional configuration of the image processing device 20 according to the fifth embodiment.
The image processing device 20 according to the present embodiment includes, for example, all or some of the controller 21, the image processing circuitry 23, and the memory 25 in the imaging device 1 according to the embodiments described above. As illustrated in FIG. 12, the image processing device 20 realizes the functions of an image acquisition module 201, the defect pixel detection module 215, and a defect correction module 203 by a processor executing a program developed on a memory. The processor, as the image acquisition module 201, acquires evaluation images captured by the external imaging device 1 or evaluation images stored in the memory 25. The processor performs, as the defect correction module 203, a defect correction process for correcting an output value from a defect pixel with respect to image data. Herein, the image acquisition module 201 is an example of the acquiring module. Moreover, the defect pixel detection module 215 is an example of the detecting module and the output module.
It should be noted that the modules 201, 203, and 215 may be realized by a single processor or by a combination of a plurality of independent processors. Moreover, each of the modules 201, 203, and 215 may be realized by being distributed to or integrated into a plurality of processors.
Even with this configuration, the same effects as those of the embodiments described above are obtained.
It should be noted that a part or the whole of processing executed by the imaging device 1 according to the present embodiment may be realized by software.
A program executed by the imaging device 1 according to the present embodiment is recorded and provided in a computer-readable recording medium, such as a CD-ROM, a flexible disk (FD), a CD-R, or a DVD (Digital Versatile Disk), in a file with an installable format or an executable format.
Moreover, a program executed by the imaging device 1 according to the present embodiment may be configured to be provided by being stored on a computer connected to a network such as the Internet and being downloaded by way of the network. Moreover, a program executed by the imaging device 1 according to the present embodiment may be configured to be provided or distributed by way of a network such as the Internet.
Moreover, a program executed by the imaging device 1 according to the present embodiment may be configured to be previously incorporated into ROM etc. and be provided.
According to at least one of the embodiments described above, it is possible to appropriately detect a defect pixel on an image sensor.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
[Explanations of Letters or Numerals]
1: Imaging device
10: Imaging unit
11: Optical system
13: Image sensor
131: Imaging surface
15: Analog processing circuitry
17: A/D converter circuitry
20: Image processing device
201: Image acquisition module (Acquiring module)
203: Defect correction module
21: Controller
211: Imaging control module (Acquiring module)
213: Focus control module (Acquiring module)
215: Defect pixel detection module (Detecting module, Output module)
23: Image processing circuitry
25: Memory
31: Bus
301, 302, 303, 305: Evaluation image
DP, DP1, DP2: Defect
IP1, IP2: Image point
OP1, OP2: Object point

Claims (8)

  1. An image processing device comprising:
    an acquiring module configured to acquire at least two evaluation images captured at focus positions different from one another including an infinite focus position and a closest focus position;
    a detecting module configured to detect a defect pixel of which a pixel defect degree is larger than a predetermined threshold in common between at least the two evaluation images, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels; and
    an output module configured to output the pixel position of the detected defect pixel.
  2. An image processing device comprising:
    an acquiring module configured to acquire an evaluation image captured while moving a focus position during exposure;
    a detecting module configured to detect a defect pixel of which a pixel defect degree is larger than a predetermined threshold, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels; and
    an output module configured to output the pixel position of the detected defect pixel.
  3. The image processing device according to claim 2, wherein the evaluation image includes an image captured while moving the focus position between an infinite focus position and a closest focus position.
  4. The image processing device according to claim 2, wherein the evaluation image includes an image captured while moving the focus position between a closest focus position and a focus position closer than infinity by a distance according to a depth of field of the infinity.
  5. The image processing device according to any one of claims 2 to 4, wherein the evaluation image includes an image captured while moving the focus position at a speed according to a depth of the depth of field.
  6. An imaging device comprising:
    the image processing device according to any one of claims 1 to 5; and
    an imaging unit comprising an image sensor of which a plurality of pixels are arrayed on an imaging surface in a two-dimensional manner and an optical system configured to form an image of a light beam from a subject on the imaging surface, the imaging unit being configured to be able to change the focus position in accordance with control of the acquiring module, the imaging unit being configured to capture the evaluation image based on a subject image formed on the imaging surface.
  7. A program causing a computer to execute:
    acquiring at least two evaluation images captured at focus positions different from one another including an infinite focus position and a closest focus position;
    detecting a defect pixel of which a pixel defect degree is larger than a predetermined threshold in common between at least the two evaluation images, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels; and
    outputting the pixel position of the detected defect pixel.
  8. A program causing a computer to execute:
    acquiring an evaluation image captured while moving a focus position during exposure;
    detecting a defect pixel of which a pixel defect degree is larger than a predetermined threshold, the pixel defect degree indicating a difference in pixel values between the defect pixel and peripheral pixels; and
    outputting the pixel position of the detected defect pixel.
PCT/CN2021/128995 2021-11-05 2021-11-05 Image processing device, imaging device, and program WO2023077426A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/128995 WO2023077426A1 (en) 2021-11-05 2021-11-05 Image processing device, imaging device, and program


Publications (1)

Publication Number Publication Date
WO2023077426A1 true WO2023077426A1 (en) 2023-05-11

Family

ID=86240397


Country Status (1)

Country Link
WO (1) WO2023077426A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060146178A1 (en) * 2003-08-29 2006-07-06 Nikon Corporation Image-capturing system diagnostic device, image-capturing system diagnostic program product and image-capturing device
CN102595028A (en) * 2011-01-11 2012-07-18 索尼公司 Image processing device, image capturing device, image processing method, and program
CN108028895A (en) * 2015-12-16 2018-05-11 谷歌有限责任公司 The calibration of defective image sensor element
CN111007151A (en) * 2019-12-30 2020-04-14 华东理工大学 Ultrasonic phased array rapid full-focusing imaging detection method based on defect pre-positioning



Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE