WO2021145158A1 - Imaging device and imaging device control method - Google Patents


Info

Publication number
WO2021145158A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
unit
image
imaging
interest
Prior art date
Application number
PCT/JP2020/047759
Other languages
English (en)
Japanese (ja)
Inventor
Masaharu Nagata (永田 政晴)
Original Assignee
ソニーグループ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation (ソニーグループ株式会社)
Publication of WO2021145158A1

Classifications

    • G PHYSICS
        • G02 OPTICS
            • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
                • G02B 7/00 Mountings, adjusting means, or light-tight connections, for optical elements
                    • G02B 7/28 Systems for automatic generation of focusing signals
        • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
            • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
                • G03B 11/00 Filters or other obturators specially adapted for photographic purposes
                • G03B 15/00 Special procedures for taking photographs; Apparatus therefor
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/60 Control of cameras or camera modules
                • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
                    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
                        • H04N 25/68 Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to defects

Definitions

  • the present disclosure relates to an imaging device and a control method for the imaging device.
  • the image sensor may have defects in the pixels (hereinafter referred to as defective pixels) due to manufacturing variations or the influence of radiation, which may appear as random noise in the imaging results. Most of these defective pixels are fixed as "white” or "black” dots.
  • The present technology was made in view of this situation. It is an object of the present disclosure to provide an imaging device, and a control method for the imaging device, that can easily identify the position of a subject by removing defective pixels, random noise, the influence of cosmic rays, and the like, and can thereby perform accurate surveying.
  • The image pickup apparatus of the embodiment includes: a lens unit that collects incident light from a subject; an image pickup unit that captures the incident light collected by the lens unit in an unfocused state; and an image analysis unit that analyzes the image captured by the image pickup unit, sets the pixel value of a pixel of interest based on the pixel values of a plurality of predetermined peripheral pixels located around that pixel of interest, and generates a result image for identifying the position of the subject.
  • FIG. 1 is a schematic block diagram of an image pickup apparatus of the first embodiment.
  • the image pickup apparatus 10 includes a lens unit 11, an image pickup unit 12, an image storage unit 13, an image analysis unit 14, and a result storage unit 15.
  • the lens unit 11 collects the light from the subject and guides it to the imaging unit 12.
  • the image pickup unit 12 performs photoelectric conversion and analog / digital conversion of the light collected by the lens unit 11 and outputs the light to the image storage unit 13.
  • The light receiving surface of the imaging unit is fixed at a position in front of the image point determined by the lenses constituting the lens unit 11, so that the received image is intentionally blurred when shooting.
  • the imaging unit 12 includes a photoelectric conversion unit 21, an analog amplifier unit 22, and an AD conversion unit 23.
  • The photoelectric conversion unit 21 includes an image sensor such as a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor, performs photoelectric conversion of the light incident from the lens unit 11, and outputs the result as an original imaging signal to the analog amplifier unit 22.
  • the analog amplifier unit 22 amplifies the input original imaging signal and outputs it as an imaging signal to the AD conversion unit 23.
  • the AD conversion unit 23 performs analog / digital conversion of the imaging signal and outputs it as imaging data to the image storage unit 13.
  • the image storage unit 13 stores the input imaging data in units of captured images.
  • The image analysis unit 14 reads the imaging data of each captured image from the image storage unit 13, analyzes the image, removes the effects of random noise, cosmic rays, defective pixels, and the like, and outputs the result image to the result storage unit 15.
  • the result storage unit 15 stores the result image output by the image analysis unit 14.
  • FIG. 2 is an explanatory diagram of the arrangement relationship between the lens constituting the lens unit and the photoelectric conversion unit.
  • The first focal point FP1 is located on the optical axis of the lens 11A, between the subject OBJ (such as a star) and the lens 11A, a biconvex lens constituting the lens unit 11.
  • The second focal point FP2 is located on the optical axis of the lens 11A, between the image point 21F0 of the lens 11A and the lens 11A. In the example of FIG. 2, the light receiving surface 21F of the photoelectric conversion unit 21 is positioned closer to the lens 11A than the image point 21F0. Therefore, the image formed on the light receiving surface 21F is more blurred than the image that would be formed at the image point 21F0.
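  • The arrangement can be illustrated with the thin-lens equation. The sketch below is not from the patent; the focal length, subject distance, aperture, and sensor offset are hypothetical values chosen only to show how placing the light receiving surface in front of the image point produces a blur circle spanning several pixels.

```python
def image_distance(f_mm, object_distance_mm):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

def blur_circle_diameter(aperture_mm, d_image_mm, d_sensor_mm):
    """Geometric blur-circle diameter when the sensor sits at d_sensor_mm
    instead of at the image point d_image_mm (similar triangles)."""
    return aperture_mm * abs(d_image_mm - d_sensor_mm) / d_image_mm

# Hypothetical numbers: 50 mm lens, subject 10 m away, 25 mm aperture,
# sensor fixed 0.5 mm in front of the image point (on the lens 11A side).
d_i = image_distance(50.0, 10_000.0)
blur = blur_circle_diameter(25.0, d_i, d_i - 0.5)
```

With a pixel pitch of a few micrometres, a blur circle on the order of a quarter millimetre spreads a point subject such as a star over many pixels, which is exactly the gradual intensity profile the analysis relies on.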
  • The change in light receiving intensity on the light receiving surface 21F is therefore more gradual than the change in light receiving intensity at the image point 21F0.
  • Here, "gentle" means that the change in light receiving intensity is not pulse-like; for example, the light receiving intensity falls off from its maximum like a normal distribution curve.
  • In contrast, the change in light receiving intensity with respect to adjacent pixels due to cosmic ray incidence or pixel defects becomes a steep pulse.
  • FIG. 3 is an outline processing flowchart of the image analysis unit.
  • In the following, an example is described in which each pixel of interest in the photoelectric conversion unit 21 is scanned using a 3×3 pixel window WD containing the pixel of interest and its eight surrounding pixels, and image analysis is performed on the pixel of interest.
  • the image analysis unit 14 sets the parameters X and Y for specifying the pixels Px (X, Y) as the pixel of interest to 0, which is the initial value (step S11).
  • the parameter X is a parameter in the row direction
  • the parameter Y is a parameter in the column direction (see FIG. 4 described later).
  • the image analysis unit 14 determines whether or not the parameter Y exceeds Ymax, which is the maximum value of the parameter Y (step S12). That is, the image analysis unit 14 determines whether or not the processing of all the pixels has been completed. In the determination of step S12, if the parameter Y exceeds the maximum value Ymax of the parameter Y (step S12; Yes), the image analysis process is terminated.
  • In step S12, if the parameter Y has not yet exceeded the maximum value Ymax (step S12; No), the image analysis unit 14 determines whether or not the parameter X exceeds Xmax, the maximum value of the parameter X (step S13). That is, it determines whether or not the processing of one line of pixels is complete.
  • In step S13, if the parameter X has not yet exceeded Xmax (step S13; No), the image analysis unit 14 sequentially scans the 3×3 pixel window WD and acquires the imaging data values corresponding to each of its nine pixels (step S15). Subsequently, the image analysis unit 14 acquires the minimum of the nine imaging data values corresponding to the 3×3 pixel window (step S16).
  • the image analysis unit 14 compares the minimum value of the imaging data acquired in step S16 with the predetermined threshold value data D (step S17).
  • The threshold data D is a value below which imaging data is treated as noise and replaced with the imaging data of a black spot (the lowest light receiving intensity (luminance) level).
  • In step S17, if the minimum imaging data value acquired in step S16 is equal to or less than the threshold data D (step S17; minimum value of imaging data ≤ D), the imaging data value of the pixel of interest is treated as noise and clamped to the imaging data of a black spot (the lowest light receiving intensity (luminance) level) (step S18).
  • The clamped imaging data value is then output as the pixel value of the pixel of interest and stored in the result storage unit 15 (step S19).
  • In step S17, when the minimum imaging data value acquired in step S16 exceeds the threshold data D (step S17; minimum value of imaging data > D), the imaging data value of the pixel of interest is output as-is as the pixel value of the pixel of interest and stored in the result storage unit 15 (step S19).
  • That is, when the imaging data value of the pixel of interest changes sharply with respect to any of the imaging data values of the eight adjacent pixels (a change caused by cosmic ray incidence or a pixel defect), the imaging data value of the pixel of interest is treated as noise.
  • When the imaging data value changes gradually with respect to all of the imaging data values of the eight adjacent pixels (imaging data corresponding to an actual captured image), the imaging data value of the pixel of interest is saved as-is.
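  • The processing of steps S11 to S19 can be sketched as follows. This is a plain NumPy illustration, not the patented implementation; the image contents and the threshold D are made-up values, and window positions outside the image border are simply ignored.

```python
import numpy as np

def analyze(image, threshold_d, black=0):
    """For each pixel of interest, take the minimum over its 3x3 window;
    if that minimum is <= the threshold data D, treat the pixel as noise
    and clamp it to black, otherwise keep its imaging data value."""
    h, w = image.shape
    result = image.copy()
    for y in range(h):
        for x in range(w):
            window = image[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if window.min() <= threshold_d:
                result[y, x] = black
    return result

img = np.zeros((9, 20), dtype=int)
img[2:5, 2:5] = 200    # defocused subject: a patch of bright pixels
img[7, 15] = 255       # isolated bright dot: cosmic ray or defective pixel
out = analyze(img, threshold_d=50)
```

Because the isolated dot has dark neighbours, its window minimum falls below D and it is clamped to black, while the centre of the blurred subject patch, whose whole window stays above D, is kept.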
  • the result image stored in the result storage unit 15 is an image in which the effects of cosmic ray incidents, pixel defects, etc. are removed.
  • FIG. 4 is an explanatory diagram of an example of a captured image.
  • An image G1, composed of imaging data 9 pixels high × 20 pixels wide, is described as an example.
  • The captured image G1 includes imaging data PE-N caused by cosmic ray incidence, pixel defects, and the like, and imaging data PE-1 corresponding to the actual subject.
  • The imaging data PE-N value (pixel brightness) shows a sharp change in light receiving intensity with respect to the surrounding pixels.
  • In contrast, because the imaging data PE-1 corresponding to the actual subject captures a blurred image, owing to the arrangement of the lens 11A and the photoelectric conversion unit 21, its change in light receiving intensity with respect to the surrounding pixels is gradual.
  • FIGS. 5A and 5B are explanatory views of image scanning and the resulting image.
  • The image analysis unit 14 scans the entire captured image by moving the 3×3 pixel window WD so that the pixel of interest changes sequentially from left to right and from top to bottom, and acquires, for each pixel of interest, the imaging data values corresponding to the nine pixels.
  • When the pixel of interest corresponds to the imaging data PE-N, as shown in FIG. 5A, its light receiving intensity is steep with respect to the eight surrounding pixels.
  • The minimum imaging data value therefore occurs at one of the eight peripheral pixels.
  • The threshold data D is set between the second and third light-receiving-intensity ranges from the bottom of the five-step scale shown in FIG. 5A. Since the minimum value among the eight surrounding pixels (the first range from the bottom) is equal to or less than the threshold data D, the light receiving intensity of the pixel of interest corresponding to the imaging data PE-N is clamped to the lowest light receiving intensity.
  • When the pixel of interest corresponds to the imaging data PE-1, the minimum imaging data value occurs at one of the four peripheral pixels located diagonally.
  • The threshold data D is set between the second and third light-receiving-intensity ranges from the bottom of the five-step scale shown in FIG. 5B. Since the minimum value among the four diagonally located peripheral pixels (the third range from the bottom) exceeds the threshold data D, the light receiving intensity of the pixel of interest corresponding to the imaging data PE-1 remains in the third light-receiving-intensity range from the bottom.
  • In the above description, the window size is 3×3 pixels, but it can be adjusted to 5×5 pixels, 7×7 pixels, or the like according to the image resolution.
  • Alternatively, the image can be reduced (resized) according to the window size.
  • The number of scans then becomes the same as for an image with fewer pixels, and the processing can be simplified or sped up.
  • FIG. 6 is an explanatory diagram of noise removal processing of a modified example of the first embodiment.
  • In the first embodiment, a grayscale image is used as the captured image, and the light receiving intensity of the pixel of interest is set by comparing the minimum value among the peripheral pixels of the pixel of interest with the predetermined threshold D.
  • In this modified example, by contrast, a binarized image is generated from the captured image using a predetermined threshold value, and the pixel of interest is evaluated against its peripheral pixels (in the case of the 3×3 pixel window described above, the eight peripheral pixels) on the binarized image.
  • The image analysis unit 14 scans the entire captured image by moving the 3×3 pixel window WD so that the pixel of interest changes sequentially from left to right and from top to bottom, and acquires the imaging data values corresponding to the nine pixels for each pixel of interest.
  • the pixel of interest is a pixel corresponding to the imaging data PE-N
  • the pixel of interest is a pixel corresponding to the imaging data PE-1
  • FIG. 7 is an explanatory diagram of a result image of a modified example of the first embodiment.
  • FIG. 8 is an explanatory diagram when the origin of the image is scanned in a window of 3 ⁇ 3 pixels.
  • So far, the case where a processing target pixel exists at every position of the 3×3 pixel window WD has been described. When scanning the peripheral portion of the captured image, however, processing target pixels may not exist at some positions of the 3×3 pixel window WD.
  • For example, when the window is centered on the origin of the image, processing target pixels do not exist at five of the nine positions of the 3×3 pixel window WD.
  • In such a case, processing is performed by one of the following methods. (1) The window positions where no processing target pixel exists are ignored, and processing is performed only on the positions where processing target pixels exist.
  • More specifically, in the case of FIG. 8, the evaluation is performed using only the four pixels at which processing target pixels exist.
  • (2) The positions where no processing target pixel exists are given the value 0, and processing is performed as usual.
  • (3) In other cases, the processing is performed with the remaining eight pixels, or the value is set to 0 as in (2).
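  • The border-handling options above can be sketched as padding policies. This is an illustrative NumPy fragment, not from the patent; the mode names are invented for the example.

```python
import numpy as np

def pad_for_window(image, mode):
    """Border handling for a 3x3 window at the image edge.
    'ignore' -> no padding; the caller evaluates only the pixels that exist.
    'zero'   -> missing positions are given the value 0.
    'copy'   -> missing positions replicate the adjacent edge pixel."""
    if mode == "ignore":
        return image
    if mode == "zero":
        return np.pad(image, 1, mode="constant", constant_values=0)
    if mode == "copy":
        return np.pad(image, 1, mode="edge")
    raise ValueError(f"unknown mode: {mode}")

img = np.array([[1, 2],
                [3, 4]])
z = pad_for_window(img, "zero")   # corners of the padded frame become 0
c = pad_for_window(img, "copy")   # corners replicate the nearest edge pixel
```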
  • FIG. 9 is an outline processing flowchart of the modified example of the first embodiment. In the following description as well, an example is described in which each pixel of interest in the photoelectric conversion unit 21 is scanned using a 3×3 pixel window containing the pixel of interest and its eight surrounding pixels, and image analysis is performed on the pixel of interest.
  • the image pickup apparatus 10 takes a picture in a state of being out of focus (step S21).
  • the image analysis unit 14 binarizes the captured data of the obtained captured image to generate a binarized image (step S22).
  • In the binarization, imaging data at or below the threshold is set to "0", corresponding to black, and imaging data above the threshold is set to "1", corresponding to white.
  • the image analysis unit 14 sets the parameters X and Y for specifying the pixel Px (X, Y) as the pixel of interest to 0, which is the initial value (step S23).
  • the image analysis unit 14 determines whether or not the parameter Y exceeds Ymax, which is the maximum value of the parameter Y (step S24). That is, the image analysis unit 14 determines whether or not the processing of all the pixels has been completed.
  • In step S24, when the parameter Y exceeds the maximum value Ymax (step S24; Yes), processing is complete for all pixels constituting the captured image, so the obtained image is stored in the result storage unit 15 (step S30) and the image analysis process ends.
  • In step S24, if the parameter Y has not yet exceeded the maximum value Ymax (step S24; No), it is determined whether or not the parameter X exceeds Xmax, the maximum value of the parameter X (step S25). That is, the image analysis unit 14 determines whether or not the processing of one line of pixels is complete.
  • In step S25, if the parameter X has not yet exceeded Xmax (step S25; No), the image analysis unit 14 sequentially scans the 3×3 pixel window.
  • As a result, the result image saved in the result storage unit 15 shows the position of the original target subject (a star or the like), with the influences of cosmic ray incidence, pixel defects, and the like removed.
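  • The binarized variant of FIG. 9 can be sketched as follows. The exact evaluation rule is only partly reproduced in the text above, so this is one plausible reading: a lit pixel with no lit pixel among its eight neighbours in the binary image is an isolated dot and is removed, while a defocused subject always lights a cluster of pixels. The image contents and thresholds are made-up values.

```python
import numpy as np

def binarize_and_clean(image, bin_threshold):
    """Binarize the captured image, then zero any lit pixel of interest
    that has no lit pixel among its eight peripheral pixels."""
    binary = (image >= bin_threshold).astype(np.uint8)
    h, w = binary.shape
    result = binary.copy()
    for y in range(h):
        for x in range(w):
            if binary[y, x]:
                win = binary[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
                if win.sum() == 1:   # only the pixel of interest itself is lit
                    result[y, x] = 0
    return result

img = np.zeros((9, 20), dtype=int)
img[2:5, 2:5] = 200    # defocused subject cluster
img[7, 15] = 255       # isolated noise dot
out = binarize_and_clean(img, bin_threshold=128)
```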
  • FIG. 10 is a schematic block diagram of the image pickup apparatus of the second embodiment.
  • the same parts as those in the first embodiment of FIG. 1 are designated by the same reference numerals.
  • The second embodiment differs from the first in that it includes an imaging control unit 31 that drives the imaging unit 12 along the optical axis of the lens 11A of the lens unit 11, so that the distance between the light receiving surface 21F of the photoelectric conversion unit 21 constituting the imaging unit 12 and the image point 21F0 can be varied.
  • Because the imaging surface can be shifted, the focus can be shifted as necessary and the edges of the obtained image blurred; that is, the imaging data values, and hence the image, can be smoothed as needed.
  • Here, smoothing the imaging data values (the image) means, for example, converting imaging data that changes in a pulse-like manner into a Gaussian-distribution-like profile so that the image changes gently (the same applies hereinafter).
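  • The smoothing described here can be sketched as convolution with a Gaussian kernel. The code below is only an illustration of turning a pulse-like imaging data profile into a gentle, normal-distribution-like one; the kernel radius and sigma are arbitrary example values.

```python
import math

def gaussian_kernel(radius, sigma):
    """1-D Gaussian kernel, normalised so the weights sum to 1."""
    vals = [math.exp(-(i * i) / (2.0 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]

def smooth(signal, radius=2, sigma=1.0):
    """Convolve a 1-D signal with the Gaussian kernel (borders zero-padded)."""
    kernel = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel, start=-radius):
            if 0 <= i + j < len(signal):
                acc += w * signal[i + j]
        out.append(acc)
    return out

pulse = [0, 0, 0, 100, 0, 0, 0]   # pulse-like change in imaging data
smoothed = smooth(pulse)          # gentle, Gaussian-shaped profile
```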
  • By making the light receiving surface 21F of the photoelectric conversion unit 21 coincide with the image point 21F0, the imaging control unit 31 also allows the imaging device 10A to acquire a focused image as the captured image, in the same manner as a normal camera.
  • FIG. 11 is an operation explanatory view of the second embodiment. As shown in FIG. 10, the imaging control unit 31 changes the distance of the light receiving surface 21F of the photoelectric conversion unit 21 from the image point 21F0, so the degree of blurring of the image on the light receiving surface 21F can be changed and an optimal captured image obtained under various imaging conditions.
  • For example, the distance of the light receiving surface 21F of the photoelectric conversion unit 21 from the image point 21F0 can be increased so that the subject's light is received over a plurality of pixels, and noise can then be removed by performing the same processing in that state. Further, when imaging is performed for viewing or the like, a focused image can be obtained by control similar to that of a conventional imaging device.
  • FIG. 12 is a schematic block diagram of the image pickup apparatus of the third embodiment.
  • the same parts as those in the first embodiment of FIG. 1 are designated by the same reference numerals.
  • The third embodiment differs from the first in that it includes a lens control unit 32 that drives the lens unit 11 along the optical axis of the lens 11A, so that the distance between the light receiving surface 21F of the photoelectric conversion unit 21 constituting the imaging unit 12 and the image point 21F0 can be varied.
  • Because the lens portion can be shifted along the optical axis, the focus can be shifted as necessary and the edges of the obtained image blurred; that is, the imaging data values (the image) can be smoothed.
  • By making the light receiving surface 21F of the photoelectric conversion unit 21 coincide with the image point 21F0, the lens control unit 32 also allows the imaging device 10B to acquire a focused image as the captured image, in the same manner as a normal camera.
  • Because the lens control unit 32 controls the distance of the light receiving surface 21F of the photoelectric conversion unit 21 from the image point 21F0, the degree of blurring of the image on the light receiving surface 21F can be changed and the optimum captured image obtained under various imaging conditions.
  • an image in which the focal position (focus) is matched can be obtained by controlling the image in the same manner as in the conventional imaging device.
  • FIGS. 13A and 13B are explanatory views of the fourth embodiment. FIG. 14 is an outline processing flowchart of the fourth embodiment. FIG. 15 is an explanatory view of the window of the fourth embodiment.
  • the fourth embodiment differs from each of the above embodiments in that the light receiving intensity of the pixel of interest is set to the light receiving intensity of the pixel having the minimum light receiving intensity among the peripheral pixels.
  • In the following, an example is described in which scanning is performed using a 3×3 pixel window containing the pixel of interest and its eight surrounding pixels, and image analysis is performed on the pixel of interest.
  • the image pickup apparatus 10 takes a picture in a state of being out of focus (step S31).
  • The image analysis unit 14 sequentially scans the 3×3 pixel window (step S32), and sets the smallest imaging data value (the minimum light receiving intensity) among the eight peripheral pixels surrounding the pixel of interest as the imaging data value of the pixel of interest (step S33).
  • As a result, the imaging data value C of the pixel of interest is set to the minimum among the imaging data values of its eight peripheral pixels. The image analysis unit 14 then repeats steps S32 and S33, taking every pixel constituting the captured image as the pixel of interest one by one.
  • When the imaging data value of the pixel of interest changes sharply with respect to any of the imaging data values of the eight adjacent pixels (for example, a change caused by cosmic ray incidence or a pixel defect), the imaging data value of the pixel of interest is replaced with a low light receiving intensity and is thus treated as noise.
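  • Steps S32 and S33 amount to a grayscale minimum filter over the eight peripheral pixels (centre excluded), close to a morphological erosion. The sketch below is an illustration with made-up image contents, not the patented implementation; at the borders, whatever neighbours exist are used.

```python
import numpy as np

def min_filter(image):
    """Replace each pixel of interest with the minimum imaging data value
    among its peripheral pixels in the 3x3 window (centre excluded)."""
    h, w = image.shape
    result = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            win = image[max(0, y - 1):y + 2, max(0, x - 1):x + 2].astype(float)
            # exclude the pixel of interest itself from the minimum
            win[min(y, 1), min(x, 1)] = np.inf
            result[y, x] = win.min()
    return result

img = np.full((9, 9), 10)
img[4, 4] = 255        # isolated spike: cosmic ray or defective pixel
img[0:3, 6:9] = 200    # defocused subject patch
out = min_filter(img)
```

The spike drops to the level of its dark surroundings, while the interior of the defocused patch keeps its intensity (its outline shrinks by one pixel, which does not matter for locating the subject).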
  • Thus, with simple arithmetic processing, the result image saved in the result storage unit 15 has the influences of cosmic ray incidence, pixel defects, and the like removed, and shows the position of the original target subject (a star or the like).
  • In the above description, the light receiving surface 21F of the photoelectric conversion unit 21 is set at a position shifted from the image point (specifically, a position on the lens 11A side), so that the edges of the obtained image are blurred (the imaging data values, i.e. the image, are smoothed).
  • Instead, as in the second embodiment, the imaging surface may be configured to be shiftable, the focus shifted as necessary, and the edges of the obtained image blurred (the imaging data values smoothed).
  • Alternatively, as in the third embodiment, the lens portion may be configured to be shiftable along the optical axis, the focus shifted as necessary, and the edges of the obtained image blurred (the imaging data values smoothed).
  • FIG. 16 is a schematic block diagram of the image pickup apparatus of the fifth embodiment.
  • the same parts as those in the first embodiment of FIG. 1 are designated by the same reference numerals.
  • The fifth embodiment differs from the first in that the light receiving surface of the photoelectric conversion unit 21 constituting the imaging unit 12 is arranged at the image point of the lens 11A, as in a normal imaging device, and a filter unit 41 is provided in front of the lens unit 11 to blur the edges of the obtained image by diffusing or refracting the incident light (to smooth the imaging data values, i.e. the image).
  • FIG. 17 is a configuration explanatory view of the first aspect of the fifth embodiment.
  • An optical low-pass filter LPF functioning as the filter unit 41 is located on the optical axis of the lens 11A, between the lens 11A and the first focal point FP1.
  • the light receiving surface 21F1 of the photoelectric conversion unit 21 is arranged so as to coincide with the image point. In the above configuration, in the state where the optical low-pass filter LPF is removed, the light receiving surface 21F1 of the photoelectric conversion unit 21 coincides with the image point of the lens 11A and is in a focused state.
  • When the optical low-pass filter LPF is inserted into the optical path, the incident light is diffused, which has the effect of blurring the edges of the subject.
  • As the optical low-pass filter LPF, a known material such as frosted glass or anomalous-refraction glass may be used.
  • Therefore, the influences of cosmic ray incidence, pixel defects, and the like can be removed from the result image stored in the result storage unit 15 by simple arithmetic processing, and the result image shows the position of the original target subject (a star or the like).
  • In the first aspect described above, the edges of the subject are blurred using the optical low-pass filter LPF to smooth the image, and the imaging data corresponding to the pixel of interest is then processed as described in the first embodiment.
  • FIG. 18 is a configuration explanatory view of the second aspect of the fifth embodiment.
  • A cross filter CF functioning as the filter unit 41 is located on the optical axis of the lens 11A, between the lens 11A and the first focal point FP1.
  • The cross filter CF can be realized by a known technique, such as carving thin line grooves into the glass surface.
  • the light receiving surface 21F1 of the photoelectric conversion unit 21 is arranged so as to coincide with the image point. Even in the above configuration, in the state where the cross filter CF is removed, the light receiving surface 21F1 of the photoelectric conversion unit 21 coincides with the image point of the lens 11A and is in a focused state.
  • FIG. 19 is an explanatory diagram of an example of the captured image of the second aspect of the fifth embodiment. Similar to FIG. 4, the captured image G21 also includes imaging data PE-N caused by cosmic ray incidence, pixel defects, and the like, and imaging data PE-1 corresponding to the actual subject.
  • FIG. 20 is an explanatory view of the window in the second aspect of the fifth embodiment.
  • As the window used for scanning in the second aspect of the fifth embodiment, a cross-shaped window WD2 is used, as shown in FIG. 20.
  • FIG. 21 is an outline processing flowchart of the image analysis unit according to the second aspect of the fifth embodiment.
  • the image analysis unit 14 sets the parameters X and Y for specifying the pixel Px (X, Y) as the pixel of interest to 0, which is an initial value (step S41).
  • the parameter X is a parameter in the row direction
  • the parameter Y is a parameter in the column direction (see FIG. 4).
  • the image analysis unit 14 determines whether or not the parameter Y exceeds Ymax, which is the maximum value of the parameter Y (step S42). That is, the image analysis unit 14 determines whether or not the processing of all the pixels has been completed. In the determination of step S42, if the parameter Y exceeds the maximum value Ymax of the parameter Y (step S42; Yes), the image analysis process is terminated.
  • In step S42, if the parameter Y has not yet exceeded the maximum value Ymax (step S42; No), it is determined whether or not the parameter X exceeds Xmax, the maximum value of the parameter X (step S43). That is, the image analysis unit 14 determines whether or not the processing of one line of pixels is complete.
  • In step S43, if the parameter X has not yet exceeded Xmax (step S43; No), the image analysis unit 14 sequentially scans the cross-shaped window WD2 and acquires the imaging data values corresponding to its five pixels (step S45). Subsequently, the image analysis unit 14 searches for and acquires the minimum of the five imaging data values corresponding to the window WD2 (step S46).
  • the image analysis unit 14 compares the minimum value of the imaging data acquired in step S46 with the predetermined threshold value data D (step S47).
  • In step S47, if the minimum imaging data value acquired in step S46 is equal to or less than the threshold data D (step S47; minimum value of imaging data ≤ D), the imaging data value of the pixel of interest is treated as noise and clamped to the imaging data of a black spot (the lowest light receiving intensity (luminance) level) (step S48).
  • the value of the imaging data after clamping is output as the pixel value of the pixel of interest, and is stored in the result storage unit 15 (step S49).
  • In step S47, when the minimum imaging data value acquired in step S46 exceeds the threshold data D (step S47; minimum value of imaging data > D), the imaging data value of the pixel of interest is output as the pixel value of the pixel of interest and stored in the result storage unit 15 (step S49).
  • When the imaging-data value of the pixel of interest changes sharply with respect to any of the imaging-data values of the four adjacent pixels, that is, when the change is attributable to an incident cosmic ray or a pixel defect, the imaging-data value of the pixel of interest is treated as noise.
  • Conversely, when the imaging-data value of the pixel of interest changes only gradually with respect to the imaging-data values of all four adjacent pixels, that is, when it corresponds to the actual captured image, the imaging-data value of the pixel of interest is saved as it is.
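The clamping logic of steps S41 to S49 above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the function name, the grayscale array representation, and the black level of 0 are assumptions, while the 5-pixel cross window, the comparison with the threshold data D, and the black-spot clamp follow the description above (off-image window taps are simply skipped here).

```python
import numpy as np

def remove_noise(image, threshold_d, black_level=0):
    """Scan a 5-pixel cross window over the image; if the minimum value
    inside the window is <= threshold_d, treat the centre pixel as noise
    and clamp it to the black level, otherwise keep it unchanged."""
    h, w = image.shape
    result = image.copy()
    for y in range(h):
        for x in range(w):
            # Centre pixel plus its four adjacent pixels; taps that fall
            # outside the image are ignored.
            taps = [int(image[y, x])]
            if y > 0:
                taps.append(int(image[y - 1, x]))
            if y < h - 1:
                taps.append(int(image[y + 1, x]))
            if x > 0:
                taps.append(int(image[y, x - 1]))
            if x < w - 1:
                taps.append(int(image[y, x + 1]))
            if min(taps) <= threshold_d:
                # An isolated bright spot has dark neighbours, so the window
                # minimum stays low and the spot is clamped to black.
                result[y, x] = black_level
    return result
```

A defocused subject brightens all five taps together, so its window minimum stays above D and the pixel value is preserved.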
  • FIG. 22 is an explanatory diagram of the result image in the second aspect of the fifth embodiment.
  • As shown in FIG. 22, the result image stored in the result storage unit 15 is therefore an image from which the effects of incident cosmic rays, pixel defects, and the like have been removed.
  • the image analysis unit 14 can easily distinguish between the captured image of the subject and the bright spot due to noise.
  • FIG. 23 is an explanatory diagram when the origin of the image is scanned by a cross-shaped window having a 5-pixel configuration.
  • In this case, no processing-target pixel exists at two of the pixels of the cross-shaped window WD2.
  • In such a case, processing is performed by one of the following methods. (1) The window positions where no processing-target pixel exists are ignored, and only the positions where one exists are processed; more specifically, in the example of FIG. 23, the evaluation uses only the three pixels that exist. (2) The positions where no processing-target pixel exists are given the value 0, and processing proceeds as usual. (3) The pixel value of the adjacent pixel in the column or row direction is copied and processed; more specifically, in the example of FIG. 23, the pixel value of pixel Px(0,0) is copied to the missing positions and the evaluation is performed.
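The three border-handling methods above can be illustrated like this. This is a hedged sketch, not the patent's implementation; the function name and mode strings are assumptions, while the three behaviours correspond to methods (1) to (3) above.

```python
import numpy as np

def cross_taps(image, y, x, mode="ignore"):
    """Collect the values of the 5-pixel cross window centred at (y, x),
    handling window taps that fall outside the image:
      mode="ignore"    -> method (1): drop out-of-image taps
      mode="zero"      -> method (2): treat out-of-image taps as 0
      mode="replicate" -> method (3): copy the nearest in-image pixel
    """
    h, w = image.shape
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    taps = []
    for dy, dx in offsets:
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            taps.append(int(image[ny, nx]))
        elif mode == "zero":
            taps.append(0)
        elif mode == "replicate":
            # Clamp the coordinate back onto the image, mirroring the
            # copy of Px(0,0) in the FIG. 23 example.
            taps.append(int(image[min(max(ny, 0), h - 1),
                                  min(max(nx, 0), w - 1)]))
        # mode == "ignore": skip the tap entirely
    return taps
```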
  • The cross-shaped window WD2 described above has a 5-pixel configuration, but its size can be adjusted according to the image resolution, for example to a 9-pixel or 13-pixel configuration, or to a similarly shaped configuration of about 20 pixels with lengthened vertical and horizontal arms. As a modification of the cross shape, it is also possible to use a configuration with a square 9-pixel centre and one pixel added above, below, left, and right (13 pixels in total).
  • When the subject is sufficiently larger than the window, it is also possible to reduce the image, resizing it to suit the window size.
  • This keeps the number of scans the same as for an image with fewer pixels, simplifying or speeding up the processing.
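One simple way to realize the resizing suggested above is block averaging. This is a sketch under the assumptions of a grayscale array and an integer reduction factor; the function name and averaging method are illustrative, not specified by the description.

```python
import numpy as np

def downscale(image, factor):
    """Reduce resolution by averaging non-overlapping factor x factor
    blocks, so a fixed-size window covers a proportionally larger
    area of the subject and the number of scan positions shrinks."""
    h, w = image.shape
    h2, w2 = h // factor, w // factor
    # Trim any remainder rows/columns so the image tiles exactly.
    trimmed = image[:h2 * factor, :w2 * factor].astype(np.float64)
    return trimmed.reshape(h2, factor, w2, factor).mean(axis=(1, 3))
```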
  • FIG. 24 is a schematic block diagram of an image pickup apparatus according to a modified example of the fifth embodiment.
  • the same parts as those in the fifth embodiment of FIG. 16 are designated by the same reference numerals.
  • The modified example of the fifth embodiment differs from the fifth embodiment in that it includes a filter control unit 42 that drives the filter unit 41 so that it can be inserted into and removed from the optical path of the incident light, the optical axis of the lens 11A of the lens unit 11 passing through the filter unit 41 when it is inserted.
  • When the filter control unit 42 moves the filter unit 41 out of the optical path of the lens 11A, the image pickup apparatus 40A can acquire a focused image as the captured image, as with a normal camera.
  • In the above description, the filter unit 41 is inserted into and removed from the optical path of the incident light by the filter control unit 42; it is also possible to switch between a measurement filter (for position measurement, such as the optical low-pass filter LPF or the cross filter CF) and a normal optical filter (for example, an infrared filter or an ND filter) and insert the selected filter into the optical path.
  • As described above, the influence of externally incident cosmic rays and of defective pixels of the image sensor can be removed by simple processing, and an image of the subject to be imaged can be obtained.
  • Although the adjustment of the size of the window WD or the window WD2 and the adjustment of the image size have been described here, the same adjustments can likewise be applied to the modified example of the first embodiment, the second embodiment, the third embodiment, the fourth embodiment, and the first aspect of the fifth embodiment.
  • In the above description, the case where a target pixel exists in every region of the window WD or the window WD2 has been described. It is also possible to apply the methods (1) to (3) described above in the same way when a processing-target pixel does not exist in part of the 3 × 3-pixel window WD or of the window WD2. With such a configuration, processing can be performed reliably even when scanning the peripheral portion of the captured image.
  • Alternatively, the imaging surface may be configured to be shiftable, or the lens unit may be configured to be shiftable along the optical axis; by shifting the focus as necessary, the edges of the obtained image can be blurred (the image is smoothed).
  • It is also possible to blur the image so as to smooth the edges of the subject by setting the imaging-data value of the pixel of interest to the minimum value among the imaging data of the peripheral pixels corresponding to the pixel of interest.
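Replacing each pixel of interest with the minimum of its surrounding pixels, as described above, amounts to a grayscale minimum filter (erosion). The following is a sketch assuming a 3 × 3 neighbourhood and edge replication at the border; both are assumptions, since the text does not fix the window here.

```python
import numpy as np

def min_filter_3x3(image):
    """Replace each pixel with the minimum over its 3x3 neighbourhood
    (edge-replicated), shrinking bright regions and softening the
    edges of the subject."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            # The 3x3 slice of the padded image is centred on (y, x)
            # of the original image.
            out[y, x] = padded[y:y + 3, x:x + 3].min()
    return out
```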
  • the present technology can also have the following configurations.
  • a lens unit that collects incident light from the subject
  • an imaging unit that captures the incident light collected by the lens unit in an unfocused state
  • an image analysis unit that analyzes the image captured by the imaging unit and sets the pixel value of one pixel of interest, based on the pixel values of a plurality of predetermined peripheral pixels located around that pixel of interest, to generate a result image for specifying the position of the subject
  • An imaging device comprising the above units.
  • the image analysis unit sets the pixel value of the pixel of interest to the minimum value.
  • the relative arrangement position of the lens unit and the imaging unit is set to a position that enables the imaging unit to capture an image in an out-of-focus state
  • the imaging device according to any one of (1) to (4).
  • a lens control unit that drives the lens unit in the optical axis direction and sets the relative arrangement position of the lens unit and the imaging unit to a position that enables the imaging unit to capture an image in an out-of-focus state
  • an imaging control unit that drives the imaging unit in the optical axis direction and sets the relative arrangement position of the lens unit and the imaging unit to a position that enables the imaging unit to capture an image in an out-of-focus state
  • a filter unit that is inserted into the optical path between the lens unit and the subject to smooth the obtained image is provided
  • the imaging device according to any one of (1) to (4).
  • the imaging device wherein the filter unit is configured as an optical low-pass filter
  • a filter unit configured as a cross filter that is inserted into an optical path between the lens unit and the subject to generate striations is provided.
  • a filter unit is provided which is inserted into the optical path between the lens unit and the subject so that the incident light after passing through the lens unit is out of focus.
  • the imaging device according to any one of (1) to (4).
  • in the process of setting the pixel value of the pixel of interest, when the minimum value of the pixel values of the plurality of predetermined peripheral pixels exceeds a predetermined threshold value, the pixel value of the pixel of interest is set to that minimum value
  • the process of setting the pixel value of the pixel of interest is a process of setting the minimum value of the pixel values of the plurality of predetermined peripheral pixels as the pixel value of the pixel of interest
  • the process of setting the pixel value of the pixel of interest includes a process of binarizing the pixel values constituting the captured image into high light-receiving intensity and low light-receiving intensity
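The binarization process mentioned in the configuration above can be sketched as a fixed-threshold split into high and low light-receiving intensity. The function name, the 8-bit levels 0 and 255, and the threshold value are assumptions for illustration; the description only specifies a binary split.

```python
import numpy as np

def binarize(image, threshold):
    """Split pixel values into high (255) and low (0) light-receiving
    intensity by a fixed threshold."""
    return np.where(image > threshold, 255, 0).astype(np.uint8)
```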
  • Imaging device; 11 Lens unit; 11A Lens; 12 Imaging unit; 13 Image storage unit; 14 Image analysis unit; 15 Result storage unit; 21 Photoelectric conversion unit; 21F0 Image point; 21F, 21F1 Light receiving surface; 22 Analog amplifier unit; 23 AD conversion unit; 31 Imaging control unit; 32 Lens control unit; 41 Filter unit; 42 Filter control unit; CF Cross filter; D Threshold data; LPF Optical low-pass filter; OBJ Subject; PE-1 Imaging data; PE-N Imaging data (noise); WD, WD2 Window

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Blocking Light For Cameras (AREA)
  • Automatic Focus Adjustment (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

An imaging device according to one embodiment includes: a lens unit that collects incident light from a subject; an imaging unit that captures the incident light collected by the lens unit in an unfocused state; and an image analysis unit that analyzes an image captured by the imaging unit and, based on the pixel values of a plurality of predetermined peripheral pixels positioned around a pixel of interest, sets a pixel value for the pixel of interest and generates a result image for identifying the position of the subject.
PCT/JP2020/047759 2020-01-14 2020-12-21 Dispositif d'imagerie et procédé de commande de dispositif d'imagerie WO2021145158A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020003979A JP2021111920A (ja) 2020-01-14 2020-01-14 撮像装置及び撮像装置の制御方法
JP2020-003979 2020-01-14

Publications (1)

Publication Number Publication Date
WO2021145158A1 true WO2021145158A1 (fr) 2021-07-22

Family

ID=76863711

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/047759 WO2021145158A1 (fr) 2020-01-14 2020-12-21 Dispositif d'imagerie et procédé de commande de dispositif d'imagerie

Country Status (2)

Country Link
JP (1) JP2021111920A (fr)
WO (1) WO2021145158A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005328134A (ja) * 2004-05-12 2005-11-24 Sony Corp 撮像装置および固体撮像素子の欠陥検出方法
JP2006180210A (ja) * 2004-12-22 2006-07-06 Sony Corp 撮像装置および方法、並びにプログラム
JP2011135566A (ja) * 2009-11-26 2011-07-07 Nikon Corp 画像処理装置


Also Published As

Publication number Publication date
JP2021111920A (ja) 2021-08-02

Similar Documents

Publication Publication Date Title
KR101412752B1 (ko) 디지털 자동 초점 영상 생성 장치 및 방법
JP4388327B2 (ja) 顕微鏡像撮像装置及び顕微鏡像撮像方法
US7295233B2 (en) Detection and removal of blemishes in digital images utilizing original images of defocused scenes
US7991241B2 (en) Image processing apparatus, control method therefor, and program
US20090079862A1 (en) Method and apparatus providing imaging auto-focus utilizing absolute blur value
WO2011099239A1 (fr) Dispositif et procédé d'imagerie, et procédé de traitement d'images pour le dispositif d'imagerie
US8873881B2 (en) Dust detection system and digital camera
JP4196124B2 (ja) 撮像系診断装置、撮像系診断プログラム、撮像系診断プログラム製品、および撮像装置
JP4466015B2 (ja) 画像処理装置および画像処理プログラム
CN102082912A (zh) 图像拍摄装置及图像处理方法
RU2009119259A (ru) Устройство для облегчения фокусировки и соответствующий способ
JP4633245B2 (ja) 表面検査装置及び表面検査方法
JP2009229125A (ja) 距離測定装置および距離測定方法
JP4419479B2 (ja) 画像処理装置および画像処理プログラム
WO2021145158A1 (fr) Dispositif d'imagerie et procédé de commande de dispositif d'imagerie
JP4885471B2 (ja) プリフォームロッドの屈折率分布測定方法
JP4668863B2 (ja) 撮像装置
JP4466017B2 (ja) 画像処理装置および画像処理プログラム
JP5050282B2 (ja) 合焦検出装置、合焦検出方法および合焦検出プログラム
JP2005265467A (ja) 欠陥検出装置
JP6789810B2 (ja) 画像処理方法、画像処理装置、および、撮像装置
JP7001461B2 (ja) 画像処理装置および画像処理方法、並びに撮像装置
JP2001235319A (ja) シェーディング補正装置及びシェーディング補正方法及び表面検査装置
JP2004222232A (ja) 画像処理装置および画像処理プログラム
JP2017219737A (ja) 撮像装置及びその制御方法、プログラム、記憶媒体

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914647

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20914647

Country of ref document: EP

Kind code of ref document: A1