WO2021145158A1 - Imaging device, and method for controlling imaging device - Google Patents

Imaging device, and method for controlling imaging device

Info

Publication number
WO2021145158A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
unit
image
imaging
interest
Prior art date
Application number
PCT/JP2020/047759
Other languages
French (fr)
Japanese (ja)
Inventor
Masaharu Nagata (永田 政晴)
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2021145158A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B 7/28 Systems for automatic generation of focusing signals
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 11/00 Filters or other obturators specially adapted for photographic purposes
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 15/00 Special procedures for taking photographs; Apparatus therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/68 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects

Definitions

  • The present disclosure relates to an imaging device and a method for controlling the imaging device.
  • An image sensor may have defective pixels caused by manufacturing variations or by radiation, and these can appear as random noise in the imaging results. Most defective pixels are fixed as "white" or "black" dots.
  • The present technology was made in view of this situation, and its object is to provide an imaging device, and a control method therefor, that can remove the influence of defective pixels, random noise, cosmic rays, and the like, making it easy to identify the position of a subject and, in turn, to perform accurate surveying.
  • The imaging device of the embodiment includes: a lens unit that collects incident light from a subject; an imaging unit that captures the incident light collected by the lens unit in an out-of-focus state; and an image analysis unit that analyzes the image captured by the imaging unit, sets the pixel value of a pixel of interest based on the pixel values of a plurality of predetermined peripheral pixels located around the pixel of interest, and generates a result image for identifying the position of the subject.
  • FIG. 1 is a schematic block diagram of the imaging device of the first embodiment.
  • The imaging device 10 includes a lens unit 11, an imaging unit 12, an image storage unit 13, an image analysis unit 14, and a result storage unit 15.
  • The lens unit 11 collects light from the subject and guides it to the imaging unit 12.
  • The imaging unit 12 performs photoelectric conversion and analog-to-digital conversion of the light collected by the lens unit 11, and outputs the result to the image storage unit 13.
  • The light receiving surface of the imaging unit is fixed at a position in front of the image point determined by the lenses constituting the lens unit 11, so that the received image is intentionally blurred when captured.
  • The imaging unit 12 includes a photoelectric conversion unit 21, an analog amplifier unit 22, and an AD conversion unit 23.
  • The photoelectric conversion unit 21 includes an image sensor such as a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor, photoelectrically converts the light incident from the lens unit 11, and outputs the result to the analog amplifier unit 22 as an original imaging signal.
  • The analog amplifier unit 22 amplifies the input original imaging signal and outputs it to the AD conversion unit 23 as an imaging signal.
  • The AD conversion unit 23 performs analog-to-digital conversion of the imaging signal and outputs it to the image storage unit 13 as imaging data.
  • The image storage unit 13 stores the input imaging data in units of captured images.
  • The image analysis unit 14 reads the imaging data of each captured image from the image storage unit 13, analyzes it, and outputs to the result storage unit 15 a result image from which the effects of random noise, cosmic rays, defective pixels, and the like have been removed.
  • The result storage unit 15 stores the result image output by the image analysis unit 14.
  • FIG. 2 is an explanatory diagram of the arrangement relationship between the lens constituting the lens unit and the photoelectric conversion unit.
  • The first focal point FP1 is located on the optical axis of the lens 11A, between the subject OBJ (such as a star) and the lens 11A, a biconvex lens constituting the lens unit 11.
  • The second focal point FP2 is located on the optical axis between the image point 21F0 of the lens 11A and the lens 11A. In the example of FIG. 2, the light receiving surface 21F of the photoelectric conversion unit 21 is positioned closer to the lens 11A than the image point, so the image formed on the light receiving surface 21F is more blurred than the image that would be formed at the image point 21F0.
  • Consequently, the change in light receiving intensity on the light receiving surface 21F is more gradual than the change in light receiving intensity at the image point 21F0.
  • Here, "gradual" means the change in light receiving intensity is not pulse-like; for example, the intensity falls off from its maximum like a normal distribution curve.
  • In contrast, the change in light receiving intensity between adjacent pixels caused by an incident cosmic ray or a pixel defect is a steep pulse.
  • FIG. 3 is an outline processing flowchart of the image analysis unit.
  • In the following description, the pixel of interest in the photoelectric conversion unit 21 is scanned using a 3 × 3 pixel window WD that includes the eight pixels surrounding it, and image analysis is performed on the pixel of interest.
  • First, the image analysis unit 14 sets the parameters X and Y, which specify the pixel Px(X, Y) serving as the pixel of interest, to their initial value 0 (step S11). The parameter X is the row-direction parameter and the parameter Y is the column-direction parameter (see FIG. 4, described later).
  • Next, the image analysis unit 14 determines whether the parameter Y exceeds its maximum value Ymax (step S12), that is, whether all pixels have been processed. If Y exceeds Ymax (step S12; Yes), the image analysis process ends.
  • If Y has not yet exceeded Ymax (step S12; No), the image analysis unit 14 determines whether the parameter X exceeds its maximum value Xmax (step S13), that is, whether the pixels of one line have been processed.
  • If X has not yet exceeded Xmax (step S13; No), the image analysis unit 14 positions the 3 × 3 pixel window WD and acquires the imaging data values of the nine pixels it covers (step S15). The image analysis unit 14 then obtains the minimum of these nine imaging data values (step S16).
  • Next, the image analysis unit 14 compares the minimum imaging data value acquired in step S16 with the predetermined threshold data D (step S17).
  • The threshold data D is the value at or below which imaging data is treated as noise and replaced with black-dot imaging data (the lowest light receiving intensity (luminance) level).
  • If the minimum imaging data value acquired in step S16 is equal to or less than the threshold data D (step S17; minimum of imaging data ≤ D), the imaging data value of the pixel of interest is treated as noise and clamped to the black-dot value (lowest light receiving intensity (luminance) level) (step S18).
  • The clamped imaging data value is output as the pixel value of the pixel of interest and stored in the result storage unit 15 (step S19).
  • If the minimum imaging data value acquired in step S16 exceeds the threshold data D (step S17; minimum of imaging data > D), the imaging data value of the pixel of interest is output unchanged as its pixel value and stored in the result storage unit 15 (step S19).
  • As a result, when the imaging data value of the pixel of interest changes steeply relative to any of the imaging data values of its eight adjacent pixels, that is, when the change is considered attributable to an incident cosmic ray or a pixel defect, the value is treated as noise.
  • On the other hand, when the imaging data value of the pixel of interest changes gradually relative to all the imaging data values of its eight adjacent pixels, that is, when it is considered to correspond to an actual captured image, the value is saved as-is.
  • As a result, the result image stored in the result storage unit 15 is an image from which the effects of incident cosmic rays, pixel defects, and the like have been removed.
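  • As a concrete illustration, the following Python sketch (not from the patent; the function name clamp_noise and the list-of-lists image representation are assumptions made for illustration) implements the flow of FIG. 3: a 3 × 3 window is scanned over every pixel, the minimum of the covered imaging data values is taken, and the pixel of interest is clamped to black when that minimum is at or below the threshold data D. Border pixels are handled here by simply clipping the window, one of the options discussed later.

    BLACK = 0  # lowest light receiving intensity (luminance) level

    def clamp_noise(image, threshold_d):
        """image: 2-D list of pixel values; returns the result image."""
        ymax = len(image) - 1      # maximum value of parameter Y
        xmax = len(image[0]) - 1   # maximum value of parameter X
        result = [row[:] for row in image]
        for y in range(ymax + 1):            # steps S12/S14: line by line
            for x in range(xmax + 1):        # steps S13/S20: along a line
                window = [
                    image[ny][nx]            # step S15: covered pixels
                    for ny in range(max(0, y - 1), min(ymax, y + 1) + 1)
                    for nx in range(max(0, x - 1), min(xmax, x + 1) + 1)
                ]
                if min(window) <= threshold_d:   # steps S16/S17
                    result[y][x] = BLACK         # step S18: clamp as noise
                # otherwise the value is kept as-is and output (step S19)
        return result

  • A blurred star spread over several pixels keeps its window minimum above D and survives, while a one-pixel cosmic-ray spike has dark neighbours, so its window minimum falls to or below D and it is clamped.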
  • FIG. 4 is an explanatory diagram of an example of a captured image.
  • The captured image G1, an image of imaging data 9 pixels high × 20 pixels wide, is used as the example.
  • The captured image G1 contains imaging data PE-N caused by an incident cosmic ray, a pixel defect, or the like, and imaging data PE-1 corresponding to the actual subject.
  • The value of the imaging data PE-N (the brightness of the pixel) changes sharply in light receiving intensity relative to the surrounding pixels.
  • In contrast, because the imaging data PE-1 corresponding to the actual subject captures a blurred image, owing to the arrangement of the lens 11A and the photoelectric conversion unit 21, its light receiving intensity changes gradually relative to the surrounding pixels.
  • FIGS. 5A and 5B are explanatory views of the image scanning and the resulting image.
  • The image analysis unit 14 scans the entire captured image with the 3 × 3 pixel window WD, moving the pixel of interest from left to right and from top to bottom, and acquires the imaging data values of the nine covered pixels for each pixel of interest.
  • When the pixel of interest corresponds to the imaging data PE-N, as shown in FIG. 5A, the light receiving intensity of PE-N is steep relative to the eight surrounding pixels.
  • In this case, the minimum imaging data value lies in one of the eight peripheral pixels.
  • The threshold data D is set between the second and third light-receiving intensity ranges from the bottom of the five-step scale shown in FIG. 5A. Since the minimum value among the eight surrounding pixels (in the first range from the bottom) is equal to or less than the threshold data D, the light receiving intensity of the pixel of interest corresponding to PE-N is clamped to the lowest level.
  • When the pixel of interest corresponds to the imaging data PE-1, the minimum imaging data value lies in one of the four diagonally located peripheral pixels.
  • Since the threshold data D is set between the second and third light-receiving intensity ranges from the bottom, and the minimum value among the four diagonal pixels (in the third range from the bottom) exceeds the threshold data D, the light receiving intensity of the pixel of interest corresponding to PE-1 remains in the third range from the bottom.
  • In the above description the window is 3 × 3 pixels, but it can be enlarged to 5 × 5 pixels, 7 × 7 pixels, or the like according to the image resolution.
  • When the subject is sufficiently larger than the window, the image can instead be reduced and resized to match the window size.
  • The number of scans is then the same as for an image with few pixels, so the processing can be simplified or sped up.
  • FIG. 6 is an explanatory diagram of the noise removal processing of a modified example of the first embodiment.
  • In the first embodiment, a grayscale image is used as the captured image, and the light receiving intensity of the pixel of interest is set by comparing the minimum value among its peripheral pixels with the predetermined value D.
  • In this modified example, a binarized image is generated from the captured image using a predetermined threshold, and the pixel of interest is evaluated from its peripheral pixels (the surrounding eight pixels in the case of the 3 × 3 pixel window described above).
  • As before, the image analysis unit 14 scans the entire captured image with the 3 × 3 pixel window WD, moving the pixel of interest from left to right and from top to bottom, and acquires the imaging data values of the nine covered pixels for each pixel of interest.
  • FIG. 6 shows both the case where the pixel of interest corresponds to the imaging data PE-N and the case where it corresponds to the imaging data PE-1.
  • FIG. 7 is an explanatory diagram of the result image of the modified example of the first embodiment.
  • FIG. 8 is an explanatory diagram of the case where the origin of the image is scanned with the 3 × 3 pixel window.
  • The description so far has assumed that pixels to be processed exist at every position of the 3 × 3 pixel window WD, but when scanning the periphery of the captured image, part of the window WD may contain no pixel to be processed.
  • In the example of FIG. 8, no processing target pixel exists at five of the nine positions of the 3 × 3 pixel window WD.
  • In such cases, processing is performed by one of the following methods, as sketched in the code after this list: (1) the positions where no processing target pixel exists are ignored and only the positions where one exists are processed; in the example of FIG. 8, the evaluation uses only the four existing pixels; (2) the missing positions are given the value 0 and processing proceeds as usual; (3) the pixel values of pixels adjacent in the column or row direction are copied into the missing positions and processing proceeds with the resulting window, or the value is set to 0 as in (2).
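  • The following sketch shows the three border strategies as one hypothetical helper (the name window_values and the mode keyword are illustrative assumptions, not from the patent); it collects the 3 × 3 window values for a pixel near the image border.

    def window_values(image, x, y, mode="ignore"):
        """Collect 3 x 3 window values around (x, y) at the image border.

        mode "ignore": method (1), keep only positions that exist;
        mode "zero":   method (2), substitute 0 for missing positions;
        mode "copy":   method (3), copy the adjacent existing pixel.
        """
        h, w = len(image), len(image[0])
        values = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    values.append(image[ny][nx])
                elif mode == "zero":
                    values.append(0)
                elif mode == "copy":
                    # clamp the coordinates back into the image, copying
                    # the pixel adjacent in the row or column direction
                    cy = min(max(ny, 0), h - 1)
                    cx = min(max(nx, 0), w - 1)
                    values.append(image[cy][cx])
                # with mode "ignore", missing positions are skipped
        return values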
  • FIG. 9 is an outline processing flowchart of the modified example of the first embodiment. Here too, the pixel of interest in the photoelectric conversion unit 21 is scanned using a 3 × 3 pixel window that includes the eight surrounding pixels, and image analysis is performed on the pixel of interest.
  • First, the imaging device 10 takes a picture in an out-of-focus state (step S21).
  • Next, the image analysis unit 14 binarizes the imaging data of the obtained captured image to generate a binarized image (step S22); a value below the binarization threshold is set to "0", corresponding to black.
  • Next, the image analysis unit 14 sets the parameters X and Y, which specify the pixel Px(X, Y) serving as the pixel of interest, to their initial value 0 (step S23).
  • Next, the image analysis unit 14 determines whether the parameter Y exceeds its maximum value Ymax (step S24), that is, whether all pixels have been processed.
  • If Y exceeds Ymax (step S24; Yes), processing is complete for all pixels of the captured image, so the obtained image is stored in the result storage unit 15 (step S30) and the image analysis process ends.
  • If Y has not yet exceeded Ymax (step S24; No), it is determined whether the parameter X exceeds its maximum value Xmax (step S25), that is, whether the pixels of one line have been processed.
  • If X has not yet exceeded Xmax (step S25; No), the image analysis unit 14 positions the 3 × 3 pixel window and evaluates the pixel of interest from its peripheral pixels in the binarized image.
  • As a result, the result image saved in the result storage unit 15 shows the position of the original target subject (a star or the like), with the influence of incident cosmic rays, pixel defects, and the like removed.
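  • The per-pixel rule of this modified example is only partly stated above, so the following Python sketch is one plausible reading, consistent with the grayscale version: after binarization, a white pixel whose 3 × 3 neighbourhood contains any black pixel is set to black (binary erosion). An isolated bright noise dot then disappears, while a blurred multi-pixel subject merely shrinks. The function names are illustrative assumptions.

    def binarize(image, threshold):
        # step S22: values above the threshold become 1 (white),
        # the rest become "0", corresponding to black
        return [[1 if v > threshold else 0 for v in row] for row in image]

    def erode(binary):
        """Clear any white pixel whose 3 x 3 neighbourhood contains black."""
        h, w = len(binary), len(binary[0])
        result = [row[:] for row in binary]
        for y in range(h):
            for x in range(w):
                neighbours = [
                    binary[ny][nx]
                    for ny in range(max(0, y - 1), min(h - 1, y + 1) + 1)
                    for nx in range(max(0, x - 1), min(w - 1, x + 1) + 1)
                ]
                if min(neighbours) == 0:  # any black pixel in the window
                    result[y][x] = 0
        return result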
  • FIG. 10 is a schematic block diagram of the imaging device of the second embodiment.
  • In FIG. 10, the same parts as in the first embodiment of FIG. 1 are given the same reference numerals.
  • The second embodiment differs from the first in that it includes an imaging control unit 31, which drives the imaging unit 12 along the optical axis of the lens 11A of the lens unit 11 and can thereby vary the distance between the light receiving surface 21F of the photoelectric conversion unit 21 and the image point 21F0.
  • Because the imaging surface can be shifted, the focus can be offset as needed to blur the edges of the obtained image; that is, the imaging data values, and ultimately the image, can be smoothed as required.
  • Here, smoothing the imaging data values [the image] means, for example, turning a pulse-like change in the imaging data into a gentle, Gaussian-distribution-like change (the same applies hereinafter).
  • Conversely, by making the light receiving surface 21F of the photoelectric conversion unit 21 coincide with the image point 21F0, the imaging control unit 31 allows the imaging device 10A to acquire a focused image as the captured image, in the same way as an ordinary camera.
  • FIG. 11 is an operation explanatory view of the second embodiment. By changing the distance of the light receiving surface 21F of the photoelectric conversion unit 21 from the image point 21F0 with the imaging control unit 31, the degree of blurring of the image on the light receiving surface 21F can be varied, and an optimal captured image can be obtained under various imaging conditions.
  • For example, the distance of the light receiving surface 21F from the image point 21F0 can be increased so that the light of the subject is received across a plurality of pixels, and noise can then be removed by the same processing as above. When imaging is performed for viewing or the like, a focused image can be obtained by the same control as in a conventional imaging device.
  • FIG. 12 is a schematic block diagram of the imaging device of the third embodiment.
  • In FIG. 12, the same parts as in the first embodiment of FIG. 1 are given the same reference numerals.
  • The third embodiment differs from the first in that it includes a lens control unit 32, which drives the lens unit 11 along the optical axis of the lens 11A and can thereby vary the distance between the light receiving surface 21F of the photoelectric conversion unit 21 and the image point 21F0.
  • Because the lens unit can be shifted along the optical axis, the focus can be offset as needed to blur the edges of the obtained image; that is, the imaging data values [the image] can be smoothed.
  • Conversely, by making the light receiving surface 21F of the photoelectric conversion unit 21 coincide with the image point 21F0, the lens control unit 32 allows the imaging device 10B to acquire a focused image as the captured image, in the same way as an ordinary camera.
  • By controlling the distance of the light receiving surface 21F from the image point 21F0 with the lens control unit 32, the degree of blurring of the image on the light receiving surface 21F can be varied, and an optimal captured image can be obtained under various imaging conditions.
  • When imaging is performed for viewing or the like, a focused image can be obtained by the same control as in a conventional imaging device.
  • FIGS. 13A and 13B are explanatory views of the fourth embodiment, FIG. 14 is a flowchart of its outline processing, and FIG. 15 is an explanatory view of its window.
  • The fourth embodiment differs from the above embodiments in that the light receiving intensity of the pixel of interest is set to the light receiving intensity of the peripheral pixel with the minimum intensity.
  • Here too, a 3 × 3 pixel window including the eight surrounding pixels is scanned and image analysis is performed on the pixel of interest.
  • First, the imaging device 10 takes a picture in an out-of-focus state (step S31).
  • Next, the image analysis unit 14 scans the 3 × 3 pixel window (step S32) and sets the imaging data value of the pixel of interest to the smallest value (minimum light receiving intensity) among the imaging data of the eight peripheral pixels around it (step S33).
  • That is, the imaging data value C of the pixel of interest is set to the minimum among the imaging data of its eight peripheral pixels. The image analysis unit 14 then repeats steps S32 and S33 with every pixel of the captured image in turn as the pixel of interest.
  • As a result, when the imaging data value of the pixel of interest changes steeply relative to any of the imaging data values of its eight adjacent pixels, that is, when the change is attributable to, for example, an incident cosmic ray or a pixel defect, the value is regarded as having low light receiving intensity and is treated as noise.
  • Consequently, by simple arithmetic processing, the result image saved in the result storage unit 15 shows the position of the original target subject (a star or the like), with the influence of incident cosmic rays, pixel defects, and the like removed.
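  • A minimal Python sketch of steps S32 and S33 follows (the function name is an illustrative assumption; note that, unlike the first embodiment, the minimum is taken over the eight peripheral pixels only, excluding the pixel of interest itself):

    def min_of_peripherals(image):
        """Grayscale erosion: replace each pixel with the minimum of its
        eight peripheral pixels (steps S32 and S33 of FIG. 14)."""
        h, w = len(image), len(image[0])
        result = [row[:] for row in image]
        for y in range(h):
            for x in range(w):
                peripherals = [
                    image[ny][nx]
                    for ny in range(max(0, y - 1), min(h - 1, y + 1) + 1)
                    for nx in range(max(0, x - 1), min(w - 1, x + 1) + 1)
                    if (ny, nx) != (y, x)   # exclude the pixel of interest
                ]
                if peripherals:                      # empty only for a 1 x 1 image
                    result[y][x] = min(peripherals)  # step S33
        return result

  • A one-pixel bright spike has dark peripheral pixels and is replaced by a dark value, while a blurred subject covering several pixels keeps bright peripheral pixels and therefore survives, with no threshold needed.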
  • In the above description, as in the first embodiment, the light receiving surface 21F of the photoelectric conversion unit 21 is set at a position shifted from the image point (specifically, a position on the lens 11A side), and the edges of the obtained image are blurred (the imaging data values [the image] are smoothed).
  • However, as in the second embodiment, the imaging surface may be made shiftable so that the focus can be offset as needed and the edges of the obtained image blurred (the imaging data values [the image] smoothed).
  • Alternatively, as in the third embodiment, the lens unit may be made shiftable along the optical axis so that the focus can be offset as needed and the edges of the obtained image blurred (the imaging data values [the image] smoothed).
  • FIG. 16 is a schematic block diagram of the imaging device of the fifth embodiment.
  • In FIG. 16, the same parts as in the first embodiment of FIG. 1 are given the same reference numerals.
  • The fifth embodiment differs from the first in that the light receiving surface of the photoelectric conversion unit 21 constituting the imaging unit 12 is arranged at the image point of the lens 11A, as in an ordinary imaging device, and
  • a filter unit 41 is provided in front of the lens unit 11 to blur the edges of the obtained image by diffusing or refracting the incident light (to smooth the imaging data values [the image]).
  • FIG. 17 is a configuration explanatory view of the first aspect of the fifth embodiment.
  • In the first aspect, an optical low-pass filter LPF functioning as the filter unit 41 is placed on the optical axis of the lens 11A, between the lens 11A and the first focal point FP1.
  • The light receiving surface 21F1 of the photoelectric conversion unit 21 is arranged to coincide with the image point. In this configuration, with the optical low-pass filter LPF removed, the light receiving surface 21F1 coincides with the image point of the lens 11A and the device is in a focused state.
  • When the optical low-pass filter LPF is inserted into the optical path, the incident light is diffused, which has the effect of blurring the edges of the subject.
  • As the optical low-pass filter LPF, a known material such as frosted glass or anomalous-refraction glass may be used.
  • Consequently, the result image stored in the result storage unit 15 can, by simple arithmetic processing, be freed of the influence of incident cosmic rays, pixel defects, and the like, and therefore shows the position of the original target subject (a star or the like).
  • In the first aspect described above, the edges of the subject are blurred with the optical low-pass filter LPF to smooth the image, and the imaging data corresponding to the pixel of interest are then processed as described above.
  • FIG. 18 is a configuration explanatory view of the second aspect of the fifth embodiment.
  • In the second aspect, a cross filter CF functioning as the filter unit 41 is placed on the optical axis of the lens 11A, between the lens 11A and the first focal point FP1.
  • The cross filter CF can be realized by a known technique such as engraving thin-line grooves on a glass surface.
  • The light receiving surface 21F1 of the photoelectric conversion unit 21 is arranged to coincide with the image point. In this configuration too, with the cross filter CF removed, the light receiving surface 21F1 coincides with the image point of the lens 11A and the device is in a focused state.
  • FIG. 19 is an explanatory diagram of an example of the captured image of the second aspect of the fifth embodiment. As in FIG. 4, the captured image G21 contains imaging data PE-N caused by an incident cosmic ray, a pixel defect, or the like, and imaging data PE-1 corresponding to the actual subject.
  • FIG. 20 is an explanatory view of the window in the second aspect of the fifth embodiment.
  • As the window used for scanning in the second aspect of the fifth embodiment, a cross-shaped window WD2 is used, as shown in FIG. 20.
  • FIG. 21 is an outline processing flowchart of the image analysis unit in the second aspect of the fifth embodiment.
  • First, the image analysis unit 14 sets the parameters X and Y, which specify the pixel Px(X, Y) serving as the pixel of interest, to their initial value 0 (step S41).
  • As before, the parameter X is the row-direction parameter and the parameter Y is the column-direction parameter (see FIG. 4).
  • Next, the image analysis unit 14 determines whether the parameter Y exceeds its maximum value Ymax (step S42), that is, whether all pixels have been processed. If Y exceeds Ymax (step S42; Yes), the image analysis process ends.
  • If Y has not yet exceeded Ymax (step S42; No), it is determined whether the parameter X exceeds its maximum value Xmax (step S43), that is, whether the pixels of one line have been processed.
  • If X has not yet exceeded Xmax (step S43; No), the image analysis unit 14 positions the cross-shaped window WD2 and acquires the imaging data values of the five pixels it covers (step S45). The image analysis unit 14 then searches for and acquires the minimum of the five imaging data values corresponding to the window WD2 (step S46).
  • Next, the image analysis unit 14 compares the minimum imaging data value acquired in step S46 with the predetermined threshold data D (step S47).
  • If the minimum imaging data value acquired in step S46 is equal to or less than the threshold data D (step S47; No: minimum of imaging data ≤ D), the imaging data value of the pixel of interest is treated as noise and clamped to the black-dot value (lowest light receiving intensity (luminance) level) (step S48).
  • The clamped imaging data value is output as the pixel value of the pixel of interest and stored in the result storage unit 15 (step S49).
  • If the minimum imaging data value acquired in step S46 exceeds the threshold data D (step S47; Yes: minimum of imaging data > D), the imaging data value of the pixel of interest is output unchanged as its pixel value and stored in the result storage unit 15 (step S49).
  • As a result, when the imaging data value of the pixel of interest changes steeply relative to any of the imaging data values of its four adjacent pixels, that is, when the change is considered attributable to an incident cosmic ray or a pixel defect, the value is treated as noise.
  • On the other hand, when the imaging data value of the pixel of interest changes gradually relative to all the imaging data values of its four adjacent pixels, that is, when it is considered to correspond to an actual captured image, the value is saved as-is.
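  • Relative to the first embodiment, only the set of covered positions changes: the cross-shaped window WD2 covers the pixel of interest and its four orthogonal neighbours. A hedged Python sketch of the inner step of FIG. 21 follows (names are illustrative assumptions; border positions are simply skipped, method (1) described earlier):

    # Offsets (dy, dx) of the cross-shaped window WD2: the pixel of
    # interest plus its four orthogonal neighbours (five pixels in all).
    CROSS_OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

    def cross_minimum(image, x, y):
        """Minimum of the five imaging data values covered by WD2
        (steps S45 and S46); out-of-image positions are skipped."""
        h, w = len(image), len(image[0])
        return min(
            image[y + dy][x + dx]
            for dy, dx in CROSS_OFFSETS
            if 0 <= y + dy < h and 0 <= x + dx < w
        )

  • The comparison with the threshold data D and the clamping then proceed exactly as in steps S47 to S49.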
  • FIG. 22 is an explanatory diagram of the result image in the second aspect of the fifth embodiment.
  • As shown in FIG. 22, the result image stored in the result storage unit 15 is an image from which the effects of incident cosmic rays, pixel defects, and the like have been removed.
  • The image analysis unit 14 can thus easily distinguish between the captured image of the subject and a bright spot due to noise.
  • FIG. 23 is an explanatory diagram of the case where the origin of the image is scanned with the five-pixel cross-shaped window.
  • In the example of FIG. 23, no processing target pixel exists at two of the positions of the cross-shaped window WD2.
  • In such cases, processing is performed by one of the following methods: (1) the positions where no processing target pixel exists are ignored and only the positions where one exists are processed; in the example of FIG. 23, the evaluation uses only the three existing pixels; (2) the missing positions are given the value 0 and processing proceeds as usual; (3) the pixel values of pixels adjacent in the column or row direction are copied; in the example of FIG. 23, the pixel value of the pixel Px(0, 0) is copied into each missing position and the evaluation is then performed.
  • In the above description the cross-shaped window WD2 has a five-pixel configuration, but it can be enlarged according to the image resolution, for example to a similarly shaped 9-pixel or 13-pixel configuration, or larger, with the vertical and horizontal arm lengths increased. As a modification of the cross shape, a configuration with a square 9-pixel center to which one pixel is added above, below, to the left, and to the right (13 pixels in total) is also possible.
  • When the size of the subject is sufficiently larger than the window size, the image can be reduced and resized to match the window size.
  • The number of scans is then the same as for an image with few pixels, so the processing can be simplified or sped up.
  • FIG. 24 is a schematic block diagram of the imaging device according to a modified example of the fifth embodiment.
  • In FIG. 24, the same parts as in the fifth embodiment of FIG. 16 are given the same reference numerals.
  • The modified example of the fifth embodiment differs from the fifth embodiment in that it includes a filter control unit 42, which drives the filter unit 41 so that it can be inserted into and removed from the optical path of the incident light along the optical axis of the lens 11A of the lens unit 11.
  • With the filter unit 41 moved out of the optical path of the lens 11A by the filter control unit 42, the imaging device 40A can acquire a focused image as the captured image, as with an ordinary camera.
  • In the above description the filter control unit 42 inserts the filter unit 41 into, and removes it from, the optical path of the incident light, but it is also possible to switch between a measurement filter (for position measurement) such as the optical low-pass filter LPF or the cross filter CF and an ordinary optical filter (for example, an infrared filter, an ND filter, or the like) and insert the selected filter into the optical path.
  • As described above, according to each embodiment, the influence of externally incident cosmic rays and of defective pixels of the image sensor can be removed by simple processing, and an image of the intended subject can be obtained.
  • In the second aspect of the fifth embodiment, adjustment of the size of the window WD or WD2 and adjustment of the image size were described, but the same can likewise be applied to the modified example of the first embodiment, the second embodiment, the third embodiment, the fourth embodiment, and the first aspect of the fifth embodiment.
  • Similarly, the case where the target pixel exists in every region of the window WD or WD2 was described, but the methods (1) to (3) described above can likewise be applied when part of the window WD or WD2 contains no processing target pixel. With such a configuration, processing can be performed reliably even when scanning the periphery of the captured image.
  • In the above description, the imaging surface is made shiftable and the focus offset as necessary, or the lens unit is made shiftable along the optical axis and the focus offset as necessary, so that the edges of the obtained image are blurred (the image is smoothed).
  • It is also possible to blur the edges of the subject, and thus smooth the image, by setting the imaging data value of the pixel of interest to the minimum value among the imaging data of the peripheral pixels around it.
  • The present technology can also have the following configurations.
  • An imaging device comprising: a lens unit that collects incident light from a subject; an imaging unit that captures the incident light collected by the lens unit in an out-of-focus state; and an image analysis unit that analyzes the image captured by the imaging unit and sets the pixel value of a pixel of interest based on the pixel values of a plurality of predetermined peripheral pixels located around the pixel of interest, to generate a result image for identifying the position of the subject.
  • The image analysis unit sets the pixel value of the pixel of interest to the minimum value.
  • The relative arrangement position of the lens unit and the imaging unit is set to a position that allows the imaging unit to capture an image in an out-of-focus state.
  • The imaging device according to any one of (1) to (4).
  • A lens control unit is provided that drives the lens unit in the optical axis direction and sets the relative arrangement position of the lens unit and the imaging unit to a position that allows the imaging unit to capture an image in an out-of-focus state.
  • An imaging control unit is provided that drives the imaging unit in the optical axis direction and sets the relative arrangement position of the lens unit and the imaging unit to a position that allows the imaging unit to capture an image in an out-of-focus state.
  • A filter unit is provided that is inserted into the optical path between the lens unit and the subject to smooth the obtained image.
  • The imaging device according to any one of (1) to (4).
  • The filter unit is configured as an optical low-pass filter.
  • A filter unit is provided that is configured as a cross filter inserted into the optical path between the lens unit and the subject to generate striations.
  • A filter unit is provided that is inserted into the optical path between the lens unit and the subject so that the incident light having passed through the lens unit is out of focus.
  • The imaging device according to any one of (1) to (4).
  • In the process of setting the pixel value of the pixel of interest, when the minimum pixel value among the plurality of predetermined peripheral pixels exceeds a predetermined threshold value, the pixel value of the pixel of interest is set to the minimum value.
  • The process of setting the pixel value of the pixel of interest is a process of setting the minimum pixel value among the plurality of predetermined peripheral pixels as the pixel value of the pixel of interest.
  • The process of setting the pixel value of the pixel of interest includes a process of binarizing the pixel values constituting the captured image into high light receiving intensity and low light receiving intensity.
  • 10 Imaging device, 11 Lens unit, 11A Lens, 12 Imaging unit, 13 Image storage unit, 14 Image analysis unit, 15 Result storage unit, 21 Photoelectric conversion unit, 21F0 Image point, 21F, 21F1 Light receiving surface, 22 Analog amplifier unit, 23 AD conversion unit, 31 Imaging control unit, 32 Lens control unit, 41 Filter unit, 42 Filter control unit, CF Cross filter, D Threshold data, LPF Optical low-pass filter, OBJ Subject, PE-1 Imaging data, PE-N Imaging data (noise), WD, WD2 Window

Abstract

An imaging device according to an embodiment is provided with: a lens unit for collecting incident light from a subject; an imaging unit for imaging, in a non-focused state, the incident light collected by the lens unit; and an image analyzing unit which analyzes an image captured by the imaging unit and which, on the basis of the pixel values of a plurality of predetermined surrounding pixels positioned around a pixel of interest, sets a pixel value for the pixel of interest and generates a result image for identifying the position of the subject.

Description

Imaging device and method for controlling imaging device
The present disclosure relates to an imaging device and a method for controlling the imaging device.
An image sensor may have defective pixels caused by manufacturing variations or by radiation, and these can appear as random noise in the imaging results. Most defective pixels are fixed as "white" or "black" dots.
When imaging in a dark place, the usual approaches have been a long exposure or amplification of the minute signal (raising the gain) with an amplifier.
With a long exposure, if the subject is moving or the imaging device is not firmly fixed, subject blur or camera shake occurs and the intended result is not obtained. This problem can be addressed by shortening the exposure time sufficiently and amplifying the minute signal with an amplifier or the like instead.
However, the noise components of the amplifier itself and noise from the power supply circuit are then amplified at the same time, so random noise is recorded in the imaging result.
When imaging in outer space, cosmic rays (radiation) and the like can increase the number of defective pixels or cause pixels to malfunction temporarily, and the result is captured as bright-spot noise; for example, when stars are photographed from a satellite, the stars cannot be distinguished from the noise.
Various techniques have also been proposed for correcting noise caused by defective pixels (see, for example, Patent Documents 1 to 3).
Patent Document 1: JP 2015-035717 A; Patent Document 2: JP 2001-128068 A; Patent Document 3: JP 2019-500761 A
However, the above prior art aims to correct "defective pixels" occurring on the image sensor, and therefore cannot deal with random noise generated when shooting in a dark place or with bright spots that occur suddenly under cosmic-ray irradiation.
In particular, in photographing the starry sky for surveying purposes or photographing markers in a dark place, random noise and cosmic rays prevented accurate surveying.
The present technology was made in view of this situation, and its object is to provide an imaging device, and a control method therefor, that can remove the influence of defective pixels, random noise, cosmic rays, and the like, making it easy to identify the position of a subject and, in turn, to perform accurate surveying.
The imaging device of the embodiment includes: a lens unit that collects incident light from a subject; an imaging unit that captures the incident light collected by the lens unit in an out-of-focus state; and an image analysis unit that analyzes the image captured by the imaging unit, sets the pixel value of a pixel of interest based on the pixel values of a plurality of predetermined peripheral pixels located around the pixel of interest, and generates a result image for identifying the position of the subject.
FIG. 1 is a schematic block diagram of the imaging device of the first embodiment. FIG. 2 is an explanatory diagram of the arrangement relationship between the lens constituting the lens unit and the photoelectric conversion unit. FIG. 3 is an outline processing flowchart of the image analysis unit. FIG. 4 is an explanatory diagram of an example of a captured image. FIGS. 5A and 5B are explanatory diagrams of image scanning and the resulting image. FIG. 6 is an explanatory diagram of the noise removal processing of the modified example of the first embodiment. FIG. 7 is an explanatory diagram of the result image of the modified example of the first embodiment. FIG. 8 is an explanatory diagram of the case where the image origin is scanned with a 3 × 3 pixel window. FIG. 9 is an outline processing flowchart of the modified example of the first embodiment. FIG. 10 is a schematic block diagram of the imaging device of the second embodiment. FIG. 11 is an operation explanatory diagram of the second embodiment. FIG. 12 is a schematic block diagram of the imaging device of the third embodiment. FIGS. 13A and 13B are explanatory diagrams of the fourth embodiment. FIG. 14 is a flowchart of the outline processing of the fourth embodiment. FIG. 15 is an explanatory diagram of the window of the fourth embodiment. FIG. 16 is a schematic block diagram of the imaging device of the fifth embodiment. FIG. 17 is a configuration explanatory diagram of the first aspect of the fifth embodiment. FIG. 18 is a configuration explanatory diagram of the second aspect of the fifth embodiment. FIG. 19 is an explanatory diagram of an example of the captured image of the second aspect of the fifth embodiment. FIG. 20 is an explanatory diagram of the window in the second aspect of the fifth embodiment. FIG. 21 is an outline processing flowchart of the image analysis unit in the second aspect of the fifth embodiment. FIG. 22 is an explanatory diagram of the result image in the second aspect of the fifth embodiment. FIG. 23 is an explanatory diagram of the case where the image origin is scanned with a five-pixel cross-shaped window. FIG. 24 is a schematic block diagram of the imaging device of the modified example of the fifth embodiment.
Hereinafter, embodiments will be described in detail with reference to the drawings.
[1] First Embodiment
FIG. 1 is a schematic block diagram of the imaging device of the first embodiment.
The imaging device 10 includes a lens unit 11, an imaging unit 12, an image storage unit 13, an image analysis unit 14, and a result storage unit 15.
The lens unit 11 collects light from the subject and guides it to the imaging unit 12.
The imaging unit 12 performs photoelectric conversion and analog-to-digital conversion of the light collected by the lens unit 11, and outputs the result to the image storage unit 13. In this case, the light receiving surface of the imaging unit is fixed at a position in front of the image point determined by the lenses constituting the lens unit 11, so that the received image is intentionally blurred when captured.
In this case, the imaging unit 12 includes a photoelectric conversion unit 21, an analog amplifier unit 22, and an AD conversion unit 23.
The photoelectric conversion unit 21 includes an image sensor such as a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor, photoelectrically converts the light incident from the lens unit 11, and outputs the result to the analog amplifier unit 22 as an original imaging signal.
The analog amplifier unit 22 amplifies the input original imaging signal and outputs it to the AD conversion unit 23 as an imaging signal.
The AD conversion unit 23 performs analog-to-digital conversion of the imaging signal and outputs it to the image storage unit 13 as imaging data.
The image storage unit 13 stores the input imaging data in units of captured images.
The image analysis unit 14 reads the imaging data of each captured image from the image storage unit 13, analyzes it, and outputs to the result storage unit 15 a result image from which the effects of random noise, cosmic rays, defective pixels, and the like have been removed.
The result storage unit 15 stores the result image output by the image analysis unit 14.
FIG. 2 is an explanatory diagram of the arrangement relationship between the lens constituting the lens unit and the photoelectric conversion unit.
As shown in FIG. 2, the first focal point FP1 is located on the optical axis of the lens 11A, between the subject OBJ (such as a star) and the lens 11A, a biconvex lens constituting the lens unit 11.
Similarly, the second focal point FP2 is located on the optical axis of the lens 11A, between the image point 21F0 of the lens 11A and the lens 11A.
In the example of FIG. 2, the light receiving surface 21F of the photoelectric conversion unit 21 is positioned closer to the lens 11A than the image point. Therefore, the image formed on the light receiving surface 21F is more blurred than the image formed at the image point 21F0.
Therefore, the change in light receiving intensity on the light receiving surface 21F is more gradual than the change in light receiving intensity at the image point 21F0. Here, "gradual" means the change in light receiving intensity is not pulse-like; for example, the intensity falls off from its maximum like a normal distribution curve.
In contrast, the change in light receiving intensity between adjacent pixels caused by an incident cosmic ray or a pixel defect is a steep pulse.
Here, the image analysis process will be described in detail.
FIG. 3 is an outline processing flowchart of the image analysis unit.
In the following description, the pixel of interest in the photoelectric conversion unit 21 is scanned using a 3 × 3 pixel window WD that includes the eight pixels surrounding it, and image analysis is performed on the pixel of interest.
First, the image analysis unit 14 sets the parameters X and Y, which specify the pixel Px(X, Y) serving as the pixel of interest, to their initial value 0 (step S11). In this embodiment, the parameter X is the row-direction parameter and the parameter Y is the column-direction parameter (see FIG. 4, described later).
Next, the image analysis unit 14 determines whether the parameter Y exceeds its maximum value Ymax (step S12), that is, whether all pixels have been processed.
In the determination of step S12, if the parameter Y exceeds the maximum value Ymax (step S12; Yes), the image analysis process ends.
In the determination of step S12, if the parameter Y has not yet exceeded the maximum value Ymax (step S12; No), it is determined whether the parameter X exceeds its maximum value Xmax (step S13), that is, whether the pixels of one line have been processed.
In the determination of step S13, if the parameter X exceeds the maximum value Xmax (step S13; Yes), the image analysis unit 14 sets Y = Y + 1 (step S14) and returns to step S12.
If the parameter X has not yet exceeded the maximum value Xmax (step S13; No), the image analysis unit 14 positions the 3 × 3 pixel window WD and acquires the imaging data values of the nine pixels it covers (step S15).
Subsequently, the image analysis unit 14 obtains the minimum of the nine imaging data values corresponding to the 3 × 3 pixel window (step S16).
Subsequently, the image analysis unit 14 compares the minimum value of the imaging data acquired in step S16 with predetermined threshold data D (step S17).
Here, the threshold data D is a reference value: if an imaging data value is smaller than D, that value is treated as noise and handled as the imaging data of a black dot (the lowest light receiving intensity (luminance) level).
As a result, in the comparison of step S17, if the minimum value of the imaging data acquired in step S16 is equal to or less than the threshold data D (step S17; minimum imaging data value ≤ D), the imaging data value of the pixel of interest is treated as noise and is clamped so as to be handled as the imaging data of a black dot (the lowest light receiving intensity (luminance) level) (step S18).
Then, the clamped imaging data value is output as the pixel value of the pixel of interest and is stored in the result storage unit 15 (step S19).
On the other hand, if, in the comparison of step S17, the minimum value of the imaging data acquired in step S16 exceeds the threshold data D (step S17; minimum imaging data value > D), the imaging data value of the pixel of interest is output as-is as the pixel value of the pixel of interest and is stored in the result storage unit 15 (step S19).
Subsequently, the image analysis unit 14 sets X = X + 1 (step S20) and returns the process to step S13.
As a result, when the imaging data value corresponding to the pixel of interest changes steeply with respect to any of the imaging data values of the eight adjacent pixels, that is, when the change is considered to be caused by the incidence of a cosmic ray or by a pixel defect, the imaging data value corresponding to the pixel of interest is treated as noise.
On the other hand, when the imaging data value corresponding to the pixel of interest changes gently with respect to all of the imaging data values of the eight adjacent pixels, that is, when it is considered to be imaging data corresponding to the actual captured image, the imaging data value corresponding to the pixel of interest is stored as-is.
As a result, the result image stored in the result storage unit 15 is an image from which the effects of cosmic ray incidence, pixel defects, and the like have been removed.
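For reference, the processing of steps S11 to S20 can be sketched as follows. This is a minimal illustration in Python, assuming the captured image is held as a grayscale NumPy array; the names (remove_noise_min_clamp, threshold_d) are chosen for this sketch and do not appear in the embodiment. Pixels at the image periphery are handled here by simply shrinking the window, corresponding to method (1) described later with reference to FIG. 8.

```python
import numpy as np

def remove_noise_min_clamp(image, threshold_d, low_value=0):
    """For each pixel of interest, take the minimum over its 3x3 window WD;
    if that minimum is <= threshold_d (step S17), clamp the pixel to the
    black-dot level (step S18), otherwise keep its original value."""
    h, w = image.shape
    result = image.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            window_min = image[y0:y1, x0:x1].min()  # step S16: minimum of the window
            if window_min <= threshold_d:
                result[y, x] = low_value  # treated as noise: clamp to black dot
    return result
```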
Next, acquisition of a more specific result image will be described with reference to the drawings.
FIG. 4 is an explanatory diagram of an example of a captured image.
In FIG. 4, for ease of understanding, a captured image G1, which is an image of the imaging data corresponding to 9 vertical pixels × 20 horizontal pixels, will be described as an example.
The captured image G1 contains imaging data PE-N caused by cosmic ray incidence, pixel defects, or the like, and imaging data PE-1 corresponding to the actual subject.
As shown in FIG. 4, it can be seen that the value of the imaging data PE-N (the brightness of the pixel) changes steeply in light receiving intensity with respect to the surrounding pixels.
On the other hand, since the imaging data PE-1 corresponding to the actual subject is captured as a blurred image owing to the arrangement relationship between the lens 11A and the photoelectric conversion unit 21, it can be seen that the change in light receiving intensity with respect to the surrounding pixels is gentle.
FIGS. 5A and 5B are explanatory diagrams of the image scanning and the resulting image.
First, the image analysis unit 14 scans the entire captured image with the 3 × 3 pixel window WD, changing the pixel of interest sequentially, for example, from the left side to the right side and from the upper side to the lower side, and acquires, for each pixel of interest, the values of the imaging data corresponding to the nine pixels.
More specifically, when the pixel of interest is the pixel corresponding to the imaging data PE-N, as shown in FIG. 5A, the light receiving intensity of the imaging data PE-N changes steeply with respect to the eight surrounding pixels, and the minimum value among the nine pieces of imaging data corresponding to the 3 × 3 pixel window WD is that of one of the eight surrounding pixels.
In this case, assuming that the threshold data D is set between the second light receiving intensity range from the bottom and the third light receiving intensity range from the bottom of the five light receiving intensity ranges shown in FIG. 4, the minimum value among the eight surrounding pixels (the first light receiving intensity range from the bottom) is equal to or less than the threshold data D, so the light receiving intensity of the pixel of interest corresponding to the imaging data PE-N is clamped to the lowest light receiving intensity.
On the other hand, when the pixel of interest is the pixel corresponding to the imaging data PE-1, as shown in FIG. 5A, the light receiving intensity of the imaging data PE-1 changes gently with respect to the eight surrounding pixels, and the minimum value among the nine pieces of imaging data corresponding to the 3 × 3 pixel window WD is that of one of the four surrounding pixels located diagonally with respect to the imaging data PE-1.
In this case, assuming again that the threshold data D is set between the second and third light receiving intensity ranges from the bottom of the five ranges shown in FIG. 4, the minimum value among the four diagonally located surrounding pixels (the third light receiving intensity range from the bottom) exceeds the threshold data D, so the light receiving intensity of the pixel of interest corresponding to the imaging data PE-1 is set to the third light receiving intensity range from the bottom.
The same processing is also performed for the eight pixels surrounding the pixel corresponding to the imaging data PE-1; as a result, the final result image is as shown in FIG. 5B.
From this, it can be seen that a star or the like exists as a subject at the pixel corresponding to the imaging data PE-1.
Similarly, by scanning the pixel of interest over the entire captured image and performing the processing, imaging data corresponding to the subject can be obtained by processing only that captured image, with the noise caused by cosmic ray irradiation and defective pixels removed.
In other words, of the small light spots randomly generated on the photoelectric conversion unit 21, only the blurred point light sources obtained through the lens 11A of the lens unit 11 are stored in the result storage unit 15.
In this case, since there is no need to store the locations of defective pixels or the like, the storage capacity can be reduced, and since no exception processing is required, a desired image can be obtained by simple processing.
In the above description, the window size is 3 × 3 pixels, but it may be adjusted to 5 × 5 pixels, 7 × 7 pixels, or the like according to the image resolution.
Conversely, when the subject is sufficiently larger than the window size, the image may be reduced and resized to match the window size. As a result, even an image with a large number of pixels can be processed with the same number of scans as an image with a small number of pixels, simplifying or speeding up the processing.
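As one possible realization of this reduction (a hedged sketch; the embodiment does not prescribe a particular method or library), block averaging in NumPy could look like the following, assuming the image dimensions divide evenly by the reduction factor.

```python
import numpy as np

def downscale_by_averaging(image, factor):
    """Reduce a grayscale image by an integer factor using block averaging,
    so that a large subject spans roughly as many pixels as the window."""
    h, w = image.shape
    assert h % factor == 0 and w % factor == 0, "dimensions must divide evenly"
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```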
FIG. 6 is an explanatory diagram of the noise removal processing of a modified example of the first embodiment.
In the noise removal processing of the first embodiment described above, a grayscale image is used as the captured image, and the light receiving intensity of the pixel of interest is set by comparing the minimum value of its surrounding pixels with the predetermined value D. In this modified example, a binarized image is generated from the captured image using a predetermined threshold value, and when even one pixel of low light receiving intensity (= 0) is included among the surrounding pixels of the pixel of interest (the eight surrounding pixels in the case of the 3 × 3 pixel window described above), the value of the pixel of interest is set to the low light receiving intensity (= 0).
For example, when the captured image G1 of FIG. 4 is binarized, a binarized image G3 is obtained as shown in FIG. 6.
For this binarized image G3, the image analysis unit 14 scans the entire captured image with the 3 × 3 pixel window WD, changing the pixel of interest sequentially, for example, from the left side to the right side and from the upper side to the lower side, and acquires, for each pixel of interest, the values of the imaging data corresponding to the nine pixels.
Then, it determines whether or not even one pixel of low light receiving intensity (= 0) is included among the eight pixels surrounding the pixel of interest.
In the example of FIG. 6, when the pixel of interest is the pixel corresponding to the imaging data PE-N, all eight surrounding pixels have the low light receiving intensity (= 0), so the value of the pixel of interest corresponding to the imaging data PE-N is set to the low light receiving intensity (= 0).
On the other hand, when the pixel of interest is the pixel corresponding to the imaging data PE-1, all eight surrounding pixels have the high light receiving intensity (= 1), so the value of the pixel of interest corresponding to the imaging data PE-1 is set to the high light receiving intensity (= 1).
FIG. 7 is an explanatory diagram of the result image of the modified example of the first embodiment.
As a result of the above processing, in the result image G4 obtained from the binarized image G3 of FIG. 6, only the pixel of interest corresponding to the imaging data PE-1 has the high light receiving intensity (= 1), as shown in FIG. 7, so it can easily be determined that a star or the like, which is the subject, is located at that position, without the influence of noise.
FIG. 8 is an explanatory diagram of the case where the image origin is scanned with the 3 × 3 pixel window.
In the above description, the case where target pixels exist for the entire 3 × 3 pixel window WD has been described; however, when the peripheral portion of the captured image is scanned, pixels to be processed may not exist for part of the 3 × 3 pixel window WD.
In the example of FIG. 8, pixels to be processed do not exist for five of the pixels of the 3 × 3 pixel window WD.
In such a case, the processing is performed, for example, by any of the following methods.
(1) The portions where no pixel to be processed exists are ignored, and the processing is performed using only the portions where pixels to be processed exist. More specifically, in the case of the example of FIG. 8, the evaluation is performed using only the four pixels where pixels to be processed exist.
(2) The portions where no pixel to be processed exists are given the value 0, and the processing is performed as usual.
(3) The pixel values of the pixels adjacent in the column direction or the row direction are copied as-is. (Since no pixel adjacent in the column direction or the row direction exists for the upper-left position of the window, the processing is performed with the remaining eight pixels as in the case of (1), or with the value set to 0 as in the case of (2).)
By adopting the above configuration, the processing can be reliably performed even when the peripheral portion of the captured image is scanned.
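The three methods can be sketched as follows (an illustrative helper, assuming a grayscale NumPy array; it is not part of the embodiment). Method (1) corresponds to skipping missing pixels, method (2) to zero padding, and method (3) to copying adjacent pixel values; the diagonal corner case of method (3) is approximated here by coordinate clipping.

```python
import numpy as np

def window_values(image, y, x, mode):
    """Collect the 3x3 window around (y, x), handling the image periphery by
    one of the three methods: 'ignore' (1), 'zero' (2), or 'copy' (3)."""
    h, w = image.shape
    values = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                values.append(image[yy, xx])
            elif mode == "zero":   # (2) treat missing pixels as 0
                values.append(0)
            elif mode == "copy":   # (3) replicate the nearest existing pixel
                values.append(image[min(max(yy, 0), h - 1), min(max(xx, 0), w - 1)])
            # mode == "ignore": (1) simply skip missing pixels
    return values
```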
Here, the processing of the modified example of the first embodiment will be described.
FIG. 9 is an outline processing flowchart of the modified example of the first embodiment.
In the following description as well, an example will be described in which image analysis is performed on a pixel of interest in the photoelectric conversion unit 21 by scanning with a 3 × 3 pixel window that includes the eight surrounding pixels.
First, the imaging device 10 captures an image in an out-of-focus state (step S21).
Next, the image analysis unit 14 binarizes the imaging data of the obtained captured image to generate a binarized image (step S22).
In this case, the binarization processing sets, for example, a pixel value less than a predetermined threshold value (= dark) to "0", corresponding to black, and a pixel value equal to or greater than the predetermined threshold value (= bright) to "1", corresponding to white.
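Expressed as code, step S22 amounts to a single comparison over the image array (a minimal sketch; the function name binarize and the threshold argument are illustrative, not part of the embodiment).

```python
import numpy as np

def binarize(image, threshold):
    """Step S22: pixels below the threshold become 0 (black),
    pixels at or above it become 1 (white)."""
    return (image >= threshold).astype(np.uint8)
```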
Next, the image analysis unit 14 sets the parameters X and Y, which specify the pixel of interest Px(X, Y), to their initial value 0 (step S23).
Next, the image analysis unit 14 determines whether or not the parameter Y exceeds its maximum value Ymax (step S24). That is, the image analysis unit 14 determines whether or not all the pixels have been processed.
If, in the determination of step S24, the parameter Y exceeds the maximum value Ymax (step S24; Yes), the processing has been completed for all the pixels constituting the captured image, so the obtained image is stored in the result storage unit 15 (step S30), and the image analysis process is terminated.
If, in the determination of step S24, the parameter Y has not yet exceeded the maximum value Ymax (step S24; No), the image analysis unit 14 determines whether or not the parameter X exceeds its maximum value Xmax (step S25). That is, the image analysis unit 14 determines whether or not the pixels for one line have been processed.
If, in the determination of step S25, the parameter X exceeds its maximum value Xmax (step S25; Yes), the image analysis unit 14 sets Y = Y + 1 (step S26) and returns the process to step S24.
If, in the determination of step S25, the parameter X has not yet exceeded its maximum value Xmax (step S25; No), the image analysis unit 14 positions the 3 × 3 pixel window at the pixel of interest (step S27), determines whether or not even one pixel of low light receiving intensity (= 0) is included among the eight surrounding pixels of the pixel of interest, and, when even one such pixel is included, sets the imaging data value of the pixel of interest to the low light receiving intensity (= 0) (step S28).
Subsequently, the image analysis unit 14 sets X = X + 1 (step S29) and returns the process to step S25.
As a result of the above processing and the storage of the obtained image in the result storage unit 15, when the imaging data value corresponding to the pixel of interest changes steeply with respect to any of the imaging data values of the eight adjacent pixels, that is, when the change is considered to be caused by, for example, the incidence of a cosmic ray or by a pixel defect, the imaging data value corresponding to the pixel of interest is set to the low light receiving intensity (= 0) and treated as noise.
On the other hand, when the imaging data of all eight pixels adjacent to the pixel of interest have the high light receiving intensity, the imaging data value corresponding to the pixel of interest is kept at the high light receiving intensity (= 1) and stored.
As a result, the result image stored in the result storage unit 15 is an image that represents the position of the intended subject (a star or the like), from which the effects of cosmic ray incidence, pixel defects, and the like have been removed.
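Taken together, steps S22 to S30 check, for each pixel of the binarized image, whether any pixel in its 3 × 3 neighborhood is 0 — in morphological terms, a 3 × 3 binary erosion. A minimal sketch under the same assumptions as the earlier sketches:

```python
import numpy as np

def remove_noise_binary(image, threshold):
    """Steps S21-S30 of the modified example: binarize the image, then set a
    pixel to 0 whenever any pixel in its 3x3 neighborhood is 0 (an isolated
    bright spot has at least one 0 neighbor, so it is suppressed)."""
    binary = (image >= threshold).astype(np.uint8)  # step S22
    h, w = binary.shape
    result = binary.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            if binary[y0:y1, x0:x1].min() == 0:  # any low-intensity pixel?
                result[y, x] = 0                 # step S28
    return result
```

An equivalent result could also be obtained with a standard morphological binary erosion routine.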
[2] Second Embodiment
FIG. 10 is a schematic configuration block diagram of the imaging device of the second embodiment.
In FIG. 10, the same reference numerals are given to the same parts as in the first embodiment of FIG. 1.
The second embodiment differs from the first embodiment in that an imaging control unit 31 is provided which drives the imaging unit 12 along the optical axis of the lens 11A of the lens unit 11, thereby varying the distance of the light receiving surface 21F of the photoelectric conversion unit 21 constituting the imaging unit 12 from the image point 21F0.
In other words, in the second embodiment, the imaging surface is configured to be shiftable, and the focus is shifted as necessary to blur the edges of the obtained image. That is, the configuration allows the imaging data values, and thus the image, to be smoothed as necessary.
Here, smoothing the imaging data values [the image] means, for example, turning imaging data values that change in a pulse-like manner into values that change in a Gaussian-distribution-like manner, so that the image changes gently (the same applies hereinafter).
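Although the embodiment achieves this smoothing optically, the meaning of the term can be illustrated numerically (a hedged sketch using SciPy's Gaussian filter; the array size and sigma are arbitrary choices for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A pulse-like profile: a single bright pixel, as left by a cosmic ray hit.
pulse = np.zeros((9, 9))
pulse[4, 4] = 1.0

# A gently varying, Gaussian-distribution-like profile, as produced by a
# defocused point light source; this is what "smoothed" means here.
smooth = gaussian_filter(pulse, sigma=1.5)
```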
In this case, by making the light receiving surface 21F of the photoelectric conversion unit 21 coincide with the image point 21F0 by means of the imaging control unit 31, the imaging device 10A can acquire a focused image as the captured image, just like an ordinary camera.
FIG. 11 is an operation explanatory diagram of the second embodiment.
Furthermore, by varying the distance of the light receiving surface 21F of the photoelectric conversion unit 21 from the image point 21F0 by means of the imaging control unit 31 as shown in FIG. 10, the degree of blurring of the image on the light receiving surface 21F can be varied, and captured images optimal for various imaging conditions can be obtained.
For example, even when the subject is so small that it fits entirely within one pixel, the distance of the light receiving surface 21F of the photoelectric conversion unit 21 from the image point 21F0 can be increased so that the light from the subject is received by a plurality of pixels, and the same processing can be performed to remove the noise. Further, when imaging is performed for viewing purposes or the like, an image in focus at the focal position can be obtained by performing control in the same manner as a conventional imaging device.
[3] Third Embodiment
FIG. 12 is a schematic configuration block diagram of the imaging device of the third embodiment.
In FIG. 12, the same reference numerals are given to the same parts as in the first embodiment of FIG. 1.
The third embodiment differs from the first embodiment in that a lens control unit 32 is provided which drives the lens unit 11 along the optical axis of the lens 11A, thereby varying the distance of the light receiving surface 21F of the photoelectric conversion unit 21 constituting the imaging unit 12 from the image point 21F0.
In other words, in the third embodiment, the lens unit is configured to be shiftable along the optical axis, and the focus is shifted as necessary to blur the edges of the obtained image, that is, to smooth the imaging data values [the image].
In this case, by making the light receiving surface 21F of the photoelectric conversion unit 21 coincide with the image point 21F0 by means of the lens control unit 32, the imaging device 10B can acquire a focused image as the captured image, just like an ordinary camera.
Further, by varying the distance of the light receiving surface 21F of the photoelectric conversion unit 21 from the image point 21F0 by means of the lens control unit 32, the degree of blurring of the image on the light receiving surface 21F can be varied, and captured images optimal for various imaging conditions can be obtained.
Further, when imaging is performed for viewing purposes or the like, an image in focus at the focal position can be obtained by performing control in the same manner as a conventional imaging device.
[4] Fourth Embodiment
FIGS. 13A and 13B are explanatory diagrams of the fourth embodiment.
FIG. 14 is a flowchart of the outline processing of the fourth embodiment.
FIG. 15 is an explanatory diagram of the window of the fourth embodiment.
The fourth embodiment differs from each of the above embodiments in that the light receiving intensity of the pixel of interest is set to the light receiving intensity of the pixel having the minimum light receiving intensity among the surrounding pixels.
In the following description, reference is again made to FIG. 1, and an example will be described in which image analysis is performed on a pixel of interest in the photoelectric conversion unit 21 by scanning with a 3 × 3 pixel window that includes the eight surrounding pixels.
First, the imaging device 10 captures an image in an out-of-focus state (step S31).
Next, as shown in FIG. 13A, the image analysis unit 14 sequentially scans the 3 × 3 pixel window (step S32) and sets the minimum value (the minimum light receiving intensity) of the imaging data corresponding to the eight surrounding pixels of the pixel of interest as the imaging data value of that pixel of interest (step S33).
Specifically, as shown in FIG. 15, when the values of the imaging data corresponding to the eight surrounding pixels of the pixel of interest in the 3 × 3 pixel imaging data are denoted a1 to a8, the imaging data value C of the pixel of interest is defined by the following expression using the function MIN, which calculates the minimum value:
    C = MIN(a1, ..., a8)
As a result, the imaging data value C of the pixel of interest is set to the minimum value among the imaging data corresponding to its eight surrounding pixels.
Then, the image analysis unit 14 repeats the processing of steps S32 and S33 with every pixel constituting the captured image taken in turn as the pixel of interest.
Then, when the processing has been completed for all the pixels constituting the captured image, the obtained image is stored in the result storage unit 15 (step S34).
As a result, when the imaging data value corresponding to the pixel of interest changes steeply with respect to any of the imaging data values of the eight adjacent pixels, that is, when the change is considered to be caused by, for example, the incidence of a cosmic ray or by a pixel defect, the imaging data value corresponding to the pixel of interest is set to a low light receiving intensity and treated as noise.
On the other hand, when the imaging data value corresponding to the pixel of interest changes gently with respect to all of the imaging data values of the eight adjacent pixels, the minimum value of the imaging data of those eight pixels is stored as the imaging data value of the pixel of interest.
As described above, also in the fourth embodiment, with simple arithmetic processing, the result image stored in the result storage unit 15 is an image that represents the position of the intended subject (a star or the like), from which the effects of cosmic ray incidence, pixel defects, and the like have been removed.
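The rule C = MIN(a1, ..., a8) is a grayscale minimum (erosion) filter taken over the eight neighbors, excluding the pixel of interest itself. A minimal sketch (illustrative names; interior pixels only, for brevity):

```python
import numpy as np

def min_of_neighbors(image):
    """Fourth embodiment (steps S31-S34): replace each pixel with the minimum
    of its eight surrounding pixels, C = MIN(a1..a8)."""
    h, w = image.shape
    result = image.copy()
    for y in range(1, h - 1):            # interior pixels only, for brevity
        for x in range(1, w - 1):
            window = image[y - 1:y + 2, x - 1:x + 2].copy()
            window[1, 1] = window.max()  # exclude the pixel of interest itself
            result[y, x] = window.min()  # minimum over the eight neighbors
    return result
```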
In the above description of the fourth embodiment, as in the first embodiment, the light receiving surface 21F of the photoelectric conversion unit 21 is placed at a position shifted from the image point (specifically, a position on the lens 11A side) so that the edges of the obtained image are blurred (the imaging data values [the image] are smoothed).
However, as in the second embodiment, the configuration may be such that the imaging surface can be shifted and the focus is shifted as necessary to blur the edges of the obtained image (smoothing the imaging data values [the image]); or, as in the third embodiment, the lens unit may be made shiftable along the optical axis, with the focus shifted as necessary to blur the edges of the obtained image (smoothing the imaging data values [the image]).
[5] Fifth Embodiment
FIG. 16 is a schematic configuration block diagram of the imaging device of the fifth embodiment.
In FIG. 16, the same reference numerals are given to the same parts as in the first embodiment of FIG. 1.
The fifth embodiment differs from the first embodiment in that, as in an ordinary imaging device, the light receiving surface of the photoelectric conversion unit 21 constituting the imaging unit 12 is placed at the image point of the lens 11A, and in that a filter unit 41 is provided in front of the lens unit 11 to blur the edges of the obtained image (to smooth the imaging data values [the image]) by diffusing or refracting the incident light.
FIG. 17 is a configuration explanatory diagram of the first aspect of the fifth embodiment.
As shown in FIG. 17, an optical low-pass filter LPF functioning as the filter unit 41 is located on the optical axis of the lens 11A, between the lens 11A and the first focal point FP1.
The light receiving surface 21F1 of the photoelectric conversion unit 21 is arranged so as to coincide with the image point.
In the above configuration, with the optical low-pass filter LPF removed, the light receiving surface 21F1 of the photoelectric conversion unit 21 coincides with the image point of the lens 11A and is in a focused state.
However, when the optical low-pass filter LPF is inserted into the optical path, the incident light is diffused, which has the effect of blurring the edges of the subject.
In this case, a known material such as frosted glass or anomalous refraction glass may be used as the optical low-pass filter LPF.
Then, by performing the noise removal processing in the same manner as in each of the embodiments described above, the result image stored in the result storage unit 15 becomes, with simple arithmetic processing, an image that represents the position of the intended subject (a star or the like), from which the effects of cosmic ray incidence, pixel defects, and the like have been removed.
In the above description, the case where the edges of the subject are blurred using the optical low-pass filter LPF to smooth the image has been described; however, as in the fourth embodiment, it is also possible to blur the edges of the subject and smooth the image by setting the imaging data value of the pixel of interest to the minimum value among the imaging data corresponding to its surrounding pixels.
FIG. 18 is a configuration explanatory diagram of the second aspect of the fifth embodiment.
As shown in FIG. 18, a cross filter CF functioning as the filter unit 41 is located on the optical axis of the lens 11A, between the lens 11A and the first focal point FP1.
Here, the cross filter CF can be realized by a known technique such as engraving thin linear grooves on a glass surface.
The light receiving surface 21F1 of the photoelectric conversion unit 21 is arranged so as to coincide with the image point.
In this configuration as well, with the cross filter CF removed, the light receiving surface 21F1 of the photoelectric conversion unit 21 coincides with the image point of the lens 11A and is in a focused state.
However, when the cross filter CF is inserted into the optical path, streaks of light are generated, for example, horizontally and vertically from the center of a point light source.
FIG. 19 is an explanatory diagram of an example of a captured image of the second aspect of the fifth embodiment.
As in FIG. 4, the captured image G21 contains imaging data PE-N caused by cosmic ray incidence, pixel defects, or the like, and imaging data PE-1 corresponding to the actual subject.
As shown in FIG. 19, streaks of light are generated horizontally and vertically from the center of the point light source corresponding to the imaging data PE-1. In contrast, the imaging data PE-N is not affected by the cross filter CF, so only one pixel has a high light receiving intensity.
Here, the image analysis processing in the second aspect of the fifth embodiment will be described in detail.
FIG. 20 is an explanatory diagram of the window in the second aspect of the fifth embodiment.
As the window used for scanning in the second aspect of the fifth embodiment, a cross-shaped window WD2 is used, as shown in FIG. 20.
Next, the operation in the second aspect of the fifth embodiment will be described.
FIG. 21 is an outline processing flowchart of the image analysis unit in the second aspect of the fifth embodiment.
First, the image analysis unit 14 sets the parameters X and Y, which specify the pixel of interest Px(X, Y), to their initial value 0 (step S41). In the present embodiment, the parameter X is a parameter in the row direction, and the parameter Y is a parameter in the column direction (see FIG. 4).
Next, the image analysis unit 14 determines whether or not the parameter Y exceeds its maximum value Ymax (step S42). That is, the image analysis unit 14 determines whether or not all the pixels have been processed.
If, in the determination of step S42, the parameter Y exceeds the maximum value Ymax (step S42; Yes), the image analysis process is terminated.
If, in the determination of step S42, the parameter Y has not yet exceeded the maximum value Ymax (step S42; No), the image analysis unit 14 determines whether or not the parameter X exceeds its maximum value Xmax (step S43). That is, the image analysis unit 14 determines whether or not the pixels for one line have been processed.
If, in the determination of step S43, the parameter X exceeds its maximum value Xmax (step S43; Yes), the image analysis unit 14 sets Y = Y + 1 (step S44) and returns the process to step S42.
If, in the determination of step S43, the parameter X has not yet exceeded its maximum value Xmax (step S43; No), the image analysis unit 14 positions the cross-shaped window WD2 at the pixel of interest and acquires the values of the imaging data corresponding to the five pixels in the window (step S45).
Subsequently, the image analysis unit 14 searches for and acquires the minimum value of the imaging data among the five pieces of imaging data corresponding to the window WD2 (step S46).
Subsequently, the image analysis unit 14 compares the minimum value of the imaging data acquired in step S46 with the predetermined threshold data D (step S47).
As a result, in the comparison of step S47, if the minimum value of the imaging data acquired in step S46 is equal to or less than the threshold data D (step S47; No: minimum imaging data value ≤ D), the imaging data value of the pixel of interest is treated as noise and is clamped so as to be handled as the imaging data of a black dot (the lowest light receiving intensity (luminance) level) (step S48).
Then, the clamped imaging data value is output as the pixel value of the pixel of interest and is stored in the result storage unit 15 (step S49).
On the other hand, if, in the comparison of step S47, the minimum value of the imaging data acquired in step S46 exceeds the threshold data D (step S47; Yes: minimum imaging data value > D), the imaging data value of the pixel of interest is output as-is as the pixel value of the pixel of interest and is stored in the result storage unit 15 (step S49).
As a result, when the imaging data value corresponding to the pixel of interest changes steeply with respect to any of the imaging data values of the four adjacent pixels, that is, when the change is considered to be caused by the incidence of a cosmic ray or by a pixel defect, the imaging data value corresponding to the pixel of interest is treated as noise.
On the other hand, when the imaging data value corresponding to the pixel of interest changes gently with respect to all of the imaging data values of the four adjacent pixels, that is, when it is considered to be imaging data corresponding to the actual captured image, the imaging data value corresponding to the pixel of interest is stored as-is.
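In code, the cross-shaped window WD2 simply replaces the nine offsets of the 3 × 3 window with five. A minimal sketch of steps S41 to S49 under the same assumptions as the earlier sketches (peripheral pixels handled by shrinking the window, in the manner of method (1) described below with reference to FIG. 23):

```python
import numpy as np

# Offsets of the cross-shaped window WD2: the pixel of interest
# plus its upper, lower, left, and right neighbors.
CROSS_OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def remove_noise_cross(image, threshold_d, low_value=0):
    """Steps S41-S49: clamp the pixel of interest to the black-dot level when
    the minimum over the cross-shaped window WD2 is at or below threshold D."""
    h, w = image.shape
    result = image.copy()
    for y in range(h):
        for x in range(w):
            values = [image[y + dy, x + dx]
                      for dy, dx in CROSS_OFFSETS
                      if 0 <= y + dy < h and 0 <= x + dx < w]
            if min(values) <= threshold_d:   # step S47
                result[y, x] = low_value     # step S48: clamp to black dot
    return result
```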
FIG. 22 is an explanatory diagram of the result image in the second aspect of the fifth embodiment.
As a result, also in the second aspect of the fifth embodiment, the result image stored in the result storage unit 15 is an image from which the effects of cosmic ray incidence, pixel defects, and the like have been removed, as shown in FIG. 22.
That is, by effectively detecting the streaks generated by the cross filter CF, the image analysis unit 14 can distinguish, with easy processing, between the captured image of the subject and bright spots due to noise.
In the above description of the second aspect of the fifth embodiment, the same method as in the first embodiment has been described as the method for determining the center pixel value using the cross filter CF; however, it is also possible to apply the processing of the modified example of the first embodiment shown in FIG. 9, or the processing of the fourth embodiment shown in FIG. 14, to determine the center pixel value.
FIG. 23 is an explanatory diagram of the case where the image origin is scanned with the cross-shaped window composed of five pixels.
In the above description, the case where target pixels exist for the entire cross-shaped window WD2 has been described; however, when the peripheral portion of the captured image is scanned, pixels to be processed may not exist for part of the cross-shaped window WD2.
In the example of FIG. 23, pixels to be processed do not exist for two of the pixels of the cross-shaped window WD2.
In such a case, the processing is performed, for example, by any of the following methods.
(1) The portions where no pixel to be processed exists are ignored, and the processing is performed using only the portions where pixels to be processed exist. More specifically, in the case of the example of FIG. 23, the evaluation is performed using only the three pixels where pixels to be processed exist.
(2) The portions where no pixel to be processed exists are given the value 0, and the processing is performed as usual.
(3) The pixel values of the pixels adjacent in the column direction or the row direction are copied as-is and processed. More specifically, in the case of the example of FIG. 23, the pixel value of the pixel Px(0, 0) is copied to each missing position and the evaluation is performed.
By adopting the above configuration, the processing can be reliably performed even when the peripheral portion of the captured image is scanned.
In the above description, the cross-shaped window WD2 has a five-pixel configuration; however, according to the image resolution, the size may be adjusted to a 9-pixel or 13-pixel configuration in which the vertical and horizontal arms are lengthened, to a 20-pixel configuration geometrically similar to the 5-pixel configuration, or the like. As a modification of the cross shape, it is also possible to use, for example, a 13-pixel configuration in which the center is a square 9-pixel block with one pixel added above, below, to the left, and to the right.
Conversely, when the subject is sufficiently larger than the window size, the image may be reduced and resized to match the window size. As a result, even an image with a large number of pixels can be processed with the same number of scans as an image with a small number of pixels, simplifying or speeding up the processing.
FIG. 24 is a schematic configuration block diagram of an imaging device according to a modified example of the fifth embodiment.
In FIG. 24, the same reference numerals are given to the same parts as in the fifth embodiment of FIG. 16.
The modified example of the fifth embodiment differs from the fifth embodiment in that a filter control unit 42 is provided which drives the filter unit 41 so that it can be inserted into and removed from the optical path of the incident light, such that the optical axis of the lens 11A of the lens unit 11 passes through the filter unit 41 when inserted.
In this case, by moving the filter unit 41 out of the optical path of the lens 11A by means of the filter control unit 42, the imaging device 40A can acquire a focused image as the captured image, just like an ordinary camera.
In the above description, the configuration is adopted in which the filter unit 41 is inserted into and removed from the optical path of the incident light by the filter control unit 42; however, a configuration is also possible in which a surveying (position measurement) filter, such as the optical low-pass filter LPF or the cross filter CF, and an ordinary optical filter (for example, an infrared filter, an ND filter, or the like) are switched and inserted into the optical path.
As described above, according to each embodiment, the incidence of cosmic rays from the outside and the influence of defective pixels of the image sensor can be removed with simple processing, and a captured image of the subject to be imaged can be obtained.
[6] Modifications of the Embodiments
The embodiments of the present technology are not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present technology.
For example, in the first embodiment and the second aspect of the fifth embodiment, the adjustment of the size of the window WD or the window WD2 and the adjustment of the image size have been described; the same can also be applied to the modified example of the first embodiment, the second embodiment, the third embodiment, the fourth embodiment, and the first aspect of the fifth embodiment.
Further, in the first embodiment and the second aspect of the fifth embodiment, the case where target pixels exist for the entire region of the window WD or the window WD2 has been described; however, also in the modified example of the first embodiment, the second embodiment, the third embodiment, the fourth embodiment, and the first aspect of the fifth embodiment, when the peripheral portion of the captured image is scanned and pixels to be processed do not exist for part of the 3 × 3 pixel window WD or the window WD2, the methods (1) to (3) described above can be applied in the same manner. By adopting such a configuration, the processing can be reliably performed even when the peripheral portion of the captured image is scanned.
Further, the second embodiment is configured so that the imaging surface can be shifted and the focus is shifted as necessary, and the third embodiment is configured so that the lens unit can be shifted along the optical axis and the focus is shifted as necessary, whereby the edges of the obtained image can be blurred (the image can be smoothed).
However, instead of these, as in the fourth embodiment, it is also possible to blur the edges of the subject and smooth the image by setting the imaging data value of the pixel of interest to the minimum value among the imaging data corresponding to its surrounding pixels.
 さらに、本技術は、以下の構成とすることも可能である。
(1)
 被写体からの入射光を集光するレンズ部と、
 前記レンズ部により集光された入射光を非合焦状態で撮像する撮像部と、
 前記撮像部における撮像画像の解析を行い、一の注目画素の周囲に位置する複数の所定の周囲画素の画素値に基づいて、前記注目画素の画素値を設定して、前記被写体の位置を特定するための結果画像を生成する画像解析部と、
 を備えた撮像装置。
(2)
 前記画像解析部は、複数の所定の周囲画素の画素値の最小値が所定のしきい値を超えている場合に、前記注目画素の画素値を前記最小値に設定し、
 前記最小値が前記しきい値以下である場合に、前記注目画素の画素値を所定の低受光強度値とする、
 (1)記載の撮像装置。
(3)
 前記画像解析部は、複数の所定の周囲画素の画素値の最小値を、前記注目画素の画素値として設定する、
 (1)記載の撮像装置。
(4)
 前記画像解析部は、前記撮像画像を構成している画素値を高受光強度及び低受光強度に区分する2値化処理を行い、複数の所定の周囲画素の画素値に前記低受光強度に区分される画素値が含まれている場合に、前記注目画素の画素値を前記低受光強度値に設定する、
 (1)記載の撮像装置。
(5)
 前記レンズ部と、前記撮像部と、の相対的な配置位置は、前記撮像部において、非合焦状態で撮像可能な位置に設定されている、
 (1)~(4)のいずれかに記載の撮像装置。
(6)
 前記レンズ部を光軸方向に駆動して、前記レンズ部と、前記撮像部と、の相対的な配置位置は、前記撮像部において、非合焦状態で撮像可能な位置に設定するレンズ制御部を備えた、
 (5)記載の撮像装置。
(7)
 前記撮像部を光軸方向に駆動して、前記レンズ部と、前記撮像部と、の相対的な配置位置は、前記撮像部において、非合焦状態で撮像可能な位置に設定する撮像制御部を備えた、
 (5)記載の撮像装置。
(8)
 前記レンズ部と、前記被写体の間の光路中に挿入されて、得られる画像の平滑化を行うフィルタ部を備えた、
 (1)~(4)のいずれかに記載の撮像装置。
(9)
 前記フィルタ部は、光学的ロウパスフィルタとして構成されている、
 (8)記載の撮像装置。
(10)
 前記レンズ部と、前記被写体の間の光路中に挿入されて、光条を生成するクロスフィルタとして構成されたフィルタ部を備えた、
 (1)~(4)のいずれかに記載の撮像装置。
(11)
 前記レンズ部と、前記被写体の間の光路中に挿入されて、前記レンズ部を通過後の前記入射光が非合焦状態となるようにするフィルタ部を備えた、
 (1)~(4)のいずれかに記載の撮像装置。
(12)
 前記フィルタ部を光路中に挿脱可能に挿入して、前記非合焦状態で撮像可能とするフィルタ制御部を備えた、
 (8)~(11)のいずれかに記載の撮像装置。
(13)
 被写体からの入射光を集光するレンズ部と、前記レンズ部により集光された入射光を非合焦状態で撮像する撮像部と、を有する撮像装置の制御方法において、
 撮像画像の解析を行い、一の注目画素の周囲に位置する複数の所定の周囲画素の画素値に基づいて、前記注目画素の画素値を設定する処理を前記注目画素を更新しつつ繰り返し行う過程と、
 複数の注目画素に対応する前記設定した画素値に基づいて前記被写体の位置を特定するための結果画像を生成する過程と、
 を備えた撮像装置の制御方法。
(14)
 前記注目画素の画素値を設定する過程は、複数の所定の周囲画素の画素値の最小値が所定のしきい値を超えている場合に、前記注目画素の画素値を前記最小値に設定する過程と、
 前記最小値が前記しきい値以下である場合に、前記注目画素の画素値を所定の低受光強度値とする過程と、
 を備えた(13)記載の撮像装置の制御方法。
(15)
 前記注目画素の画素値を設定する過程は、複数の所定の周囲画素の画素値の最小値を、前記注目画素の画素値として設定する過程、
 を備えた(13)記載の撮像装置の制御方法。
(16)
 前記注目画素の画素値を設定する過程は、前記撮像画像を構成している画素値を高受光強度及び低受光強度に区分する2値化処理を行う過程と、
 複数の所定の周囲画素の画素値に前記低受光強度に区分される画素値が含まれている場合に、前記注目画素の画素値を前記低受光強度値に設定する過程と、
 を備えた(13)記載の撮像装置の制御方法。
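Configurations (5) to (7) require only that the lens unit and the imaging unit be positioned so that the image is formed out of focus. As a back-of-the-envelope aid — a thin-lens approximation with invented numbers, not figures from the disclosure — the sketch below estimates how far the light-receiving surface must be displaced from the image point for the blur circle to span several pixels, the condition under which the surrounding-pixel rules can tell a defocused subject from a single-pixel defect.

    def blur_circle_px(focal_mm, f_number, delta_mm, pixel_pitch_mm):
        # Thin-lens, subject-at-infinity estimate: shifting the
        # light-receiving surface by delta_mm from the image point spreads
        # a point source over roughly (aperture / focal length) * delta_mm.
        aperture_mm = focal_mm / f_number
        blur_mm = aperture_mm * delta_mm / focal_mm
        return blur_mm / pixel_pitch_mm

    # Example with assumed values: a 25 mm f/2 lens, 5 um pixel pitch, and
    # a 0.1 mm shift of the imaging surface give a blur circle of ~10 px.
    print(blur_circle_px(25.0, 2.0, 0.1, 0.005))  # -> 10.0

Even a displacement of a fraction of a millimeter is therefore enough, in this model, to spread a point subject over a neighborhood larger than the analysis window.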
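Configurations (8) to (12) obtain the same smoothing optically, with a filter unit placed in the light path, rather than by displacing the lens or the sensor. Purely as a software analogue — the disclosure performs this step in the optics, so the kernel below is an assumption for illustration — a small box blur mimics what an optical low-pass filter does to a point image before the pixel-update rules run.

    import numpy as np

    def box_blur(img, k=3):
        # Averaging over a k x k window: a crude stand-in for the optical
        # low-pass filtering of configurations (8) and (9).
        pad = k // 2
        p = np.pad(img.astype(float), pad, mode="edge")
        out = np.empty(img.shape)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                out[y, x] = p[y:y + k, x:x + k].mean()
        return out

A bright subject smoothed this way spans several pixels and therefore survives the surrounding-pixel minimum, whereas sensor-side defects and noise, which arise after the optics, remain one pixel wide and are removed.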
10, 10A, 10B, 40A Imaging device
11  Lens unit
11A Lens
12  Imaging unit
13  Image storage unit
14  Image analysis unit
15  Result storage unit
21  Photoelectric conversion unit
21F0 Image point
21F, 21F1 Light-receiving surface
22  Analog amplifier unit
23  AD conversion unit
31  Imaging control unit
32  Lens control unit
41  Filter unit
42  Filter control unit
CF  Cross filter
D   Threshold data
LPF Optical low-pass filter
OBJ Subject
PE-1  Imaging data
PE-N  Imaging data (noise)
WD, WD2  Window

Claims (12)

  1.  An imaging device comprising:
      a lens unit that collects incident light from a subject;
      an imaging unit that captures, in an out-of-focus state, the incident light collected by the lens unit; and
      an image analysis unit that analyzes the image captured by the imaging unit and sets the pixel value of a pixel of interest based on the pixel values of a plurality of predetermined surrounding pixels located around that pixel of interest, thereby generating a result image for specifying the position of the subject.
  2.  The imaging device according to claim 1, wherein the image analysis unit sets the pixel value of the pixel of interest to the minimum of the pixel values of the plurality of predetermined surrounding pixels when that minimum exceeds a predetermined threshold value, and sets the pixel value of the pixel of interest to a predetermined low light-receiving intensity value when the minimum is equal to or less than the threshold value.
  3.  The imaging device according to claim 1, wherein the image analysis unit sets the minimum of the pixel values of the plurality of predetermined surrounding pixels as the pixel value of the pixel of interest.
  4.  The imaging device according to claim 1, wherein the image analysis unit performs binarization processing that classifies the pixel values constituting the captured image into high light-receiving intensity and low light-receiving intensity, and sets the pixel value of the pixel of interest to a value corresponding to the low light-receiving intensity when the pixel values of the plurality of predetermined surrounding pixels include a pixel value classified as low light-receiving intensity.
  5.  The imaging device according to any one of claims 1 to 4, wherein the relative arrangement of the lens unit and the imaging unit is set to a position at which the imaging unit can capture images in an out-of-focus state.
  6.  The imaging device according to claim 5, comprising a lens control unit that drives the lens unit in the optical axis direction to set the relative arrangement of the lens unit and the imaging unit to a position at which the imaging unit can capture images in an out-of-focus state.
  7.  The imaging device according to claim 5, comprising an imaging control unit that drives the imaging unit in the optical axis direction to set the relative arrangement of the lens unit and the imaging unit to a position at which the imaging unit can capture images in an out-of-focus state.
  8.  The imaging device according to any one of claims 1 to 4, comprising a filter unit that is inserted into the optical path between the lens unit and the subject and smooths the obtained image.
  9.  The imaging device according to claim 8, wherein the filter unit is configured as an optical low-pass filter.
  10.  The imaging device according to any one of claims 1 to 4, comprising a filter unit that is inserted into the optical path between the lens unit and the subject and is configured as a cross filter that generates light streaks.
  11.  The imaging device according to any one of claims 8 to 10, comprising a filter control unit that removably inserts the filter unit into the optical path to enable imaging in the out-of-focus state.
  12.  A method for controlling an imaging device having a lens unit that collects incident light from a subject and an imaging unit that captures, in an out-of-focus state, the incident light collected by the lens unit, the method comprising:
      a step of analyzing a captured image and repeating, while updating the pixel of interest, a process of setting the pixel value of a pixel of interest based on the pixel values of a plurality of predetermined surrounding pixels located around that pixel of interest; and
      a step of generating a result image for specifying the position of the subject based on the set pixel values corresponding to the plurality of pixels of interest.
PCT/JP2020/047759 2020-01-14 2020-12-21 Imaging device, and method for controlling imaging device WO2021145158A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020003979A JP2021111920A (en) 2020-01-14 2020-01-14 Imaging apparatus and imaging apparatus control method
JP2020-003979 2020-01-14

Publications (1)

Publication Number Publication Date
WO2021145158A1

Family

ID=76863711

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/047759 WO2021145158A1 (en) 2020-01-14 2020-12-21 Imaging device, and method for controlling imaging device

Country Status (2)

Country Link
JP (1) JP2021111920A (en)
WO (1) WO2021145158A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005328134A (en) * 2004-05-12 2005-11-24 Sony Corp Imaging apparatus and defect detecting method of solid-state imaging element
JP2006180210A (en) * 2004-12-22 2006-07-06 Sony Corp Imaging apparatus and method, and program
JP2011135566A (en) * 2009-11-26 2011-07-07 Nikon Corp Image processing apparatus

Also Published As

Publication number Publication date
JP2021111920A (en) 2021-08-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914647

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20914647

Country of ref document: EP

Kind code of ref document: A1