WO2024042783A1 - Image processing device, image processing method, and program - Google Patents

Image processing device, image processing method, and program Download PDF

Info

Publication number
WO2024042783A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
wavelength band
correction value
image processing
processing device
Prior art date
Application number
PCT/JP2023/017159
Other languages
French (fr)
Japanese (ja)
Inventor
高志 椚瀬
和佳 岡田
慶延 岸根
睦 川中子
友也 平川
Original Assignee
FUJIFILM Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation
Publication of WO2024042783A1 publication Critical patent/WO2024042783A1/en

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/02Details
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/12Generating the spectrum; Monochromators
    • G01J3/26Generating the spectrum; Monochromators using multiple reflection, e.g. Fabry-Perot interferometer, variable interference filters
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • G01J3/30Measuring the intensity of spectral lines directly on the spectrum itself
    • G01J3/36Investigating two or more bands of a spectrum by separate detectors
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B11/00Filters or other obturators specially adapted for photographic purposes
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00Details of cameras or camera bodies; Accessories therefor
    • G03B17/02Bodies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/12Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the technology of the present disclosure relates to an image processing device, an image processing method, and a program.
  • an imaging device is known that includes an imaging optical system, an imaging element, and a signal processing section.
  • the imaging optical system has a first pupil region that passes light in a first wavelength band and a second pupil region that passes light in a second wavelength band different from the first wavelength band.
  • in the imaging optical system, the axial chromatic aberration caused by the difference between the first wavelength band and the second wavelength band is reduced based on the relationship between the aberrations other than the axial chromatic aberration and the positions of the first pupil region and the second pupil region of the imaging optical system.
  • the image sensor includes a first pixel that receives light passing through a first pupil region of an imaging optical system, and a second pixel that receives light that passes through a second pupil region.
  • the signal processing unit processes the signal output from the image sensor and generates a first image in the first wavelength band and a second image in the second wavelength band based on the output signal of the first pixel and the output signal of the second pixel, respectively.
  • JP 2022-063720A discloses an image correction device including a band image acquisition means, a high-resolution image acquisition means, a position difference acquisition means, a correction band image creation means, and a correction band image output means.
  • the band image acquisition means acquires a plurality of band images obtained by imaging the subject.
  • the high-resolution image acquisition means acquires a high-resolution image, which is obtained by imaging the subject and has a higher resolution than the band image.
  • the positional difference acquisition means sets one of the plurality of band images as a reference band image, sets at least one of the remaining band images as a target band image, and acquires a positional difference between the target band image and the reference band image.
  • the correction band image creation means takes each pixel of the target band image as a target pixel, divides the imaging region of each target pixel into a plurality of partial regions, determines a pixel value for each partial region based on the pixel value of the target pixel and the relationship between the pixel values of the plurality of pixels of the high-resolution image corresponding to the target pixel, and, from the determined pixel values and the position difference, creates a corrected band image that retains the pixel values of the target band image at the pixel positions of the reference band image.
  • the correction band image output means outputs a correction band image.
  • the imaging optical system includes a lens that forms an optical image of a subject.
  • the optical member includes a frame body having a plurality of aperture regions, a plurality of optical filters disposed in at least one of the plurality of aperture regions and including two or more optical filters that transmit light in at least partially different wavelength bands, and a plurality of polarization filters disposed in at least one of the plurality of aperture regions and having mutually different polarization directions, so that the amount of light emitted from the imaging optical system can be changed for each of the plurality of aperture regions.
  • JP 2014-045275A discloses an image processing device including an image acquisition section, a phase difference image generation section, and a high resolution processing section.
  • the image acquisition unit acquires a captured image obtained by capturing a first subject image and a second subject image having a parallax with respect to the first subject image.
  • the phase difference image generation unit generates a first image corresponding to the first subject image and a second image corresponding to the second subject image based on the captured image.
  • the high-resolution processing section performs high-resolution processing on the captured image based on the first image and the second image.
  • one embodiment of the technology of the present disclosure makes it possible to obtain a multispectral image with higher image quality than when a multispectral image is generated based on spectral images in which an image shift has occurred.
  • a first aspect of the technology of the present disclosure is an image processing device applied to an image output from an imaging device including an optical system, wherein the optical system has a plurality of filters provided around an optical axis, and the image processing device includes a processor that performs processing on the image to correct an image shift due to a positional shift of an optical image caused by the light being separated by the plurality of filters.
  • a second aspect of the technology of the present disclosure is the image processing apparatus according to the first aspect, in which the optical system has a plurality of apertures, each aperture is provided with a filter, the image shift includes at least an image shift based on a characteristic of each aperture, and the processing is performed based on the characteristic.
  • a third aspect according to the technology of the present disclosure is the image processing apparatus according to the second aspect, in which the characteristic includes the position of the center of gravity of the aperture.
  • a fourth aspect according to the technology of the present disclosure is the image processing apparatus according to the third aspect, in which the center-of-gravity position is determined based on the position and/or shape of the aperture.
  • a fifth aspect of the technology of the present disclosure is the image processing apparatus according to any one of the first to fourth aspects, in which the positional shift of the optical image includes at least a positional shift caused by a characteristic of the optical system.
  • a sixth aspect of the technology of the present disclosure is the image processing apparatus according to any one of the first to fifth aspects, in which the processing is performed on a partial region of the image.
  • a seventh aspect of the technology of the present disclosure is the image processing apparatus according to any one of the first to sixth aspects, wherein the image is a spectral image for generating a multispectral image.
  • An eighth aspect according to the technology of the present disclosure is the image processing device according to the seventh aspect, in which the image is an image generated by performing interference removal processing on imaging data obtained by imaging with the imaging device.
  • a ninth aspect of the technology of the present disclosure is the image processing device according to any one of the first to eighth aspects, wherein the plurality of filters have mutually different wavelength bands.
  • a tenth aspect of the technology of the present disclosure is the image processing device according to any one of the first to ninth aspects, in which the plurality of filters are arranged side by side around the optical axis.
  • An eleventh aspect according to the technology of the present disclosure is an image processing apparatus according to the ninth aspect, in which processing is performed based on a combination of wavelength bands.
  • a twelfth aspect of the technology of the present disclosure is the image processing apparatus according to any one of the first to eleventh aspects, in which the processing is performed based on design values regarding the optical system.
  • a thirteenth aspect according to the technology of the present disclosure is an image processing apparatus according to the ninth aspect, in which processing is performed based on a correction value for each wavelength band.
  • a fourteenth aspect according to the technology of the present disclosure is the image processing apparatus according to any one of the ninth, eleventh, and thirteenth aspects, in which the image includes wavelength band images corresponding to the respective wavelength bands.
  • a fifteenth aspect according to the technology of the present disclosure is the image processing apparatus according to the thirteenth aspect, wherein the image includes a wavelength band image corresponding to each wavelength band, and the correction value differs according to the position of the wavelength band image.
  • a 16th aspect according to the technology of the present disclosure is the image processing apparatus according to the 13th aspect, wherein the image includes a wavelength band image corresponding to each wavelength band, and the correction value is determined based on the direction and/or amount of positional shift of the wavelength band image with respect to a reference position in the image.
  • a seventeenth aspect according to the technology of the present disclosure is an image processing apparatus according to the fourteenth aspect, in which the wavelength band image is an image showing a characteristic part of a subject.
  • An 18th aspect according to the technology of the present disclosure is the image processing apparatus according to the 16th aspect, wherein the wavelength band image is an image showing a characteristic part of a subject, and the reference position is a position corresponding to the characteristic part.
  • a nineteenth aspect of the technology of the present disclosure is the image processing apparatus according to the sixteenth aspect, wherein the reference position is the position of any one of the wavelength band images.
  • a 20th aspect according to the technology of the present disclosure is an image processing apparatus according to the 17th aspect, in which the subject has a point and the characteristic part is a point.
  • a twenty-first aspect according to the technology of the present disclosure is an image processing apparatus according to the seventeenth aspect, in which the subject has a checkered pattern and the characteristic portion is an intersection included in the checkered pattern.
  • a twenty-second aspect according to the technology of the present disclosure is the image processing apparatus according to the seventeenth aspect, in which the subject is a calibration member.
  • a twenty-third aspect of the technology of the present disclosure is the image processing apparatus according to the seventeenth aspect, wherein the subject has a plurality of characteristic parts, and the plurality of characteristic parts are arranged in a straight line.
  • a twenty-fourth aspect according to the technology of the present disclosure is the image processing apparatus according to the seventeenth aspect, wherein the subject has a plurality of characteristic parts, and the plurality of characteristic parts are arranged at equal intervals.
  • a twenty-fifth aspect according to the technology of the present disclosure is the image processing apparatus according to the seventeenth aspect, in which the subject includes an object to be inspected.
  • a twenty-sixth aspect according to the technology of the present disclosure is the image processing apparatus according to the twenty-fifth aspect, in which the subject includes a plurality of objects to be inspected.
  • a twenty-seventh aspect of the technology of the present disclosure is the image processing device according to any one of the first to twenty-sixth aspects, wherein the optical system includes polarizing filters provided corresponding to the respective filters, the plurality of polarizing filters have mutually different polarization axes, and the imaging device includes an image sensor having a plurality of pixel blocks, each pixel block being provided with a plurality of types of polarizers having mutually different polarization axes.
  • a twenty-eighth aspect of the technology of the present disclosure is an image processing method applied to an image output from an imaging device including an optical system, wherein the optical system has a plurality of filters provided around an optical axis, and the image processing method includes performing processing on the image to correct an image shift due to a positional shift of an optical image caused by the light being separated by the plurality of filters.
  • a twenty-ninth aspect of the technology of the present disclosure is a program for causing a computer to execute image processing on an image output from an imaging device including an optical system, wherein the optical system has a plurality of filters provided around an optical axis, and the image processing includes processing for correcting an image shift due to a positional shift of an optical image caused by the light being separated by the plurality of filters.
  • FIG. 1 is a perspective view showing an example of an imaging device.
  • It is an exploded perspective view showing an example of a pupil division filter.
  • FIG. 2 is a block diagram showing an example of the hardware configuration of an imaging device.
  • FIG. 2 is an explanatory diagram showing an example of the configuration of a photoelectric conversion element.
  • FIG. 2 is a block diagram illustrating an example of a manner in which a multispectral image is generated based on a plurality of spectral images.
  • FIG. 3 is a front view showing an example of a multispectral image generated based on a plurality of spectral images having image shifts.
  • FIG. 2 is a schematic diagram showing a first example of the relationship between an aperture formed in a pupil splitting filter and an optical image.
  • FIG. 6 is a schematic diagram showing a second example of the relationship between an aperture formed in a pupil splitting filter and an optical image.
  • FIG. 2 is a block diagram illustrating an example of a functional configuration for executing multispectral image generation processing.
  • FIG. 2 is a block diagram illustrating an example of the operation of an output value acquisition section and an interference removal processing section.
  • FIG. 3 is a block diagram illustrating an example of the operation of a correction processing section.
  • FIG. 2 is a block diagram illustrating an example of the operation of a multispectral image generation section.
  • FIG. 2 is a block diagram illustrating an example of a hardware configuration of an imaging device and an example of a functional configuration for executing correction value derivation processing.
  • FIG. 2 is a block diagram showing an example of an optical image formed on a light-receiving surface when a dot chart is imaged by an imaging device.
  • FIG. 2 is a block diagram illustrating an example of the operation of a spectral image acquisition unit.
  • FIG. 3 is a block diagram illustrating an example of the operation of a correction value deriving section.
  • 3 is a flowchart illustrating an example of the flow of multispectral image generation processing.
  • 7 is a flowchart illustrating an example of the flow of correction value derivation processing.
  • It is a block diagram showing an example of the operation of a correction value derivation section according to a first modification.
  • It is a block diagram showing an example of the operation of a correction value derivation section according to a second modification.
  • FIG. 2 is a block diagram illustrating an example of an optical image formed on a light receiving surface when a lattice chart is imaged by an imaging device.
  • It is a block diagram showing an example of the operation of a spectral image acquisition section according to a third modification.
  • It is a block diagram showing an example of the operation of a correction value derivation section according to a third modification.
  • It is a block diagram showing an example of the operation of a correction value derivation section according to a fourth modification.
  • It is a block diagram showing an example of the operation of a correction value derivation section according to a fifth modification.
  • It is a block diagram showing an example of the operation of a correction value derivation section according to a sixth modification.
  • LED is an abbreviation for "Light Emitting Diode."
  • CMOS is an abbreviation for "Complementary Metal Oxide Semiconductor."
  • CCD is an abbreviation for "Charge Coupled Device."
  • I/F is an abbreviation for "Interface."
  • RAM is an abbreviation for "Random Access Memory."
  • CPU is an abbreviation for "Central Processing Unit."
  • GPU is an abbreviation for "Graphics Processing Unit."
  • EEPROM is an abbreviation for "Electrically Erasable and Programmable Read Only Memory."
  • HDD is an abbreviation for "Hard Disk Drive."
  • EL is an abbreviation for "Electro Luminescence."
  • TPU is an abbreviation for "Tensor Processing Unit."
  • SSD is an abbreviation for "Solid State Drive."
  • USB is an abbreviation for "Universal Serial Bus."
  • ASIC is an abbreviation for "Application Specific Integrated Circuit."
  • FPGA is an abbreviation for "Field-Programmable Gate Array."
  • PLD is an abbreviation for "Programmable Logic Device."
  • SoC is an abbreviation for "System-on-a-Chip."
  • IC is an abbreviation for "Integrated Circuit."
  • "center" refers not only to the exact center but also to a center that includes a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that the error does not go against the spirit of the technology of the present disclosure.
  • "same" refers not only to the exactly same but also to the same including a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that the error does not go against the spirit of the technology of the present disclosure.
  • "orthogonal" refers not only to the completely orthogonal but also to the orthogonal including a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that the error does not go against the spirit of the technology of the present disclosure.
  • "perpendicular" refers not only to the perfectly perpendicular but also to the perpendicular including a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that the error does not go against the spirit of the technology of the present disclosure.
  • "straight line" refers not only to a perfect straight line but also to a straight line including a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that the error does not go against the spirit of the technology of the present disclosure.
  • "equal intervals" refers not only to perfectly equal intervals but also to equal intervals including a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that the error does not go against the spirit of the technology of the present disclosure.
  • the imaging device 10 includes a lens device 12 and an imaging device body 14.
  • the imaging device 10 is an example of an “imaging device” and an “image processing device” according to the technology of the present disclosure.
  • the lens device 12 includes a pupil division filter 16 that separates incident light into a plurality of wavelength bands.
  • the imaging device 10 is a multispectral camera that generates and outputs a multispectral image 74 by capturing light that has been split into multiple wavelength bands by the pupil splitting filter 16.
  • a multispectral image generated based on light separated into three wavelength bands will be described as an example of the multispectral image 74.
  • the three wavelength bands are just an example, and there may be four or more wavelength bands. That is, the imaging device 10 may be a multispectral camera that can image a subject with higher wavelength resolution than a multispectral camera that can image light separated into three wavelength bands.
  • the multispectral image 74 may include an image obtained by capturing light in the visible light band, and may include an image in which light in a wavelength band that cannot be perceived by the human eye (for example, the near-infrared band and/or the ultraviolet band) is visualized.
  • Examples of uses of the multispectral image 74 include measurement, inspection, analysis, and evaluation of a subject as an object to be observed in various fields such as medicine, agriculture, and industry.
  • the pupil division filter 16 includes a frame 18, spectral filters 20A to 20C, and polarization filters 22A to 22C.
  • the frame 18 has openings 24A to 24C.
  • the openings 24A to 24C have the same shape.
  • each of the openings 24A to 24C has a fan shape.
  • the shape of each of the openings 24A to 24C may be a shape other than a fan shape (for example, a square shape or a circular shape).
  • the openings 24A to 24C are arranged at equal intervals around the optical axis OA.
  • the center of gravity G of each aperture 24A to 24C is located off the optical axis OA.
  • Each center-of-gravity position G is the geometric center of the corresponding aperture 24A to 24C.
  • each opening 24A to 24C will be referred to as an "opening 24."
  • the opening 24 is an example of an "opening" according to the technology of the present disclosure.
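Because the aspects above use the center-of-gravity position of each aperture as a correction characteristic, it may help to see how that position follows from the aperture geometry. The following is an illustrative sketch, not part of the disclosure: it computes the radial distance from the optical axis to the centroid of a fan-shaped (annular-sector) aperture using the standard centroid formula for an annular sector; the function name and all dimensions are assumptions:

```python
import math


def sector_centroid_offset(r_inner: float, r_outer: float, half_angle: float) -> float:
    """Radial distance from the optical axis to the centroid of a
    fan-shaped (annular-sector) aperture.

    half_angle is half the angular width of the sector, in radians.
    Standard result: r_c = (2 sin(a) / (3a)) * (R2^3 - R1^3) / (R2^2 - R1^2).
    """
    return (2.0 * math.sin(half_angle) / (3.0 * half_angle)) * (
        (r_outer**3 - r_inner**3) / (r_outer**2 - r_inner**2)
    )
```

For three identical apertures arranged at equal intervals around the optical axis, each centroid lies at this radius, rotated 120° from its neighbors, which is consistent with the center of gravity G lying off the optical axis.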
  • the spectral filters 20A to 20C are provided in the apertures 24A to 24C, respectively, so that they are arranged at equal intervals around the optical axis OA.
  • Each of the spectral filters 20A to 20C is a bandpass filter that transmits light in a specific wavelength band.
  • the spectral filters 20A to 20C have different wavelength bands. Specifically, the spectral filter 20A has a first wavelength band λ1, the spectral filter 20B has a second wavelength band λ2, and the spectral filter 20C has a third wavelength band λ3.
  • each of the spectral filters 20A to 20C will be referred to as a "spectral filter 20.”
  • the spectral filter 20 is an example of a “filter” according to the technology of the present disclosure.
  • when there is no need to distinguish them, the first wavelength band λ1, the second wavelength band λ2, and the third wavelength band λ3 are each referred to as a "wavelength band λ."
  • the polarizing filters 22A to 22C are provided corresponding to the spectral filters 20A to 20C, respectively. Specifically, the polarizing filter 22A is provided in the aperture 24A, and is overlapped with the spectral filter 20A. The polarizing filter 22B is provided in the aperture 24B and overlapped with the spectral filter 20B. The polarizing filter 22C is provided in the aperture 24C and is overlapped with the spectral filter 20C.
  • Each of the polarizing filters 22A to 22C is an optical filter that transmits light vibrating in a specific direction.
  • the polarizing filters 22A to 22C have polarization axes with mutually different polarization angles.
  • the polarizing filter 22A has a first polarization angle θ1, the polarizing filter 22B has a second polarization angle θ2, and the polarizing filter 22C has a third polarization angle θ3.
  • the polarization axis may also be referred to as the transmission axis.
  • the first polarization angle θ1 is set to 0°, the second polarization angle θ2 is set to 45°, and the third polarization angle θ3 is set to 90°.
  • each of the polarizing filters 22A to 22C will be referred to as a "polarizing filter 22.”
  • the polarizing filter 22 is an example of a "polarizing filter" according to the technology of the present disclosure. Furthermore, when there is no need to distinguish them, the first polarization angle θ1, the second polarization angle θ2, and the third polarization angle θ3 are each referred to as a "polarization angle θ."
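The combination of lens-side polarizing filters at 0°, 45°, and 90° with pixel-side polarizers suggests that per-band intensities can be separated by inverting a Malus's-law mixing matrix. The sketch below illustrates that idea only; it is not the patent's actual interference removal method, and the names, the idealized cos² transmission model, and the noise-free assumption are all hypothetical:

```python
import numpy as np

# Malus's law: a pixel behind a polarizer at angle phi transmits a
# cos^2(phi - theta) fraction of light that passed a lens-side
# polarizing filter at angle theta.
PIXEL_ANGLES = np.deg2rad([0.0, 45.0, 90.0])  # pixel-side polarizer angles
BAND_ANGLES = np.deg2rad([0.0, 45.0, 90.0])   # lens-side polarizing filter angles

# Mixing matrix: M[i, k] = cos^2(phi_i - theta_k); invertible for these angles.
M = np.cos(PIXEL_ANGLES[:, None] - BAND_ANGLES[None, :]) ** 2


def remove_interference(outputs: np.ndarray) -> np.ndarray:
    """Recover the three per-band intensities from the three polarizer
    pixel outputs of one pixel block by inverting the mixing matrix."""
    return np.linalg.solve(M, outputs)
```

In this idealized, noise-free model the recovery is exact; a real implementation would calibrate the mixing matrix rather than derive it purely from the nominal angles.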
  • the number of apertures 24 is three, corresponding to the number of wavelength bands λ (i.e., the number of the plurality of spectral filters 20), but the number of apertures 24 is not limited to this. Furthermore, unused apertures 24 among the plurality of apertures 24 may be covered by a shielding member (not shown). Further, in the example shown in FIG. 2, the plurality of spectral filters 20 have different wavelength bands λ, but the plurality of spectral filters 20 may include spectral filters 20 having the same wavelength band λ.
  • the lens device 12 includes an optical system 26, and the imaging device body 14 includes an image sensor 28.
  • the optical system 26 is an example of an "optical system” according to the technology of the present disclosure
  • the image sensor 28 is an example of an "image sensor” according to the technology of the present disclosure.
  • the optical system 26 includes a pupil splitting filter 16, a first lens 30, and a second lens 32.
  • the first lens 30, the pupil splitting filter 16, and the second lens 32 are arranged in this order along the optical axis OA of the lens device 12 from the subject 4 side toward the image sensor 28 side.
  • the subject 4 side may be referred to as the "object side” and the image sensor 28 side may be referred to as the "image side”.
  • the first lens 30 causes light obtained by the light emitted from the light source 2 being reflected by the subject 4 (hereinafter referred to as "subject light") to enter the pupil division filter 16.
  • the subject light is an example of "light” according to the technology of the present disclosure.
  • the second lens 32 forms an image of the subject light that has passed through the pupil splitting filter 16 onto a light receiving surface 34A of a photoelectric conversion element 34 provided in the image sensor 28.
  • the light source 2 is, for example, an LED light source, a laser light source, or an incandescent light bulb.
  • the light emitted from the light source 2 is unpolarized.
  • the light source 2 may be included in the imaging device body 14 and/or the lens device 12. Furthermore, the light emitted from the light source may be natural light.
  • the pupil splitting filter 16 is placed at the pupil position of the optical system 26.
  • the pupil position refers to the aperture surface that limits the brightness of the optical system 26.
  • the pupil position here includes a nearby position, and the nearby position refers to the range from the entrance pupil to the exit pupil.
  • the configuration of the pupil division filter 16 is as described using FIG. 2.
  • in FIG. 3, for convenience, a plurality of spectral filters 20 and a plurality of polarizing filters 22 are shown arranged in a straight line along a direction perpendicular to the optical axis OA.
  • the image sensor 28 includes a photoelectric conversion element 34 and a signal processing circuit 36.
  • the image sensor 28 is, for example, a CMOS image sensor.
  • a CMOS image sensor is exemplified as the image sensor 28, but the technology of the present disclosure is not limited to this; the technology of the present disclosure is realized even if the image sensor 28 is another type of image sensor, such as a CCD image sensor.
  • FIG. 3 shows a schematic configuration of the photoelectric conversion element 34.
  • FIG. 4 specifically shows the configuration of a part of the photoelectric conversion element 34.
  • the photoelectric conversion element 34 includes a pixel layer 38, a polarizing filter layer 40, and a spectral filter layer 42. Note that the configuration of the photoelectric conversion element 34 shown in FIG. 3 is an example, and the technology of the present disclosure is valid even if the photoelectric conversion element 34 does not include the spectral filter layer 42.
  • the pixel layer 38 has a plurality of pixels 44.
  • the plurality of pixels 44 are arranged in a matrix and form a light receiving surface 34A of the photoelectric conversion element 34.
  • Each pixel 44 is a physical pixel having a photodiode (not shown), photoelectrically converts the received light, and outputs an electrical signal according to the amount of received light.
  • the pixels 44 provided in the photoelectric conversion element 34 will be referred to as "physical pixels 44.”
  • the pixels forming the multispectral image 74 are referred to as "image pixels.”
  • the photoelectric conversion element 34 outputs the electrical signals output from the plurality of physical pixels 44 to the signal processing circuit 36 as image data.
  • the signal processing circuit 36 digitizes the analog imaging data input from the photoelectric conversion element 34.
  • the image data is image data indicating a captured image 70.
  • a plurality of physical pixels 44 form a plurality of pixel blocks 46.
  • Each pixel block 46 is formed by a total of four physical pixels 44, two in the vertical direction and two in the horizontal direction.
  • the four physical pixels 44 forming each pixel block 46 are shown arranged in a straight line along the direction perpendicular to the optical axis OA, but on the photoelectric conversion element 34 the four physical pixels 44 are actually arranged adjacent to one another in the vertical and horizontal directions (see FIG. 4).
  • the physical pixel 44 is an example of a "pixel” according to the technology of the present disclosure
  • the pixel block 46 is an example of a "pixel block” according to the technology of the present disclosure.
  • the polarizing filter layer 40 has multiple types of polarizers 48A to 48D.
  • Each polarizer 48A to 48D is an optical filter that transmits light vibrating in a specific direction.
  • the polarizers 48A to 48D have polarization axes with mutually different polarization angles θ. Specifically, the polarizer 48A has a first polarization angle θ1, the polarizer 48B has a second polarization angle θ2, the polarizer 48C has a third polarization angle θ3, and the polarizer 48D has a fourth polarization angle θ4.
  • the first polarization angle ⁇ 1 is set to 0°
  • the second polarization angle ⁇ 2 is set to 45°
  • the third polarization angle ⁇ 3 is set to 90°
  • the fourth polarization angle ⁇ 4 is set to 135°.
  • each of the polarizers 48A to 48D will be referred to as a "polarizer 48.”
  • the polarizer 48 is an example of a "polarizer” according to the technology of the present disclosure.
  • the first polarization angle ⁇ 1 , the second polarization angle ⁇ 2 , the third polarization angle ⁇ 3 , and the fourth polarization angle ⁇ 4 are each referred to as “polarization angle ⁇ ”.
  • the spectral filter layer 42 includes a B filter 50A, a G filter 50B, and an R filter 50C.
  • the B filter 50A is a blue band filter that, among the plurality of wavelength bands, most readily transmits light in the blue wavelength band.
  • the G filter 50B is a green band filter that, among the plurality of wavelength bands, most readily transmits light in the green wavelength band.
  • the R filter 50C is a red band filter that, among the plurality of wavelength bands, most readily transmits light in the red wavelength band.
  • a B filter 50A, a G filter 50B, and an R filter 50C are assigned to each pixel block 46.
  • in FIG. 3, the B filter 50A, the G filter 50B, and the R filter 50C are shown arranged in a straight line along the direction perpendicular to the optical axis OA, but as an example, as shown in FIG. 4, the B filter 50A, the G filter 50B, and the R filter 50C are arranged in a matrix in a predetermined pattern arrangement.
  • as an example of the predetermined pattern arrangement, the B filter 50A, the G filter 50B, and the R filter 50C are arranged in a matrix in a Bayer arrangement.
  • the predetermined pattern arrangement may be an RGB stripe arrangement, an R/G checkered arrangement, an X-Trans (registered trademark) arrangement, a honeycomb arrangement, or the like other than the Bayer arrangement.
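The predetermined pattern arrangement above can be sketched as a tiled filter mosaic. The following is a minimal illustration, assuming a classic RGGB 2x2 cell for the Bayer arrangement; the specific cell layout is an illustrative assumption, since the embodiment only requires that the filters 50 be arranged in a matrix in some predetermined pattern.

```python
import numpy as np

def bayer_pattern(rows, cols):
    """Return a rows x cols array of filter labels tiled from a 2x2 RGGB cell.

    The RGGB cell below is an illustrative assumption; other predetermined
    patterns (RGB stripe, R/G checkered, X-Trans, honeycomb) tile differently.
    """
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    reps = (-(-rows // 2), -(-cols // 2))  # ceil division to cover the grid
    return np.tile(tile, reps)[:rows, :cols]

pattern = bayer_pattern(4, 4)  # one filter label per physical pixel
```

In a Bayer arrangement, half of the pixels carry the G filter, which the tiling above reproduces.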
  • hereinafter, the B filter 50A, the G filter 50B, and the R filter 50C will each be referred to as a "filter 50."
  • the imaging device body 14 includes a control driver 52, an input/output I/F 54, a computer 56, and a display 58 in addition to the image sensor 28.
  • a signal processing circuit 36, a control driver 52, a computer 56, and a display 58 are connected to the input/output I/F 54.
  • the computer 56 has a processor 60, a storage 62, and a RAM 64.
  • the processor 60 controls the entire imaging device 10.
  • the processor 60 is, for example, an arithmetic processing device including a CPU and a GPU, and the GPU operates under the control of the CPU and is responsible for executing processing regarding images.
  • an arithmetic processing unit including a CPU and a GPU is cited as an example of the processor 60, but this is merely an example; the processor 60 may be one or more CPUs with integrated GPU functionality, or one or more CPUs without integrated GPU functionality.
  • the processor 60, storage 62, and RAM 64 are connected via a bus 66, and the bus 66 is connected to the input/output I/F 54.
  • Processor 60 is an example of a "processor” according to the technology of the present disclosure.
  • the computer 56 is an example of a "computer” according to the technology of the present disclosure.
  • the storage 62 is a non-temporary storage medium and stores various parameters and programs.
  • an example of the storage 62 is a flash memory (e.g., EEPROM); the storage 62 may also be another non-volatile storage device such as an HDD (hard disk drive).
  • the RAM 64 temporarily stores various information and is used as a work memory. Examples of the RAM 64 include DRAM and/or SRAM.
  • the processor 60 reads a necessary program from the storage 62 and executes the read program on the RAM 64.
  • Processor 60 controls control driver 52 and signal processing circuit 36 according to a program executed on RAM 64.
  • the control driver 52 controls the photoelectric conversion element 34 under the control of the processor 60.
  • the display 58 is, for example, a liquid crystal display or an EL display, and displays various images including the multispectral image 74. Note that the display 58 may be included in an external device (not shown) that is communicably connected to the imaging device 10.
  • the imaging device 10 generates spectral images 72A to 72C corresponding to the respective wavelength bands λ based on the captured image 70, and generates a multispectral image 74 based on the spectral images 72A to 72C.
  • the spectral image 72A is a spectral image corresponding to the first wavelength band ⁇ 1
  • the spectral image 72B is a spectral image corresponding to the second wavelength band ⁇ 2
  • the spectral image 72C is a spectral image corresponding to the third wavelength band λ3.
  • each of the spectral images 72A to 72C will be referred to as a "spectral image 72."
  • the spectral image 72 is an example of a "spectral image” and an “image” according to the technology of the present disclosure.
  • FIG. 6 shows an example of the multispectral image 74.
  • Multispectral image 74 includes multiple spectral images 72.
  • Each spectral image 72 includes an object having the shape of the letter "F" as an image.
  • a shift of the image (hereinafter referred to as an "image shift") has occurred in each spectral image 72.
  • the image shift differs for each spectral image 72.
  • one reason why the image shift differs for each spectral image 72 is that the spectral filters 20 (see FIG. 2) corresponding to the respective spectral images 72 are provided in apertures 24 formed at different positions with respect to the optical axis OA, so that parallax occurs between the apertures 24.
  • a positional shift occurs in the optical image formed on the light receiving surface 34A for each wavelength band ⁇ due to the parallax.
  • the positional shift of the optical image has a different magnitude and direction for each wavelength band ⁇ .
  • a cause of the image shift differing for each spectral image 72 is, for example, that the parallax generated in the optical system 26 differs for each wavelength band λ because the plurality of apertures 24 are formed at different positions. In a state where an image shift occurs in each spectral image 72, the image quality of the multispectral image 74 deteriorates compared to a case where no image shift occurs.
  • FIG. 7 shows an example of a mode in which the light emitted from the object point 76 is imaged on the light receiving surface 34A when the shape of the aperture 24 is fan-shaped.
  • the optical image 78 formed on the light receiving surface 34A has a shape corresponding to the shape of the aperture 24.
  • the positional deviation of the optical image 78 is defined by the center of gravity 78A of the shape of the optical image 78 (that is, the center of the geometric shape)
  • the positional deviation of the optical image 78 differs depending on the shape of the optical image 78. In this way, the positional shift of the optical image 78 is affected not only by the position of the aperture 24 but also by the shape of the aperture 24.
  • the positional shift of the optical image 78 differs depending on the center of gravity position G of the aperture 24, which is defined by the position and shape of the aperture 24.
  • the center of gravity position G of the aperture 24 is defined by its relative position to the optical axis OA.
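The center of gravity position G described above can be sketched as the centroid of the aperture region expressed relative to the optical axis OA. The pixel grid, the placement of OA at the grid center, and the rectangular test aperture below are all illustrative assumptions, not values from the embodiment.

```python
import numpy as np

def aperture_centroid(mask, optical_axis_xy):
    """Centroid of the True region of `mask` (the aperture shape),
    expressed relative to the optical axis position."""
    ys, xs = np.nonzero(mask)          # coordinates of the open region
    gx, gy = xs.mean(), ys.mean()      # geometric center of the shape
    ox, oy = optical_axis_xy
    return gx - ox, gy - oy            # G relative to the optical axis OA

# Hypothetical off-axis aperture on an 11x11 grid with OA at (5, 5)
mask = np.zeros((11, 11), dtype=bool)
mask[2:5, 6:10] = True
G = aperture_centroid(mask, optical_axis_xy=(5.0, 5.0))
```

Because G depends on both where the aperture is and what shape it has, two apertures at the same nominal position but with different shapes yield different values of G, matching the observation that the positional shift of the optical image 78 depends on both.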
  • the positional shift of the optical image 78 also differs depending on the characteristics of the optical system 26 other than the center of gravity position G of the aperture 24.
  • the characteristics of the optical system 26 other than the center of gravity position G of the aperture 24 include, for example, the arrangement position of the pupil division filter 16.
  • factors that cause image shift include, for example, distortion occurring in the optical system 26 and/or trapezoidal distortion caused by imaging a subject surface that is not perpendicular to the optical axis OA.
  • the processor 60 performs a multispectral image generation process.
  • a multispectral image generation program 80 is stored in the storage 62.
  • the multispectral image generation program 80 is an example of a "program" according to the technology of the present disclosure.
  • the processor 60 reads the multispectral image generation program 80 from the storage 62 and executes the read multispectral image generation program 80 on the RAM 64.
  • Processor 60 executes multispectral image generation processing to generate multispectral image 74 according to multispectral image generation program 80 executed on RAM 64 .
  • the multispectral image generation process is realized by the processor 60 operating as an output value acquisition unit 82, an interference removal processing unit 84, a correction processing unit 86, and a multispectral image generation unit 88 according to the multispectral image generation program 80.
  • the output value acquisition unit 82 acquires the output value Y of each physical pixel 44 based on the imaging data.
  • the output value Y of each physical pixel 44 corresponds to the luminance value of each pixel included in the captured image 70 indicated by the captured image data.
  • the output value Y of each physical pixel 44 is a value that includes interference (that is, crosstalk). That is, since light in each of the first wavelength band λ1, the second wavelength band λ2, and the third wavelength band λ3 is incident on each physical pixel 44, the output value Y is a mixture of a value corresponding to the amount of light in the first wavelength band λ1, a value corresponding to the amount of light in the second wavelength band λ2, and a value corresponding to the amount of light in the third wavelength band λ3.
  • the processor 60 needs to perform, on the output value Y of each physical pixel 44, a process of separating and extracting a value corresponding to each wavelength band λ from the output value Y, that is, an interference removal process that removes crosstalk. Therefore, in this embodiment, in order to obtain the multispectral image 74 (see FIG. 7), the interference removal processing unit 84 executes the interference removal process on the output value Y of each physical pixel 44 acquired by the output value acquisition unit 82.
  • the output value Y of each physical pixel 44 includes, as its components, luminance values for each polarization angle θ for red, green, and blue. The output value Y of each physical pixel 44 is expressed by equation (1).
  • Yθ1_R, Yθ2_R, Yθ3_R, and Yθ4_R are the luminance values of the red components of the output value Y whose polarization angles are the first polarization angle θ1, the second polarization angle θ2, the third polarization angle θ3, and the fourth polarization angle θ4, respectively.
  • Yθ1_G, Yθ2_G, Yθ3_G, and Yθ4_G are the luminance values of the corresponding green components, and Yθ1_B, Yθ2_B, Yθ3_B, and Yθ4_B are the luminance values of the corresponding blue components.
  • the pixel value X of each image pixel forming the multispectral image 74 includes, as its components, the luminance value Xλ1 of polarized light in the first wavelength band λ1 having the first polarization angle θ1 (hereinafter referred to as "first wavelength band polarized light"), the luminance value Xλ2 of polarized light in the second wavelength band λ2 having the second polarization angle θ2 (hereinafter referred to as "second wavelength band polarized light"), and the luminance value Xλ3 of polarized light in the third wavelength band λ3 having the third polarization angle θ3 (hereinafter referred to as "third wavelength band polarized light").
  • the pixel value X of each image pixel is expressed by equation (2).
  • A is an interference matrix.
  • the interference matrix A (not shown) is a matrix indicating characteristics of interference.
  • the interference matrix A is defined in advance based on a plurality of known values, such as the spectrum of the subject light, the spectral transmittance of the first lens 30, the spectral transmittance of the second lens 32, the spectral transmittances of the plurality of spectral filters 20, and the spectral sensitivity of the image sensor 28.
  • the interference removal matrix A+ is likewise a matrix defined based on the spectrum of the subject light, the spectral transmittance of the first lens 30, the spectral transmittance of the second lens 32, the spectral transmittances of the plurality of spectral filters 20, the spectral sensitivity of the image sensor 28, and the like.
  • the interference removal matrix A+ is stored in the storage 62 in advance.
  • the interference removal processing unit 84 acquires the interference removal matrix A+ stored in the storage 62 and the output value Y of each physical pixel 44 acquired by the output value acquisition unit 82, and derives the pixel value X of each image pixel from the acquired interference removal matrix A+ and the output value Y of each physical pixel 44 using equation (4).
  • the pixel value X of each image pixel is the brightness value X ⁇ 1 of the first wavelength band polarized light, the brightness value X ⁇ 2 of the second wavelength band polarized light, and the brightness value X ⁇ 3 of the third wavelength band polarized light. are included as components of the pixel value X.
  • the spectral image 72A of the captured image 70 is an image corresponding to the luminance value X ⁇ 1 of light in the first wavelength band ⁇ 1 (that is, an image based on the luminance value X ⁇ 1 ).
  • the spectral image 72B of the captured image 70 is an image corresponding to the brightness value X ⁇ 2 of light in the second wavelength band ⁇ 2 (that is, an image based on the brightness value X ⁇ 2 ).
  • the spectral image 72C of the captured image 70 is an image corresponding to the brightness value X ⁇ 3 of light in the third wavelength band ⁇ 3 (that is, an image based on the brightness value X ⁇ 3 ).
  • by executing the interference removal process, the captured image 70 is separated into the spectral image 72A corresponding to the luminance value Xλ1 of the first wavelength band polarized light, the spectral image 72B corresponding to the luminance value Xλ2 of the second wavelength band polarized light, and the spectral image 72C corresponding to the luminance value Xλ3 of the third wavelength band polarized light. That is, the captured image 70 is separated into a spectral image 72 for each wavelength band λ of the plurality of spectral filters 20. As described above, an image shift occurs in each spectral image 72 (see FIG. 6), and the image shift differs for each spectral image 72.
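The interference removal described above amounts to recovering X from Y = A X using the pseudo-inverse A+. The sketch below illustrates this for a single pixel; the numbers in A and X_true are synthetic assumptions, since in the embodiment A is defined from the subject-light spectrum, the lens and spectral-filter transmittances, and the sensor's spectral sensitivity (and Y actually has 12 components, one per polarization angle and color).

```python
import numpy as np

# Synthetic interference matrix A: rows are polarization channels of one
# physical pixel, columns are the three wavelength bands. The values are
# illustrative only.
A = np.array([[0.8, 0.2, 0.1],
              [0.1, 0.7, 0.2],
              [0.1, 0.1, 0.7],
              [0.3, 0.3, 0.3]])

X_true = np.array([10.0, 20.0, 5.0])   # luminance per wavelength band (synthetic)
Y = A @ X_true                         # crosstalk-laden output value Y (eq. (1)/(2))

A_plus = np.linalg.pinv(A)             # interference removal matrix A+
X_hat = A_plus @ Y                     # demixed pixel value X (eq. (4))
```

Because A has full column rank and Y lies in its range, A+ Y recovers X exactly here; with sensor noise the pseudo-inverse instead gives the least-squares estimate.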
  • the correction processing unit 86 performs a process of correcting the image shift (hereinafter referred to as the "correction process") on each spectral image 72.
  • the correction process is an example of “image processing” and “calibration process” according to the technology of the present disclosure.
  • correction value groups 90A to 90C are stored in advance.
  • the correction value group 90A includes a plurality of correction values 92A for correcting image shift of the spectral image 72A.
  • the correction value group 90B includes a plurality of correction values 92B for correcting image shift of the spectral image 72B.
  • the correction value group 90C includes a plurality of correction values 92C for correcting image shift of the spectral image 72C.
  • the process of deriving each of the correction values 92A to 92C (hereinafter referred to as "correction value derivation process”) will be described in detail later.
  • each correction value 92A to 92C will be referred to as a "correction value 92.”
  • Each correction value 92 may be determined for each image pixel included in the spectral image 72, or may be determined for each image region of the spectral image 72. By performing a correction process based on each correction value 92 on each spectral image 72, the image shift of each spectral image 72 is corrected.
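The correction process above can be sketched as translating each spectral image 72 by its correction value 92. As a simplification, the sketch uses one integer (dy, dx) correction per image, whereas the embodiment allows a correction value per image pixel or per image region; the zero fill for vacated pixels is also an illustrative assumption.

```python
import numpy as np

def apply_correction(spectral_image, correction_value):
    """Translate the image by (dy, dx); vacated pixels are filled with 0.

    `correction_value` plays the role of a correction value 92: it points
    opposite to the observed image shift, so applying it moves the image
    content back toward its reference position.
    """
    dy, dx = correction_value
    corrected = np.zeros_like(spectral_image)
    h, w = spectral_image.shape
    src_y = slice(max(0, -dy), min(h, h - dy))
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_y = slice(max(0, dy), min(h, h + dy))
    dst_x = slice(max(0, dx), min(w, w + dx))
    corrected[dst_y, dst_x] = spectral_image[src_y, src_x]
    return corrected

img = np.zeros((5, 5))
img[1, 1] = 1.0                             # a feature observed at a shifted position
corrected = apply_correction(img, (1, 2))   # hypothetical correction value (+1, +2)
```

After each spectral image 72 is corrected this way, the corrected images can be combined into the multispectral image 74 because matching features now fall on the same image pixels.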
  • the multispectral image generation unit 88 generates a multispectral image 74 by combining a plurality of spectral images 72 whose image deviations have been corrected by the correction processing unit 86.
  • multispectral image data representing multispectral image 74 is output to display 58.
  • Display 58 displays multispectral image 74 based on the multispectral image data.
  • the multispectral image data may be output to a device other than the display 58 (not shown).
  • FIG. 13 shows an example of a processing device 100 for deriving the correction values 92 corresponding to each of the above-mentioned spectral images 72.
  • the processing device 100 includes a computer 102.
  • Computer 102 includes a processor 104, storage 106, and RAM 108.
  • the processor 104, storage 106, and RAM 108 are realized by the same hardware as the above-described processor 60, storage 62, and RAM 64 (see FIG. 2).
  • a correction value derivation program 110 is stored in the storage 106.
  • the processor 104 reads the correction value derivation program 110 from the storage 106 and executes the read correction value derivation program 110 on the RAM 108.
  • the processor 104 executes a correction value derivation process for deriving each correction value 92 according to a correction value derivation program 110 executed on the RAM 108.
  • the correction value derivation process is realized by the processor 104 operating as the spectral image acquisition unit 112 and the correction value derivation unit 114 according to the correction value derivation program 110.
  • the processing device 100 may be the imaging device 10. That is, the correction value derivation process for deriving the correction value 92 may be performed by the imaging device 10.
  • FIGS. 14 to 16 show an example of how the correction value 92 is derived.
  • a dot chart 120 is used as the subject for deriving the correction value 92.
  • the dot chart 120 has a plurality of dots 122. Each dot 122 is a circular point. The size of the dots 122 may be set arbitrarily. For example, the plurality of dots 122 are arranged on a pair of diagonal lines (not shown) of the dot chart 120.
  • the dot chart 120 is an example of a "subject” and a "calibration member” according to the technology of the present disclosure.
  • the dot 122 is an example of a "characteristic part" and a "point” according to the technology of the present disclosure.
  • FIG. 14 shows an example of optical images 130A to 130C, optical images 132A to 132C, and optical images 134A to 134C that are formed on the light receiving surface 34A when the dot chart 120 is imaged by the imaging device 10.
  • Optical images 130A to 130C are optical images corresponding to the first dot 122A located at the upper right of the plurality of dots 122
  • optical images 132A to 132C are optical images corresponding to the second dot 122B located at the lower right of the plurality of dots 122.
  • the optical images 134A to 134C are optical images corresponding to the third dot 122C located at the center of the plurality of dots 122.
  • the optical image 130A, the optical image 132A, and the optical image 134A are optical images corresponding to the first wavelength band ⁇ 1
  • the optical image 130B, the optical image 132B, and the optical image 134B are the optical images corresponding to the second wavelength band ⁇ 2
  • the optical image 130C, the optical image 132C, and the optical image 134C are optical images corresponding to the third wavelength band ⁇ 3 .
  • the first reference position 136A is a position corresponding to the first dot 122A within the light receiving surface 34A
  • the second reference position 136B is a position corresponding to the second dot 122B within the light receiving surface 34A
  • the third reference position 136C is a position corresponding to the third dot 122C within the light receiving surface 34A.
  • the optical images 130A to 130C are shifted from the first reference position 136A.
  • the direction and amount of positional deviation of the optical images 130A to 130C are different from each other.
  • the optical images 132A to 132C are shifted from the second reference position 136B, and the optical images 134A to 134C are misaligned from the third reference position 136C.
  • the directions and amounts of positional deviations of optical images 132A to 132C are different from each other, and the directions and amounts of positional deviations of optical images 134A to 134C are different from each other.
  • the first dot 122A and the second dot 122B are arranged at symmetrical positions in the vertical direction of the dot chart 120, but the directions and amounts of the positional shifts of the optical images 130A to 130C and the optical images 132A to 132C differ from each other depending on the position and shape of the apertures 24 (see FIG. 2). Therefore, the optical images 130A to 130C and the optical images 132A to 132C are not symmetrical in the vertical direction of the light receiving surface 34A but are asymmetrical. Further, as in the optical images 134A to 134C corresponding to the third dot 122C, a positional shift also occurs at the center of the light receiving surface 34A.
  • the spectral image acquisition unit 112 acquires each of the spectral images 72A to 72C obtained by capturing the dot chart 120.
  • the spectral image 72A includes a wavelength band image 140A corresponding to the optical image 130A
  • the spectral image 72B includes a wavelength band image 140B corresponding to the optical image 130B
  • the spectral image 72C includes a wavelength band image 140C corresponding to the optical image 130C.
  • the reference position 142 is a position corresponding to the first dot 122A in each spectrum image 72.
  • the wavelength band images 140A to 140C are shifted from the reference position 142.
  • the direction and amount of positional shift of the wavelength band images 140A to 140C are different from each other.
  • the wavelength band images 140A to 140C will be referred to as "wavelength band images 140.”
  • the wavelength band image (not shown) corresponding to the dots 122 other than the first dot 122A may be referred to as the "wavelength band image 140.”
  • the wavelength band image 140 is an example of a "wavelength band image" according to the technology of the present disclosure.
  • the correction value deriving unit 114 derives the correction value 92 in the following manner. For example, the correction value deriving unit 114 acquires each of the spectral images 72A to 72C acquired by the spectral image acquisition unit 112. Further, the correction value deriving unit 114 specifies the position of the wavelength band image 140A in the spectral image 72A and the reference position 142 in the spectral image 72A based on the position of the first dot 122A in the dot chart 120 (see FIG. 15) and design values regarding the optical system 26 (see FIG. 2).
  • similarly, the correction value deriving unit 114 specifies the position of the wavelength band image 140B in the spectral image 72B and the reference position 142 in the spectral image 72B, and the position of the wavelength band image 140C in the spectral image 72C and the reference position 142 in the spectral image 72C, based on the position of the first dot 122A in the dot chart 120 and the design values regarding the optical system 26. The position of the first dot 122A in the dot chart 120 and the design values regarding the optical system 26 are both known values.
  • the design values related to the optical system 26 may include values related to specifications given to the imaging device 10 as a product. Further, the design values may include values related to the characteristics of the optical system 26.
  • the characteristics of the optical system 26 may include, for example, the arrangement position of the pupil splitting filter 16 and/or the characteristics of the aperture 24 (see FIG. 2).
  • the characteristics of the opening 24 may include the center of gravity position G of the opening 24 (see FIGS. 7 and 8). The center of gravity position G of the opening 24 may be determined based on the position and/or shape of the opening 24, for example.
  • the design values regarding the optical system 26 may include values regarding the combination of wavelength bands ⁇ of each spectral filter 20 (see FIG. 2).
  • the value regarding the combination of wavelength bands ⁇ may be a value indicating the wavelength band of each spectral filter 20 itself, or may be a value indicating the relationship between the wavelength bands of each spectral filter 20.
  • the correction value deriving unit 114 derives the direction and amount of the positional shift of the wavelength band image 140A with respect to the reference position 142 based on the spectral image 72A, and derives, based on the derived direction and amount, a correction value 92A for the image pixel corresponding to the wavelength band image 140A among the plurality of image pixels included in the spectral image 72A.
  • the correction value 92A is a correction value for correcting the positional shift of the wavelength band image 140A.
  • the correction value 92A includes a value indicating the direction opposite to the direction of the positional shift of the wavelength band image 140A with respect to the reference position 142 (that is, the direction of the reference position 142 with respect to the wavelength band image 140A) and a value indicating the derived amount of the positional shift.
  • The correction value deriving unit 114 derives the direction and amount of positional deviation of the wavelength band image 140B with respect to the reference position 142 based on the spectral image 72B and, based on the derived direction and amount, derives a correction value 92B for the image pixel corresponding to the wavelength band image 140B among the plurality of image pixels.
  • the correction value 92B is a correction value for correcting the positional shift of the wavelength band image 140B.
  • The correction value deriving unit 114 derives the direction and amount of positional deviation of the wavelength band image 140C with respect to the reference position 142 based on the spectral image 72C and, based on the derived direction and amount, derives a correction value 92C for the image pixel corresponding to the wavelength band image 140C among the plurality of image pixels.
  • the correction value 92C is a correction value for correcting the positional shift of the wavelength band image 140C.
  • Each correction value 92 differs depending on the position of the wavelength band image 140.
  • FIGS. 15 and 16 show an example in which the correction value 92 for the image pixel corresponding to the wavelength band image 140 is derived based on the wavelength band image 140 corresponding to the first dot 122A.
  • The direction and amount of positional deviation are also derived for the wavelength band images 140 corresponding to the dots 122 other than the first dot 122A, and the correction value 92 for the image pixel corresponding to each such wavelength band image 140 is derived based on the derived direction and amount.
  • For image pixels other than the image pixels corresponding to the wavelength band image 140 (the latter hereinafter referred to as "corresponding image pixels"; the former, "non-corresponding image pixels"), the correction value 92 may be derived based on, for example, the correction values 92 of the corresponding image pixels. Further, the correction value 92 for a non-corresponding image pixel may be derived based on the position of the non-corresponding image pixel with respect to the corresponding image pixels.
  • each correction value 92 may include a correction value for correcting image shift due to distortion and/or trapezoidal distortion. Further, the correction value 92 may be derived based on only one of the direction and amount of positional deviation. Further, the correction value 92 may be derived based on a value obtained through experiment. Moreover, although an example is given here in which each correction value 92 is derived by the processing device 100, each correction value 92 may be experimentally derived by a developer of the imaging device 10 or the like. The correction value 92 derived in the above manner is stored in the storage 62 of the imaging device 10.
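One plausible way to derive correction values 92 for non-corresponding image pixels based on their position relative to the corresponding image pixels is inverse-distance interpolation of the corresponding pixels' correction values. This sketch is an assumption, not the disclosed method; all names are hypothetical.

```python
def interpolate_correction(pixel, known):
    """Correction value for a non-corresponding image pixel.

    `known` maps corresponding-pixel coordinates to their correction
    values; the result is an inverse-squared-distance weighted average,
    one plausible way to derive a value based on the position of the
    non-corresponding pixel with respect to the corresponding pixels.
    """
    wsum = cx = cy = 0.0
    for (px, py), (vx, vy) in known.items():
        d2 = (px - pixel[0]) ** 2 + (py - pixel[1]) ** 2
        if d2 == 0:
            return (vx, vy)           # exact hit: reuse the known value
        w = 1.0 / d2
        wsum += w
        cx += w * vx
        cy += w * vy
    return (cx / wsum, cy / wsum)
```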
  • FIG. 17 shows an example of the flow of multispectral image generation processing executed by the imaging device 10.
  • In step ST10, the output value Y of each physical pixel 44 is acquired. After the process of step ST10 is executed, the multispectral image generation process moves to step ST12.
  • In step ST12, the interference cancellation processing unit 84 acquires the interference cancellation matrix A+ stored in the storage 62 and the output value Y of each physical pixel 44 acquired in step ST10, and outputs the pixel value X of each image pixel based on the interference cancellation matrix A+ and the output value Y of each physical pixel 44 (see FIG. 10).
  • Thereby, the captured image 70 is separated into a spectral image 72A corresponding to the luminance value Xλ1 of the first wavelength band polarized light, a spectral image 72B corresponding to the luminance value Xλ2 of the second wavelength band polarized light, and a spectral image 72C corresponding to the luminance value Xλ3 of the third wavelength band polarized light.
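The interference removal of step ST12 (recovering X from Y via the interference cancellation matrix A+) can be illustrated with a small solver. For a square, invertible mixing matrix A the pseudo-inverse A+ coincides with the ordinary inverse, so plain Gaussian elimination suffices; the function and matrix names are hypothetical.

```python
def remove_interference(A, Y):
    """Recover per-band pixel values X from mixed outputs Y: X = A^-1 Y.

    A is the square mixing matrix (how each wavelength band contributes
    to each physical-pixel output). When A is invertible, its inverse
    coincides with the interference cancellation matrix A+ (the
    Moore-Penrose pseudo-inverse) described in the text.
    """
    n = len(A)
    # Augmented matrix [A | Y], reduced by Gauss-Jordan elimination.
    M = [list(map(float, A[i])) + [float(Y[i])] for i in range(n)]
    for col in range(n):
        # Partial pivoting for numerical stability.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]
```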
  • After the process of step ST12 is executed, the multispectral image generation process moves to step ST14.
  • In step ST14, the correction processing unit 86 performs correction processing to correct the image shift of each of the spectral images 72A to 72C (see FIG. 11). After the process of step ST14 is executed, the multispectral image generation process moves to step ST16.
  • In step ST16, the multispectral image generation unit 88 generates the multispectral image 74 by combining the spectral images 72A to 72C that have been corrected in step ST14 (see FIG. 12). After the process of step ST16 is executed, the multispectral image generation process moves to step ST18.
  • In step ST18, the processor 60 determines whether a condition for terminating the multispectral image generation process (i.e., a termination condition) is satisfied.
  • An example of the termination condition is that the user has given the imaging device 10 an instruction to terminate the multispectral image generation process.
  • If the termination condition is not satisfied, the determination is negative and the multispectral image generation process moves to step ST10.
  • If the termination condition is satisfied, the determination is affirmative and the multispectral image generation process is terminated.
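The overall flow of steps ST10 to ST18 can be sketched as a loop. All callables below are hypothetical stand-ins for the units described above, not APIs from the disclosure.

```python
def multispectral_generation(read_outputs, separate_bands, correct,
                             combine, should_stop):
    """Sketch of the generation loop ST10-ST18 (all callables assumed).

    ST10: acquire physical-pixel outputs; ST12: interference removal
    into spectral images; ST14: correct image shift per spectral image;
    ST16: combine into a multispectral image; ST18: repeat until the
    termination condition holds.
    """
    results = []
    while True:
        outputs = read_outputs()                              # ST10
        spectral_images = separate_bands(outputs)             # ST12
        corrected = [correct(img) for img in spectral_images]  # ST14
        results.append(combine(corrected))                    # ST16
        if should_stop():                                     # ST18
            return results
```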
  • Note that the image processing method realized by the imaging device 10 described above is an example of the "image processing method" according to the technology of the present disclosure.
  • FIG. 18 shows an example of the flow of the correction value derivation process executed by the processing device 100.
  • In step ST20, the spectral image acquisition unit 112 acquires each of the spectral images 72A to 72C obtained by the imaging device 10 (see FIG. 15). After the process of step ST20 is executed, the correction value derivation process moves to step ST22.
  • In step ST22, the correction value deriving unit 114 derives the direction and amount of positional deviation of the wavelength band images 140A to 140C based on the respective spectral images 72A to 72C and, based on the derived direction and amount, derives the correction values 92 for the image pixels included in each of the spectral images 72A to 72C (see FIG. 16).
  • the derived correction value 92 is stored in the storage 62 of the imaging device 10. After the process of step ST22 is executed, the correction value derivation process moves to step ST24.
  • In step ST24, the processor 104 determines whether a condition for terminating the correction value derivation process (i.e., a termination condition) is satisfied.
  • An example of the termination condition is that the correction values 92 have been derived for the plurality of image pixels included in each of the spectral images 72A to 72C.
  • If the termination condition is not satisfied, the determination is negative and the correction value derivation process moves to step ST22.
  • If the termination condition is satisfied, the determination is affirmative and the correction value derivation process is terminated.
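The flow of steps ST20 to ST24 amounts to deriving, per spectral image, the vector from the located wavelength band image back to the reference position 142. A minimal sketch follows, with `locate` as a hypothetical position-finding routine; none of these names come from the disclosure.

```python
def correction_value_derivation(spectral_images, reference_pos, locate):
    """Sketch of ST20-ST24: one correction value per spectral image.

    `locate` (assumed) returns the wavelength band image position
    within a spectral image; the correction value is the vector from
    that position back to the reference position, so applying it would
    move the band image onto the reference.
    """
    corrections = {}
    for name, image in spectral_images.items():        # ST20/ST22
        x, y = locate(image)
        corrections[name] = (reference_pos[0] - x, reference_pos[1] - y)
    return corrections                                 # ST24: all derived
```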
  • the optical system 26 includes a plurality of spectral filters 20 provided around the optical axis OA (see FIGS. 2 and 3).
  • The processor 60 performs correction processing on each spectral image 72 to correct the image shift caused by the positional deviation of the optical images split by the plurality of spectral filters 20 (see FIG. 11). Therefore, a multispectral image 74 with higher image quality can be obtained than when the multispectral image 74 is generated from spectral images 72 in which the image shift remains.
  • the correction process is performed on a plurality of spectral images 72 to generate a multispectral image 74. Therefore, before the multispectral image 74 is generated, image shifts occurring in the plurality of spectral images 72 can be corrected.
  • the correction process is performed, for example, based on design values regarding the optical system 26. Therefore, it is possible to correct image shift according to design values regarding the optical system 26.
  • the design values regarding the optical system 26 include, for example, the characteristics of the optical system 26. Therefore, by performing the correction process based on the characteristics of the optical system 26, it is possible to correct the image shift according to the characteristics of the optical system 26.
  • the characteristics of the optical system 26 include, for example, the arrangement position of the pupil splitting filter 16. Therefore, by performing the correction process based on the arrangement position of the pupil division filter 16, it is possible to correct image shift according to the arrangement position of the pupil division filter 16.
  • the characteristics of the optical system 26 include, for example, the characteristics of the aperture 24. Therefore, by performing the correction process based on the characteristics of the aperture 24, it is possible to correct the image shift according to the characteristics of the aperture 24.
  • The characteristics of the aperture 24 include, for example, the center of gravity position G of the aperture 24. Therefore, by performing the correction process based on the center of gravity position G of the aperture 24, it is possible to correct the image shift according to the center of gravity position G of the aperture 24.
  • The center of gravity position G of the aperture 24 is, for example, a position determined based on the position and/or shape of the aperture 24. Therefore, by performing the correction process based on the position and/or shape of the aperture 24, it is possible to correct image shift according to the position and/or shape of the aperture 24.
  • The design values regarding the optical system 26 include, for example, values regarding the combination of wavelength bands λ of the spectral filters 20. Therefore, by performing the correction process based on the values regarding the combination of wavelength bands λ of the spectral filters 20, it is possible to correct image shift according to the combination of wavelength bands λ of the spectral filters 20.
  • The correction process is performed based on the correction value 92 for each wavelength band λ. Therefore, even if the image shift differs between the spectral images 72 corresponding to the respective wavelength bands λ, the image shift can be corrected for each spectral image 72 corresponding to each wavelength band λ.
  • Each spectral image 72 includes a wavelength band image 140.
  • the positional shift of the wavelength band image 140 differs for each spectrum image 72. Therefore, by correcting the positional deviation of the wavelength band image 140 for each spectral image 72 through the correction process, the image deviation of each spectral image 72 can be corrected.
  • the correction value 92 used in the correction process differs depending on the position of the wavelength band image 140. Therefore, by using the correction value 92 according to the position of the wavelength band image 140 in the correction process, it is possible to correct positional deviations that differ for each wavelength band image 140.
  • The correction value 92 is determined based on the direction and/or amount of positional deviation of the wavelength band image 140 with respect to the reference position 142 within the spectral image 72. Therefore, by using, in the correction process, the correction value 92 corresponding to the direction and/or amount of positional deviation of the wavelength band image 140, the positional deviation can be corrected based on that direction and/or amount.
  • a dot chart 120 having dots 122 is used in the correction value derivation process for deriving the correction value 92. Therefore, in the correction value derivation process, for example, the position of the wavelength band image 140 can be specified using a known value such as the position of the dot 122 in the dot chart 120.
  • the dot chart 120 has dots 122 as a characteristic part. Therefore, the position of the wavelength band image 140 can be specified more easily than when a calibration member including a characteristic portion having a more complicated shape than the dots 122 is used.
  • the position corresponding to the dot 122 in the spectrum image 72 is set as the reference position 142. Therefore, in each spectrum image 72, the correction value 92 can be derived based on the positional shift of each wavelength band image 140 with respect to the reference position 142.
  • Interference removal processing is performed on the imaging data. Therefore, even if the output value Y of each physical pixel 44 corresponding to the imaging data includes interference (that is, crosstalk), performing the interference removal processing allows the values corresponding to each wavelength band λ to be separated and extracted from the output value Y. Thereby, a spectral image 72 corresponding to each wavelength band λ can be obtained from the captured image 70.
  • The plurality of spectral filters 20 have mutually different wavelength bands λ. Therefore, a spectral image 72 corresponding to each wavelength band λ can be obtained.
  • the plurality of spectral filters 20 are arranged side by side around the optical axis OA. Therefore, the number of pupil divisions can be secured in a smaller space than, for example, when a plurality of spectral filters 20 are arranged concentrically with the optical axis OA.
  • the correction value derivation unit 114 derives the correction value 92 in the following manner.
  • the correction value deriving unit 114 selects any one of the plurality of wavelength band images 140 and sets the selected wavelength band image 140 as the reference position 142.
  • Here, the case where the wavelength band image 140C is set as the reference position 142 will be described as an example.
  • the correction value deriving unit 114 derives the correction value 92 in the same manner as in the above embodiment, except that the wavelength band image 140C is set as the reference position 142.
  • The correction value deriving unit 114 derives the direction and amount of positional deviation of the wavelength band image 140A with respect to the reference position 142 based on the spectral image 72A and, based on the derived direction and amount, derives a correction value 92A for the image pixel corresponding to the wavelength band image 140A.
  • Similarly, the correction value deriving unit 114 derives the direction and amount of positional deviation of the wavelength band image 140B with respect to the reference position 142 based on the spectral image 72B and, based on the derived direction and amount, derives a correction value 92B for the image pixel corresponding to the wavelength band image 140B. Note that the correction value deriving unit 114 sets the correction value 92C to 0 for the image pixel corresponding to the wavelength band image 140C set as the reference position 142.
  • the position of any one of the plurality of wavelength band images 140 is set as the reference position 142. Therefore, the correction value 92 can be derived based on the positional deviation of the remaining wavelength band images 140 with respect to the position of the wavelength band image 140 set as the reference position 142.
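The first modification, in which one band image serves as the reference position and the reference band's own correction value is 0, can be sketched as follows. The dictionary representation and function name are illustrative assumptions.

```python
def derive_with_band_reference(band_positions, reference_band):
    """First-modification sketch: one band image is the reference.

    The selected band's correction value is set to 0; every other band
    gets a vector pointing from its position back to the reference
    band's position (opposite direction, same amount of deviation).
    """
    rx, ry = band_positions[reference_band]
    return {band: (0, 0) if band == reference_band else (rx - x, ry - y)
            for band, (x, y) in band_positions.items()}
```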
  • each correction value 92 does not need to include a correction value for correcting image shift due to distortion and/or trapezoidal distortion. In this way, the amount of correction by the correction value 92 can be reduced compared to the case where the correction value 92 includes a correction value for correcting image shift due to distortion and/or trapezoidal distortion.
  • the correction value derivation unit 114 derives the correction value 92 in the following manner. For example, the correction value deriving unit 114 selects any one of the plurality of spectral images 72. Here, the case where the spectrum image 72C is selected will be described as an example. Further, the correction value deriving unit 114 extracts the wavelength band image 140C corresponding to the first dot 122A from the spectrum image 72C by performing image processing on the spectrum image 72C.
  • the correction value deriving unit 114 sets the extracted wavelength band image 140C as the reference position 142.
  • the wavelength band image 140C corresponding to the first dot 122A in the spectrum image 72C is set as the reference position 142.
  • the correction value deriving unit 114 extracts the wavelength band image 140A corresponding to the first dot 122A from the spectrum image 72A by performing image processing on the spectrum image 72A. Similarly, the correction value deriving unit 114 extracts the wavelength band image 140B corresponding to the first dot 122A from the spectrum image 72B by performing image processing on the spectrum image 72B.
  • the correction value deriving unit 114 identifies the position of the wavelength band image 140A within the spectrum image 72A by image processing. Similarly, the correction value deriving unit 114 identifies the position of the wavelength band image 140B within the spectrum image 72B by image processing. Further, the correction value deriving unit 114 specifies the position of the wavelength band image 140C within the spectrum image 72C by image processing, and sets the specified position of the wavelength band image 140C as the position of the reference position 142.
  • the correction value deriving unit 114 derives the correction value 92A based on the direction and amount of positional deviation of the wavelength band image 140A with respect to the reference position 142. Further, the correction value deriving unit 114 derives the correction value 92B based on the direction and amount of positional deviation of the wavelength band image 140B with respect to the reference position 142. Note that also in the second modification, the correction value 92C is set to 0.
  • In this manner, the position of the wavelength band image 140 within the spectral image 72 and the reference position 142 within the spectral image 72 are specified by image processing. Therefore, these positions can be specified without using, for example, the position of the dot 122 in the dot chart 120 or the design values regarding the optical system 26.
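One simple form of image processing that could specify a dot's position without relying on chart geometry or design values is an intensity-weighted centroid over thresholded pixels. This is an illustrative assumption, not the disclosed algorithm; all names are hypothetical.

```python
def locate_dot(image, threshold=0.5):
    """Locate a dot's wavelength band image by intensity centroid.

    `image` is a 2D list of brightness values; pixels above `threshold`
    are treated as part of the dot, and their intensity-weighted
    centroid (x, y) is returned as the band image's position.
    """
    wsum = cx = cy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                wsum += v
                cx += v * x
                cy += v * y
    if wsum == 0:
        raise ValueError("no dot found above threshold")
    return (cx / wsum, cy / wsum)
```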
  • each correction value 92 does not need to include a correction value for correcting image shift due to distortion and/or trapezoidal distortion. In this way, the amount of correction by the correction value 92 can be reduced compared to the case where the correction value 92 includes a correction value for correcting image shift due to distortion and/or trapezoidal distortion.
  • a third modification shown in FIGS. 21 to 23 is an example in which a grid chart 150 is used instead of the dot chart 120 in the second modification.
  • the grid chart 150 has a grid pattern.
  • The lattice pattern may be a check pattern in which adjacent regions 152 surrounded by the lattice lines have different colors, or may be a pattern composed only of lattice-like lines.
  • Here, a check pattern is used as an example of the lattice pattern.
  • the adjacent areas 152 may have any combination of colors.
  • the intersection points 154 included in the grid pattern correspond to characteristic parts.
  • the plurality of regions 152 are arranged linearly in the vertical and horizontal directions of the grid chart 150, and the plurality of intersections 154 are also linearly arranged in the vertical and horizontal directions of the grid chart 150.
  • each region 152 has a square shape, and the plurality of intersection points 154 are arranged at equal intervals in the vertical and horizontal directions of the grid chart 150.
  • the grid chart 150 is an example of a "subject” and a "calibration member” according to the technology of the present disclosure.
  • the intersection 154 is an example of a “characteristic portion” and an “intersection” according to the technology of the present disclosure.
  • FIG. 21 shows an example of optical images 160A to 160C, optical images 162A to 162C, and optical images 164A to 164C that are formed on the light receiving surface 34A when the lattice chart 150 is imaged by the imaging device 10.
  • The optical images 160A to 160C are optical images corresponding to the first intersection 154A located in the second row from the top and the second column from the right among the plurality of intersections 154.
  • The optical images 162A to 162C are optical images corresponding to the second intersection 154B, and the optical images 164A to 164C are optical images corresponding to the third intersection 154C.
  • The optical image 160A, the optical image 162A, and the optical image 164A are optical images corresponding to the first wavelength band λ1.
  • The optical image 160B, the optical image 162B, and the optical image 164B are optical images corresponding to the second wavelength band λ2.
  • The optical image 160C, the optical image 162C, and the optical image 164C are optical images corresponding to the third wavelength band λ3.
  • the first reference position 166A is a position corresponding to the first intersection 154A in the light receiving surface 34A
  • the second reference position 166B is a position corresponding to the second intersection 154B in the light receiving surface 34A
  • the third reference position 166C is a position corresponding to the third intersection 154C within the light receiving surface 34A.
  • the optical images 160A to 160C are shifted from the first reference position 166A.
  • the direction and amount of positional deviation of the optical images 160A to 160C are different from each other.
  • the optical images 162A to 162C are shifted from the second reference position 166B, and the optical images 164A to 164C are misaligned from the third reference position 166C.
  • the directions and amounts of positional deviations of optical images 162A to 162C are different from each other, and the directions and amounts of positional deviations of optical images 164A to 164C are different from each other.
  • the spectral image acquisition unit 112 acquires each of the spectral images 72A to 72C obtained by capturing the grid chart 150.
  • The spectral image 72A includes a wavelength band image 170A corresponding to the optical image 160A.
  • The spectral image 72B includes a wavelength band image 170B corresponding to the optical image 160B.
  • The spectral image 72C includes a wavelength band image 170C corresponding to the optical image 160C.
  • the direction and amount of positional shift of the wavelength band images 170A to 170C are different from each other.
  • the wavelength band images 170A to 170C will be referred to as "wavelength band images 170.”
  • the correction value derivation unit 114 derives the correction value 92 in the following manner. For example, the correction value deriving unit 114 selects any one of the plurality of spectral images 72. Here, the case where the spectrum image 72C is selected will be described as an example. Further, the correction value deriving unit 114 extracts the wavelength band image 170C from the spectrum image 72C by performing image processing on the spectrum image 72C.
  • the correction value deriving unit 114 sets the extracted wavelength band image 170C as the reference position 172.
  • a wavelength band image 170C corresponding to the first intersection 154A in the spectrum image 72C is set as the reference position 172.
  • the correction value deriving unit 114 extracts the wavelength band image 170A corresponding to the first intersection point 154A from the spectrum image 72A by performing image processing on the spectrum image 72A. Similarly, the correction value deriving unit 114 extracts the wavelength band image 170B corresponding to the first intersection point 154A from the spectrum image 72B by performing image processing on the spectrum image 72B.
  • the correction value deriving unit 114 identifies the position of the wavelength band image 170A within the spectrum image 72A by image processing. Similarly, the correction value deriving unit 114 identifies the position of the wavelength band image 170B within the spectrum image 72B by image processing. Further, the correction value deriving unit 114 specifies the position of the wavelength band image 170C within the spectrum image 72C by image processing, and sets the specified position of the wavelength band image 170C as the position of the reference position 172.
  • the correction value deriving unit 114 derives the correction value 92A based on the direction and amount of positional deviation of the wavelength band image 170A with respect to the reference position 172. Further, the correction value deriving unit 114 derives the correction value 92B based on the direction and amount of positional deviation of the wavelength band image 170B with respect to the reference position 172. Note that also in the third modification, the correction value 92C is set to 0.
  • the grid chart 150 has a grid pattern as a characteristic part. Therefore, the position of the wavelength band image 170 can be specified more easily than when a calibration member having a characteristic portion having a more complicated shape than a checkered pattern is used.
  • each correction value 92 does not need to include a correction value for correcting image shift due to distortion and/or trapezoidal distortion. In this way, the amount of correction by the correction value 92 can be reduced compared to the case where the correction value 92 includes a correction value for correcting image shift due to distortion and/or trapezoidal distortion.
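A minimal stand-in for extracting the check-pattern intersections 154 from a binarized spectral image is to scan 2x2 neighborhoods for the diagonal color pattern characteristic of a checkerboard corner. This sketch is illustrative only; the disclosure does not specify the detection method.

```python
def find_intersections(binary):
    """Find check-pattern intersections in a binarized grid-chart image.

    An intersection is taken as any 2x2 neighborhood whose diagonal
    cells match and whose off-diagonal cells take the other color,
    i.e. the corner where four check regions of alternating color meet.
    Returns the top-left (x, y) of each matching 2x2 block.
    """
    points = []
    for y in range(len(binary) - 1):
        for x in range(len(binary[0]) - 1):
            a, b = binary[y][x], binary[y][x + 1]
            c, d = binary[y + 1][x], binary[y + 1][x + 1]
            if a == d and b == c and a != b:
                points.append((x, y))
    return points
```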
  • In this modification, the grid chart 150 described in the third modification is used. Similar to the third modification, the correction value deriving unit 114 derives the direction and amount of positional deviation of the wavelength band image 170 with respect to the reference position 172 based on the spectral image 72 and, based on the derived direction and amount, derives a correction value 92 for the image pixel corresponding to the wavelength band image 170.
  • For example, the correction value deriving unit 114 derives each correction value 92 based on the fact that the intersections 154 of each row of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150. That is, as the correction values 92 corresponding to the intersections 154 of each row of the grid chart 150, correction values that align the positions of the wavelength band images 170 in the vertical direction of the spectral image 72 are derived.
  • FIG. 24 shows a mode in which each correction value 92 is derived based on the fact that the intersection points 154 in the second row from the top of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150.
  • each correction value 92 is derived based on the fact that the intersection points 154 of each row of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150. Therefore, for example, as the correction value 92, a correction value can be obtained in which the positions of the wavelength band image 170 corresponding to the intersection points 154 of each row of the lattice chart 150 are aligned in the vertical direction of the spectrum image 72.
  • Thereby, vertical distortion of the spectral image 72 can be suppressed.
  • Note that each correction value 92 may instead be derived based on the fact that the intersections 154 of each column of the grid chart 150 are lined up in a straight line in the vertical direction of the grid chart 150. That is, even if the wavelength band images 170 corresponding to the intersections 154 of each column of the grid chart 150 have different amounts of positional deviation in the lateral direction of the spectral image 72, a correction value may be derived that aligns those wavelength band images 170 in the horizontal direction of the spectral image 72.
  • FIG. 25 shows a mode in which each correction value 92 is derived based on the fact that the intersection points 154 in the second column from the right of the grid chart 150 are lined up in a straight line in the vertical direction of the grid chart 150.
  • In this way, for example, a correction value can be obtained as the correction value 92 such that the positions of the wavelength band images 170 corresponding to the intersections 154 of each column of the grid chart 150 are aligned in the horizontal direction of the spectral image 72.
  • When a correction value 92 that aligns the positions of the wavelength band images 170 corresponding to the intersections 154 of each column of the grid chart 150 in the horizontal direction of the spectral image 72 is derived, lateral distortion of the spectral image 72 can be suppressed.
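The row alignment described above can be sketched as follows: since the chart's intersections in one row are collinear, the band-image positions detected for that row receive vertical corrections toward a common line. Using the mean y-coordinate as that line is an assumption of this sketch, not something the disclosure specifies.

```python
def row_alignment_corrections(row_points):
    """Corrections that put one row of intersection images on one line.

    `row_points` are the (x, y) positions of the wavelength band images
    for one chart row; each point gets a (dx, dy) correction with dy
    moving it to the row's mean y, so the corrected row is horizontal.
    """
    mean_y = sum(y for _, y in row_points) / len(row_points)
    return [(0.0, mean_y - y) for _, y in row_points]
```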
  • Each correction value 92 may be derived based on the fact that the intersection points 154 of each column are lined up in a straight line in the vertical direction of the grid chart 150.
  • FIG. 26 shows a mode in which each correction value 92 is derived based on the fact that the intersections 154 in the second row from the top of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150 and the intersections 154 in the second column from the right of the grid chart 150 are lined up in a straight line in the vertical direction of the grid chart 150.
  • In this case, for example, it is possible to obtain, as the correction values 92, correction values that align the positions of the wavelength band images 170 corresponding to the intersections 154 of each row of the grid chart 150 in the vertical direction of the spectral image 72 and align the positions of the wavelength band images 170 corresponding to the intersections 154 of each column of the grid chart 150 in the horizontal direction of the spectral image 72.
  • Note that the correction value 92 may be a correction value that shifts the position of the wavelength band image 170 corresponding to the intersections 154 of each row of the grid chart 150 in the vertical direction of the spectral image 72, and/or a correction value that shifts the position of the wavelength band image 170 corresponding to the intersections 154 of each column of the grid chart 150 in the horizontal direction of the spectral image 72.
  • each correction value 92 may include a correction value for correcting image shift due to trapezoidal distortion. In this way, the trapezoidal distortion of the spectral image 72 can be suppressed compared to the case where the correction value 92 does not include a correction value for correcting image shift due to trapezoidal distortion.
  • Further, the correction value deriving unit 114 may derive each correction value 92 based on, for example, the fact that the intersections 154 of each row of the grid chart 150 are arranged at equal intervals in the horizontal direction of the grid chart 150. In this way, lateral distortion of the spectral image 72 can be suppressed compared to the case where each correction value 92 is derived without regard to the equal spacing of the intersections 154 of each row in the horizontal direction of the grid chart 150.
  • Similarly, the correction value deriving unit 114 may derive each correction value 92 based on, for example, the fact that the intersections 154 of each column of the grid chart 150 are arranged at equal intervals in the vertical direction of the grid chart 150. In this way, vertical distortion of the spectral image 72 can be suppressed compared to the case where each correction value 92 is derived without regard to the equal spacing of the intersections 154 of each column in the vertical direction of the grid chart 150.
  • Further, the correction value deriving unit 114 may derive each correction value 92 based on, for example, both the fact that the intersections 154 of each row of the grid chart 150 are arranged at equal intervals in the horizontal direction of the grid chart 150 and the fact that the intersections 154 of each column of the grid chart 150 are arranged at equal intervals in the vertical direction of the grid chart 150.
  • in this way, compared to the case where each correction value 92 is derived without regard to the equal horizontal spacing of the intersection points 154 of each row of the grid chart 150 and/or without regard to the equal vertical spacing of the intersection points 154 of each column of the grid chart 150, vertical and/or horizontal distortion of the spectral image 72 can be suppressed.
  • intersection points 154 of each row of the grid chart 150 may be arranged at equal intervals in the horizontal direction of the grid chart 150, and/or the intersection points 154 of each column of the grid chart 150 may be arranged at equal intervals in the vertical direction of the grid chart 150.
  • the correction value 92 may be derived in the following manner.
  • a correction value may be derived that matches the interval between the wavelength band images 170 corresponding to the intersections 154 located in areas other than the central area of the lattice chart 150 to the interval between the wavelength band images 170 corresponding to the intersections 154 located in the central area of the lattice chart 150. In this way, compared to the case where the correction value 92 is derived without regard to the interval between the wavelength band images 170 corresponding to the intersections 154 located in the central area of the lattice chart 150, vertical and/or horizontal distortion of the spectral image 72 can be suppressed.
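The interval-matching derivation above can be illustrated with a short sketch. This is a hypothetical one-dimensional simplification, not the patent's implementation; the function name and the coordinate list are assumptions made for illustration.

```python
def derive_interval_corrections(xs, center_index):
    """Derive per-intersection shift corrections so that the spacing of
    intersections outside the central area matches the central spacing.

    xs: measured x-coordinates of the intersections of one row of the
        wavelength band images (e.g. of the lattice chart 150).
    center_index: index of an intersection in the central area, whose
        neighboring interval serves as the reference.
    """
    # Reference interval: spacing measured at the center of the chart.
    ref = xs[center_index + 1] - xs[center_index]
    # Ideal positions: equally spaced at the reference interval,
    # anchored at the central intersection.
    ideal = [xs[center_index] + (i - center_index) * ref for i in range(len(xs))]
    # Correction = ideal position minus measured position.
    return [i - x for i, x in zip(ideal, xs)]

# Example: intersections stretched toward the edge (distortion).
xs = [0.0, 10.0, 20.0, 30.5, 41.5]
corr = derive_interval_corrections(xs, center_index=1)
```

Here the interval measured in the chart's central area becomes the reference, and intersections toward the edges receive shifts that restore equal spacing; already-regular positions receive a correction of zero.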
  • a seventh modification shown in FIG. 27 is an example in which inspection objects 180A to 180D are used as the subject 4 instead of the dot chart 120 in the second modification.
  • each of the inspection objects 180A to 180D is a subject.
  • Each inspection object 180A to 180D may be of any type.
  • the inspection objects 180A to 180D are different types of objects.
  • each of the inspection objects 180A to 180D will be referred to as an "inspection object 180.”
  • Each test object 180 has a feature 182.
  • an example is given in which the number of the plurality of inspection objects 180 is four, but the number of the plurality of inspection objects 180 may be any number.
  • the inspection object 180 is an example of a "subject" and "inspection object" according to the technology of the present disclosure.
  • FIG. 27 shows an example of optical images 190A to 190C formed on the light receiving surface 34A when the inspection objects 180A to 180D are imaged by the imaging device 10.
  • the optical images 190A to 190C are optical images corresponding to a characteristic portion 182 (hereinafter referred to as “first characteristic portion 182A”) of the first inspection object 180A.
  • the optical image 190A is an optical image corresponding to the first wavelength band λ1
  • the optical image 190B is an optical image corresponding to the second wavelength band λ2
  • the optical image 190C is an optical image corresponding to the third wavelength band λ3.
  • the reference position 192 is a position corresponding to the first characteristic portion 182A within the light receiving surface 34A.
  • the optical images 190A to 190C are shifted from the reference position 192.
  • the direction and amount of positional deviation of the optical images 190A to 190C are different from each other.
  • the spectral image 72A includes a wavelength band image 200A corresponding to the optical image 190A
  • the spectral image 72B includes a wavelength band image 200B corresponding to the optical image 190B
  • the spectral image 72C includes a wavelength band image 200C corresponding to the optical image 190C.
  • the direction and amount of positional shift of the wavelength band images 200A to 200C are different from each other.
  • the wavelength band images 200A to 200C will be referred to as "wavelength band images 200.”
  • the correction value deriving unit 114 derives the correction value 92 in the following manner. For example, the correction value deriving unit 114 selects any one of the plurality of spectral images 72. Here, the case where the spectrum image 72C is selected will be described as an example. Further, the correction value deriving unit 114 extracts the wavelength band image 200C from the spectrum image 72C by performing image processing on the spectrum image 72C.
  • the correction value deriving unit 114 sets the extracted wavelength band image 200C as the reference position 202.
  • a wavelength band image 200C corresponding to the first characteristic portion 182A in the spectrum image 72C is set as the reference position 202.
  • the correction value deriving unit 114 extracts the wavelength band image 200A corresponding to the first characteristic portion 182A from the spectrum image 72A by performing image processing on the spectrum image 72A. Similarly, the correction value deriving unit 114 extracts the wavelength band image 200B corresponding to the first characteristic portion 182A from the spectral image 72B by performing image processing on the spectral image 72B.
  • the correction value deriving unit 114 identifies the position of the wavelength band image 200A within the spectrum image 72A by image processing. Similarly, the correction value deriving unit 114 identifies the position of the wavelength band image 200B within the spectrum image 72B by image processing. Further, the correction value deriving unit 114 specifies the position of the wavelength band image 200C within the spectrum image 72C by image processing, and sets the specified position of the wavelength band image 200C as the position of the reference position 202.
  • the correction value deriving unit 114 derives the correction value 92A based on the direction and amount of positional shift of the wavelength band image 200A with respect to the reference position 202. Further, the correction value deriving unit 114 derives the correction value 92B based on the direction and amount of positional deviation of the wavelength band image 200B with respect to the reference position 202. Note that in the seventh modification as well, the correction value 92C is set to 0.
  • the correction value 92 can be derived without using a calibration member (that is, a dedicated member for deriving the correction value 92).
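The steps above, choosing one spectral image as the reference, locating the same feature in every band, and deriving each band's correction from its deviation, can be sketched as follows. The dictionary layout and names are illustrative assumptions, not the patent's code.

```python
def derive_shift_corrections(band_positions, reference_band):
    """Derive a per-band correction from the positional shift of a
    feature (e.g. wavelength band image 200) relative to the band chosen
    as the reference position (e.g. reference position 202).

    band_positions: {band_name: (x, y)} detected position of the same
        feature in each spectral image.
    reference_band: key of the band whose position serves as reference;
        its correction is 0 by definition.
    """
    rx, ry = band_positions[reference_band]
    corrections = {}
    for band, (x, y) in band_positions.items():
        # The correction is the shift that moves the band image back
        # onto the reference position (opposite of the deviation).
        corrections[band] = (rx - x, ry - y)
    return corrections

# Positions of the same feature in three spectral images; band "c"
# plays the role of spectral image 72C / reference position 202.
positions = {"a": (102.0, 98.5), "b": (99.0, 101.0), "c": (100.0, 100.0)}
corr = derive_shift_corrections(positions, "c")
```

As in the seventh modification, the reference band's correction comes out as zero, matching the statement that the correction value 92C is set to 0.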
  • correction value deriving process according to the seventh modification may be incorporated into the correction process in the imaging device 10. Then, the correction value 92 derived in the correction value derivation process according to the seventh modification may be used in the correction process.
  • each of the inspection objects 180A to 180E will be referred to as an "inspection object 180."
  • Each inspection object 180 has a first feature 184 and a second feature 186.
  • the correction value deriving unit 114 sets the wavelength band image 200 corresponding to the first characteristic portion 184 as the reference position 202, derives, based on the spectral image 72, the direction and amount of positional shift of each wavelength band image 200 with respect to the reference position 202, and derives, based on the derived direction and amount, a correction value 92 for the pixels corresponding to that wavelength band image 200.
  • the correction value deriving unit 114 derives each correction value 92 based on, for example, the fact that the position of the first characteristic portion 184 with respect to the second characteristic portion 186 is the same in each inspection object 180.
  • the direction and/or amount of positional shift of the wavelength band image 200 corresponding to the first characteristic portion 184 of each inspection object 180 may differ from that of the wavelength band image 200 corresponding to the second characteristic portion 186 of that inspection object 180.
  • a correction value in which the directions and/or amounts of the positional shifts of the wavelength band images 200 corresponding to the first characteristic portions 184 of the inspection objects 180 are aligned is derived as the correction value 92.
  • each correction value 92 is derived, for example, based on the fact that the position of the first characteristic portion 184 with respect to the second characteristic portion 186 is the same in each inspection object 180. Therefore, for example, a correction value in which the directions and/or amounts of the positional shifts of the wavelength band images 200 corresponding to the first characteristic portions 184 of the inspection objects 180 are aligned can be obtained as the correction value 92.
  • as a result, vertical and/or horizontal distortion of the spectral image 72 can be suppressed.
  • when each correction value 92 is derived based on the fact that the position of the first characteristic portion 184 with respect to the second characteristic portion 186 is the same in each inspection object 180, the correction value 92 may be derived according to the following procedure.
  • a correction value may be derived that matches the interval between the wavelength band images 200 corresponding to the first characteristic portion 184 and the second characteristic portion 186 of the inspection objects 180A to 180D located in regions other than the central region of the imaging target region 210 of the imaging device 10 to the interval between the wavelength band images 200 corresponding to the first characteristic portion 184 and the second characteristic portion 186 of the inspection object 180E located in the central region of the imaging target region 210. In this way, compared to the case where the correction value 92 is derived without regard to the interval between the wavelength band images 200 corresponding to the first characteristic portion 184 and the second characteristic portion 186 of the inspection object 180E located in the central region, vertical and/or horizontal distortion of the spectral image 72 can be suppressed.
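Using the stated assumption that the first-to-second feature offset is the same in every inspection object, the offset measured on the central object can serve as the reference, and corrections for peripheral objects can be derived from their deviation from it. A minimal sketch under that assumption; the data layout and names are illustrative.

```python
def derive_relative_corrections(features, center_key):
    """Derive per-object corrections from the offset between two features.

    features: {object_key: ((x1, y1), (x2, y2))} positions of the
        wavelength band images of the first (184) and second (186)
        characteristic portions of each inspection object.
    center_key: object located in the central region (e.g. 180E),
        whose first-to-second offset is taken as the reference.
    """
    (cx1, cy1), (cx2, cy2) = features[center_key]
    ref_dx, ref_dy = cx1 - cx2, cy1 - cy2  # reference offset at center
    corrections = {}
    for key, ((x1, y1), (x2, y2)) in features.items():
        # Shift needed so this object's first-feature image sits at the
        # same offset from its second-feature image as the central one.
        corrections[key] = (ref_dx - (x1 - x2), ref_dy - (y1 - y2))
    return corrections

features = {
    "E": ((10.0, 10.0), (0.0, 0.0)),  # central object: offset (10, 10)
    "A": ((12.0, 9.0), (0.0, 0.0)),   # peripheral: offset (12, 9)
}
corr = derive_relative_corrections(features, "E")
```

The central object's correction is zero by construction, and each peripheral object receives the shift that brings its feature offset into line with the central one.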
  • correction value deriving process according to the eighth modification may be incorporated into the correction process in the imaging device 10. Then, the correction value 92 derived in the correction value derivation process according to the eighth modification may be used in the correction process.
  • the imaging target area 210 of the imaging device 10 includes a subject area 210A where the subject 220A is placed, a subject area 210B where the subject 220B is placed, and free areas 210C and 210D where neither the subject 220A nor the subject 220B is placed.
  • the correction value deriving unit 114 divides each spectrum image 72 obtained by imaging the imaging target region 210 by the imaging device 10 into regions 212A to 212D.
  • the area 212A corresponds to the subject area 210A
  • the area 212B corresponds to the subject area 210B
  • the area 212C corresponds to the free area 210C
  • the area 212D corresponds to the free area 210D.
  • a region 212A includes a wavelength band image 230A corresponding to a subject 220A
  • a region 212B includes a wavelength band image 230B corresponding to a subject 220B.
  • the correction value deriving unit 114 may divide each spectral image 72 into the regions 212A to 212D based on an instruction received by the imaging device 10 from a user or the like, or may divide each spectral image 72 into the plurality of regions 212A to 212D based on the presence or absence of a wavelength band image by performing image processing on each spectral image 72. Here, each spectral image 72 is divided into four regions 212A to 212D corresponding to the two subjects 220A and 220B, but the number of subjects included in the imaging target area 210 and the number of regions into which each spectral image 72 is divided may be any number. The correction value deriving unit 114 then derives the correction value 92 for the regions 212A and 212B, but does not derive the correction value 92 for the regions 212C and 212D.
  • the correction value 92 is derived for the region 212A and the region 212B, and the correction value 92 is not derived for the region 212C and the region 212D. Therefore, the load on the processor 104 of the processing device 100 can be reduced compared to the case where the correction value 92 is also derived for the region 212C and the region 212D.
  • when the correction value 92 derived according to the ninth modification is used, the correction process is performed on only a part of the spectral image 72 in the imaging device 10. Therefore, the load on the processor 60 of the imaging device 10 can be reduced compared to the case where the correction process is performed on the entire region of the spectral image 72.
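The region-selective correction of the ninth modification can be sketched as follows: regions with a derived correction (the subject regions 212A and 212B) are processed, while free regions are skipped entirely. This is a simplified illustration using whole-pixel horizontal shifts on nested lists; the region encoding and names are assumptions.

```python
def correct_regions(image, regions, corrections):
    """Apply a horizontal shift correction only to regions that have one.

    image: list of rows (a spectral image 72, as nested lists).
    regions: {name: ((r0, r1), (c0, c1))} half-open row/column ranges,
        such as the regions 212A-212D.
    corrections: {name: dx} integer horizontal shifts; regions without
        an entry (free areas like 212C/212D) are left untouched.
    """
    out = [row[:] for row in image]
    for name, ((r0, r1), (c0, c1)) in regions.items():
        if name not in corrections:
            continue  # free area: no correction derived, no work done
        dx = corrections[name] % (c1 - c0)
        for r in range(r0, r1):
            seg = image[r][c0:c1]
            # Rotate the segment right by dx pixels within the region.
            out[r][c0:c1] = seg[-dx:] + seg[:-dx] if dx else seg
    return out

img = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
regions = {"212A": ((0, 2), (0, 4)), "212C": ((2, 3), (0, 4))}
corrected = correct_regions(img, regions, {"212A": 1})
```

Only the rows belonging to "212A" are shifted; the "212C" rows pass through unchanged, which is where the processor-load saving described above comes from.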
  • in the above embodiments, the correction process on the spectral image 72 is executed in the imaging device 10, but the spectral image 72 may be input from the imaging device 10 to an external device, and the correction process on the spectral image 72 may be executed in the external device.
  • the external device is an example of an "image processing device" related to the technology of the present disclosure.
  • the processor 60 is illustrated in the imaging device 10, but instead of the processor 60, or together with the processor 60, at least one other CPU, at least one GPU, and/or at least one TPU may also be used.
  • the processor 104 is illustrated as an example of the processing device 100, but instead of or together with the processor 104, at least one other CPU, at least one GPU, and/or at least one TPU may also be used.
  • the imaging device 10 has been described using an example in which the multispectral image generation program 80 is stored in the storage 62, but the technology of the present disclosure is not limited to this.
  • the multispectral image generation program 80 may be stored in a portable non-transitory computer-readable storage medium (hereinafter simply referred to as "non-transitory storage medium") such as an SSD or a USB memory.
  • a multispectral image generation program 80 stored on a non-transitory storage medium may be installed on the computer 56 of the imaging device 10.
  • the multispectral image generation program 80 may be stored in a storage device such as another computer or a server device connected to the imaging device 10 via a network, and may be downloaded and installed on the computer 56 of the imaging device 10 in response to a request from the imaging device 10.
  • it is not necessary to store the entire multispectral image generation program 80 in a storage device such as another computer or server device connected to the imaging device 10, or in the storage 62; only a part of the multispectral image generation program 80 may be stored.
  • the processing device 100 has been described using an example in which the correction value derivation program 110 is stored in the storage 106, but the technology of the present disclosure is not limited to this.
  • the correction value derivation program 110 may be stored in a non-temporary storage medium.
  • the correction value derivation program 110 stored in a non-transitory storage medium may be installed on the computer 102 of the processing device 100.
  • the correction value derivation program 110 may be stored in a storage device such as another computer or server device connected to the processing device 100 via a network, and may be downloaded and installed on the computer 102 of the processing device 100 in response to a request from the processing device 100.
  • it is not necessary to store the entire correction value derivation program 110 in a storage device such as another computer or server device connected to the processing device 100, or in the storage 106; only a part of the correction value derivation program 110 may be stored.
  • although the imaging device 10 has a built-in computer 56, the technology of the present disclosure is not limited to this; for example, the computer 56 may be provided outside the imaging device 10.
  • although the processing device 100 has a built-in computer 102, the technology of the present disclosure is not limited to this; for example, the computer 102 may be provided outside the processing device 100.
  • as an example of the imaging device 10, the computer 56 including the processor 60, the storage 62, and the RAM 64 is illustrated, but the technology of the present disclosure is not limited to this; instead of the computer 56, a device including an ASIC, an FPGA, and/or a PLD may be applied. Alternatively, a combination of a hardware configuration and a software configuration may be used in place of the computer 56.
  • as an example of the processing device 100, the computer 102 including the processor 104, the storage 106, and the RAM 108 is illustrated, but the technology of the present disclosure is not limited to this; instead of the computer 102, a device including an ASIC, an FPGA, and/or a PLD may be applied. Alternatively, a combination of a hardware configuration and a software configuration may be used in place of the computer 102.
  • the following various processors can be used as hardware resources for executing the various processes described in the above embodiments.
  • examples of the processor include a CPU, which is a general-purpose processor that functions as a hardware resource executing various processes by running software, that is, a program.
  • examples of the processor also include a dedicated electronic circuit such as an FPGA, a PLD, or an ASIC, which is a processor having a circuit configuration designed specifically to execute specific processing.
  • Each processor has a built-in memory or is connected to it, and each processor uses the memory to perform various processes.
  • Hardware resources that execute various processes may be configured with one of these various processors, or with a combination of two or more processors of the same type or different types (for example, a combination of multiple FPGAs, or a combination of a CPU and an FPGA). Furthermore, the hardware resource that executes various processes may be a single processor.
  • one processor is configured by a combination of one or more CPUs and software, and this processor functions as a hardware resource that executes various processes.
  • a and/or B has the same meaning as “at least one of A and B.” That is, “A and/or B” means that it may be only A, only B, or a combination of A and B. Furthermore, in this specification, even when three or more items are expressed by connecting them with “and/or”, the same concept as “A and/or B" is applied.
  • a member for deriving a correction value used in a calibration process, wherein the calibration process is a calibration process for an image output from an imaging device including an optical system,
  • the optical system is provided around an optical axis and includes a plurality of filters having mutually different wavelength bands
  • the correction value is a correction value for correcting an image shift due to a positional shift of an optical image that occurs when the image is separated by the plurality of filters
  • the image includes a wavelength band image corresponding to each of the wavelength bands
  • the member includes a characteristic portion corresponding to the reference position when the correction value is derived based on the direction and/or amount of positional shift of the wavelength band image with respect to the reference position in the image.
  • a device for deriving a correction value used in a calibration process, wherein the calibration process is a calibration process for an image output from an imaging device including an optical system,
  • the optical system is provided around an optical axis and includes a plurality of filters having mutually different wavelength bands
  • the correction value is a correction value for correcting an image shift due to a positional shift of an optical image that occurs when the image is separated by the plurality of filters
  • the image includes a wavelength band image corresponding to each of the wavelength bands
  • the device includes a processor;
  • the processor derives the correction value based on the direction and/or amount of positional shift of the wavelength band image with respect to a reference position within the image.
  • a method for deriving a correction value used in a calibration process comprising:
  • the calibration process is a calibration process for an image output from an imaging device including an optical system,
  • the optical system is provided around an optical axis and includes a plurality of filters having mutually different wavelength bands
  • the correction value is a correction value for correcting an image shift due to a positional shift of an optical image that occurs when the image is separated by the plurality of filters
  • the image includes a wavelength band image corresponding to each of the wavelength bands
  • the method comprises deriving the correction value based on the direction and/or amount of displacement of the wavelength band image with respect to a reference position within the image.
  • a program for causing a computer to execute a process of deriving a correction value used in a calibration process, wherein the calibration process is a calibration process for an image output from an imaging device including an optical system,
  • the optical system is provided around an optical axis and includes a plurality of filters having mutually different wavelength bands
  • the correction value is a correction value for correcting an image shift due to a positional shift of an optical image that occurs when the image is separated by the plurality of filters
  • the image includes a wavelength band image corresponding to each of the wavelength bands
  • the processing includes deriving the correction value based on the direction and/or amount of positional shift of the wavelength band image with respect to a reference position within the image.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

This image processing device is to be used for an image outputted from an imaging device comprising an optical system. The optical system has a plurality of filters provided around an optical axis. This image processing device comprises a processor. The processor performs, on the image, a process for correcting image deviation due to positional deviation of an optical image caused by light diffraction by means of the plurality of filters.

Description

Image processing device, image processing method, and program
The technology of the present disclosure relates to an image processing device, an image processing method, and a program.
International Publication No. WO 2021/085368 discloses an imaging device including an imaging optical system, an imaging element, and a signal processing unit. The imaging optical system has a first pupil region that passes light in a first wavelength band and a second pupil region that passes light in a second wavelength band different from the first wavelength band. The imaging optical system reduces axial chromatic aberration caused by the difference between the first wavelength band and the second wavelength band, based on the relationship between aberrations other than the axial chromatic aberration of the imaging optical system and the positions of the first pupil region and the second pupil region of the imaging optical system. The imaging element includes a first pixel that receives light passing through the first pupil region of the imaging optical system and a second pixel that receives light passing through the second pupil region. The signal processing unit processes signals output from the imaging element and generates a first image in the first wavelength band and a second image in the second wavelength band based on the output signal of the first pixel and the output signal of the second pixel, respectively.
JP 2022-063720 A discloses an image correction device including a band image acquisition unit, a high-resolution image acquisition unit, a positional difference acquisition unit, a corrected band image creation unit, and a corrected band image output unit. The band image acquisition unit acquires a plurality of band images obtained by imaging a subject. The high-resolution image acquisition unit acquires a high-resolution image that is obtained by imaging the subject and has a higher resolution than the band images. The positional difference acquisition unit sets one of the plurality of band images as a reference band image and at least one of the remaining band images as a target band image, and acquires a positional difference between the target band image and the reference band image. The corrected band image creation unit takes each pixel of the target band image as a target pixel and, for each target pixel, determines the pixel value of each partial region obtained by dividing the imaging region of the target pixel into a plurality of regions, based on the pixel value of the target pixel and the relationship between the pixel values of the plurality of pixels of the high-resolution image corresponding to the target pixel, and creates, from the determined pixel value of each partial region and the positional difference, a corrected band image that holds the pixel values of light related to the target band image at the pixel positions of the reference band image. The corrected band image output unit outputs the corrected band image.
International Publication No. WO 2021/153473 discloses a lens device including an imaging optical system and an optical member. The imaging optical system includes a lens that forms an optical image of a subject. The optical member includes a frame body having a plurality of aperture regions; a plurality of optical filters arranged in at least one of the plurality of aperture regions, including two or more optical filters that transmit light in at least partially different wavelength bands; and a plurality of polarization filters arranged in at least one of the plurality of aperture regions and having mutually different polarization directions. The amount of light emitted from the imaging optical system can be changed for each of the plurality of aperture regions.
JP 2014-045275 A discloses an image processing device including an image acquisition unit, a phase difference image generation unit, and a high-resolution processing unit. The image acquisition unit acquires a captured image obtained by capturing a first subject image and a second subject image having a parallax with respect to the first subject image. The phase difference image generation unit generates, based on the captured image, a first image corresponding to the first subject image and a second image corresponding to the second subject image. The high-resolution processing unit performs high-resolution processing on the captured image based on the first image and the second image.
One embodiment according to the technology of the present disclosure provides, as an example, an image processing device, an image processing method, and a program capable of obtaining a multispectral image with higher image quality than when a multispectral image is generated based on a spectral image in which an image shift has occurred.
A first aspect according to the technology of the present disclosure is an image processing device applied to an image output from an imaging device including an optical system, wherein the optical system has a plurality of filters provided around an optical axis, the image processing device includes a processor, and the processor performs, on the image, a process of correcting an image shift caused by a positional shift of an optical image that occurs when light is spectrally separated by the plurality of filters.
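As a minimal illustration of the correction process named in the first aspect, a processor might apply a previously derived per-filter correction to each spectral image. The sketch below is a hypothetical simplification using whole-pixel shifts on nested lists; the function name and padding behavior are assumptions, not the patent's implementation.

```python
def apply_correction(image, correction):
    """Shift a spectral image by (dy, dx) to cancel the positional shift
    of its optical image, padding vacated pixels with 0."""
    dy, dx = correction
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            # Pixels shifted outside the frame are simply dropped.
            if 0 <= sy < h and 0 <= sx < w:
                out[sy][sx] = image[y][x]
    return out

# A 3x3 image whose content is displaced one pixel to the right;
# the correction (0, -1) moves it back into place.
img = [[0, 9, 0], [0, 9, 0], [0, 9, 0]]
fixed = apply_correction(img, (0, -1))
```

Running such a shift per band image, with a correction derived for each filter, is one way the per-band registration described here could be realized; real implementations would typically interpolate for sub-pixel shifts.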
A second aspect according to the technology of the present disclosure is the image processing device according to the first aspect, wherein the optical system has a plurality of apertures, each aperture is provided with one of the filters, the image shift includes at least an image shift based on a characteristic of each aperture, and the process is performed based on the characteristic.
A third aspect according to the technology of the present disclosure is the image processing device according to the second aspect, wherein the characteristic includes the position of the center of gravity of the aperture.
A fourth aspect according to the technology of the present disclosure is the image processing device according to the third aspect, wherein the center-of-gravity position is a position determined based on the position and/or shape of the aperture.
A fifth aspect according to the technology of the present disclosure is the image processing device according to any one of the first to fourth aspects, wherein the positional shift of the optical image includes at least a positional shift of the optical image caused by a characteristic of the optical system.
 本開示の技術に係る第6の態様は、第1の態様から第5の態様の何れか一つの態様に係る画像処理装置において、処理は、画像のうちの一部の領域に対して行われる画像処理装置である。 A sixth aspect of the technology of the present disclosure is that in the image processing apparatus according to any one of the first to fifth aspects, processing is performed on a partial area of the image. It is an image processing device.
 本開示の技術に係る第7の態様は、第1の態様から第6の態様の何れか一つの態様に係る画像処理装置において、画像は、マルチスペクトル画像を生成するためのスペクトル画像である画像処理装置である。 A seventh aspect of the technology of the present disclosure is an image processing apparatus according to any one of the first to sixth aspects, wherein the image is a spectral image for generating a multispectral image. It is a processing device.
 本開示の技術に係る第8の態様は、第7の態様に係る画像処理装置において、画像は、撮像装置によって撮像されることにより得られた撮像データに対して混信除去処理が行われることにより生成された画像である画像処理装置である。 An eighth aspect according to the technology of the present disclosure is that in the image processing device according to the seventh aspect, the image is obtained by performing interference removal processing on imaged data obtained by imaging with the imaging device. This is an image processing device that is a generated image.
 本開示の技術に係る第9の態様は、第1の態様から第8の態様の何れか一つの態様に係る画像処理装置において、複数のフィルタは、互いに異なる波長帯を有する画像処理装置である。 A ninth aspect of the technology of the present disclosure is the image processing device according to any one of the first to eighth aspects, wherein the plurality of filters have different wavelength bands. .
 本開示の技術に係る第10の態様は、第1の態様から第9の態様の何れか一つの態様に係る画像処理装置において、複数のフィルタは、光軸の周りに並んで配置されている画像処理装置である。 A tenth aspect of the technology of the present disclosure is the image processing device according to any one of the first to ninth aspects, wherein the plurality of filters are arranged in line around an optical axis. It is an image processing device.
 本開示の技術に係る第11の態様は、第9の態様に係る画像処理装置において、処理は、波長帯の組み合わせに基づいて行われる画像処理装置である。 An eleventh aspect according to the technology of the present disclosure is an image processing apparatus according to the ninth aspect, in which processing is performed based on a combination of wavelength bands.
 本開示の技術に係る第12の態様は、第1の態様から第11の態様の何れか一つの態様に係る画像処理装置において、処理は、光学系に関する設計値に基づいて行われる画像処理装置である。 A twelfth aspect of the technology of the present disclosure is an image processing apparatus according to any one of the first to eleventh aspects, in which processing is performed based on design values regarding the optical system. It is.
 本開示の技術に係る第13の態様は、第9の態様に係る画像処理装置において、処理は、波長帯毎の補正値に基づいて行われる画像処理装置である。 A thirteenth aspect according to the technology of the present disclosure is an image processing apparatus according to the ninth aspect, in which processing is performed based on a correction value for each wavelength band.
 本開示の技術に係る第14の態様は、第9の態様、第11の態様、及び第13の態様の何れか一つの態様に係る画像処理装置において、画像は、各波長帯に対応する波長帯画像を含む画像処理装置である。 A fourteenth aspect according to the technology of the present disclosure is an image processing apparatus according to any one of the ninth aspect, the eleventh aspect, and the thirteenth aspect, in which the image is processed using wavelengths corresponding to each wavelength band. This is an image processing device that includes band images.
 本開示の技術に係る第15の態様は、第13の態様に係る画像処理装置において、画像は、各波長帯に対応する波長帯画像を含み、補正値は、波長帯画像の位置に応じて異なる画像処理装置である。 A fifteenth aspect according to the technology of the present disclosure is the image processing apparatus according to the thirteenth aspect, wherein the image includes a wavelength band image corresponding to each wavelength band, and the correction value is determined according to the position of the wavelength band image. They are different image processing devices.
 本開示の技術に係る第16の態様は、第13の態様に係る画像処理装置において、画像は、各波長帯に対応する波長帯画像を含み、補正値は、画像内の基準位置に対する波長帯画像の位置ずれの方向及び/又は量に基づいて定まる画像処理装置である。 A 16th aspect according to the technology of the present disclosure is the image processing apparatus according to the 13th aspect, wherein the image includes a wavelength band image corresponding to each wavelength band, and the correction value is a wavelength band image with respect to a reference position in the image. This is an image processing device that is determined based on the direction and/or amount of positional shift of an image.
A seventeenth aspect of the technology of the present disclosure is the image processing device according to the fourteenth aspect, in which the wavelength band image is an image showing a characteristic portion of a subject.
 An eighteenth aspect of the technology of the present disclosure is the image processing device according to the sixteenth aspect, in which the wavelength band image is an image showing a characteristic portion of a subject, and the reference position is a position corresponding to the characteristic portion.
 A nineteenth aspect of the technology of the present disclosure is the image processing device according to the sixteenth aspect, in which the reference position is the position of any one wavelength band image among the plurality of wavelength band images.
 A twentieth aspect of the technology of the present disclosure is the image processing device according to the seventeenth aspect, in which the subject has a point, and the characteristic portion is the point.
 A twenty-first aspect of the technology of the present disclosure is the image processing device according to the seventeenth aspect, in which the subject has a lattice pattern, and the characteristic portion is an intersection included in the lattice pattern.
 A twenty-second aspect of the technology of the present disclosure is the image processing device according to the seventeenth aspect, in which the subject is a calibration member.
 A twenty-third aspect of the technology of the present disclosure is the image processing device according to the seventeenth aspect, in which the subject has a plurality of characteristic portions, and the plurality of characteristic portions are arranged in a straight line.
 A twenty-fourth aspect of the technology of the present disclosure is the image processing device according to the seventeenth aspect, in which the subject has a plurality of characteristic portions, and the plurality of characteristic portions are arranged at equal intervals.
 A twenty-fifth aspect of the technology of the present disclosure is the image processing device according to the seventeenth aspect, in which the subject includes an object for inspection.
 A twenty-sixth aspect of the technology of the present disclosure is the image processing device according to the twenty-fifth aspect, in which the subject includes a plurality of objects for inspection.
 A twenty-seventh aspect of the technology of the present disclosure is the image processing device according to any one of the first to twenty-sixth aspects, in which the optical system has polarizing filters provided in correspondence with the respective filters, the plurality of polarizing filters have mutually different polarization axes, the imaging device includes an image sensor having a plurality of pixel blocks, and each pixel block is provided with a plurality of types of polarizers having mutually different polarization axes.
 A twenty-eighth aspect of the technology of the present disclosure is an image processing method applied to an image output from an imaging device including an optical system, in which the optical system has a plurality of filters provided around an optical axis, the image processing method comprising performing processing on the image to correct an image shift caused by a positional shift of an optical image that occurs as a result of light being spectrally separated by the plurality of filters.
 A twenty-ninth aspect of the technology of the present disclosure is a program for causing a computer to execute image processing on an image output from an imaging device including an optical system, in which the optical system has a plurality of filters provided around an optical axis, the image processing including processing to correct an image shift caused by a positional shift of an optical image that occurs as a result of light being spectrally separated by the plurality of filters.
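As one concrete illustration of the correction-value idea in the thirteenth to sixteenth aspects, the sketch below derives a correction value from the direction and amount of a feature's positional shift with respect to a reference position, and applies it to a wavelength band image. This is a minimal sketch under assumed conditions (integer pixel shifts, hypothetical function names), not the actual processing of the embodiments:

```python
import numpy as np

def derive_correction(feature_pos, reference_pos):
    """Correction value = shift that moves the band's feature back onto
    the reference. Both arguments are (row, col) positions, e.g. of a dot
    detected in a wavelength band image and in the reference image."""
    dy = reference_pos[0] - feature_pos[0]
    dx = reference_pos[1] - feature_pos[1]
    return dy, dx

def apply_correction(band_image, correction):
    """Translate the band image by an integer (dy, dx) correction value."""
    dy, dx = correction
    return np.roll(band_image, shift=(dy, dx), axis=(0, 1))

# Example: a single bright dot imaged 2 px right / 1 px down of the reference.
img = np.zeros((8, 8))
img[4, 5] = 1.0                             # feature observed at (4, 5)
corr = derive_correction((4, 5), (3, 3))    # reference position is (3, 3)
aligned = apply_correction(img, corr)
assert aligned[3, 3] == 1.0
```

Repeating this per wavelength band image, with a per-band (and, as in the fifteenth aspect, possibly position-dependent) correction value, brings the bands into registration before they are combined into a multispectral image.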
FIG. 1 is a perspective view showing an example of an imaging device. FIG. 2 is an exploded perspective view showing an example of a pupil division filter. FIG. 3 is a block diagram showing an example of the hardware configuration of the imaging device. FIG. 4 is an explanatory diagram showing an example of the configuration of a photoelectric conversion element. FIG. 5 is a block diagram showing an example of a manner in which a multispectral image is generated based on a plurality of spectral images. FIG. 6 is a front view showing an example of a multispectral image generated based on a plurality of spectral images having image shifts. FIG. 7 is a schematic diagram showing a first example of the relationship between an aperture formed in the pupil division filter and an optical image. FIG. 8 is a schematic diagram showing a second example of the relationship between an aperture formed in the pupil division filter and an optical image. FIG. 9 is a block diagram showing an example of a functional configuration for executing multispectral image generation processing. FIG. 10 is a block diagram showing an example of the operation of an output value acquisition unit and an interference removal processing unit. FIG. 11 is a block diagram showing an example of the operation of a correction processing unit. FIG. 12 is a block diagram showing an example of the operation of a multispectral image generation unit. FIG. 13 is a block diagram showing an example of the hardware configuration of the imaging device and an example of a functional configuration for executing correction value derivation processing.
FIG. 14 is a block diagram showing an example of an optical image formed on a light-receiving surface when a dot chart is imaged by the imaging device. FIG. 15 is a block diagram showing an example of the operation of a spectral image acquisition unit. FIG. 16 is a block diagram showing an example of the operation of a correction value derivation unit. FIG. 17 is a flowchart showing an example of the flow of multispectral image generation processing. FIG. 18 is a flowchart showing an example of the flow of correction value derivation processing. FIG. 19 is a block diagram showing an example of the operation of a correction value calculation unit according to a first modification. FIG. 20 is a block diagram showing an example of the operation of a correction value calculation unit according to a second modification. FIG. 21 is a block diagram showing an example of an optical image formed on a light-receiving surface when a lattice chart is imaged by the imaging device. FIG. 22 is a block diagram showing an example of the operation of a spectral image acquisition unit according to a third modification. FIG. 23 is a block diagram showing an example of the operation of a correction value derivation unit according to a third modification. FIG. 24 is a block diagram showing an example of the operation of a correction value derivation unit according to a fourth modification.
FIG. 25 is a block diagram showing an example of the operation of a correction value derivation unit according to a fifth modification. FIG. 26 is a block diagram showing an example of the operation of a correction value derivation unit according to a sixth modification. FIG. 27 is a block diagram showing an example of the operation of a correction value derivation unit according to a seventh modification. FIG. 28 is a block diagram showing an example of the operation of a correction value derivation unit according to an eighth modification. FIG. 29 is a block diagram showing an example of the operation of a correction value derivation unit according to a ninth modification.
An example of embodiments of an image processing device, an image processing method, and a program according to the technology of the present disclosure will be described below with reference to the accompanying drawings.
 First, the terms used in the following description will be explained.
 LED is an abbreviation for "Light Emitting Diode." CMOS is an abbreviation for "Complementary Metal Oxide Semiconductor." CCD is an abbreviation for "Charge Coupled Device." I/F is an abbreviation for "Interface." RAM is an abbreviation for "Random Access Memory." CPU is an abbreviation for "Central Processing Unit." GPU is an abbreviation for "Graphics Processing Unit." EEPROM is an abbreviation for "Electrically Erasable and Programmable Read Only Memory." HDD is an abbreviation for "Hard Disk Drive." EL is an abbreviation for "Electro Luminescence." TPU is an abbreviation for "Tensor Processing Unit." SSD is an abbreviation for "Solid State Drive." USB is an abbreviation for "Universal Serial Bus." ASIC is an abbreviation for "Application Specific Integrated Circuit." FPGA is an abbreviation for "Field-Programmable Gate Array." PLD is an abbreviation for "Programmable Logic Device." SoC is an abbreviation for "System-on-a-Chip." IC is an abbreviation for "Integrated Circuit."
 In the description of this specification, "center" refers not only to an exact center but also to a center in a sense that includes an error generally allowed in the technical field to which the technology of the present disclosure belongs, to an extent that does not go against the spirit of the technology of the present disclosure. Likewise, "same," "orthogonal," "perpendicular," "straight line," and "equal intervals" each refer not only to the exact meaning of the term but also to a meaning that includes an error generally allowed in the technical field to which the technology of the present disclosure belongs, to an extent that does not go against the spirit of the technology of the present disclosure.
As shown in FIG. 1 as an example, the imaging device 10 includes a lens device 12 and an imaging device body 14. The imaging device 10 is an example of an "imaging device" and an "image processing device" according to the technology of the present disclosure. The lens device 12 includes a pupil division filter 16 that separates incident light into a plurality of wavelength bands. The imaging device 10 is a multispectral camera that generates and outputs a multispectral image 74 by capturing the light separated into the plurality of wavelength bands by the pupil division filter 16.
 In the present embodiment, as an example, a multispectral image generated based on light separated into three wavelength bands will be described as the multispectral image 74. Three wavelength bands are merely an example, and four or more wavelength bands may be used. That is, the imaging device 10 may be a multispectral camera capable of imaging a subject with a higher wavelength resolution than a multispectral camera capable of imaging light separated into three wavelength bands.
 The multispectral image 74 may include an image obtained by capturing light in the visible band, and may include an image in which light in a wavelength band that cannot be perceived by the human eye (for example, the near-infrared band and/or the ultraviolet band) is visualized.
 Examples of uses of the multispectral image 74 include measurement, inspection, analysis, and evaluation of a subject as an object to be observed in various fields such as medicine, agriculture, and industry.
 As shown in FIG. 2 as an example, the pupil division filter 16 includes a frame 18, spectral filters 20A to 20C, and polarizing filters 22A to 22C.
 The frame 18 has apertures 24A to 24C. The apertures 24A to 24C have the same shape as one another. As an example, each of the apertures 24A to 24C is fan-shaped. Note that each of the apertures 24A to 24C may have a shape other than a fan shape (for example, a rectangular shape or a circular shape). The apertures 24A to 24C are formed side by side at equal intervals around the optical axis OA. The centroid position G of each of the apertures 24A to 24C is located off the optical axis OA. Each centroid position G is the center of the geometric shape of the corresponding aperture 24A to 24C. Hereinafter, when there is no need to distinguish among the apertures 24A to 24C, each of them is referred to as an "aperture 24." The aperture 24 is an example of an "aperture" according to the technology of the present disclosure.
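The centroid position G of a fan-shaped aperture can be located with the standard centroid formula for a circular sector. The sketch below is illustrative only (the function name, unit radius, and exact 120° spacing are assumptions matching the three-aperture example above), and confirms that each centroid lies off the optical axis at the same distance:

```python
import math

def sector_centroid(radius, angle, bisector_azimuth):
    """Centroid of a circular sector (fan) whose apex is at the origin.

    radius           : outer radius of the fan
    angle            : full opening angle of the fan, in radians
    bisector_azimuth : direction of the fan's bisector, in radians

    The centroid lies on the bisector at distance 4*R*sin(a/2) / (3*a)
    from the apex (standard result for a circular sector).
    """
    d = 4.0 * radius * math.sin(angle / 2.0) / (3.0 * angle)
    return (d * math.cos(bisector_azimuth), d * math.sin(bisector_azimuth))

# Three identical fan-shaped apertures spaced 120 degrees apart around the
# optical axis: every centroid position G is off the axis, at equal distance.
centroids = [sector_centroid(1.0, 2 * math.pi / 3, k * 2 * math.pi / 3)
             for k in range(3)]
distances = [math.hypot(x, y) for x, y in centroids]
assert max(distances) - min(distances) < 1e-12
assert min(distances) > 0.0
```

Because each centroid is displaced from the optical axis, light passing through each aperture effectively has an off-axis pupil, which is what gives rise to the per-band positional shift of the optical image discussed later.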
The spectral filters 20A to 20C are provided in the apertures 24A to 24C, respectively, and are thereby arranged side by side at equal intervals around the optical axis OA. Each of the spectral filters 20A to 20C is a bandpass filter that transmits light in a specific wavelength band. The spectral filters 20A to 20C have mutually different wavelength bands. Specifically, the spectral filter 20A has a first wavelength band λ1, the spectral filter 20B has a second wavelength band λ2, and the spectral filter 20C has a third wavelength band λ3.
 Hereinafter, when there is no need to distinguish among the spectral filters 20A to 20C, each of them is referred to as a "spectral filter 20." The spectral filter 20 is an example of a "filter" according to the technology of the present disclosure. When there is no need to distinguish among the first wavelength band λ1, the second wavelength band λ2, and the third wavelength band λ3, each of them is referred to as a "wavelength band λ."
 The polarizing filters 22A to 22C are provided in correspondence with the spectral filters 20A to 20C, respectively. Specifically, the polarizing filter 22A is provided in the aperture 24A and overlaps the spectral filter 20A. The polarizing filter 22B is provided in the aperture 24B and overlaps the spectral filter 20B. The polarizing filter 22C is provided in the aperture 24C and overlaps the spectral filter 20C.
 Each of the polarizing filters 22A to 22C is an optical filter that transmits light vibrating in a specific direction. The polarizing filters 22A to 22C have polarization axes with mutually different polarization angles. Specifically, the polarizing filter 22A has a first polarization angle α1, the polarizing filter 22B has a second polarization angle α2, and the polarizing filter 22C has a third polarization angle α3. Note that the polarization axis may also be referred to as a transmission axis. As an example, the first polarization angle α1 is set to 0°, the second polarization angle α2 is set to 45°, and the third polarization angle α3 is set to 90°.
 Hereinafter, when there is no need to distinguish among the polarizing filters 22A to 22C, each of them is referred to as a "polarizing filter 22." The polarizing filter 22 is an example of a "polarizing filter" according to the technology of the present disclosure. When there is no need to distinguish among the first polarization angle α1, the second polarization angle α2, and the third polarization angle α3, each of them is referred to as a "polarization angle α."
 In the example shown in FIG. 2, the number of apertures 24 is three, corresponding to the number of wavelength bands λ, but the number of apertures 24 may be greater than the number of wavelength bands λ (that is, greater than the number of spectral filters 20). Unused apertures 24 among the plurality of apertures 24 may be covered by a shielding member (not shown). Also, although the spectral filters 20 in the example shown in FIG. 2 have mutually different wavelength bands λ, the plurality of spectral filters 20 may include spectral filters 20 having the same wavelength band λ.
As shown in FIG. 3 as an example, the lens device 12 includes an optical system 26, and the imaging device body 14 includes an image sensor 28. The optical system 26 is an example of an "optical system" according to the technology of the present disclosure, and the image sensor 28 is an example of an "image sensor" according to the technology of the present disclosure.
 The optical system 26 includes the pupil division filter 16, a first lens 30, and a second lens 32. The first lens 30, the pupil division filter 16, and the second lens 32 are arranged in this order along the optical axis OA of the lens device 12 from the subject 4 side toward the image sensor 28 side. In the following description, the subject 4 side may be referred to as the "object side," and the image sensor 28 side as the "image side."
 The first lens 30 causes light obtained by light emitted from the light source 2 being reflected by the subject 4 (hereinafter referred to as "subject light") to enter the pupil division filter 16. The subject light is an example of "light" according to the technology of the present disclosure. The second lens 32 forms an image of the subject light transmitted through the pupil division filter 16 on a light-receiving surface 34A of a photoelectric conversion element 34 provided in the image sensor 28.
 The light source 2 is, for example, an LED light source, a laser light source, or an incandescent lamp. The light emitted from the light source 2 is unpolarized. The light source 2 may be provided in the imaging device body 14 and/or the lens device 12. The light emitted from the light source may also be natural light.
 The pupil division filter 16 is disposed at the pupil position of the optical system 26. The pupil position refers to the aperture plane that limits the brightness of the optical system 26. The pupil position here also includes nearby positions, where a nearby position refers to the range from the entrance pupil to the exit pupil. The configuration of the pupil division filter 16 is as described with reference to FIG. 2. In FIG. 3, for convenience, the spectral filters 20 and the polarizing filters 22 are shown arranged in a straight line along a direction orthogonal to the optical axis OA.
 The image sensor 28 includes the photoelectric conversion element 34 and a signal processing circuit 36. As an example, the image sensor 28 is a CMOS image sensor. In the present embodiment, a CMOS image sensor is exemplified as the image sensor 28, but the technology of the present disclosure is not limited to this; the technology of the present disclosure also holds when the image sensor 28 is another type of image sensor, such as a CCD image sensor.
 一例として図3中には、光電変換素子34の模式的な構成が示されている。また、一例として図4には、光電変換素子34の一部の構成が具体的に示されている。光電変換素子34は、画素層38、偏光フィルタ層40、及び分光フィルタ層42を有する。なお、図3に示す光電変換素子34の構成は一例であって、光電変換素子34が分光フィルタ層42を有しなくても本開示の技術は成立する。 As an example, FIG. 3 shows a schematic configuration of the photoelectric conversion element 34. Furthermore, as an example, FIG. 4 specifically shows the configuration of a part of the photoelectric conversion element 34. The photoelectric conversion element 34 includes a pixel layer 38, a polarizing filter layer 40, and a spectral filter layer 42. Note that the configuration of the photoelectric conversion element 34 shown in FIG. 3 is an example, and the technology of the present disclosure is valid even if the photoelectric conversion element 34 does not include the spectral filter layer 42.
 画素層38は、複数の画素44を有する。複数の画素44は、マトリクス状に配置されており、光電変換素子34の受光面34Aを形成している。各画素44は、フォトダイオード(図示省略)を有する物理的な画素であり、受光した光を光電変換し、受光量に応じた電気信号を出力する。 The pixel layer 38 has a plurality of pixels 44. The plurality of pixels 44 are arranged in a matrix and form a light receiving surface 34A of the photoelectric conversion element 34. Each pixel 44 is a physical pixel having a photodiode (not shown), photoelectrically converts the received light, and outputs an electrical signal according to the amount of received light.
 Hereinafter, to distinguish them from the pixels forming the multispectral image 74, the pixels 44 provided in the photoelectric conversion element 34 are referred to as "physical pixels 44", and the pixels forming the multispectral image 74 are referred to as "image pixels".
 The photoelectric conversion element 34 outputs the electrical signals output from the plurality of physical pixels 44 to the signal processing circuit 36 as imaging data. The signal processing circuit 36 digitizes the analog imaging data input from the photoelectric conversion element 34. The imaging data is image data representing a captured image 70.
 The plurality of physical pixels 44 form a plurality of pixel blocks 46. Each pixel block 46 is formed by a total of four physical pixels 44, two vertically by two horizontally. In FIG. 3, for convenience, the four physical pixels 44 forming each pixel block 46 are shown arranged in a straight line along a direction orthogonal to the optical axis OA; in practice, the four physical pixels 44 are arranged adjacent to one another in the vertical and horizontal directions of the photoelectric conversion element 34 (see FIG. 4). The physical pixel 44 is an example of a "pixel" according to the technology of the present disclosure, and the pixel block 46 is an example of a "pixel block" according to the technology of the present disclosure.
 The polarizing filter layer 40 has a plurality of types of polarizers 48A to 48D. Each of the polarizers 48A to 48D is an optical filter that transmits light vibrating in a specific direction. The polarizers 48A to 48D have polarization axes with mutually different polarization angles θ. Specifically, the polarizer 48A has a first polarization angle θ1, the polarizer 48B has a second polarization angle θ2, the polarizer 48C has a third polarization angle θ3, and the polarizer 48D has a fourth polarization angle θ4. As an example, the first polarization angle θ1 is set to 0°, the second polarization angle θ2 is set to 45°, the third polarization angle θ3 is set to 90°, and the fourth polarization angle θ4 is set to 135°.
 Hereinafter, when there is no need to distinguish among the polarizers 48A to 48D, each of them is referred to as a "polarizer 48". The polarizer 48 is an example of a "polarizer" according to the technology of the present disclosure. Likewise, when there is no need to distinguish among the first polarization angle θ1, the second polarization angle θ2, the third polarization angle θ3, and the fourth polarization angle θ4, each of them is referred to as a "polarization angle θ".
 The spectral filter layer 42 includes a B filter 50A, a G filter 50B, and an R filter 50C. The B filter 50A is a blue band filter that, of light in the plurality of wavelength bands, transmits light in the blue wavelength band the most. The G filter 50B is a green band filter that transmits light in the green wavelength band the most. The R filter 50C is a red band filter that transmits light in the red wavelength band the most. The B filter 50A, the G filter 50B, and the R filter 50C are assigned to the respective pixel blocks 46.
 In FIG. 3, for convenience, the B filter 50A, the G filter 50B, and the R filter 50C are shown arranged in a straight line along a direction orthogonal to the optical axis OA; as shown in FIG. 4 as an example, however, the B filters 50A, G filters 50B, and R filters 50C are arranged in a matrix in a predetermined pattern arrangement. In the example shown in FIG. 4, they are arranged in a matrix in a Bayer arrangement, as an example of the predetermined pattern arrangement. Note that the predetermined pattern arrangement may be, other than the Bayer arrangement, an RGB stripe arrangement, an R/G checkered arrangement, an X-Trans (registered trademark) arrangement, a honeycomb arrangement, or the like.
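 As a rough illustration of how such a predetermined pattern arrangement tiles filters over the matrix, the following sketch generates filter labels for a small grid. The RGGB unit cell, the function name, and the grid size are illustrative assumptions for this sketch, not details taken from the embodiment.

```python
# Illustrative sketch: tile B/G/R filter labels over a pixel grid in a
# Bayer-style arrangement. The 2x2 RGGB unit cell below is an assumed
# example of a "predetermined pattern arrangement".

def bayer_pattern(rows, cols):
    """Return a rows x cols grid of filter labels in a Bayer arrangement."""
    unit = [["R", "G"],
            ["G", "B"]]  # 2x2 unit cell repeated over the whole grid
    return [[unit[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

for row in bayer_pattern(4, 4):
    print(" ".join(row))
```

 In a Bayer arrangement the green filters occupy half of the positions, which the repetition of the 2x2 unit cell reproduces.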
 Hereinafter, when there is no need to distinguish among the B filter 50A, the G filter 50B, and the R filter 50C, each of them is referred to as a "filter 50".
 As shown in FIG. 3 as an example, the imaging device body 14 includes, in addition to the image sensor 28, a control driver 52, an input/output I/F 54, a computer 56, and a display 58. The signal processing circuit 36, the control driver 52, the computer 56, and the display 58 are connected to the input/output I/F 54.
 The computer 56 has a processor 60, a storage 62, and a RAM 64. The processor 60 controls the imaging device 10 as a whole. The processor 60 is, for example, an arithmetic processing unit including a CPU and a GPU; the GPU operates under the control of the CPU and is responsible for executing processing related to images. Although an arithmetic processing unit including a CPU and a GPU is given here as an example of the processor 60, this is merely an example; the processor 60 may be one or more CPUs with integrated GPU functions, or one or more CPUs without integrated GPU functions.
 The processor 60, the storage 62, and the RAM 64 are connected via a bus 66, and the bus 66 is connected to the input/output I/F 54. The processor 60 is an example of a "processor" according to the technology of the present disclosure, and the computer 56 is an example of a "computer" according to the technology of the present disclosure.
 The storage 62 is a non-transitory storage medium and stores various parameters and various programs. For example, the storage 62 is a flash memory (e.g., an EEPROM). However, this is merely an example; an HDD or the like may be applied as the storage 62 together with a flash memory. The RAM 64 temporarily stores various kinds of information and is used as a work memory. Examples of the RAM 64 include a DRAM and/or an SRAM.
 The processor 60 reads a necessary program from the storage 62 and executes the read program on the RAM 64. The processor 60 controls the control driver 52 and the signal processing circuit 36 in accordance with the program executed on the RAM 64. The control driver 52 controls the photoelectric conversion element 34 under the control of the processor 60.
 The display 58 is, for example, a liquid crystal display or an EL display, and displays various images including the multispectral image 74. Note that the display 58 may be provided in an external device (not shown) communicably connected to the imaging device 10.
 As shown in FIG. 5 as an example, the imaging device 10 generates spectral images 72A to 72C corresponding to the respective wavelength bands λ based on the captured image 70, and generates a multispectral image 74 based on the spectral images 72A to 72C. The spectral image 72A is a spectral image corresponding to a first wavelength band λ1, the spectral image 72B is a spectral image corresponding to a second wavelength band λ2, and the spectral image 72C is a spectral image corresponding to a third wavelength band λ3.
 The specific details of the process of generating the spectral images 72A to 72C based on the captured image 70 and of the process of generating the multispectral image 74 based on the spectral images 72A to 72C will be described later. Hereinafter, when there is no need to distinguish among the spectral images 72A to 72C, each of them is referred to as a "spectral image 72". The spectral image 72 is an example of a "spectral image" and an "image" according to the technology of the present disclosure.
 Here, FIG. 6 shows an example of the multispectral image 74. The multispectral image 74 includes a plurality of spectral images 72. Each spectral image 72 includes, as an image, a subject having the shape of the letter "F".
 In the example shown in FIG. 6, an image shift (hereinafter referred to as an "image shift") has occurred in each spectral image 72, and the image shift differs from one spectral image 72 to another. One reason the image shift differs for each spectral image 72 is that the spectral filter 20 (see FIG. 2) corresponding to each spectral image 72 is provided in an aperture 24 formed at a different position with respect to the optical axis OA, so that parallax arises among the plurality of apertures 24. When parallax arises among the plurality of apertures 24, the optical image formed on the light receiving surface 34A is displaced by the parallax for each wavelength band λ. This displacement of the optical image has a different magnitude and direction for each wavelength band λ.
 As described above, one cause of the image shift differing for each spectral image 72 is that, because the plurality of apertures 24 are formed at mutually different positions, the parallax occurring in the optical system 26 differs for each wavelength band λ. In a state where an image shift has occurred in each spectral image 72, the image quality of the multispectral image 74 is lower than when no image shift has occurred.
 The displacement of the optical image is affected not only by the position of the aperture 24 but also by the shape of the aperture 24. FIG. 7 shows an example of how light emitted from an object point 76 is imaged on the light receiving surface 34A when the aperture 24 is fan-shaped, and FIG. 8 shows an example of how light emitted from the object point 76 is imaged on the light receiving surface 34A when the aperture 24 is rectangular.
 As shown in FIGS. 7 and 8 as an example, the optical system 26 generally has aberrations, so the light does not converge to a single point on the light receiving surface 34A but spreads out in correspondence with the shape of the aperture 24. The optical image 78 formed on the light receiving surface 34A therefore has a shape corresponding to the shape of the aperture 24. When the displacement of the optical image 78 is defined by the center of gravity 78A of the shape of the optical image 78 (that is, the center of the geometric shape), the displacement of the optical image 78 differs depending on the shape of the optical image 78. In this way, the displacement of the optical image 78 is affected not only by the position of the aperture 24 but also by its shape.
 In other words, the displacement of the optical image 78 depends on the center-of-gravity position G of the aperture 24, which is defined by the position and shape of the aperture 24. For example, the center-of-gravity position G of the aperture 24 is defined as a position relative to the optical axis OA. The displacement of the optical image 78 also depends on characteristics of the optical system 26 other than the center-of-gravity position G of the aperture 24, such as the arrangement position of the pupil splitting filter 16.
 Note that, in addition to the parallax that differs for each wavelength band λ, causes of image shift include distortion occurring in the optical system 26 and/or trapezoidal (keystone) distortion caused by imaging a subject surface that is not perpendicular to the optical axis OA.
 In this embodiment, in order to obtain a multispectral image 74 of higher image quality than when the multispectral image 74 is generated from spectral images 72 in which image shifts have occurred, the processor 60 performs the multispectral image generation processing described below.
 As shown in FIG. 9 as an example, a multispectral image generation program 80 is stored in the storage 62. The multispectral image generation program 80 is an example of a "program" according to the technology of the present disclosure. The processor 60 reads the multispectral image generation program 80 from the storage 62 and executes the read multispectral image generation program 80 on the RAM 64. In accordance with the multispectral image generation program 80 executed on the RAM 64, the processor 60 executes multispectral image generation processing for generating the multispectral image 74.
 The multispectral image generation processing is realized by the processor 60 operating as an output value acquisition unit 82, an interference removal processing unit 84, a correction processing unit 86, and a multispectral image generation unit 88 in accordance with the multispectral image generation program 80.
 As shown in FIG. 10 as an example, when the imaging data output from the image sensor 28 is input to the processor 60, the output value acquisition unit 82 acquires the output value Y of each physical pixel 44 based on the imaging data. The output value Y of each physical pixel 44 corresponds to the luminance value of the corresponding pixel included in the captured image 70 represented by the imaging data.
 Here, the output value Y of each physical pixel 44 is a value that includes interference (that is, crosstalk). That is, since light in each of the first wavelength band λ1, the second wavelength band λ2, and the third wavelength band λ3 is incident on each physical pixel 44, the output value Y is a mixture of a value corresponding to the amount of light in the first wavelength band λ1, a value corresponding to the amount of light in the second wavelength band λ2, and a value corresponding to the amount of light in the third wavelength band λ3.
 To obtain the multispectral image 74, the processor 60 must perform, for each physical pixel 44, processing that separates and extracts the value corresponding to each wavelength band λ from the output value Y, that is, interference removal processing that removes the crosstalk from the output value Y. In this embodiment, therefore, in order to obtain the multispectral image 74 (see FIG. 7), the interference removal processing unit 84 executes interference removal processing on the output value Y of each physical pixel 44 acquired by the output value acquisition unit 82.
 Here, the interference removal processing will be explained. The output value Y of each physical pixel 44 includes, as its components, the luminance values for each polarization angle θ for red, green, and blue. The output value Y of each physical pixel 44 is expressed by equation (1):

 Y = (Yθ1_R, Yθ2_R, Yθ3_R, Yθ4_R, Yθ1_G, Yθ2_G, Yθ3_G, Yθ4_G, Yθ1_B, Yθ2_B, Yθ3_B, Yθ4_B)^T … (1)
 Here, Yθ1_R is the luminance value of the red component of the output value Y whose polarization angle is the first polarization angle θ1, Yθ2_R is the luminance value of the red component whose polarization angle is the second polarization angle θ2, Yθ3_R is the luminance value of the red component whose polarization angle is the third polarization angle θ3, and Yθ4_R is the luminance value of the red component whose polarization angle is the fourth polarization angle θ4.
 Similarly, Yθ1_G is the luminance value of the green component of the output value Y whose polarization angle is the first polarization angle θ1, Yθ2_G is the luminance value of the green component whose polarization angle is the second polarization angle θ2, Yθ3_G is the luminance value of the green component whose polarization angle is the third polarization angle θ3, and Yθ4_G is the luminance value of the green component whose polarization angle is the fourth polarization angle θ4.
 Likewise, Yθ1_B is the luminance value of the blue component of the output value Y whose polarization angle is the first polarization angle θ1, Yθ2_B is the luminance value of the blue component whose polarization angle is the second polarization angle θ2, Yθ3_B is the luminance value of the blue component whose polarization angle is the third polarization angle θ3, and Yθ4_B is the luminance value of the blue component whose polarization angle is the fourth polarization angle θ4.
 The pixel value X of each image pixel forming the multispectral image 74 includes, as its components, the luminance value Xλ1 of polarized light in the first wavelength band λ1 having a first polarization angle α1 (hereinafter referred to as "first wavelength band polarized light"), the luminance value Xλ2 of polarized light in the second wavelength band λ2 having a second polarization angle α2 (hereinafter referred to as "second wavelength band polarized light"), and the luminance value Xλ3 of polarized light in the third wavelength band λ3 having a third polarization angle α3 (hereinafter referred to as "third wavelength band polarized light"). The pixel value X of each image pixel is expressed by equation (2):

 X = (Xλ1, Xλ2, Xλ3)^T … (2)
 The output value Y of each physical pixel 44 is expressed by equation (3):

 Y = AX … (3)
 In equation (3), A is an interference matrix. The interference matrix A (not shown) is a matrix representing the characteristics of the interference. The interference matrix A is defined in advance based on a plurality of known values such as the spectrum of the subject light, the spectral transmittance of the first lens 30, the spectral transmittance of the second lens 32, the spectral transmittances of the plurality of spectral filters 20, and the spectral sensitivity of the image sensor 28.
 When the interference removal matrix, which is a general inverse of the interference matrix A, is denoted A⁺, the pixel value X of each image pixel is expressed by equation (4):

 X = A⁺Y … (4)
 Like the interference matrix A, the interference removal matrix A⁺ is a matrix defined based on the spectrum of the subject light, the spectral transmittance of the first lens 30, the spectral transmittance of the second lens 32, the spectral transmittances of the plurality of spectral filters 20, the spectral sensitivity of the image sensor 28, and the like. The interference removal matrix A⁺ is stored in the storage 62 in advance.
 The interference removal processing unit 84 acquires the interference removal matrix A⁺ stored in the storage 62 and the output value Y of each physical pixel 44 acquired by the output value acquisition unit 82, and outputs the pixel value X of each image pixel according to equation (4) based on the acquired interference removal matrix A⁺ and the output value Y of each physical pixel 44.
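 Numerically, the interference removal of equation (4) amounts to multiplying each output vector Y by the precomputed matrix A⁺. The following sketch illustrates the idea with a toy 2-band mixing matrix that has an exact inverse; the matrix entries and band values are made-up illustrations, whereas the real A⁺ is a general inverse built from the spectral characteristics listed above.

```python
# Illustrative sketch of interference (crosstalk) removal, X = A+ Y.
# Toy 2-band example: A mixes the two bands, and its inverse recovers
# the per-band values from the mixed sensor output.

def inverse_2x2(m):
    """Exact inverse of an invertible 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

A = [[0.9, 0.2],          # toy interference matrix: each band leaks
     [0.1, 0.8]]          # partially into the other channel
X_true = [100.0, 20.0]    # true per-band values (made up)
Y = matvec(A, X_true)     # sensor output containing crosstalk
X = matvec(inverse_2x2(A), Y)  # interference-removed values
print(X)
```

 In the embodiment the vectors are longer (12 output components, 3 band components per equations (1) and (2)), so A is non-square and A⁺ is a general (pseudo) inverse rather than an exact inverse, but the per-pixel multiplication is the same.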
 Here, as described above, the pixel value X of each image pixel includes, as its components, the luminance value Xλ1 of the first wavelength band polarized light, the luminance value Xλ2 of the second wavelength band polarized light, and the luminance value Xλ3 of the third wavelength band polarized light.
 The spectral image 72A of the captured image 70 is an image corresponding to the luminance value Xλ1 of light in the first wavelength band λ1 (that is, an image based on the luminance value Xλ1). The spectral image 72B is an image corresponding to the luminance value Xλ2 of light in the second wavelength band λ2 (that is, an image based on the luminance value Xλ2). The spectral image 72C is an image corresponding to the luminance value Xλ3 of light in the third wavelength band λ3 (that is, an image based on the luminance value Xλ3).
 In this way, through the interference removal processing executed by the interference removal processing unit 84, the captured image 70 is separated into the spectral image 72A corresponding to the luminance value Xλ1 of the first wavelength band polarized light, the spectral image 72B corresponding to the luminance value Xλ2 of the second wavelength band polarized light, and the spectral image 72C corresponding to the luminance value Xλ3 of the third wavelength band polarized light. That is, the captured image 70 is separated into spectral images 72 for the respective wavelength bands λ of the plurality of spectral filters 20. As described above, an image shift occurs in each spectral image 72 (see FIG. 6), and the image shift differs from one spectral image 72 to another.
 As shown in FIG. 11 as an example, the correction processing unit 86 performs processing that corrects the image shift (hereinafter referred to as "correction processing") on each spectral image 72. The correction processing is an example of "image processing" and "calibration processing" according to the technology of the present disclosure. Correction value groups 90A to 90C are stored in the storage 62 in advance. The correction value group 90A includes a plurality of correction values 92A for correcting the image shift of the spectral image 72A, the correction value group 90B includes a plurality of correction values 92B for correcting the image shift of the spectral image 72B, and the correction value group 90C includes a plurality of correction values 92C for correcting the image shift of the spectral image 72C. The processing for deriving the correction values 92A to 92C (hereinafter, "correction value derivation processing") will be described in detail later.
 Hereinafter, when there is no need to distinguish among the correction values 92A to 92C, each of them is referred to as a "correction value 92". Each correction value 92 may be determined for each image pixel included in the spectral image 72, or for each image region of the spectral image 72. By performing correction processing based on the corresponding correction values 92 on each spectral image 72, the image shift of each spectral image 72 is corrected.
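 Applying a correction value 92 can be pictured as translating the pixel values of a spectral image 72 by a per-band displacement. The following is a minimal sketch using an integer-pixel shift on a toy grid; actual correction values may be defined per image pixel or per image region, and need not be integer, so this is an assumed simplification.

```python
# Illustrative sketch: correct a per-band image shift by translating
# the image by a correction value (dy, dx). Integer shifts on a small
# grid; out-of-bounds pixels are filled with a constant.

def shift_image(image, dy, dx, fill=0):
    """Translate a 2D grid by (dy, dx), filling vacated cells."""
    rows, cols = len(image), len(image[0])
    out = [[fill] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dy, c + dx
            if 0 <= rr < rows and 0 <= cc < cols:
                out[rr][cc] = image[r][c]
    return out

band = [[0, 0, 0],
        [0, 9, 0],
        [0, 0, 0]]
corrected = shift_image(band, -1, 1)  # toy correction: up 1, right 1
print(corrected)
```

 Sub-pixel correction would replace the integer move with interpolation, but the role of the correction value, cancelling the band-dependent displacement, is the same.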
 As shown in FIG. 12 as an example, the multispectral image generation unit 88 generates the multispectral image 74 by combining the plurality of spectral images 72 whose image shifts have been corrected by the correction processing unit 86. For example, multispectral image data representing the multispectral image 74 is output to the display 58, and the display 58 displays the multispectral image 74 based on the multispectral image data. Note that the multispectral image data may be output to a device (not shown) other than the display 58.
 FIG. 13 shows an example of a processing device 100 for deriving the correction values 92 corresponding to the respective spectral images 72 described above. The processing device 100 includes a computer 102. The computer 102 includes a processor 104, a storage 106, and a RAM 108, which are realized by hardware similar to the processor 60, the storage 62, and the RAM 64 described above (see FIG. 2).
 A correction value derivation program 110 is stored in the storage 106. The processor 104 reads the correction value derivation program 110 from the storage 106 and executes the read correction value derivation program 110 on the RAM 108. In accordance with the correction value derivation program 110 executed on the RAM 108, the processor 104 executes correction value derivation processing for deriving the correction values 92.
 The correction value derivation processing is realized by the processor 104 operating as a spectral image acquisition unit 112 and a correction value derivation unit 114 in accordance with the correction value derivation program 110. Although the processing device 100 is described here as a device separate from the imaging device 10, the processing device 100 may be the imaging device 10; that is, the correction value derivation processing for deriving the correction values 92 may be performed by the imaging device 10.
 図14~図16には、補正値92が導出される態様の一例が示されている。図14~図16に示す例では、補正値92を導出するための被写体としてドットチャート120が用いられている。ドットチャート120は、複数のドット122を有する。各ドット122は、円形の点である。ドット122の大きさは、任意に設定されてもよい。複数のドット122は、一例として、ドットチャート120の一対の対角線(図示省略)上に並んで配置されている。ドットチャート120は、本開示の技術に係る「被写体」及び「校正用部材」の一例である。ドット122は、本開示の技術に係る「特徴部分」及び「点」の一例である。 FIGS. 14 to 16 show an example of how the correction value 92 is derived. In the examples shown in FIGS. 14 to 16, a dot chart 120 is used as the subject for deriving the correction value 92. The dot chart 120 has a plurality of dots 122. Each dot 122 is a circular point. The size of the dots 122 may be set arbitrarily. For example, the plurality of dots 122 are arranged on a pair of diagonal lines (not shown) of the dot chart 120. The dot chart 120 is an example of a "subject" and a "calibration member" according to the technology of the present disclosure. The dot 122 is an example of a "characteristic part" and a "point" according to the technology of the present disclosure.
 図14には、ドットチャート120が撮像装置10によって撮像される場合に受光面34Aに結像される光学像130A~130C、光学像132A~132C、及び光学像134A~134Cの一例が示されている。光学像130A~130Cは、複数のドット122のうちの右上に位置する第1ドット122Aに対応する光学像であり、光学像132A~132Cは、複数のドット122のうちの右下に位置する第2ドット122Bに対応する光学像であり、光学像134A~134Cは、複数のドット122のうちの中央に位置する第3ドット122Cに対応する光学像である。 FIG. 14 shows an example of optical images 130A to 130C, optical images 132A to 132C, and optical images 134A to 134C that are formed on the light receiving surface 34A when the dot chart 120 is imaged by the imaging device 10. The optical images 130A to 130C are optical images corresponding to the first dot 122A located at the upper right of the plurality of dots 122, the optical images 132A to 132C are optical images corresponding to the second dot 122B located at the lower right of the plurality of dots 122, and the optical images 134A to 134C are optical images corresponding to the third dot 122C located at the center of the plurality of dots 122.
 光学像130A、光学像132A、及び光学像134Aは、第1波長帯λに対応する光学像であり、光学像130B、光学像132B、及び光学像134Bは、第2波長帯λに対応する光学像であり、光学像130C、光学像132C、及び光学像134Cは、第3波長帯λに対応する光学像である。第1基準位置136Aは、受光面34A内における第1ドット122Aに対応する位置であり、第2基準位置136Bは、受光面34A内における第2ドット122Bに対応する位置であり、第3基準位置136Cは、受光面34A内における第3ドット122Cに対応する位置である。 The optical images 130A, 132A, and 134A are optical images corresponding to the first wavelength band λ1; the optical images 130B, 132B, and 134B are optical images corresponding to the second wavelength band λ2; and the optical images 130C, 132C, and 134C are optical images corresponding to the third wavelength band λ3. The first reference position 136A is the position corresponding to the first dot 122A within the light receiving surface 34A, the second reference position 136B is the position corresponding to the second dot 122B within the light receiving surface 34A, and the third reference position 136C is the position corresponding to the third dot 122C within the light receiving surface 34A.
 光学像130A~130Cは、第1基準位置136Aに対して位置がずれている。光学像130A~130Cの位置ずれの方向及び量は互いに異なる。同様に、光学像132A~132Cは、第2基準位置136Bに対して位置がずれており、光学像134A~134Cは、第3基準位置136Cに対して位置がずれている。光学像132A~132Cの位置ずれの方向及び量は互いに異なり、光学像134A~134Cの位置ずれの方向及び量は互いに異なる。 The optical images 130A to 130C are shifted from the first reference position 136A. The direction and amount of positional deviation of the optical images 130A to 130C are different from each other. Similarly, the optical images 132A to 132C are shifted from the second reference position 136B, and the optical images 134A to 134C are misaligned from the third reference position 136C. The directions and amounts of positional deviations of optical images 132A to 132C are different from each other, and the directions and amounts of positional deviations of optical images 134A to 134C are different from each other.
 また、例えば、第1ドット122Aと第2ドット122Bは、ドットチャート120の縦方向に対称な位置に配置されているが、光学像130A~130C及び光学像132A~132Cの各位置ずれの方向及び量は、開口24(図2参照)の位置及び形状に応じて互いに異なる。このため、光学像130A~130Cと光学像132A~132Cとは、受光面34Aの縦方向に対称にならずに非対称になる。また、第3ドット122Cに対応する光学像134A~134Cのように、受光面34Aの中心でも位置ずれが生じる。 Further, for example, although the first dot 122A and the second dot 122B are arranged at vertically symmetrical positions on the dot chart 120, the directions and amounts of the positional shifts of the optical images 130A to 130C and of the optical images 132A to 132C differ from each other depending on the position and shape of the aperture 24 (see FIG. 2). Therefore, the optical images 130A to 130C and the optical images 132A to 132C are not vertically symmetrical on the light receiving surface 34A but asymmetrical. Moreover, as with the optical images 134A to 134C corresponding to the third dot 122C, a positional shift also occurs at the center of the light receiving surface 34A.
 一例として図15に示すように、スペクトル画像取得部112は、ドットチャート120が撮像されることにより得られた各スペクトル画像72A~72Cを取得する。例えば、スペクトル画像72Aには、光学像130Aに対応する波長帯画像140Aが含まれており、スペクトル画像72Bには、光学像130Bに対応する波長帯画像140Bが含まれており、スペクトル画像72Cには、光学像130Cに対応する波長帯画像140Cが含まれている。 As an example, as shown in FIG. 15, the spectral image acquisition unit 112 acquires the spectral images 72A to 72C obtained by imaging the dot chart 120. For example, the spectral image 72A includes a wavelength band image 140A corresponding to the optical image 130A, the spectral image 72B includes a wavelength band image 140B corresponding to the optical image 130B, and the spectral image 72C includes a wavelength band image 140C corresponding to the optical image 130C.
 基準位置142は、各スペクトル画像72内における第1ドット122Aに対応する位置である。波長帯画像140A~140Cは、基準位置142に対して位置がずれている。波長帯画像140A~140Cの位置ずれの方向及び量は互いに異なる。以下、波長帯画像140A~140Cを区別して説明する必要が無い場合、波長帯画像140A~140Cを「波長帯画像140」と称する。また、以下、第1ドット122A以外のドット122に対応する波長帯画像(図示省略)を「波長帯画像140」と称する場合がある。波長帯画像140は、本開示の技術に係る「波長帯画像」の一例である。 The reference position 142 is a position corresponding to the first dot 122A in each spectrum image 72. The wavelength band images 140A to 140C are shifted from the reference position 142. The direction and amount of positional shift of the wavelength band images 140A to 140C are different from each other. Hereinafter, unless it is necessary to separately explain the wavelength band images 140A to 140C, the wavelength band images 140A to 140C will be referred to as "wavelength band images 140." Moreover, hereinafter, the wavelength band image (not shown) corresponding to the dots 122 other than the first dot 122A may be referred to as the "wavelength band image 140." The wavelength band image 140 is an example of a "wavelength band image" according to the technology of the present disclosure.
 一例として図16に示すように、補正値導出部114は、次の要領で補正値92を導出する。例えば、補正値導出部114は、スペクトル画像取得部112で取得された各スペクトル画像72A~72Cを取得する。また、補正値導出部114は、スペクトル画像72A内における波長帯画像140Aの位置、及び、スペクトル画像72A内における基準位置142の位置を、ドットチャート120(図15参照)内における第1ドット122Aの位置、及び光学系26(図2参照)に関する設計値等に基づいて特定する。 As an example, as shown in FIG. 16, the correction value deriving unit 114 derives the correction values 92 in the following manner. For example, the correction value deriving unit 114 acquires the spectral images 72A to 72C acquired by the spectral image acquisition unit 112. The correction value deriving unit 114 also identifies the position of the wavelength band image 140A within the spectral image 72A and the reference position 142 within the spectral image 72A, based on the position of the first dot 122A in the dot chart 120 (see FIG. 15) and on design values and the like for the optical system 26 (see FIG. 2).
 同様に、補正値導出部114は、スペクトル画像72B内における波長帯画像140Bの位置、及び、スペクトル画像72B内における基準位置142の位置を、ドットチャート120内における第1ドット122Aの位置、及び光学系26に関する設計値等に基づいて特定する。また、補正値導出部114は、スペクトル画像72C内における波長帯画像140Cの位置、及び、スペクトル画像72C内における基準位置142の位置を、ドットチャート120内における第1ドット122Aの位置、及び光学系26に関する設計値等に基づいて特定する。ドットチャート120内における第1ドット122Aの位置、及び光学系26に関する設計値は、いずれも既知の値である。 Similarly, the correction value deriving unit 114 identifies the position of the wavelength band image 140B within the spectral image 72B and the reference position 142 within the spectral image 72B, based on the position of the first dot 122A in the dot chart 120 and on design values and the like for the optical system 26. The correction value deriving unit 114 also identifies the position of the wavelength band image 140C within the spectral image 72C and the reference position 142 within the spectral image 72C, based on the position of the first dot 122A in the dot chart 120 and on design values and the like for the optical system 26. The position of the first dot 122A in the dot chart 120 and the design values for the optical system 26 are all known values.
 光学系26に関する設計値には、製品としての撮像装置10に対して付与された仕様に関する値が含まれてもよく、緒元に関する値が含まれてもよい。また、設計値には、光学系26の特性に関する値が含まれてもよい。光学系26の特性には、例えば、瞳分割フィルタ16の配置位置及び/又は開口24(図2参照)の特徴等が含まれてもよい。開口24の特徴には、開口24の重心位置G(図7及び図8参照)が含まれてもよい。開口24の重心位置Gは、例えば、開口24の位置及び/又は形状に基づいて定まる位置でもよい。 The design values for the optical system 26 may include values related to the specifications given to the imaging device 10 as a product, and may include values related to its parameters. The design values may also include values related to the characteristics of the optical system 26. The characteristics of the optical system 26 may include, for example, the arrangement position of the pupil splitting filter 16 and/or features of the aperture 24 (see FIG. 2). The features of the aperture 24 may include the center-of-gravity position G of the aperture 24 (see FIGS. 7 and 8). The center-of-gravity position G of the aperture 24 may be, for example, a position determined based on the position and/or shape of the aperture 24.
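As one illustration of how a center-of-gravity position G could follow from the aperture's position and shape, the following minimal sketch computes G from a boolean mask over the pupil plane. The mask representation and the function name are assumptions for illustration only; the patent does not specify how G is computed.

```python
import numpy as np

def aperture_centroid(mask):
    """Center-of-gravity position G of an aperture described by a boolean
    mask over the pupil plane. Both the aperture's position and its shape
    enter through which mask elements are True."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

# A 2x2 square aperture whose top-left corner sits at (x=3, y=1): G = (3.5, 1.5)
mask = np.zeros((5, 6), dtype=bool)
mask[1:3, 3:5] = True
print(aperture_centroid(mask))  # -> [3.5 1.5]
```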
 また、光学系26に関する設計値には、各分光フィルタ20(図2参照)の波長帯λの組み合わせに関する値が含まれてもよい。波長帯λの組み合わせに関する値は、各分光フィルタ20の波長帯そのものを示す値でもよく、各分光フィルタ20の波長帯の関係を示す値でもよい。 Further, the design values regarding the optical system 26 may include values regarding the combination of wavelength bands λ of each spectral filter 20 (see FIG. 2). The value regarding the combination of wavelength bands λ may be a value indicating the wavelength band of each spectral filter 20 itself, or may be a value indicating the relationship between the wavelength bands of each spectral filter 20.
 そして、例えば、補正値導出部114は、基準位置142に対する波長帯画像140Aの位置ずれの方向及び量をスペクトル画像72Aに基づいて導出し、導出した方向及び量に基づいて、スペクトル画像72Aに含まれる複数の画像画素のうちの波長帯画像140Aに対応する画像画素に対する補正値92Aを導出する。補正値92Aは、波長帯画像140Aの位置ずれを補正する補正値である。例えば、補正値92Aは、基準位置142に対する波長帯画像140Aの位置ずれの方向と反対の方向(すなわち、波長帯画像140Aに対する基準位置142の方向)を示す値と、導出された量を示す値とを含む。 Then, for example, the correction value deriving unit 114 derives the direction and amount of the positional shift of the wavelength band image 140A with respect to the reference position 142 based on the spectral image 72A, and, based on the derived direction and amount, derives the correction value 92A for the image pixel corresponding to the wavelength band image 140A among the plurality of image pixels included in the spectral image 72A. The correction value 92A is a correction value for correcting the positional shift of the wavelength band image 140A. For example, the correction value 92A includes a value indicating the direction opposite to the direction of the positional shift of the wavelength band image 140A with respect to the reference position 142 (that is, the direction of the reference position 142 as seen from the wavelength band image 140A) and a value indicating the derived amount.
 同様に、補正値導出部114は、基準位置142に対する波長帯画像140Bの位置ずれの方向及び量をスペクトル画像72Bに基づいて導出し、導出した方向及び量に基づいて、スペクトル画像72Bに含まれる複数の画像画素のうちの波長帯画像140Bに対応する画像画素に対する補正値92Bを導出する。補正値92Bは、波長帯画像140Bの位置ずれを補正する補正値である。また、補正値導出部114は、基準位置142に対する波長帯画像140Cの位置ずれの方向及び量をスペクトル画像72Cに基づいて導出し、導出した方向及び量に基づいて、スペクトル画像72Cに含まれる複数の画像画素のうちの波長帯画像140Cに対応する画像画素に対する補正値92Cを導出する。補正値92Cは、波長帯画像140Cの位置ずれを補正する補正値である。各補正値92は、波長帯画像140の位置に応じて異なる。 Similarly, the correction value deriving unit 114 derives the direction and amount of the positional shift of the wavelength band image 140B with respect to the reference position 142 based on the spectral image 72B, and, based on the derived direction and amount, derives the correction value 92B for the image pixel corresponding to the wavelength band image 140B among the plurality of image pixels included in the spectral image 72B. The correction value 92B is a correction value for correcting the positional shift of the wavelength band image 140B. The correction value deriving unit 114 also derives the direction and amount of the positional shift of the wavelength band image 140C with respect to the reference position 142 based on the spectral image 72C, and, based on the derived direction and amount, derives the correction value 92C for the image pixel corresponding to the wavelength band image 140C among the plurality of image pixels included in the spectral image 72C. The correction value 92C is a correction value for correcting the positional shift of the wavelength band image 140C. Each correction value 92 differs depending on the position of the wavelength band image 140.
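The derivation just described — a correction value containing the direction opposite to the detected shift and the derived amount — can be sketched as follows. This is a minimal illustration only: the function name and the pixel-coordinate convention are assumptions, not taken from the patent.

```python
import numpy as np

def derive_correction_value(detected_xy, reference_xy):
    """Correction value for the image pixel of one wavelength band image:
    the displacement that moves the detected position back onto the
    reference position, i.e. opposite in direction and equal in magnitude
    to the positional shift."""
    detected = np.asarray(detected_xy, dtype=float)
    reference = np.asarray(reference_xy, dtype=float)
    shift = detected - reference   # positional shift of the band image
    correction = -shift            # reverse the direction, keep the amount
    return correction

# Example: a dot imaged 3 px to the right of and 1 px below its reference
corr = derive_correction_value((103.0, 51.0), (100.0, 50.0))
print(corr)  # -> [-3. -1.]
```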
 一例として図15及び図16には、第1ドット122Aに対応する波長帯画像140に基づいて、波長帯画像140に対応する画像画素に対する補正値92が導出される例が示されているが、第1ドット122A以外のドット122に対応する波長帯画像140についても、位置ずれの方向及び量が導出され、導出された方向及び量に基づいて、波長帯画像140に対応する画像画素に対する補正値92が導出される。 As an example, FIGS. 15 and 16 show a case in which the correction value 92 for the image pixel corresponding to the wavelength band image 140 is derived based on the wavelength band image 140 corresponding to the first dot 122A; for the wavelength band images 140 corresponding to the dots 122 other than the first dot 122A as well, the direction and amount of the positional shift are derived, and the correction value 92 for the image pixel corresponding to each wavelength band image 140 is derived based on the derived direction and amount.
 また、波長帯画像140に対応する画像画素(以下、「対応画像画素」と称する)以外の画像画素(以下、「非対応画像画素」と称する)については、例えば、対応画像画素の補正値92に基づいて補正値92が導出されてもよい。また、非対応画像画素の補正値92は、対応画像画素に対する非対応画像画素の位置に基づいて導出されてもよい。 For image pixels other than those corresponding to the wavelength band image 140 (hereinafter, "non-corresponding image pixels"; the former are hereinafter "corresponding image pixels"), the correction value 92 may be derived, for example, based on the correction values 92 of the corresponding image pixels. The correction value 92 for a non-corresponding image pixel may also be derived based on the position of the non-corresponding image pixel relative to the corresponding image pixels.
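One way the corrections at the dot positions could be extended to the pixels between them, as suggested above, is position-based interpolation. The inverse-distance weighting below is an assumed scheme chosen for illustration (the patent leaves the method open), and it presumes the correction field varies smoothly between dots.

```python
import numpy as np

def interpolate_correction(dot_xy, dot_corr, query_xy, eps=1e-9):
    """Estimate the correction value at an arbitrary (non-corresponding)
    image pixel by inverse-distance weighting of the correction values
    derived at the dot positions (the corresponding image pixels)."""
    dots = np.asarray(dot_xy, dtype=float)    # (N, 2) dot pixel positions
    corrs = np.asarray(dot_corr, dtype=float) # (N, 2) corrections at the dots
    q = np.asarray(query_xy, dtype=float)
    d = np.linalg.norm(dots - q, axis=1)
    if np.any(d < eps):                       # query coincides with a dot
        return corrs[np.argmin(d)]
    w = 1.0 / d
    return (w[:, None] * corrs).sum(axis=0) / w.sum()

# The midpoint between two dots receives the average of their corrections
c = interpolate_correction([(0, 0), (10, 0)], [(-2, 0), (0, 0)], (5, 0))
print(c)  # -> [-1.  0.]
```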
 また、各補正値92には、ディストーション及び/又は台形歪み等による画像ずれを補正する補正値が盛り込まれてもよい。また、補正値92は、位置ずれの方向及び量のうちの一方のみに基づいて導出されてもよい。また、補正値92は、実験により得られた値に基づいて導出されてもよい。また、ここでは、各補正値92は、処理装置100によって導出される例が挙げられているが、各補正値92は、撮像装置10の開発者等によって実験的に導出されてもよい。以上の要領で導出された補正値92は、撮像装置10のストレージ62に記憶される。 Furthermore, each correction value 92 may include a correction value for correcting image shift due to distortion and/or trapezoidal distortion. Further, the correction value 92 may be derived based on only one of the direction and amount of positional deviation. Further, the correction value 92 may be derived based on a value obtained through experiment. Moreover, although an example is given here in which each correction value 92 is derived by the processing device 100, each correction value 92 may be experimentally derived by a developer of the imaging device 10 or the like. The correction value 92 derived in the above manner is stored in the storage 62 of the imaging device 10.
 次に、本実施形態に係る撮像装置10の作用について説明する。図17には、撮像装置10で実行されるマルチスペクトル画像生成処理の流れの一例が示されている。 Next, the operation of the imaging device 10 according to this embodiment will be explained. FIG. 17 shows an example of the flow of multispectral image generation processing executed by the imaging device 10.
 図17に示すマルチスペクトル画像生成処理では、先ず、ステップST10で、出力値取得部82は、イメージセンサ28から出力された撮像データに基づいて、各物理画素44の出力値Yを取得する(図10参照)。ステップST10の処理が実行された後、マルチスペクトル画像生成処理は、ステップST12へ移行する。 In the multispectral image generation process shown in FIG. 17, first, in step ST10, the output value acquisition unit 82 acquires the output value Y of each physical pixel 44 based on the imaging data output from the image sensor 28 (see FIG. 10). After the process of step ST10 is executed, the multispectral image generation process moves to step ST12.
 ステップST12で、混信除去処理部84は、ストレージ62に記憶されている混信除去行列Aと、ステップST10で取得された各物理画素44の出力値Yとを取得し、取得した混信除去行列Aと各物理画素44の出力値Yとに基づいて、各画像画素の画素値Xを出力する(図10参照)。ステップST12で混信除去処理が実行されることにより、撮像画像70が、第1波長帯偏光の輝度値Xλ1に対応するスペクトル画像72Aと、第2波長帯偏光の輝度値Xλ2に対応するスペクトル画像72Bと、第3波長帯偏光の輝度値Xλ3に対応するスペクトル画像72Cとに分離される。ステップST12の処理が実行された後、マルチスペクトル画像生成処理は、ステップST14へ移行する。 In step ST12, the interference removal processing unit 84 acquires the interference removal matrix A+ stored in the storage 62 and the output value Y of each physical pixel 44 acquired in step ST10, and outputs the pixel value X of each image pixel based on the acquired interference removal matrix A+ and the output value Y of each physical pixel 44 (see FIG. 10). By executing the interference removal process in step ST12, the captured image 70 is separated into a spectral image 72A corresponding to the luminance value Xλ1 of the first-wavelength-band polarized light, a spectral image 72B corresponding to the luminance value Xλ2 of the second-wavelength-band polarized light, and a spectral image 72C corresponding to the luminance value Xλ3 of the third-wavelength-band polarized light. After the process of step ST12 is executed, the multispectral image generation process moves to step ST14.
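The interference removal in step ST12 amounts to solving Y = AX for the per-band values X via the matrix A+. A minimal numpy sketch using the Moore–Penrose pseudo-inverse follows; the 3×3 mixing matrix and its values are made up purely for illustration and are not taken from the patent.

```python
import numpy as np

# Hypothetical 3x3 mixing matrix A: row i gives the sensitivity of
# physical-pixel group i to the three wavelength bands (illustrative values).
A = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
A_pinv = np.linalg.pinv(A)               # interference removal matrix A+

Y = A @ np.array([100.0, 50.0, 25.0])    # simulated crosstalk-mixed outputs Y
X = A_pinv @ Y                           # recovered Xλ1, Xλ2, Xλ3
print(np.round(X, 6))                    # recovered values ≈ [100, 50, 25]
```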
 ステップST14で、補正処理部86は、各スペクトル画像72A~72Cに対して、画像ずれを補正する補正処理を行う(図11参照)。ステップST14の処理が実行された後、マルチスペクトル画像生成処理は、ステップST16へ移行する。 In step ST14, the correction processing unit 86 performs a correction process to correct image shift on each of the spectral images 72A to 72C (see FIG. 11). After the process of step ST14 is executed, the multispectral image generation process moves to step ST16.
 ステップST16で、マルチスペクトル画像生成部88は、ステップST14で補正処理が行われたスペクトル画像72A~72Cを合成することにより、マルチスペクトル画像74を生成する(図12参照)。ステップST16の処理が実行された後、マルチスペクトル画像生成処理は、ステップST18へ移行する。 In step ST16, the multispectral image generation unit 88 generates the multispectral image 74 by combining the spectral images 72A to 72C that have been corrected in step ST14 (see FIG. 12). After the process of step ST16 is executed, the multispectral image generation process moves to step ST18.
 ステップST18で、プロセッサ60は、マルチスペクトル画像生成処理を終了する条件(すなわち、終了条件)が成立したか否かを判定する。終了条件の一例としては、ユーザによってマルチスペクトル画像生成処理を終了させる指示が撮像装置10に対して付与されたという条件等が挙げられる。ステップST18において、終了条件が成立していない場合には、判定が否定されて、マルチスペクトル画像生成処理は、ステップST10へ移行する。ステップST18において、終了条件が成立した場合には、判定が肯定されて、マルチスペクトル画像生成処理は終了する。なお、上述の撮像装置10の作用として説明した画像処理方法は、本開示の技術に係る「画像処理方法」の一例である。 In step ST18, the processor 60 determines whether a condition for terminating the multispectral image generation process (that is, a termination condition) is satisfied. An example of the termination condition is that the user has given the imaging device 10 an instruction to terminate the multispectral image generation process. If the termination condition is not satisfied in step ST18, the determination is negative and the multispectral image generation process moves to step ST10. If the termination condition is satisfied in step ST18, the determination is affirmative and the multispectral image generation process ends. Note that the image processing method described above as the operation of the imaging device 10 is an example of the "image processing method" according to the technology of the present disclosure.
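Steps ST10 through ST16 can be condensed into a short pipeline sketch. Every name, array shape, and the integer-pixel shift model below are assumptions chosen for brevity; the patent does not fix an implementation, and real correction values would generally require sub-pixel resampling.

```python
import numpy as np

def shift_image(img, dx, dy):
    """Apply an integer-pixel correction value (dx, dy) by rolling the
    image; a stand-in for proper sub-pixel resampling."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def generate_multispectral(Y, A_pinv, corrections):
    """Outline of ST10-ST16: Y is an (n_bands, H, W) array of physical-pixel
    output values, A_pinv the interference removal matrix, and corrections
    one (dx, dy) pair per band (uniform per band here purely for brevity)."""
    n = Y.shape[0]
    # ST12: unmix every pixel's outputs into per-band luminance values X
    X = np.einsum('ij,jhw->ihw', A_pinv, Y)
    # ST14: correct each spectral image's positional shift
    X = np.stack([shift_image(X[i], *corrections[i]) for i in range(n)])
    # ST16: combine the corrected spectral images into one multispectral image
    return np.moveaxis(X, 0, -1)   # (H, W, n_bands)
```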
 次に、本実施形態に係る処理装置100の作用について説明する。図18には、処理装置100で実行される補正値導出処理の流れの一例が示されている。 Next, the operation of the processing device 100 according to this embodiment will be explained. FIG. 18 shows an example of the flow of the correction value derivation process executed by the processing device 100.
 図18に示す補正値導出処理では、先ず、ステップST20で、スペクトル画像取得部112は、撮像装置10で得られた各スペクトル画像72A~72Cを取得する(図15参照)。ステップST20の処理が実行された後、補正値導出処理は、ステップST22へ移行する。 In the correction value derivation process shown in FIG. 18, first, in step ST20, the spectral image acquisition unit 112 acquires each of the spectral images 72A to 72C obtained by the imaging device 10 (see FIG. 15). After the process of step ST20 is executed, the correction value derivation process moves to step ST22.
 ステップST22で、補正値導出部114は、各スペクトル画像72A~72Cに基づいて波長帯画像140A~140Cの位置ずれの方向及び量を導出し、導出した方向及び量に基づいて、各スペクトル画像72A~72Cに含まれる画像画素に対する補正値92を導出する(図16参照)。導出された補正値92は、撮像装置10のストレージ62に記憶される。ステップST22の処理が実行された後、補正値導出処理は、ステップST24へ移行する。 In step ST22, the correction value deriving unit 114 derives the directions and amounts of the positional shifts of the wavelength band images 140A to 140C based on the spectral images 72A to 72C, and derives, based on the derived directions and amounts, the correction values 92 for the image pixels included in the spectral images 72A to 72C (see FIG. 16). The derived correction values 92 are stored in the storage 62 of the imaging device 10. After the process of step ST22 is executed, the correction value derivation process moves to step ST24.
 ステップST24で、プロセッサ104は、補正値導出処理を終了する条件(すなわち、終了条件)が成立したか否かを判定する。終了条件の一例としては、各スペクトル画像72A~72Cに含まれる複数の画像画素に対する補正値92を導出したという条件等が挙げられる。ステップST24において、終了条件が成立していない場合には、判定が否定されて、補正値導出処理は、ステップST22へ移行する。ステップST24において、終了条件が成立した場合には、判定が肯定されて、補正値導出処理は終了する。 In step ST24, the processor 104 determines whether a condition for terminating the correction value derivation process (that is, a termination condition) is satisfied. An example of the termination condition is that the correction values 92 for the plurality of image pixels included in each of the spectral images 72A to 72C have been derived. If the termination condition is not satisfied in step ST24, the determination is negative and the correction value derivation process moves to step ST22. If the termination condition is satisfied in step ST24, the determination is affirmative and the correction value derivation process ends.
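Steps ST20 and ST22 can be sketched end to end for a single dot: locate the dot in a spectral image, measure its shift from the reference position, and reverse it. Locating the dot by an intensity-weighted centroid is an assumed detector choice for illustration, not something the patent prescribes.

```python
import numpy as np

def dot_centroid(band_img):
    """Intensity-weighted centroid (x, y) of one dot's wavelength band
    image; a stand-in for whatever feature detector the device uses."""
    ys, xs = np.indices(band_img.shape)
    total = band_img.sum()
    return np.array([(xs * band_img).sum() / total,
                     (ys * band_img).sum() / total])

# Synthetic band image: a single bright pixel at (x=7, y=4)
img = np.zeros((10, 10))
img[4, 7] = 1.0
shift = dot_centroid(img) - np.array([5.0, 5.0])  # reference position at (5, 5)
correction = -shift                                # ST22: reverse the shift
print(correction)  # -> [-2.  1.]
```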
 次に、本実施形態の効果について説明する。 Next, the effects of this embodiment will be explained.
 本実施形態では、光学系26は、光軸OAの周りに設けられた複数の分光フィルタ20を有している(図2及び図3参照)。プロセッサ60は、各スペクトル画像72に対して、複数の分光フィルタ20によって分光されることで生じる光学像の位置ずれによる画像ずれを補正する補正処理を行う(図11参照)。したがって、画像ずれが生じているスペクトル画像72に基づいてマルチスペクトル画像74が生成される場合に比して、画質の高いマルチスペクトル画像74を得ることができる。 In this embodiment, the optical system 26 includes the plurality of spectral filters 20 provided around the optical axis OA (see FIGS. 2 and 3). The processor 60 performs, on each spectral image 72, a correction process that corrects the image shift caused by the positional shift of the optical images resulting from the light being spectrally separated by the plurality of spectral filters 20 (see FIG. 11). Therefore, a multispectral image 74 of higher image quality can be obtained than when the multispectral image 74 is generated based on spectral images 72 in which image shift has occurred.
 補正処理は、マルチスペクトル画像74を生成するための複数のスペクトル画像72に対して行われる。したがって、マルチスペクトル画像74が生成される前の段階で、複数のスペクトル画像72に生じている画像ずれを補正することができる。 The correction process is performed on a plurality of spectral images 72 to generate a multispectral image 74. Therefore, before the multispectral image 74 is generated, image shifts occurring in the plurality of spectral images 72 can be corrected.
 補正処理は、例えば、光学系26に関する設計値に基づいて行われる。したがって、光学系26に関する設計値に応じた画像ずれを補正することができる。 The correction process is performed, for example, based on design values regarding the optical system 26. Therefore, it is possible to correct image shift according to design values regarding the optical system 26.
 光学系26に関する設計値には、例えば、光学系26の特性が含まれる。したがって、光学系26の特性に基づいて補正処理が行われることにより、光学系26の特性に応じた画像ずれを補正することができる。 The design values regarding the optical system 26 include, for example, the characteristics of the optical system 26. Therefore, by performing the correction process based on the characteristics of the optical system 26, it is possible to correct the image shift according to the characteristics of the optical system 26.
 光学系26の特性には、例えば、瞳分割フィルタ16の配置位置が含まれる。したがって、瞳分割フィルタ16の配置位置に基づいて補正処理が行われることにより、瞳分割フィルタ16の配置位置に応じた画像ずれを補正することができる。 The characteristics of the optical system 26 include, for example, the arrangement position of the pupil splitting filter 16. Therefore, by performing the correction process based on the arrangement position of the pupil division filter 16, it is possible to correct image shift according to the arrangement position of the pupil division filter 16.
 光学系26の特性には、例えば、開口24の特徴が含まれる。したがって、開口24の特徴に基づいて補正処理が行われることにより、開口24の特徴に応じた画像ずれを補正することができる。 The characteristics of the optical system 26 include, for example, the characteristics of the aperture 24. Therefore, by performing the correction process based on the characteristics of the aperture 24, it is possible to correct the image shift according to the characteristics of the aperture 24.
 開口24の特徴には、例えば、開口24の重心位置Gが含まれる。したがって、開口24の重心位置Gに基づいて補正処理が行われることにより、開口24の重心位置Gに応じた画像ずれを補正することができる。 The characteristics of the opening 24 include, for example, the center of gravity position G of the opening 24. Therefore, by performing the correction process based on the center of gravity position G of the aperture 24, it is possible to correct the image shift according to the center of gravity position G of the aperture 24.
 開口24の重心位置Gは、例えば、開口24の位置及び/又は形状に基づいて定まる位置である。したがって、開口24の位置及び/又は形状に基づいて補正処理が行われることにより、開口24の位置及び/又は形状に応じた画像ずれを補正することができる。 The center of gravity position G of the opening 24 is, for example, a position determined based on the position and/or shape of the opening 24. Therefore, by performing the correction process based on the position and/or shape of the aperture 24, it is possible to correct image shift according to the position and/or shape of the aperture 24.
 光学系26に関する設計値には、例えば、各分光フィルタ20の波長帯λの組み合わせに関する値が含まれる。したがって、各分光フィルタ20の波長帯λの組み合わせに関する値に基づいて補正処理が行われることにより、各分光フィルタ20の波長帯λの組み合わせに応じた画像ずれを補正することができる。 The design values regarding the optical system 26 include, for example, values regarding the combination of wavelength bands λ of each spectral filter 20. Therefore, by performing the correction process based on the value related to the combination of wavelength bands λ of each spectral filter 20, it is possible to correct image shift according to the combination of wavelength bands λ of each spectral filter 20.
 補正処理は、波長帯λ毎の補正値92に基づいて行われる。したがって、各波長帯λに対応するスペクトル画像72間で画像ずれが異なる場合でも、各波長帯λに対応するスペクトル画像72毎に画像ずれを補正することができる。 The correction process is performed based on the correction value 92 for each wavelength band λ. Therefore, even if the image shift differs between the spectral images 72 corresponding to each wavelength band λ, the image shift can be corrected for each spectral image 72 corresponding to each wavelength band λ.
 各スペクトル画像72は、波長帯画像140を含む。波長帯画像140の位置ずれは、スペクトル画像72毎に異なる。したがって、補正処理によってスペクトル画像72毎に波長帯画像140の位置ずれが補正されることにより、各スペクトル画像72の画像ずれを補正することができる。 Each spectral image 72 includes a wavelength band image 140. The positional shift of the wavelength band image 140 differs for each spectrum image 72. Therefore, by correcting the positional deviation of the wavelength band image 140 for each spectral image 72 through the correction process, the image deviation of each spectral image 72 can be corrected.
 補正処理で用いられる補正値92は、波長帯画像140の位置に応じて異なる。したがって、補正処理において波長帯画像140の位置に応じた補正値92が用いられることにより、波長帯画像140毎に異なる位置ずれを補正することができる。 The correction value 92 used in the correction process differs depending on the position of the wavelength band image 140. Therefore, by using the correction value 92 according to the position of the wavelength band image 140 in the correction process, it is possible to correct positional deviations that differ for each wavelength band image 140.
 補正値92は、スペクトル画像72内の基準位置142に対する波長帯画像140の位置ずれの方向及び/又は量に基づいて定まる。したがって、補正処理において波長帯画像140の位置ずれの方向及び/又は量に対応する補正値92が用いられることにより、波長帯画像140の位置ずれの方向及び/又は量に基づく位置ずれを補正することができる。 The correction value 92 is determined based on the direction and/or amount of positional shift of the wavelength band image 140 with respect to the reference position 142 within the spectrum image 72. Therefore, by using the correction value 92 corresponding to the direction and/or amount of positional deviation of the wavelength band image 140 in the correction process, the positional deviation based on the direction and/or amount of positional deviation of the wavelength band image 140 is corrected. be able to.
 補正値92を導出する補正値導出処理には、ドット122を有するドットチャート120が用いられる。したがって、補正値導出処理において、例えば、ドットチャート120内のドット122の位置等の既知の値を用いて波長帯画像140の位置を特定することができる。 A dot chart 120 having dots 122 is used in the correction value derivation process for deriving the correction value 92. Therefore, in the correction value derivation process, for example, the position of the wavelength band image 140 can be specified using a known value such as the position of the dot 122 in the dot chart 120.
 ドットチャート120は、特徴部分としてドット122を有する。したがって、ドット122よりも複雑な形状を有する特徴部分を含む校正用部材が用いられる場合に比して、波長帯画像140の位置を容易に特定することができる。 The dot chart 120 has dots 122 as a characteristic part. Therefore, the position of the wavelength band image 140 can be specified more easily than when a calibration member including a characteristic portion having a more complicated shape than the dots 122 is used.
 補正値導出処理では、スペクトル画像72内のドット122に対応する位置が基準位置142として設定される。したがって、各スペクトル画像72において、基準位置142に対する各波長帯画像140の位置ずれに基づいて補正値92を導出することができる。 In the correction value derivation process, the position corresponding to the dot 122 in the spectrum image 72 is set as the reference position 142. Therefore, in each spectrum image 72, the correction value 92 can be derived based on the positional shift of each wavelength band image 140 with respect to the reference position 142.
 撮像装置10では、撮像データに対して混信除去処理が行われる。したがって、撮像データに対応する各物理画素44の出力値Yが混信(すなわち、クロストーク)を含む値である場合でも、混信除去処理が行われることにより、出力値Yから各波長帯λに対応した値を分離して抽出することができる。これにより、各波長帯λに対応したスペクトル画像72を撮像画像70から得ることができる。 In the imaging device 10, interference removal processing is performed on the imaging data. Therefore, even if the output value Y of each physical pixel 44 corresponding to the imaging data is a value that includes interference (that is, crosstalk), by performing the interference removal process, the output value Y corresponds to each wavelength band λ. The values can be separated and extracted. Thereby, a spectrum image 72 corresponding to each wavelength band λ can be obtained from the captured image 70.
 複数の分光フィルタ20は、互いに異なる波長帯λを有する。したがって、各波長帯λに対応するスペクトル画像72を得ることができる。 The plurality of spectral filters 20 have mutually different wavelength bands λ. Therefore, a spectrum image 72 corresponding to each wavelength band λ can be obtained.
 複数の分光フィルタ20は、光軸OAの周りに並んで配置されている。したがって、例えば、複数の分光フィルタ20が光軸OAと同心円状に配置される場合に比して、少ないスペースで瞳分割数を確保することができる。 The plurality of spectral filters 20 are arranged side by side around the optical axis OA. Therefore, the number of pupil divisions can be secured in a smaller space than, for example, when a plurality of spectral filters 20 are arranged concentrically with the optical axis OA.
 次に、本実施形態の変形例について説明する。 Next, a modification of this embodiment will be described.
 一例として図19に示す第1変形例では、補正値導出部114は、次の要領で補正値92を導出する。例えば、補正値導出部114は、複数の波長帯画像140のうちのいずれか一つの波長帯画像140を選択し、選択した波長帯画像140を基準位置142として設定する。ここでは、波長帯画像140Cが基準位置142として設定された場合を例に挙げて説明する。 In the first modified example shown in FIG. 19 as an example, the correction value derivation unit 114 derives the correction value 92 in the following manner. For example, the correction value deriving unit 114 selects any one of the plurality of wavelength band images 140 and sets the selected wavelength band image 140 as the reference position 142. Here, a case where the wavelength band image 140C is set as the reference position 142 will be described as an example.
 補正値導出部114は、波長帯画像140Cを基準位置142として設定する以外は、上記実施形態と同様の要領で補正値92を導出する。例えば、補正値導出部114は、基準位置142に対する波長帯画像140Aの位置ずれの方向及び量をスペクトル画像72Aに基づいて導出し、導出した方向及び量に基づいて、波長帯画像140Aに対応する画像画素に対する補正値92Aを導出する。 The correction value deriving unit 114 derives the correction values 92 in the same manner as in the above embodiment, except that the wavelength band image 140C is set as the reference position 142. For example, the correction value deriving unit 114 derives the direction and amount of the positional shift of the wavelength band image 140A with respect to the reference position 142 based on the spectral image 72A, and derives, based on the derived direction and amount, the correction value 92A for the image pixel corresponding to the wavelength band image 140A.
 同様に、補正値導出部114は、基準位置142に対する波長帯画像140Bの位置ずれの方向及び量をスペクトル画像72Bに基づいて導出し、導出した方向及び量に基づいて、波長帯画像140Bに対応する画像画素に対する補正値92Bを導出する。なお、補正値導出部114は、基準位置142として設定された波長帯画像140Cに対応する画像画素に対する補正値92Cを0に設定する。 Similarly, the correction value deriving unit 114 derives the direction and amount of positional deviation of the wavelength band image 140B with respect to the reference position 142 based on the spectral image 72B, and derives the correction value 92B for the image pixels corresponding to the wavelength band image 140B based on the derived direction and amount. Note that the correction value deriving unit 114 sets the correction value 92C for the image pixels corresponding to the wavelength band image 140C, which is set as the reference position 142, to 0.
 第1変形例では、複数の波長帯画像140のうちのいずれか一つの波長帯画像140の位置が基準位置142として設定される。したがって、基準位置142として設定された波長帯画像140の位置に対する残りの波長帯画像140の位置ずれに基づいて補正値92を導出することができる。 In the first modification, the position of any one of the plurality of wavelength band images 140 is set as the reference position 142. Therefore, the correction value 92 can be derived based on the positional deviation of the remaining wavelength band images 140 with respect to the position of the wavelength band image 140 set as the reference position 142.
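The derivation in the first modification reduces to subtracting the reference band's feature position from each band's detected position and negating the result. A minimal sketch in Python (the function name and data layout are illustrative assumptions, not part of the disclosure):

```python
# Hypothetical sketch: derive per-band correction values from the shift of each
# wavelength band image relative to a chosen reference band.

def derive_corrections(positions, reference_band):
    """positions maps band -> (x, y) of the same feature in that band's spectral image.
    Returns band -> (dx, dy), the correction that moves the band onto the reference."""
    ref_x, ref_y = positions[reference_band]
    corrections = {}
    for band, (x, y) in positions.items():
        # Positional deviation of this band with respect to the reference position.
        dx, dy = x - ref_x, y - ref_y
        # The correction cancels the deviation; the reference band itself gets (0, 0).
        corrections[band] = (-dx, -dy)
    return corrections

positions = {"A": (12.0, 8.5), "B": (9.5, 10.0), "C": (10.0, 10.0)}
corr = derive_corrections(positions, reference_band="C")
```

Setting the reference band's own correction to zero falls out naturally, since its deviation relative to itself is zero.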
 なお、各補正値92には、ディストーション及び/又は台形歪みによる画像ずれを補正する補正値が盛り込まれなくてもよい。このようにすると、補正値92にディストーション及び/又は台形歪みによる画像ずれを補正する補正値が盛り込まれる場合に比して、補正値92による補正量を少なくすることができる。 Note that each correction value 92 does not need to include a correction value for correcting image shift due to distortion and/or trapezoidal distortion. In this way, the amount of correction by the correction value 92 can be reduced compared to the case where the correction value 92 includes a correction value for correcting image shift due to distortion and/or trapezoidal distortion.
 一例として図20に示す第2変形例では、補正値導出部114は、次の要領で補正値92を導出する。例えば、補正値導出部114は、複数のスペクトル画像72のうちのいずれか一つのスペクトル画像72を選択する。ここでは、スペクトル画像72Cが選択された場合を例に挙げて説明する。また、補正値導出部114は、スペクトル画像72Cに対して画像処理を行うことにより、スペクトル画像72Cから第1ドット122Aに対応する波長帯画像140Cを抽出する。 In the second modified example shown in FIG. 20 as an example, the correction value derivation unit 114 derives the correction value 92 in the following manner. For example, the correction value deriving unit 114 selects any one of the plurality of spectral images 72. Here, the case where the spectrum image 72C is selected will be described as an example. Further, the correction value deriving unit 114 extracts the wavelength band image 140C corresponding to the first dot 122A from the spectrum image 72C by performing image processing on the spectrum image 72C.
 そして、補正値導出部114は、抽出した波長帯画像140Cを基準位置142として設定する。ここでは、スペクトル画像72Cにおいて第1ドット122Aに対応する波長帯画像140Cが基準位置142として設定された場合を例に挙げて説明する。 Then, the correction value deriving unit 114 sets the extracted wavelength band image 140C as the reference position 142. Here, an example will be described in which the wavelength band image 140C corresponding to the first dot 122A in the spectrum image 72C is set as the reference position 142.
 また、補正値導出部114は、スペクトル画像72Aに対して画像処理を行うことにより、スペクトル画像72Aから第1ドット122Aに対応する波長帯画像140Aを抽出する。同様に、補正値導出部114は、スペクトル画像72Bに対して画像処理を行うことにより、スペクトル画像72Bから第1ドット122Aに対応する波長帯画像140Bを抽出する。 Further, the correction value deriving unit 114 extracts the wavelength band image 140A corresponding to the first dot 122A from the spectrum image 72A by performing image processing on the spectrum image 72A. Similarly, the correction value deriving unit 114 extracts the wavelength band image 140B corresponding to the first dot 122A from the spectrum image 72B by performing image processing on the spectrum image 72B.
 また、補正値導出部114は、スペクトル画像72A内における波長帯画像140Aの位置を画像処理によって特定する。同様に、補正値導出部114は、スペクトル画像72B内における波長帯画像140Bの位置を画像処理によって特定する。また、補正値導出部114は、スペクトル画像72C内における波長帯画像140Cの位置を画像処理によって特定し、特定した波長帯画像140Cの位置を基準位置142の位置として設定する。 Further, the correction value deriving unit 114 identifies the position of the wavelength band image 140A within the spectrum image 72A by image processing. Similarly, the correction value deriving unit 114 identifies the position of the wavelength band image 140B within the spectrum image 72B by image processing. Further, the correction value deriving unit 114 specifies the position of the wavelength band image 140C within the spectrum image 72C by image processing, and sets the specified position of the wavelength band image 140C as the position of the reference position 142.
 そして、上記第1変形例と同様に、補正値導出部114は、基準位置142に対する波長帯画像140Aの位置ずれの方向及び量に基づいて補正値92Aを導出する。また、補正値導出部114は、基準位置142に対する波長帯画像140Bの位置ずれの方向及び量に基づいて補正値92Bを導出する。なお、第2変形例においても、補正値92Cは0に設定される。 Then, similarly to the first modification, the correction value deriving unit 114 derives the correction value 92A based on the direction and amount of positional deviation of the wavelength band image 140A with respect to the reference position 142. Further, the correction value deriving unit 114 derives the correction value 92B based on the direction and amount of positional deviation of the wavelength band image 140B with respect to the reference position 142. Note that also in the second modification, the correction value 92C is set to 0.
 第2変形例では、スペクトル画像72内における波長帯画像140の位置、及び、スペクトル画像72内における基準位置142の位置が画像処理によって特定される。したがって、例えば、ドットチャート120内におけるドット122の位置、及び光学系26に関する設計値等を用いなくても、スペクトル画像72内における波長帯画像140の位置、及び、スペクトル画像72内における基準位置142の位置を特定することができる。 In the second modification, the position of the wavelength band image 140 within the spectral image 72 and the position of the reference position 142 within the spectral image 72 are specified by image processing. Therefore, the position of the wavelength band image 140 within the spectral image 72 and the position of the reference position 142 within the spectral image 72 can be specified without using, for example, the positions of the dots 122 in the dot chart 120 or design values regarding the optical system 26.
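Locating a dot's wavelength band image purely by image processing can be as simple as thresholding and taking an intensity centroid. A hypothetical sketch (an illustrative method, not necessarily the one used in the disclosure):

```python
def centroid(image, threshold):
    """image: 2-D list of intensities. Returns the (x, y) intensity centroid
    of all pixels brighter than threshold."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                total += v
                sx += x * v
                sy += y * v
    if total == 0:
        raise ValueError("no feature brighter than threshold")
    return sx / total, sy / total

img = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
dot_x, dot_y = centroid(img, threshold=1)
```

Because the position comes from the pixel data itself, neither the dot's position on the chart nor the optical design values are needed.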
 なお、各補正値92には、ディストーション及び/又は台形歪みによる画像ずれを補正する補正値が盛り込まれなくてもよい。このようにすると、補正値92にディストーション及び/又は台形歪みによる画像ずれを補正する補正値が盛り込まれる場合に比して、補正値92による補正量を少なくすることができる。 Note that each correction value 92 does not need to include a correction value for correcting image shift due to distortion and/or trapezoidal distortion. In this way, the amount of correction by the correction value 92 can be reduced compared to the case where the correction value 92 includes a correction value for correcting image shift due to distortion and/or trapezoidal distortion.
 一例として図21~図23に示す第3変形例は、第2変形例に対して、ドットチャート120の代わりに、格子チャート150を用いる例である。格子チャート150は、格子柄を有する。格子柄は、格子で囲まれた各領域152の色が隣接する領域152で異なる柄であるチェック柄でもよく、格子状の線のみによって構成された柄でもよい。第3変形例では、格子柄の一例としてチェック柄が用いられている。隣接する領域152の色は、どのような組み合わせでもよい。 As an example, a third modification shown in FIGS. 21 to 23 is an example in which a grid chart 150 is used instead of the dot chart 120 of the second modification. The grid chart 150 has a lattice pattern. The lattice pattern may be a check pattern in which each region 152 surrounded by the lattice has a color different from that of the adjacent regions 152, or may be a pattern composed only of lattice-like lines. In the third modification, a check pattern is used as an example of the lattice pattern. The adjacent regions 152 may have any combination of colors.
 格子チャート150が用いられる場合、格子柄に含まれる交点154が特徴部分に相当する。複数の領域152は、格子チャート150の縦方向及び横方向に直線状に並んで配列されており、複数の交点154も、格子チャート150の縦方向及び横方向に直線状に並んでいる。一例として、各領域152の形状は、正方形であり、複数の交点154は、格子チャート150の縦方向及び横方向に等間隔に並んでいる。格子チャート150は、本開示の技術に係る「被写体」及び「校正用部材」の一例である。交点154は、本開示の技術に係る「特徴部分」及び「交点」の一例である。 When the grid chart 150 is used, the intersection points 154 included in the grid pattern correspond to characteristic parts. The plurality of regions 152 are arranged linearly in the vertical and horizontal directions of the grid chart 150, and the plurality of intersections 154 are also linearly arranged in the vertical and horizontal directions of the grid chart 150. As an example, each region 152 has a square shape, and the plurality of intersection points 154 are arranged at equal intervals in the vertical and horizontal directions of the grid chart 150. The grid chart 150 is an example of a "subject" and a "calibration member" according to the technology of the present disclosure. The intersection 154 is an example of a “characteristic portion” and an “intersection” according to the technology of the present disclosure.
 図21には、格子チャート150が撮像装置10によって撮像される場合に受光面34Aに結像される光学像160A~160C、光学像162A~162C、及び光学像164A~164Cの一例が示されている。光学像160A~160Cは、複数の交点154のうちの上から2行目の右から2列目に位置する第1交点154Aに対応する光学像であり、光学像162A~162Cは、複数の交点154のうちの下から2行目の右から2列目に位置する第2交点154Bに対応する光学像であり、光学像164A~164Cは、複数の交点154のうちの中央に位置する第3交点154Cに対応する光学像である。 FIG. 21 shows an example of optical images 160A to 160C, optical images 162A to 162C, and optical images 164A to 164C that are formed on the light receiving surface 34A when the grid chart 150 is imaged by the imaging device 10. The optical images 160A to 160C correspond to the first intersection 154A, which is located in the second row from the top and the second column from the right among the plurality of intersections 154; the optical images 162A to 162C correspond to the second intersection 154B, which is located in the second row from the bottom and the second column from the right; and the optical images 164A to 164C correspond to the third intersection 154C, which is located at the center of the plurality of intersections 154.
 光学像160A、光学像162A、及び光学像164Aは、第1波長帯λに対応する光学像であり、光学像160B、光学像162B、及び光学像164Bは、第2波長帯λに対応する光学像であり、光学像160C、光学像162C、及び光学像164Cは、第3波長帯λに対応する光学像である。第1基準位置166Aは、受光面34A内における第1交点154Aに対応する位置であり、第2基準位置166Bは、受光面34A内における第2交点154Bに対応する位置であり、第3基準位置166Cは、受光面34A内における第3交点154Cに対応する位置である。 The optical image 160A, the optical image 162A, and the optical image 164A are optical images corresponding to the first wavelength band λ 1 , and the optical image 160B, the optical image 162B, and the optical image 164B are the optical images corresponding to the second wavelength band λ 2 . The optical image 160C, the optical image 162C, and the optical image 164C are optical images corresponding to the third wavelength band λ 3 . The first reference position 166A is a position corresponding to the first intersection 154A in the light receiving surface 34A, the second reference position 166B is a position corresponding to the second intersection 154B in the light receiving surface 34A, and the third reference position 166C is a position corresponding to the third intersection 154C within the light receiving surface 34A.
 光学像160A~160Cは、第1基準位置166Aに対して位置がずれている。光学像160A~160Cの位置ずれの方向及び量は互いに異なる。同様に、光学像162A~162Cは、第2基準位置166Bに対して位置がずれており、光学像164A~164Cは、第3基準位置166Cに対して位置がずれている。光学像162A~162Cの位置ずれの方向及び量は互いに異なり、光学像164A~164Cの位置ずれの方向及び量は互いに異なる。 The optical images 160A to 160C are shifted from the first reference position 166A. The direction and amount of positional deviation of the optical images 160A to 160C are different from each other. Similarly, the optical images 162A to 162C are shifted from the second reference position 166B, and the optical images 164A to 164C are misaligned from the third reference position 166C. The directions and amounts of positional deviations of optical images 162A to 162C are different from each other, and the directions and amounts of positional deviations of optical images 164A to 164C are different from each other.
 一例として図22に示すように、スペクトル画像取得部112は、格子チャート150が撮像されることにより得られた各スペクトル画像72A~72Cを取得する。例えば、スペクトル画像72Aには、光学像160Aに対応する波長帯画像170Aが含まれており、スペクトル画像72Bには、光学像160Bに対応する波長帯画像170Bが含まれており、スペクトル画像72Cには、光学像160Cに対応する波長帯画像170Cが含まれている。波長帯画像170A~170Cの位置ずれの方向及び量は互いに異なる。以下、波長帯画像170A~170Cを区別して説明する必要が無い場合、波長帯画像170A~170Cを「波長帯画像170」と称する。 As an example, as shown in FIG. 22, the spectral image acquisition unit 112 acquires the spectral images 72A to 72C obtained by imaging the grid chart 150. For example, the spectral image 72A includes a wavelength band image 170A corresponding to the optical image 160A, the spectral image 72B includes a wavelength band image 170B corresponding to the optical image 160B, and the spectral image 72C includes a wavelength band image 170C corresponding to the optical image 160C. The directions and amounts of positional deviation of the wavelength band images 170A to 170C are different from each other. Hereinafter, when it is not necessary to distinguish between them, the wavelength band images 170A to 170C are referred to as the "wavelength band image 170."
 一例として図23に示すように、補正値導出部114は、次の要領で補正値92を導出する。例えば、補正値導出部114は、複数のスペクトル画像72のうちのいずれか一つのスペクトル画像72を選択する。ここでは、スペクトル画像72Cが選択された場合を例に挙げて説明する。また、補正値導出部114は、スペクトル画像72Cに対して画像処理を行うことにより、スペクトル画像72Cから波長帯画像170Cを抽出する。 As shown in FIG. 23 as an example, the correction value derivation unit 114 derives the correction value 92 in the following manner. For example, the correction value deriving unit 114 selects any one of the plurality of spectral images 72. Here, the case where the spectrum image 72C is selected will be described as an example. Further, the correction value deriving unit 114 extracts the wavelength band image 170C from the spectrum image 72C by performing image processing on the spectrum image 72C.
 そして、補正値導出部114は、抽出した波長帯画像170Cを基準位置172として設定する。ここでは、スペクトル画像72Cにおいて第1交点154Aに対応する波長帯画像170Cが基準位置172として設定された場合を例に挙げて説明する。 Then, the correction value deriving unit 114 sets the extracted wavelength band image 170C as the reference position 172. Here, an example will be described in which a wavelength band image 170C corresponding to the first intersection 154A in the spectrum image 72C is set as the reference position 172.
 また、補正値導出部114は、スペクトル画像72Aに対して画像処理を行うことにより、スペクトル画像72Aから第1交点154Aに対応する波長帯画像170Aを抽出する。同様に、補正値導出部114は、スペクトル画像72Bに対して画像処理を行うことにより、スペクトル画像72Bから第1交点154Aに対応する波長帯画像170Bを抽出する。 Further, the correction value deriving unit 114 extracts the wavelength band image 170A corresponding to the first intersection point 154A from the spectrum image 72A by performing image processing on the spectrum image 72A. Similarly, the correction value deriving unit 114 extracts the wavelength band image 170B corresponding to the first intersection point 154A from the spectrum image 72B by performing image processing on the spectrum image 72B.
 また、補正値導出部114は、スペクトル画像72A内における波長帯画像170Aの位置を画像処理によって特定する。同様に、補正値導出部114は、スペクトル画像72B内における波長帯画像170Bの位置を画像処理によって特定する。また、補正値導出部114は、スペクトル画像72C内における波長帯画像170Cの位置を画像処理によって特定し、特定した波長帯画像170Cの位置を基準位置172の位置として設定する。 Further, the correction value deriving unit 114 identifies the position of the wavelength band image 170A within the spectrum image 72A by image processing. Similarly, the correction value deriving unit 114 identifies the position of the wavelength band image 170B within the spectrum image 72B by image processing. Further, the correction value deriving unit 114 specifies the position of the wavelength band image 170C within the spectrum image 72C by image processing, and sets the specified position of the wavelength band image 170C as the position of the reference position 172.
 そして、上記第2変形例と同様に、補正値導出部114は、基準位置172に対する波長帯画像170Aの位置ずれの方向及び量に基づいて補正値92Aを導出する。また、補正値導出部114は、基準位置172に対する波長帯画像170Bの位置ずれの方向及び量に基づいて補正値92Bを導出する。なお、第3変形例においても、補正値92Cは0に設定される。 Then, similarly to the second modification, the correction value deriving unit 114 derives the correction value 92A based on the direction and amount of positional deviation of the wavelength band image 170A with respect to the reference position 172. Further, the correction value deriving unit 114 derives the correction value 92B based on the direction and amount of positional deviation of the wavelength band image 170B with respect to the reference position 172. Note that also in the third modification, the correction value 92C is set to 0.
 第3変形例では、格子チャート150は、特徴部分として格子柄を有する。したがって、格子柄よりも複雑な形状の特徴部分を有する校正用部材が用いられる場合に比して、波長帯画像170の位置を容易に特定することができる。 In the third modification, the grid chart 150 has a grid pattern as a characteristic part. Therefore, the position of the wavelength band image 170 can be specified more easily than when a calibration member having a characteristic portion having a more complicated shape than a checkered pattern is used.
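One reason a check pattern is easy to process is that its interior corners (the intersections 154) can be found with a trivial 2×2 neighborhood test on a binarized image. A hypothetical sketch, assuming a binary checkered image (an illustrative detector, not the disclosed implementation):

```python
def checker_corners(image):
    """Find interior corners of a binary check pattern: a corner sits between
    2x2 pixels whose diagonal values match and whose adjacent values differ."""
    corners = []
    for y in range(len(image) - 1):
        for x in range(len(image[0]) - 1):
            a, b = image[y][x], image[y][x + 1]
            c, d = image[y + 1][x], image[y + 1][x + 1]
            if a == d and b == c and a != b:
                corners.append((x + 0.5, y + 0.5))
    return corners

# A 4x4 checkerboard made of 2x2 blocks has exactly one interior corner.
board = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
corners = checker_corners(board)
```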
 なお、各補正値92には、ディストーション及び/又は台形歪みによる画像ずれを補正する補正値が盛り込まれなくてもよい。このようにすると、補正値92にディストーション及び/又は台形歪みによる画像ずれを補正する補正値が盛り込まれる場合に比して、補正値92による補正量を少なくすることができる。 Note that each correction value 92 does not need to include a correction value for correcting image shift due to distortion and/or trapezoidal distortion. In this way, the amount of correction by the correction value 92 can be reduced compared to the case where the correction value 92 includes a correction value for correcting image shift due to distortion and/or trapezoidal distortion.
 一例として図24に示す第4変形例では、第3変形例で説明した格子チャート150が用いられる。補正値導出部114は、第3変形例と同様に、基準位置172に対する波長帯画像170の位置ずれの方向及び量をスペクトル画像72に基づいて導出し、導出した方向及び量に基づいて、波長帯画像170に対応する画像画素に対する補正値92を導出する。 As an example, in a fourth modification shown in FIG. 24, the grid chart 150 described in the third modification is used. As in the third modification, the correction value deriving unit 114 derives the direction and amount of positional deviation of the wavelength band image 170 with respect to the reference position 172 based on the spectral image 72, and derives the correction value 92 for the image pixels corresponding to the wavelength band image 170 based on the derived direction and amount.
 ただし、第4変形例では、補正値導出部114は、例えば、格子チャート150の各行の交点154が格子チャート150の横方向に直線状に並ぶことに基づいて各補正値92を導出する。例えば、格子チャート150の各行の交点154に対応する波長帯画像170は、スペクトル画像72の縦方向に異なる量の位置ずれを有するが、補正値92として、格子チャート150の各行の交点154に対応する波長帯画像170の位置がスペクトル画像72の縦方向に揃う補正値が導出される。一例として図24には、格子チャート150の上から2行目の交点154が格子チャート150の横方向に直線状に並ぶことに基づいて各補正値92が導出される態様が示されている。 However, in the fourth modification, the correction value deriving unit 114 derives each correction value 92 based on the fact that, for example, the intersections 154 in each row of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150. For example, although the wavelength band images 170 corresponding to the intersections 154 in each row of the grid chart 150 have different amounts of positional deviation in the vertical direction of the spectral image 72, correction values are derived as the correction values 92 that align the positions of the wavelength band images 170 corresponding to the intersections 154 in each row of the grid chart 150 in the vertical direction of the spectral image 72. As an example, FIG. 24 shows a mode in which each correction value 92 is derived based on the fact that the intersections 154 in the second row from the top of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150.
 第4変形例では、例えば、格子チャート150の各行の交点154が格子チャート150の横方向に直線状に並ぶことに基づいて各補正値92が導出される。したがって、例えば、補正値92として、格子チャート150の各行の交点154に対応する波長帯画像170の位置がスペクトル画像72の縦方向に揃う補正値を得ることができる。これにより、例えば、補正値92として、格子チャート150の各行の交点154に対応する波長帯画像170の位置がスペクトル画像72の縦方向にずれる補正値が導出される場合に比して、スペクトル画像72の縦方向の歪を抑制することができる。 In the fourth modification, for example, each correction value 92 is derived based on the fact that the intersections 154 in each row of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150. Therefore, for example, a correction value can be obtained as the correction value 92 that aligns the positions of the wavelength band images 170 corresponding to the intersections 154 in each row of the grid chart 150 in the vertical direction of the spectral image 72. As a result, vertical distortion of the spectral image 72 can be suppressed compared to a case where, for example, a correction value that shifts the positions of the wavelength band images 170 corresponding to the intersections 154 in each row of the grid chart 150 in the vertical direction of the spectral image 72 is derived as the correction value 92.
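The fourth modification's constraint — that one chart row's intersections must map to a single vertical coordinate — can be sketched as deriving, for each detected point, the vertical correction that moves it onto the row's common line. The common line is taken here as the mean y, an illustrative assumption:

```python
def row_alignment_corrections(row_points):
    """row_points: detected (x, y) positions of one chart row's intersections.
    Because the chart row is a straight horizontal line, each point receives the
    vertical correction that moves it onto the row's common y (the mean here)."""
    target_y = sum(y for _, y in row_points) / len(row_points)
    return [target_y - y for _, y in row_points]

# Detected positions drift vertically; the corrections cancel that drift.
row_points = [(0.0, 10.0), (5.0, 10.4), (10.0, 9.6)]
dy = row_alignment_corrections(row_points)
```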
 一例として図25に示す第5変形例のように、補正値導出部114は、例えば、格子チャート150の各列の交点154が格子チャート150の縦方向に直線状に並ぶことに基づいて各補正値92を導出してもよい。例えば、格子チャート150の各列の交点154に対応する波長帯画像170は、スペクトル画像72の横方向に異なる量の位置ずれを有するが、例えば、補正値92として、格子チャート150の各列の交点154に対応する波長帯画像170の位置がスペクトル画像72の横方向に揃う補正値が導出されてもよい。一例として図25には、格子チャート150の右から2列目の交点154が格子チャート150の縦方向に直線状に並ぶことに基づいて各補正値92が導出される態様が示されている。 As an example, as in a fifth modification shown in FIG. 25, the correction value deriving unit 114 may derive each correction value 92 based on the fact that, for example, the intersections 154 in each column of the grid chart 150 are lined up in a straight line in the vertical direction of the grid chart 150. For example, although the wavelength band images 170 corresponding to the intersections 154 in each column of the grid chart 150 have different amounts of positional deviation in the horizontal direction of the spectral image 72, a correction value that aligns the positions of the wavelength band images 170 corresponding to the intersections 154 in each column of the grid chart 150 in the horizontal direction of the spectral image 72 may be derived as the correction value 92. As an example, FIG. 25 shows a mode in which each correction value 92 is derived based on the fact that the intersections 154 in the second column from the right of the grid chart 150 are lined up in a straight line in the vertical direction of the grid chart 150.
 第5変形例では、例えば、補正値92として、格子チャート150の各列の交点154に対応する波長帯画像170の位置がスペクトル画像72の横方向に揃う補正値を得ることができる。これにより、例えば、補正値92として、格子チャート150の各列の交点154に対応する波長帯画像170の位置がスペクトル画像72の横方向にずれる補正値92が導出される場合に比して、スペクトル画像72の横方向の歪を抑制することができる。 In the fifth modification, for example, a correction value can be obtained as the correction value 92 that aligns the positions of the wavelength band images 170 corresponding to the intersections 154 in each column of the grid chart 150 in the horizontal direction of the spectral image 72. As a result, horizontal distortion of the spectral image 72 can be suppressed compared to a case where, for example, a correction value 92 that shifts the positions of the wavelength band images 170 corresponding to the intersections 154 in each column of the grid chart 150 in the horizontal direction of the spectral image 72 is derived.
 一例として図26に示す第6変形例のように、補正値導出部114は、例えば、格子チャート150の各行の交点154が格子チャート150の横方向に直線状に並ぶこと、及び、格子チャート150の各列の交点154が格子チャート150の縦方向に直線状に並ぶことに基づいて各補正値92を導出してもよい。一例として図26には、格子チャート150の上から2行目の交点154が格子チャート150の横方向に直線状に並ぶこと、及び、格子チャート150の右から2列目の交点154が格子チャート150の縦方向に直線状に並ぶことに基づいて各補正値92が導出される態様が示されている。 As an example, as in a sixth modification shown in FIG. 26, the correction value deriving unit 114 may derive each correction value 92 based on the fact that, for example, the intersections 154 in each row of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150 and the intersections 154 in each column of the grid chart 150 are lined up in a straight line in the vertical direction of the grid chart 150. As an example, FIG. 26 shows a mode in which each correction value 92 is derived based on the fact that the intersections 154 in the second row from the top of the grid chart 150 are lined up in a straight line in the horizontal direction of the grid chart 150 and the intersections 154 in the second column from the right of the grid chart 150 are lined up in a straight line in the vertical direction of the grid chart 150.
 第6変形例では、例えば、補正値92として、格子チャート150の各行の交点154に対応する波長帯画像170の位置がスペクトル画像72の縦方向に揃い、かつ格子チャート150の各列の交点154に対応する波長帯画像170の位置がスペクトル画像72の横方向に揃う補正値を得ることができる。これにより、例えば、補正値92として、格子チャート150の各行の交点154に対応する波長帯画像170の位置がスペクトル画像72の縦方向にずれる補正値、及び/又は、格子チャート150の各列の交点154に対応する波長帯画像170の位置がスペクトル画像72の横方向にずれる補正値が導出される場合に比して、スペクトル画像72の縦方向及び/又は横方向の歪を抑制することができる。 In the sixth modification, for example, a correction value can be obtained as the correction value 92 such that the positions of the wavelength band images 170 corresponding to the intersections 154 in each row of the grid chart 150 are aligned in the vertical direction of the spectral image 72 and the positions of the wavelength band images 170 corresponding to the intersections 154 in each column of the grid chart 150 are aligned in the horizontal direction of the spectral image 72. As a result, vertical and/or horizontal distortion of the spectral image 72 can be suppressed compared to a case where, for example, a correction value that shifts the positions of the wavelength band images 170 corresponding to the intersections 154 in each row in the vertical direction of the spectral image 72 and/or a correction value that shifts the positions of the wavelength band images 170 corresponding to the intersections 154 in each column in the horizontal direction of the spectral image 72 is derived as the correction value 92.
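Combining the row and column constraints of the sixth modification amounts to solving for one shared y per chart row and one shared x per chart column. A hypothetical sketch that takes per-row and per-column means as the shared coordinates (an illustrative choice, not prescribed by the disclosure):

```python
from collections import defaultdict

def grid_alignment_corrections(points):
    """points: dict (row, col) -> detected (x, y). Returns (row, col) -> (dx, dy)
    such that, after correction, every chart row shares one y coordinate and
    every chart column shares one x coordinate."""
    row_ys, col_xs = defaultdict(list), defaultdict(list)
    for (r, c), (x, y) in points.items():
        row_ys[r].append(y)
        col_xs[c].append(x)
    # Shared coordinates: mean y per row, mean x per column.
    row_y = {r: sum(v) / len(v) for r, v in row_ys.items()}
    col_x = {c: sum(v) / len(v) for c, v in col_xs.items()}
    return {(r, c): (col_x[c] - x, row_y[r] - y)
            for (r, c), (x, y) in points.items()}

points = {(0, 0): (0.0, 0.0), (0, 1): (2.0, 0.4),
          (1, 0): (0.4, 2.0), (1, 1): (2.0, 2.0)}
grid_corr = grid_alignment_corrections(points)
```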
 また、第6変形例では、各補正値92には、台形歪みによる画像ずれを補正する補正値が盛り込まれてもよい。このようにすると、補正値92に台形歪みによる画像ずれを補正する補正値が盛り込まれていない場合に比して、スペクトル画像72の台形歪みを抑制することができる。 In the sixth modification, each correction value 92 may include a correction value for correcting image shift due to trapezoidal distortion. In this way, the trapezoidal distortion of the spectral image 72 can be suppressed compared to the case where the correction value 92 does not include a correction value for correcting image shift due to trapezoidal distortion.
 また、第4変形例から第6変形例では、補正値導出部114は、例えば、格子チャート150の各行の交点154が格子チャート150の横方向に等間隔に並ぶことに基づいて各補正値92を導出してもよい。このようにすると、格子チャート150の各行の交点154が格子チャート150の横方向に等間隔に並ぶことに関係なく各補正値92が導出される場合に比して、スペクトル画像72の横方向の歪を抑制することができる。 In the fourth to sixth modifications, the correction value deriving unit 114 may derive each correction value 92 based on the fact that, for example, the intersections 154 in each row of the grid chart 150 are lined up at equal intervals in the horizontal direction of the grid chart 150. In this way, horizontal distortion of the spectral image 72 can be suppressed compared to a case where each correction value 92 is derived without regard to the fact that the intersections 154 in each row of the grid chart 150 are lined up at equal intervals in the horizontal direction of the grid chart 150.
 また、第4変形例から第6変形例では、補正値導出部114は、例えば、格子チャート150の各列の交点154が格子チャート150の縦方向に等間隔に並ぶことに基づいて各補正値92を導出してもよい。このようにすると、格子チャート150の各列の交点154が格子チャート150の縦方向に等間隔に並ぶことに関係なく各補正値92が導出される場合に比して、スペクトル画像72の縦方向の歪を抑制することができる。 In the fourth to sixth modifications, the correction value deriving unit 114 may derive each correction value 92 based on the fact that, for example, the intersections 154 in each column of the grid chart 150 are lined up at equal intervals in the vertical direction of the grid chart 150. In this way, vertical distortion of the spectral image 72 can be suppressed compared to a case where each correction value 92 is derived without regard to the fact that the intersections 154 in each column of the grid chart 150 are lined up at equal intervals in the vertical direction of the grid chart 150.
 また、第4変形例から第6変形例では、補正値導出部114は、例えば、格子チャート150の各行の交点154が格子チャート150の横方向に等間隔に並ぶこと、及び、格子チャート150の各列の交点154が格子チャート150の縦方向に等間隔に並ぶことに基づいて各補正値92を導出してもよい。このようにすると、格子チャート150の各行の交点154が格子チャート150の横方向に等間隔に並ぶことに関係なく各補正値92が導出される場合、及び/又は、格子チャート150の各列の交点154が格子チャート150の縦方向に等間隔に並ぶことに関係なく各補正値92が導出される場合に比して、スペクトル画像72の縦方向及び/又は横方向の歪を抑制することができる。 In the fourth to sixth modifications, the correction value deriving unit 114 may derive each correction value 92 based on the fact that, for example, the intersections 154 in each row of the grid chart 150 are lined up at equal intervals in the horizontal direction of the grid chart 150 and the intersections 154 in each column of the grid chart 150 are lined up at equal intervals in the vertical direction of the grid chart 150. In this way, vertical and/or horizontal distortion of the spectral image 72 can be suppressed compared to a case where each correction value 92 is derived without regard to the fact that the intersections 154 in each row are lined up at equal intervals in the horizontal direction and/or without regard to the fact that the intersections 154 in each column are lined up at equal intervals in the vertical direction.
 また、格子チャート150の各行の交点154が格子チャート150の横方向に等間隔に並ぶこと、及び、格子チャート150の各列の交点154が格子チャート150の縦方向に等間隔に並ぶことに基づいて各補正値92が導出される場合に、各補正値92には、台形歪みによる画像ずれを補正する補正値が盛り込まれてもよい。このようにすると、補正値92に台形歪みによる画像ずれを補正する補正値が盛り込まれていない場合に比して、スペクトル画像72の台形歪みを抑制することができる。 Further, when each correction value 92 is derived based on the fact that the intersections 154 in each row of the grid chart 150 are lined up at equal intervals in the horizontal direction of the grid chart 150 and the intersections 154 in each column of the grid chart 150 are lined up at equal intervals in the vertical direction of the grid chart 150, each correction value 92 may include a correction value for correcting image shift due to trapezoidal distortion. In this way, the trapezoidal distortion of the spectral image 72 can be suppressed compared to a case where the correction value 92 does not include a correction value for correcting image shift due to trapezoidal distortion.
 また、例えば、格子チャート150の各行の交点154が格子チャート150の横方向に等間隔に並ぶこと、及び/又は、格子チャート150の各列の交点154が格子チャート150の縦方向に等間隔に並ぶことに基づいて各補正値92が導出される場合に、補正値92は、次の要領で導出されてもよい。 Further, when each correction value 92 is derived based on the fact that, for example, the intersections 154 in each row of the grid chart 150 are lined up at equal intervals in the horizontal direction of the grid chart 150 and/or the intersections 154 in each column of the grid chart 150 are lined up at equal intervals in the vertical direction of the grid chart 150, the correction value 92 may be derived in the following manner.
 すなわち、補正値92として、格子チャート150の中央領域以外の領域に位置する交点154に対応する波長帯画像170間の間隔を、格子チャート150の中央領域に位置する交点154に対応する波長帯画像170間の間隔に合わせる補正値が導出されてもよい。このようにすると、格子チャート150の中央領域に位置する交点154に対応する波長帯画像170間の間隔と関係なく補正値92が導出される場合に比して、スペクトル画像72の縦方向及び/又は横方向の歪を抑制することができる。 That is, a correction value may be derived as the correction value 92 that matches the intervals between the wavelength band images 170 corresponding to the intersections 154 located in regions other than the central region of the grid chart 150 to the intervals between the wavelength band images 170 corresponding to the intersections 154 located in the central region of the grid chart 150. In this way, vertical and/or horizontal distortion of the spectral image 72 can be suppressed compared to a case where the correction value 92 is derived without regard to the intervals between the wavelength band images 170 corresponding to the intersections 154 located in the central region of the grid chart 150.
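Matching peripheral intervals to the central interval can be sketched as rescaling a row of detected positions so that every step equals the step measured at the chart's central region. The function and indexing below are illustrative assumptions:

```python
def normalize_intervals(xs, center_index):
    """xs: detected x positions of one row's intersection images, left to right.
    Rescales so every interval equals the interval measured at the central
    region, keeping the central point fixed."""
    step = xs[center_index + 1] - xs[center_index]  # central interval = target
    x0 = xs[center_index]
    return [x0 + (i - center_index) * step for i in range(len(xs))]

xs = [0.0, 4.0, 9.0, 14.5]                 # peripheral intervals drift from the centre
aligned = normalize_intervals(xs, center_index=1)
dx = [a - x for a, x in zip(aligned, xs)]  # the per-point correction
```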
 一例として図27に示す第7変形例は、第2変形例に対して、ドットチャート120の代わりに、被写体4として検査用物体180A~180Dを用いる例である。一例として、各検査用物体180A~180Dは、被検体である。各検査用物体180A~180Dは、どのようなものでもよい。検査用物体180A~180Dは、互いに異なる種類の物体である。以下、検査用物体180A~180Dを区別して説明する必要が無い場合には、各検査用物体180A~180Dを「検査用物体180」と称する。各検査用物体180は、特徴部分182を有している。ここでは、複数の検査用物体180の数が4つである例が挙げられているが、複数の検査用物体180の数はいくつでもよい。検査用物体180は、本開示の技術に係る「被写体」及び「検査用物体」の一例である。 As an example, a seventh modification shown in FIG. 27 is an example in which inspection objects 180A to 180D are used as the subject 4 instead of the dot chart 120 in the second modification. As an example, each of the inspection objects 180A to 180D is a subject. Each inspection object 180A to 180D may be of any type. The inspection objects 180A to 180D are different types of objects. Hereinafter, unless it is necessary to explain the inspection objects 180A to 180D separately, each of the inspection objects 180A to 180D will be referred to as an "inspection object 180." Each test object 180 has a feature 182. Here, an example is given in which the number of the plurality of inspection objects 180 is four, but the number of the plurality of inspection objects 180 may be any number. The inspection object 180 is an example of a "subject" and "inspection object" according to the technology of the present disclosure.
 図27には、検査用物体180A~180Dが撮像装置10によって撮像される場合に受光面34Aに結像される光学像190A~190Cの一例が示されている。一例として、光学像190A~190Cは、第1検査用物体180Aの特徴部分182(以下、「第1特徴部分182A」と称する)に対応する光学像である。光学像190Aは、第1波長帯λに対応する光学像であり、光学像190Bは、第2波長帯λに対応する光学像であり、光学像190Cは、第3波長帯λに対応する光学像である。基準位置192は、受光面34A内における第1特徴部分182Aに対応する位置である。光学像190A~190Cは、基準位置192に対して位置がずれている。光学像190A~190Cの位置ずれの方向及び量は互いに異なる。 FIG. 27 shows an example of optical images 190A to 190C formed on the light receiving surface 34A when the inspection objects 180A to 180D are imaged by the imaging device 10. As an example, the optical images 190A to 190C are optical images corresponding to a characteristic portion 182 (hereinafter referred to as “first characteristic portion 182A”) of the first inspection object 180A. The optical image 190A is an optical image corresponding to the first wavelength band λ 1 , the optical image 190B is an optical image corresponding to the second wavelength band λ 2 , and the optical image 190C is an optical image corresponding to the third wavelength band λ 3 . The corresponding optical image. The reference position 192 is a position corresponding to the first characteristic portion 182A within the light receiving surface 34A. The optical images 190A to 190C are shifted from the reference position 192. The direction and amount of positional deviation of the optical images 190A to 190C are different from each other.
As shown in FIG. 27 as an example, the spectral image 72A includes a wavelength band image 200A corresponding to the optical image 190A, the spectral image 72B includes a wavelength band image 200B corresponding to the optical image 190B, and the spectral image 72C includes a wavelength band image 200C corresponding to the optical image 190C. The direction and amount of the positional shift differ among the wavelength band images 200A to 200C. Hereinafter, when there is no need to distinguish among them, the wavelength band images 200A to 200C are referred to as "wavelength band images 200."
The correction value deriving unit 114 derives the correction values 92 in the following manner. For example, the correction value deriving unit 114 selects one of the plurality of spectral images 72. Here, the case where the spectral image 72C is selected is described as an example. The correction value deriving unit 114 then extracts the wavelength band image 200C from the spectral image 72C by performing image processing on the spectral image 72C.

The correction value deriving unit 114 then sets the extracted wavelength band image 200C as the reference position 202. Here, the case where the wavelength band image 200C corresponding to the first characteristic portion 182A in the spectral image 72C is set as the reference position 202 is described as an example.

The correction value deriving unit 114 also extracts the wavelength band image 200A corresponding to the first characteristic portion 182A from the spectral image 72A by performing image processing on the spectral image 72A. Similarly, the correction value deriving unit 114 extracts the wavelength band image 200B corresponding to the first characteristic portion 182A from the spectral image 72B by performing image processing on the spectral image 72B.

The correction value deriving unit 114 further identifies the position of the wavelength band image 200A within the spectral image 72A by image processing, and similarly identifies the position of the wavelength band image 200B within the spectral image 72B. The correction value deriving unit 114 also identifies the position of the wavelength band image 200C within the spectral image 72C by image processing and sets the identified position of the wavelength band image 200C as the reference position 202.
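The "identify the position by image processing" step is not tied to a particular algorithm in this disclosure; as one hedged illustration, an intensity-weighted centroid can localize a bright characteristic portion within a band image. The function name and the test image below are hypothetical, not part of the disclosure:

```python
import numpy as np

def locate_feature(band_image):
    """Hypothetical sketch: identify the position of a feature (e.g. a
    bright characteristic portion) within a spectral image by computing
    its intensity-weighted centroid."""
    img = np.asarray(band_image, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)          # row (y) and column (x) index grids
    return (float((xs * img).sum() / total),  # x position
            float((ys * img).sum() / total))  # y position

img = np.zeros((5, 5))
img[1, 3] = 1.0  # single bright pixel at (x=3, y=1)
pos = locate_feature(img)
```

In practice a sub-pixel registration method (e.g. phase correlation or template matching) would likely be used instead; the centroid is only the simplest stand-in.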
Then, as in the second modification, the correction value deriving unit 114 derives the correction value 92A based on the direction and amount of the positional shift of the wavelength band image 200A with respect to the reference position 202, and derives the correction value 92B based on the direction and amount of the positional shift of the wavelength band image 200B with respect to the reference position 202. In the seventh modification as well, the correction value 92C is set to 0.
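The derivation step can be sketched as follows: each band's correction is the value that cancels the shift of its feature position relative to the reference band's feature position, so the reference band's correction is 0. This is a minimal sketch; the band names and positions below are hypothetical examples, not values from the disclosure:

```python
import numpy as np

def derive_corrections(band_positions, ref_band):
    """Hypothetical sketch: derive per-band correction values.

    band_positions: dict mapping band name -> (x, y) position of the
    feature image within that band's spectral image.
    ref_band: band whose feature position serves as the reference position.

    Returns dict mapping band name -> (dx, dy) correction that, when
    applied to that band, cancels its shift relative to the reference.
    """
    ref = np.asarray(band_positions[ref_band], dtype=float)
    corrections = {}
    for band, pos in band_positions.items():
        shift = np.asarray(pos, dtype=float) - ref  # direction and amount of shift
        corrections[band] = tuple(-shift)           # correction cancels the shift
    return corrections

positions = {"λ1": (105.0, 98.0), "λ2": (97.0, 103.0), "λ3": (100.0, 100.0)}
corr = derive_corrections(positions, ref_band="λ3")
```

By construction the reference band ("λ3" here) receives a zero correction, matching the statement that the correction value 92C is set to 0.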
In the seventh modification, a plurality of inspection objects 180 are used as the subject 4 when the correction values 92 are derived. Therefore, the correction values 92 can be derived without using a calibration member (that is, a member dedicated to deriving the correction values 92).

Note that the correction value derivation processing according to the seventh modification may be incorporated into the correction processing in the imaging device 10, and the correction values 92 derived by that processing may be used in the correction processing.
As an example, an eighth modification shown in FIG. 28 differs from the seventh modification in that a plurality of inspection objects 180A to 180E of the same type are used as the subject 4. Hereinafter, when there is no need to distinguish among them, each of the inspection objects 180A to 180E is referred to as an "inspection object 180." Each inspection object 180 has a first characteristic portion 184 and a second characteristic portion 186.

The correction value deriving unit 114, for example, sets the wavelength band image 200 corresponding to the first characteristic portion 184 as the reference position 202, derives the direction and amount of the positional shift of the wavelength band image 200 with respect to the reference position 202 based on the spectral images 72, and, based on the derived direction and amount, derives the correction value 92 for the image pixels corresponding to the wavelength band image 200.
In the eighth modification, however, the correction value deriving unit 114 derives each correction value 92 based on, for example, the fact that the position of the first characteristic portion 184 relative to the second characteristic portion 186 is the same in every inspection object 180. For example, although the wavelength band image 200 corresponding to the first characteristic portion 184 of each inspection object 180 is shifted in a different direction and/or by a different amount relative to the wavelength band image 200 corresponding to the second characteristic portion 186 of that object, correction values 92 are derived such that the directions and/or amounts of the positional shifts of the wavelength band images 200 corresponding to the first characteristic portions 184 of the inspection objects 180 become uniform.
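The disclosure does not prescribe how "corrections that make the shift directions and/or amounts uniform" are computed; one plausible sketch is to align each object's measured first-feature shift to a common target shift, such as the mean over all objects. All names and values below are hypothetical:

```python
import numpy as np

def uniform_shift_corrections(shifts):
    """Hypothetical sketch: given the positional shift (dx, dy) of the
    first-feature wavelength band image for each same-type inspection
    object, derive corrections that make the shifts uniform by moving
    every object's shift onto the common mean shift."""
    arr = np.asarray(list(shifts.values()), dtype=float)
    target = arr.mean(axis=0)  # common shift all objects are aligned to
    return {name: tuple(target - np.asarray(s, dtype=float))
            for name, s in shifts.items()}

shifts = {"180A": (2.0, 0.0), "180B": (0.0, 2.0)}
corr = uniform_shift_corrections(shifts)
```

After applying these corrections, every object's first-feature shift equals the common target, i.e. the directions and amounts are uniform.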
In the eighth modification, each correction value 92 is thus derived, for example, based on the fact that the position of the first characteristic portion 184 relative to the second characteristic portion 186 is the same in every inspection object 180. Therefore, correction values 92 can be obtained such that the directions and/or amounts of the positional shifts of the wavelength band images 200 corresponding to the first characteristic portions 184 of the inspection objects 180 are uniform. As a result, vertical and/or horizontal distortion of the spectral images 72 can be suppressed compared to the case where the correction values 92 are derived without regard to this positional relationship.

Note that, when each correction value 92 is derived based on the fact that the position of the first characteristic portion 184 relative to the second characteristic portion 186 is the same in every inspection object 180, the correction values 92 may also be derived in the following manner.
That is, correction values 92 may be derived that adjust the interval between the wavelength band images 200 corresponding to the first characteristic portion 184 and the second characteristic portion 186 of the inspection objects 180A to 180D located outside the central region of the imaging target region 210 of the imaging device 10 so that it matches the interval between the wavelength band images 200 corresponding to the first characteristic portion 184 and the second characteristic portion 186 of the inspection object 180E located in the central region of the imaging target region 210. In this way, vertical and/or horizontal distortion of the spectral images 72 can be suppressed compared to the case where the correction values 92 are derived without regard to the interval between the wavelength band images 200 corresponding to the first characteristic portion 184 and the second characteristic portion 186 of the inspection object 180E located in the central region.
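The interval-matching correction above can be sketched as a scale factor that brings a peripheral object's feature interval to the interval measured for the central object; rescaling about the midpoint of the feature pair is an illustrative choice, not the disclosed method:

```python
import math

def interval_scale_correction(p1, p2, central_interval):
    """Hypothetical sketch: derive a scale correction that makes the
    distance between two feature positions (p1, p2) of a peripheral
    inspection object match the interval measured for the central object.

    Returns (scale, midpoint); applying the scale about the midpoint
    brings |p2 - p1| to central_interval.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    interval = math.hypot(dx, dy)
    scale = central_interval / interval
    midpoint = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    return scale, midpoint

def apply_scale(p, scale, midpoint):
    """Rescale a point about the midpoint."""
    return (midpoint[0] + scale * (p[0] - midpoint[0]),
            midpoint[1] + scale * (p[1] - midpoint[1]))

# Peripheral object's features are 12 apart; central object's are 10 apart.
scale, mid = interval_scale_correction((0.0, 0.0), (12.0, 0.0), central_interval=10.0)
q1 = apply_scale((0.0, 0.0), scale, mid)
q2 = apply_scale((12.0, 0.0), scale, mid)
```

After rescaling, the corrected feature pair (q1, q2) has the same interval as the central object, which is the stated goal of this variant.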
Note that the correction value derivation processing according to the eighth modification may be incorporated into the correction processing in the imaging device 10, and the correction values 92 derived by that processing may be used in the correction processing.
As an example, a ninth modification shown in FIG. 29 differs from the eighth modification in that two subjects 220A and 220B are used as the subject 4. As an example, the imaging target region 210 of the imaging device 10 includes a subject region 210A in which the subject 220A is placed, a subject region 210B in which the subject 220B is placed, and empty regions 210C and 210D in which neither the subject 220A nor the subject 220B is placed.

The correction value deriving unit 114 divides each spectral image 72, obtained by imaging the imaging target region 210 with the imaging device 10, into regions 212A to 212D. The region 212A corresponds to the subject region 210A, the region 212B corresponds to the subject region 210B, the region 212C corresponds to the empty region 210C, and the region 212D corresponds to the empty region 210D. In each spectral image 72, the region 212A includes a wavelength band image 230A corresponding to the subject 220A, and the region 212B includes a wavelength band image 230B corresponding to the subject 220B.

The correction value deriving unit 114 may divide each spectral image 72 into the regions 212A to 212D based on, for example, an instruction received by the imaging device 10 from a user or the like, or may divide each spectral image 72 into the plurality of regions 212A to 212D by performing image processing on each spectral image 72 and using the presence or absence of a wavelength band image. Here, each spectral image 72 is divided into the four regions 212A to 212D corresponding to the two subjects 220A and 220B, but the number of subjects included in the imaging target region 210 and the number of regions into which each spectral image 72 is divided may each be any number. The correction value deriving unit 114 then derives the correction values 92 for the regions 212A and 212B and does not derive the correction values 92 for the regions 212C and 212D.
In the ninth modification, the correction values 92 are derived for the regions 212A and 212B and are not derived for the regions 212C and 212D. Therefore, the load on the processor 104 of the processing device 100 can be reduced compared to the case where the correction values 92 are also derived for the regions 212C and 212D.
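The selective derivation of the ninth modification, deriving correction values only for regions that contain a subject, can be sketched as below; the region layout and the derivation callback are hypothetical placeholders for the actual derivation:

```python
import numpy as np

def derive_region_corrections(spectral_image, regions, has_subject, derive_fn):
    """Hypothetical sketch: derive correction values only for regions that
    contain a subject, skipping empty regions to reduce processor load.

    regions: dict name -> (row_slice, col_slice) into the spectral image.
    has_subject: dict name -> bool (True for subject regions).
    derive_fn: callable deriving a correction value from a sub-image.
    """
    corrections = {}
    for name, (rows, cols) in regions.items():
        if not has_subject[name]:
            continue  # empty region: no correction value is derived
        corrections[name] = derive_fn(spectral_image[rows, cols])
    return corrections

img = np.zeros((4, 4))
regions = {"212A": (slice(0, 2), slice(0, 2)),   # subject region
           "212C": (slice(0, 2), slice(2, 4))}   # empty region
has_subject = {"212A": True, "212C": False}
corr = derive_region_corrections(img, regions, has_subject,
                                 derive_fn=lambda sub: 0.0)
```

Only subject regions appear in the result, mirroring the text: correction values are derived for 212A and 212B but not for the empty regions 212C and 212D.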
Furthermore, when the correction values 92 derived according to the ninth modification are used, the imaging device 10 performs the correction processing on only a partial region of each spectral image 72. Therefore, the load on the processor 60 of the imaging device 10 can be reduced compared to the case where the correction processing is performed on the entire region of the spectral image 72.

In the above embodiment, the correction processing on the spectral images 72 is executed in the imaging device 10; however, the spectral images 72 may be input from the imaging device 10 to an external device, and the correction processing on the spectral images 72 may be executed in the external device. In this case, the external device is an example of an "image processing device" according to the technology of the present disclosure.
In the above embodiment, the processor 60 has been illustrated for the imaging device 10; however, at least one other CPU, at least one GPU, and/or at least one TPU may be used instead of, or together with, the processor 60.

Similarly, the processor 104 has been illustrated for the processing device 100; however, at least one other CPU, at least one GPU, and/or at least one TPU may be used instead of, or together with, the processor 104.
In the above embodiment, the imaging device 10 has been described using an example in which the multispectral image generation program 80 is stored in the storage 62, but the technology of the present disclosure is not limited to this. For example, the multispectral image generation program 80 may be stored in a portable non-transitory computer-readable storage medium (hereinafter simply referred to as a "non-transitory storage medium") such as an SSD or a USB memory, and the multispectral image generation program 80 stored in the non-transitory storage medium may be installed on the computer 56 of the imaging device 10.

Alternatively, the multispectral image generation program 80 may be stored in a storage device of another computer, a server device, or the like connected to the imaging device 10 via a network, and may be downloaded and installed on the computer 56 of the imaging device 10 in response to a request from the imaging device 10.

It is not necessary to store the entire multispectral image generation program 80 in the storage device of another computer, a server device, or the like connected to the imaging device 10, or in the storage 62; only a part of the multispectral image generation program 80 may be stored.
In the above embodiment, the processing device 100 has been described using an example in which the correction value derivation program 110 is stored in the storage 106, but the technology of the present disclosure is not limited to this. For example, the correction value derivation program 110 may be stored in a non-transitory storage medium, and the correction value derivation program 110 stored in the non-transitory storage medium may be installed on the computer 102 of the processing device 100.

Alternatively, the correction value derivation program 110 may be stored in a storage device of another computer, a server device, or the like connected to the processing device 100 via a network, and may be downloaded and installed on the computer 102 of the processing device 100 in response to a request from the processing device 100.

It is not necessary to store the entire correction value derivation program 110 in the storage device of another computer, a server device, or the like connected to the processing device 100, or in the storage 106; only a part of the correction value derivation program 110 may be stored.
Although the computer 56 is built into the imaging device 10, the technology of the present disclosure is not limited to this; for example, the computer 56 may be provided outside the imaging device 10.

Likewise, although the computer 102 is built into the processing device 100, the technology of the present disclosure is not limited to this; for example, the computer 102 may be provided outside the processing device 100.
In the above embodiment, the computer 56 including the processor 60, the storage 62, and the RAM 64 has been illustrated for the imaging device 10, but the technology of the present disclosure is not limited to this; a device including an ASIC, an FPGA, and/or a PLD may be applied instead of the computer 56. A combination of a hardware configuration and a software configuration may also be used instead of the computer 56.

Similarly, the computer 102 including the processor 104, the storage 106, and the RAM 108 has been illustrated for the processing device 100, but the technology of the present disclosure is not limited to this; a device including an ASIC, an FPGA, and/or a PLD may be applied instead of the computer 102. A combination of a hardware configuration and a software configuration may also be used instead of the computer 102.
The following various processors can be used as hardware resources for executing the various kinds of processing described in the above embodiment. Examples of such a processor include a CPU, which is a general-purpose processor that functions as a hardware resource executing the various kinds of processing by executing software, that is, a program. Examples also include a dedicated electronic circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an FPGA, a PLD, or an ASIC. A memory is built into or connected to every processor, and every processor executes the various kinds of processing by using the memory.

The hardware resource that executes the various kinds of processing may be configured with one of these various processors, or with a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs, or a combination of a CPU and an FPGA). The hardware resource that executes the various kinds of processing may also be a single processor.

As examples of configuration with a single processor, first, there is a form in which one processor is configured by a combination of one or more CPUs and software, and this processor functions as the hardware resource that executes the various kinds of processing. Second, as typified by an SoC, there is a form in which a processor is used that realizes, with a single IC chip, the functions of the entire system including the plurality of hardware resources that execute the various kinds of processing. In this way, the various kinds of processing are realized by using one or more of the various processors described above as hardware resources.
Furthermore, as the hardware structure of these various processors, more specifically, an electronic circuit combining circuit elements such as semiconductor elements can be used. In addition, the above-described correction processing is merely an example; needless to say, unnecessary steps may be deleted, new steps may be added, or the processing order may be rearranged without departing from the gist.
The content described and illustrated above is a detailed description of the portions related to the technology of the present disclosure and is merely an example of the technology of the present disclosure. For example, the above description of configurations, functions, operations, and effects is a description of an example of the configurations, functions, operations, and effects of the portions related to the technology of the present disclosure. Therefore, needless to say, unnecessary portions may be deleted, new elements may be added, or replacements may be made in the content described and illustrated above without departing from the gist of the technology of the present disclosure. In addition, in order to avoid complication and to facilitate understanding of the portions related to the technology of the present disclosure, descriptions of common technical knowledge and the like that do not particularly require explanation for enabling implementation of the technology of the present disclosure are omitted from the content described and illustrated above.

In this specification, "A and/or B" is synonymous with "at least one of A and B." That is, "A and/or B" means A alone, B alone, or a combination of A and B. In this specification, the same interpretation as "A and/or B" also applies when three or more items are expressed by being connected with "and/or."

All documents, patent applications, and technical standards described in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, and technical standard were specifically and individually indicated to be incorporated by reference.
Regarding the above embodiment, the following supplementary notes are further disclosed.
(Additional note 1)
A member for deriving a correction value used in calibration processing, wherein
the calibration processing is calibration processing for an image output from an imaging device including an optical system,
the optical system includes a plurality of filters provided around an optical axis and having mutually different wavelength bands,
the correction value is a correction value for correcting, in the image, an image shift due to a positional shift of an optical image caused by spectral separation by the plurality of filters,
the image includes a wavelength band image corresponding to each of the wavelength bands, and
the member includes a characteristic portion corresponding to a reference position in the image used when the correction value is derived based on a direction and/or amount of positional shift of the wavelength band image with respect to the reference position.
(Additional note 2)
A device for deriving a correction value used in calibration processing, wherein
the calibration processing is calibration processing for an image output from an imaging device including an optical system,
the optical system includes a plurality of filters provided around an optical axis and having mutually different wavelength bands,
the correction value is a correction value for correcting, in the image, an image shift due to a positional shift of an optical image caused by spectral separation by the plurality of filters,
the image includes a wavelength band image corresponding to each of the wavelength bands,
the device includes a processor, and
the processor derives the correction value based on a direction and/or amount of positional shift of the wavelength band image with respect to a reference position in the image.
(Additional note 3)
A method for deriving a correction value used in calibration processing, wherein
the calibration processing is calibration processing for an image output from an imaging device including an optical system,
the optical system includes a plurality of filters provided around an optical axis and having mutually different wavelength bands,
the correction value is a correction value for correcting, in the image, an image shift due to a positional shift of an optical image caused by spectral separation by the plurality of filters,
the image includes a wavelength band image corresponding to each of the wavelength bands, and
the method includes deriving the correction value based on a direction and/or amount of positional shift of the wavelength band image with respect to a reference position in the image.
(Additional note 4)
A program for causing a computer to execute processing for deriving a correction value used in calibration processing, wherein
the calibration processing is calibration processing for an image output from an imaging device including an optical system,
the optical system includes a plurality of filters provided around an optical axis and having mutually different wavelength bands,
the correction value is a correction value for correcting, in the image, an image shift due to a positional shift of an optical image caused by spectral separation by the plurality of filters,
the image includes a wavelength band image corresponding to each of the wavelength bands, and
the processing includes deriving the correction value based on a direction and/or amount of positional shift of the wavelength band image with respect to a reference position in the image.

Claims (29)

  1.  An image processing device applied to an image output from an imaging device including an optical system, wherein
      the optical system includes a plurality of filters provided around an optical axis,
      the image processing device comprises a processor, and
      the processor performs, on the image, processing for correcting an image shift due to a positional shift of an optical image caused by spectral separation by the plurality of filters.
  2.  The image processing device according to claim 1, wherein
      the optical system has a plurality of apertures,
      each of the apertures is provided with one of the filters,
      the image shift includes at least an image shift based on a characteristic of each of the apertures, and
      the processing is performed based on the characteristic.
  3.  The image processing device according to claim 2, wherein the characteristic includes a centroid position of the aperture.
  4.  The image processing device according to claim 3, wherein the centroid position is a position determined based on the position and/or shape of the aperture.
  5.  The image processing device according to any one of claims 1 to 4, wherein the positional shift of the optical image includes at least a positional shift of the optical image caused by a characteristic of the optical system.
  6.  The image processing device according to any one of claims 1 to 5, wherein the processing is performed on a partial region of the image.
  7.  The image processing device according to any one of claims 1 to 6, wherein the image is a spectral image for generating a multispectral image.
  8.  The image processing device according to claim 7, wherein the image is an image generated by performing interference removal processing on imaging data obtained by imaging with the imaging device.
  9.  The image processing device according to any one of claims 1 to 8, wherein the plurality of filters have mutually different wavelength bands.
  10.  The image processing device according to any one of claims 1 to 9, wherein the plurality of filters are arranged side by side around the optical axis.
  11.  The image processing device according to claim 9, wherein the processing is performed based on a combination of the wavelength bands.
  12.  The image processing device according to any one of claims 1 to 11, wherein the processing is performed based on design values of the optical system.
  13.  The image processing device according to claim 9, wherein the processing is performed based on a correction value for each of the wavelength bands.
  14.  The image processing device according to any one of claims 9, 11, and 13, wherein the image includes a wavelength band image corresponding to each of the wavelength bands.
  15.  The image processing device according to claim 13, wherein
      the image includes a wavelength band image corresponding to each of the wavelength bands, and
      the correction value differs depending on the position of the wavelength band image.
  16.  The image processing device according to claim 13, wherein
      the image includes a wavelength band image corresponding to each of the wavelength bands, and
      the correction value is determined based on the direction and/or amount of positional shift of the wavelength band image with respect to a reference position within the image.
  17.  The image processing device according to claim 14, wherein the wavelength band image is an image showing a characteristic portion of a subject.
  18.  The image processing device according to claim 16, wherein
      the wavelength band image is an image showing a characteristic portion of a subject, and
      the reference position is a position corresponding to the characteristic portion.
  19.  The image processing device according to claim 16, wherein the reference position is a position of any one of the plurality of wavelength band images.
  20.  The image processing device according to claim 17, wherein the subject has a point, and the characteristic portion is the point.
  21.  The image processing device according to claim 17, wherein the subject has a grid pattern, and the characteristic portion is an intersection included in the grid pattern.
  22.  The image processing device according to claim 17, wherein the subject is a calibration member.
  23.  The image processing device according to claim 17, wherein the subject has a plurality of the characteristic portions, and the plurality of characteristic portions are arranged in a straight line.
  24.  The image processing device according to claim 17, wherein the subject has a plurality of the characteristic portions, and the plurality of characteristic portions are arranged at equal intervals.
  25.  The image processing device according to claim 17, wherein the subject includes an inspection object.
  26.  The image processing device according to claim 25, wherein the subject includes a plurality of the inspection objects.
  27.  The image processing device according to any one of claims 1 to 26, wherein
      the optical system has a polarizing filter provided corresponding to each of the filters,
      the plurality of polarizing filters have mutually different polarization axes,
      the imaging device comprises an image sensor having a plurality of pixel blocks, and
      each of the pixel blocks is provided with a plurality of types of polarizers having mutually different polarization axes.
  28.  An image processing method applied to an image output from an imaging device comprising an optical system, wherein
      the optical system has a plurality of filters provided around an optical axis, and
      the image processing method comprises performing, on the image, processing for correcting an image shift caused by a positional shift of an optical image that occurs when light is spectrally separated by the plurality of filters.
  29.  A program for causing a computer to execute image processing on an image output from an imaging device comprising an optical system, wherein
      the optical system has a plurality of filters provided around an optical axis, and
      the image processing includes processing for correcting an image shift caused by a positional shift of an optical image that occurs when light is spectrally separated by the plurality of filters.
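As a rough illustration of the correction the claims describe (aligning each wavelength band image using a per-band correction value), the shift can be applied as an image translation. The function name, the integer-pixel simplification, and the use of `np.roll` are assumptions for this sketch, not details from the patent; a real pipeline would interpolate sub-pixel shifts and handle border pixels.

```python
import numpy as np

def apply_band_correction(band_image, correction):
    """Shift one wavelength band image by an integer (dx, dy) correction
    value so its optical image aligns with the reference band."""
    dx, dy = correction
    # np.roll takes shifts in (row, column) order, i.e. (dy, dx).
    return np.roll(band_image, shift=(dy, dx), axis=(0, 1))

band = np.zeros((4, 4))
band[1, 2] = 1.0                                # feature observed at row 1, col 2
aligned = apply_band_correction(band, (-1, 0))  # correct 1 px leftward
print(np.argwhere(aligned == 1.0))              # [[1 1]]
```

After the per-band corrections, the wavelength band images can be stacked into a registered multispectral image.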
PCT/JP2023/017159 2022-08-22 2023-05-02 Image processing device, image processing method, and program WO2024042783A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-131763 2022-08-22
JP2022131763 2022-08-22

Publications (1)

Publication Number Publication Date
WO2024042783A1 true WO2024042783A1 (en) 2024-02-29

Family

ID=90012982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/017159 WO2024042783A1 (en) 2022-08-22 2023-05-02 Image processing device, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2024042783A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020064162A (en) * 2018-10-16 2020-04-23 キヤノン株式会社 Optical system and accessory device and imaging device equipped with the same
WO2020250774A1 (en) * 2019-06-11 2020-12-17 富士フイルム株式会社 Imaging device
WO2022024917A1 (en) * 2020-07-28 2022-02-03 富士フイルム株式会社 Imaging device, adjustment method, and adjustment program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020064162A (en) * 2018-10-16 2020-04-23 キヤノン株式会社 Optical system and accessory device and imaging device equipped with the same
WO2020250774A1 (en) * 2019-06-11 2020-12-17 富士フイルム株式会社 Imaging device
WO2022024917A1 (en) * 2020-07-28 2022-02-03 富士フイルム株式会社 Imaging device, adjustment method, and adjustment program

Similar Documents

Publication Publication Date Title
US7732744B2 (en) Image input apparatus, photodetection apparatus, and image synthesis method
US9509978B2 (en) Image processing method, image processing apparatus, image-capturing apparatus, and image processing program
JP6340884B2 (en) Measuring apparatus, measuring system and measuring method
US20100201853A1 (en) Digital camera and digital camera system
US20180075615A1 (en) Imaging device, subject information acquisition method, and computer program
JP6697680B2 (en) Signal processing device, signal processing method, and program
CN107113370A (en) Image capture apparatus and image-capturing method
JP2018029251A (en) Inspection apparatus, inspection method, and program
CN115152205A (en) Image pickup apparatus and method
AU2010237951A1 (en) Image processing method and image processing apparatus
US20230370700A1 (en) Data processing apparatus, data processing method, data processing program, optical element, imaging optical system, and imaging apparatus
US20220078359A1 (en) Imaging apparatus
WO2024042783A1 (en) Image processing device, image processing method, and program
US20150215593A1 (en) Image-acquisition apparatus
WO2017086788A1 (en) Hyperspectral 2d imaging device
JP5827249B2 (en) Skin hue measuring system and method
US10805581B2 (en) Image sensor and imaging apparatus
US7474337B1 (en) Method and apparatus to provide edge enhancements as part of a demosaicing process
JP6225519B2 (en) Measuring apparatus and measuring method
US20230393059A1 (en) Data processing apparatus, data processing method, data processing program, optical element, imaging optical system, and imaging apparatus
CN105979233A (en) Mosaic removing method, image processor and image sensor
JP2014021426A5 (en)
JP7279596B2 (en) Three-dimensional measuring device
WO2024047944A1 (en) Member for calibration, housing device, calibration device, calibration method, and program
US10776945B2 (en) Dimension measurement device, dimension measurement system, and dimension measurement method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23856903

Country of ref document: EP

Kind code of ref document: A1