WO2024047944A1 - Member for calibration, housing device, calibration device, calibration method, and program - Google Patents


Publication number
WO2024047944A1
Authority
WO
WIPO (PCT)
Prior art keywords
spectral reflectance
calibration
subject
spectral
imaging
Application number
PCT/JP2023/017201
Other languages
French (fr)
Japanese (ja)
Inventor
慶延 岸根
和佳 岡田
睦 川中子
高志 椚瀬
友也 平川
Original Assignee
FUJIFILM Corporation
Application filed by FUJIFILM Corporation
Publication of WO2024047944A1 publication Critical patent/WO2024047944A1/en


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/02 - Details
    • G01J3/28 - Investigating the spectrum
    • G01J3/30 - Measuring the intensity of spectral lines directly on the spectrum itself
    • G01J3/36 - Investigating two or more bands of a spectrum by separate detectors
    • G01J3/46 - Measurement of colour; Colour measuring devices, e.g. colorimeters
    • G01J3/50 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors
    • G01J3/51 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors using colour filters
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 - Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals

Definitions

  • the technology of the present disclosure relates to a calibration member, a housing device, a calibration device, a calibration method, and a program.
  • Japanese Unexamined Patent Publication No. 2006-292582 discloses a multiband image processing device that acquires a spectral image of a subject.
  • the multiband image processing device includes an image acquisition unit that acquires an image in which a plurality of objects with known spectral characteristics are placed near the subject, and an image processing unit that performs predetermined processing on the image based on spatial variations in the spectral characteristics obtained from the image.
  • JP 2018-009988A discloses a measurement device that evaluates particle characteristics of a painted surface containing a glitter material.
  • the measurement device includes chromaticity distribution acquisition means that acquires the in-plane chromaticity distribution of the object under at least two illumination conditions or light reception conditions, and particle characteristic evaluation means that calculates a particle characteristic evaluation value for evaluating particle characteristics based on the amount of variation in the in-plane chromaticity distribution.
  • Japanese Unexamined Patent Publication No. 2001-144972 discloses a multispectral image recording/processing device that images and records a subject in a subject image recording area illuminated by a light source.
  • the multispectral image recording/processing device includes a subject support section, a light source section, an imaging means, an angular multispectral image acquisition section, and a spectral reflectance estimation section.
  • the subject support section supports the subject in the subject image recording area.
  • the light source section is arranged in a desired azimuth direction centered on a predetermined position in a plane that includes the predetermined position of the subject image recording area, is spaced a certain distance from the predetermined position, and illuminates the subject image recording area.
  • the imaging means is arranged in a desired azimuth direction centered on a predetermined position in a plane including the predetermined position, and spaced apart from the predetermined position by a certain distance, and captures a multiband image of the subject.
  • the angular multispectral image acquisition section captures and records a plurality of multiband images under a plurality of imaging and recording conditions in which the light source position, determined by the azimuth direction of the light source section, and the imaging position, determined by the azimuth direction of the imaging means, are each changed, and from these obtains an angular multispectral image having the spectral reflectance distribution of the subject, with the light source position and the imaging position as parameters.
  • the spectral reflectance estimation section interpolates or synthesizes the spectral reflectance distribution, with the light source position and the imaging position as parameters, and thereby estimates the spectral reflectance distribution of the subject under image reproduction conditions at a desired light source position, number of light sources, or imaging position, using the image data of the angular multispectral image obtained by the angular multispectral image acquisition section.
  • Japanese Unexamined Patent Publication No. 2010-130157 discloses a multispectral image processing apparatus comprising color conversion means that performs color conversion of a multiband image having four or more bands obtained by combining a plurality of input images input by a plurality of image input means having mutually different bands.
  • the color conversion means includes spectral sensitivity characteristic information correction means that corrects spectral sensitivity characteristic information, which is the spectral sensitivity characteristic of the image input means, based on the ratio of exposure information included in the data of the plurality of input images, and device-independent color image conversion means that converts the multiband image into a device-independent color image using the corrected spectral sensitivity characteristic information and photographing illumination light information, which is the spectral characteristic of the illumination light when the input images were photographed.
  • One embodiment of the technology of the present disclosure provides, for example, a calibration member, a housing device, a calibration device, a calibration method, and a program that can improve measurement accuracy for the color of a subject compared to when the background surface is a white surface.
  • a first aspect of the technology of the present disclosure is a calibration member used for calibration of a spectral imaging device equipped with a spectral filter having a specific wavelength range, the calibration member having a background surface that constitutes the background of a subject, wherein a first spectral reflectance, which is the spectral reflectance of light reflected by the subject, and a second spectral reflectance, which is the spectral reflectance of light reflected by the background surface, have a relationship with each other in the wavelength range.
  • a second aspect of the technology of the present disclosure is the calibration member according to the first aspect, wherein the relationship is such that the difference between the first spectral reflectance and the second spectral reflectance falls within a first range.
  • a third aspect of the technology of the present disclosure is the calibration member according to the second aspect, wherein the first range is a range from 0.5 times to 2 times the first spectral reflectance.
  • a fourth aspect of the technology of the present disclosure is the calibration member according to the second aspect or the third aspect, wherein the first range is set based on a third spectral reflectance, which is the spectral reflectance of light reflected by a first reference plate.
  • a fifth aspect of the technology of the present disclosure is the calibration member according to the fourth aspect, wherein the first reference plate has a reflective surface that reflects light, and the reflective surface is a white surface.
  • a sixth aspect of the technology of the present disclosure is the calibration member according to the fourth aspect or the fifth aspect, wherein the difference between the first spectral reflectance and the second spectral reflectance is smaller than the difference between the first spectral reflectance and the third spectral reflectance.
  • a seventh aspect of the technology of the present disclosure is the calibration member according to any one of the fourth to sixth aspects, wherein the second spectral reflectance is lower than the third spectral reflectance.
  • an eighth aspect of the technology of the present disclosure is the calibration member according to any one of the fourth to seventh aspects, wherein the first spectral reflectance is lower than the third spectral reflectance.
  • a ninth aspect of the technology of the present disclosure is the calibration member according to any one of the fourth to eighth aspects, wherein the second spectral reflectance falls within a second range of spectral reflectance set based on the first spectral reflectance and the third spectral reflectance.
  • a tenth aspect of the technology of the present disclosure is the calibration member according to the ninth aspect, wherein, where the first spectral reflectance is a, the second spectral reflectance is b, and the third spectral reflectance is c, the second range is the range defined by equation (1): a/2 ≤ b ≤ (c + a)/2 ... (1)
  • an eleventh aspect of the technology of the present disclosure is the calibration member according to the ninth aspect, wherein, where the first spectral reflectance is a, the second spectral reflectance is b, and the third spectral reflectance is c, the second range is the range defined by equation (2): 3a/4 ≤ b ≤ (c + 3a)/4 ... (2)
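As a rough numerical check, the bounds of equations (1) and (2) can be evaluated directly. The sketch below is illustrative only: the function names and the reflectance values are assumptions, with per-band reflectances treated as scalars between 0 and 1.

```python
# a = first spectral reflectance (subject), b = second (background surface),
# c = third (reference plate); all hypothetical per-band scalars in [0, 1].

def in_range_eq1(a: float, b: float, c: float) -> bool:
    """Second range per equation (1): a/2 <= b <= (c + a)/2."""
    return a / 2 <= b <= (c + a) / 2

def in_range_eq2(a: float, b: float, c: float) -> bool:
    """Second range per equation (2): 3a/4 <= b <= (c + 3a)/4."""
    return 3 * a / 4 <= b <= (c + 3 * a) / 4

# Example: a dark subject (a = 0.2) and a white reference plate (c = 0.9).
a, c = 0.2, 0.9
for b in (0.05, 0.25, 0.80):
    print(b, in_range_eq1(a, b, c), in_range_eq2(a, b, c))
```

For these values, equation (2) defines a narrower band than equation (1), pulling the background reflectance b closer to the subject reflectance a.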
  • a twelfth aspect of the technology of the present disclosure is the calibration member according to any one of the first to eleventh aspects, wherein the first spectral reflectance is a spectral reflectance based on spectral reflectances measured at multiple locations on the subject.
  • a thirteenth aspect of the technology of the present disclosure is the calibration member according to any one of the fourth to eleventh aspects, or according to the twelfth aspect as subordinate to the fourth aspect, wherein the wavelength range includes a plurality of wavelength ranges, and in each wavelength range the difference between the first spectral reflectance and the second spectral reflectance is smaller than the difference between the first spectral reflectance and the third spectral reflectance.
  • a fourteenth aspect of the technology of the present disclosure is the calibration member according to any one of the first to thirteenth aspects, wherein the background surface has a surface roughness corresponding to that of the subject.
  • a fifteenth aspect of the technology of the present disclosure is the calibration member according to any one of the first to fourteenth aspects, wherein the spectral reflectance of the background surface is higher in a first near-infrared range within the near-infrared range than in a first visible range within the visible range.
  • a sixteenth aspect of the technology of the present disclosure is a housing device comprising the calibration member according to any one of the first to fifteenth aspects and a housing that covers an imaging space in which the calibration member and a subject are arranged.
  • a seventeenth aspect of the technology of the present disclosure is a calibration device comprising the calibration member according to any one of the first to fifteenth aspects, a spectral imaging device, a light source, and a housing, wherein the housing covers an imaging space in which the calibration member and the subject are placed, and the imaging condition when the subject is imaged by the spectral imaging device is a first condition under which the first component of the incident light entering the spectral imaging device is light emitted from the light source and reflected by the subject and the calibration member.
  • an eighteenth aspect of the technology of the present disclosure is the calibration device according to the seventeenth aspect, further comprising a processor, wherein the processor outputs warning information when the imaging condition deviates from the first condition.
  • a nineteenth aspect of the technology of the present disclosure is a calibration device comprising the calibration member according to any one of the first to fifteenth aspects and a processor, wherein the processor acquires first imaging data obtained by imaging the calibration member with a spectral imaging device, acquires second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device, and performs calibration on the second imaging data based on the first imaging data.
  • a twentieth aspect of the technology of the present disclosure is a calibration method comprising: a first acquisition step of acquiring first imaging data obtained by imaging the calibration member according to any one of the first to fifteenth aspects with a spectral imaging device; a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device; and a calibration step of performing calibration on the second imaging data based on the first imaging data.
  • a twenty-first aspect of the technology of the present disclosure is a program for causing a computer to execute a process comprising: a first acquisition step of acquiring first imaging data obtained by imaging the calibration member according to any one of the first to fifteenth aspects with a spectral imaging device; a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device; and a calibration step of performing calibration on the second imaging data based on the first imaging data.
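The twentieth and twenty-first aspects describe a two-step acquisition followed by a calibration step. The following is a minimal sketch of that flow, assuming per-band images stored as NumPy arrays and a pixelwise ratio as the correction; the function names and the division-based calibration are illustrative assumptions, since the publication does not fix a specific formula here.

```python
import numpy as np

def first_acquisition(member_images: dict) -> dict:
    """First acquisition step: per-band imaging data of the calibration member alone."""
    return member_images

def second_acquisition(member_and_subject_images: dict) -> dict:
    """Second acquisition step: per-band imaging data of the member plus the subject."""
    return member_and_subject_images

def calibration_step(first: dict, second: dict, eps: float = 1e-6) -> dict:
    """Calibration step: normalize each band of the second imaging data by the
    corresponding band of the first imaging data (assumed pixelwise ratio)."""
    return {band: second[band] / (first[band] + eps) for band in second}

# Toy 2x2 single-band example: the background reads 0.5 everywhere in the
# first imaging data; two subject pixels read 0.25 and 0.1 in the second.
first = first_acquisition({"band1": np.full((2, 2), 0.5)})
second = second_acquisition({"band1": np.array([[0.25, 0.5], [0.1, 0.5]])})
calibrated = calibration_step(first, second)
```

Dividing by the member image removes shared factors such as illumination unevenness, leaving values close to 1.0 where only the background is visible.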
  • FIG. 1 is a block diagram showing an example of a color measurement device according to the present embodiment. The subsequent drawings include a graph showing an example of the first spectral reflectance, the second spectral reflectance, and the third spectral reflectance, a graph showing a first example of the second spectral reflectance, and a graph showing a second example of the second spectral reflectance.
  • FIG. 2 is a block diagram illustrating an example of a color setting method for setting the color of a background surface.
  • FIG. 2 is a block diagram illustrating an example of a color measurement method according to the present embodiment.
  • FIG. 2 is a block diagram showing an example of the aspect of incident light in the color measurement method according to the present embodiment.
  • FIG. 2 is a block diagram showing an example of the hardware configuration of an imaging device.
  • FIG. 2 is an explanatory diagram showing an example of the configuration of a photoelectric conversion element.
  • FIG. 2 is a block diagram illustrating an example of a functional configuration for executing spectral image generation processing.
  • FIG. 2 is a block diagram illustrating an example of the operation of an output value acquisition section and an interference removal processing section.
  • FIG. 2 is a block diagram illustrating an example of the hardware configuration of a processing device according to the present embodiment and an example of a functional configuration for executing color measurement processing.
  • FIG. 2 is a block diagram illustrating an example of the operations of an image data acquisition section, a calibration image generation section, and a color derivation section.
  • A flowchart illustrating an example of the flow of spectral image generation processing.
  • A flowchart illustrating an example of the flow of color measurement processing.
  • FIG. 2 is a block diagram illustrating an example of a color measurement method according to a first comparative example.
  • FIG. 7 is a block diagram illustrating an example of a color measurement method according to a second comparative example.
  • FIG. 7 is a block diagram illustrating an example of an aspect of incident light in a color measurement method according to a second comparative example.
  • RGB is an abbreviation for “Red Green Blue.”
  • LED is an abbreviation for “light emitting diode.”
  • EL is an abbreviation for “Electro Luminescence”.
  • CMOS is an abbreviation for “Complementary Metal Oxide Semiconductor.”
  • CCD is an abbreviation for “Charge Coupled Device”.
  • I/F is an abbreviation for "Interface”.
  • RAM is an abbreviation for "Random Access Memory.”
  • CPU is an abbreviation for "Central Processing Unit.”
  • GPU is an abbreviation for “Graphics Processing Unit.”
  • EEPROM is an abbreviation for "Electrically Erasable and Programmable Read Only Memory.”
  • HDD is an abbreviation for "Hard Disk Drive.”
  • TPU is an abbreviation for "Tensor processing unit”.
  • SSD is an abbreviation for "Solid State Drive.”
  • USB is an abbreviation for “Universal Serial Bus.”
  • ASIC is an abbreviation for “Application Specific Integrated Circuit.”
  • FPGA is an abbreviation for "Field-Programmable Gate Array.”
  • PLD is an abbreviation for “Programmable Logic Device”.
  • SoC is an abbreviation for "System-on-a-Chip.”
  • IC is an abbreviation for "Integrated Circuit.”
  • "uniform" means not only completely uniform but also uniform in a sense that includes a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that it does not go against the spirit of the technology of the present disclosure.
  • "same" means not only exactly the same but also the same in a sense that includes a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that it does not go against the spirit of the technology of the present disclosure.
  • "straight line" means not only a perfect straight line but also a straight line that includes a degree of error generally allowed in the technical field to which the technology of the present disclosure belongs, to the extent that it does not go against the spirit of the technology of the present disclosure.
  • FIG. 18 shows an example of a color measurement method according to the first comparative example.
  • the imaging device 200 used in the color measurement method according to the first comparative example is an RGB camera.
  • the RGB camera refers to a camera that can generate wavelength range images in each of red, green, and blue wavelength ranges.
  • in the first comparative example, the color of the subject 2 is measured in the following manner. First, as shown in aspect I, light L8 is irradiated from the light source 202 onto a reference plate 204 whose surface is white (hereinafter referred to as "white reference plate 204"), and incident light Lb including light L9 reflected by the white reference plate 204 is imaged by the imaging device 200. As a result, a reference image 220 is obtained. Subsequently, as shown in aspect J, light L8 is irradiated from the light source 202 onto the subject 2, and incident light Lb including light L10 reflected by the subject 2 is imaged by the imaging device 200. As a result, a subject image 222 is obtained.
  • a calibration image 224 is generated by performing calibration on the subject image 222 based on the reference image 220, and the color of the subject 2 is measured based on the calibration image 224. As a result, a color measurement result 226, which is the result of measuring the color of the subject 2, is obtained.
  • in the color measurement method according to the first comparative example, even if the light L8 emitted from the light source 202 is uneven or the amount of light decreases toward the periphery of the imaging range of the imaging device 200, performing calibration on the subject image 222 based on the reference image 220 allows the color of the subject 2 to be measured while eliminating the influence of the unevenness of the light L8, the decrease in the peripheral amount of light, and the like.
  • however, the color measurement method according to the first comparative example assumes that the subject 2 is imaged in an ideal imaging environment; in an actual imaging environment, it is affected by light other than the light L8 emitted from the light source 202, such as disturbance light.
  • FIG. 19 shows an example of a color measurement method according to the second comparative example.
  • a housing 206 is used to prevent disturbance light and the like from affecting the imaging environment.
  • the housing 206 is configured to cover the imaging space 208.
  • in the imaging space 208, the light source 202, the entrance section 200A of the imaging device 200, and the subject 2 are arranged.
  • the inner surfaces of the housing 206 (i.e., the bottom surface 206A, the side surfaces 206B, and the top surface 206C) are white surfaces.
  • in the second comparative example, the color of the subject 2 is measured in the following manner. First, as shown in aspect K, light L8 is irradiated from the light source 202 onto the bottom surface 206A, and incident light Lb including light L11 reflected from the bottom surface 206A is imaged by the imaging device 200. As a result, a reference image 220 is obtained. Subsequently, as shown in aspect L, the subject 2 is placed on the bottom surface 206A, light L8 is irradiated from the light source 202 onto the subject 2, and incident light Lb including light L12 reflected by the subject 2 is imaged by the imaging device 200. As a result, a subject image 222 is obtained.
  • a calibration image 224 is generated by performing calibration on the subject image 222 based on the reference image 220, and the color of the subject 2 is measured based on the calibration image 224. As a result, a color measurement result 226, which is the result of measuring the color of the subject 2, is obtained.
  • FIG. 20 shows problems with the color measurement method according to the second comparative example.
  • the incident light Lb includes light L13 that is irradiated from the light source 202 onto the bottom surface 206A, reflected by the bottom surface 206A and the top surface 206C, and then reflected again by the bottom surface 206A.
  • the incident light Lb also includes light L14 (hereinafter referred to as "re-reflected light L14") that is irradiated from the light source 202 onto the subject 2, reflected by the subject 2 and the top surface 206C, and then reflected again by the subject 2. Since the re-reflected light L14 is light that has been reflected twice by the subject 2, in the case of aspect N, the color of the subject 2 is measured as darker than the actual color of the subject 2.
  • the incident light Lb includes light irradiated from the light source 202 to the subject 2 and reflected by the subject 2.
  • light L16 reflected from the area around the subject 2 on the bottom surface 206A (hereinafter referred to as "peripheral reflected light L16") is also included in the incident light Lb. Therefore, in the case of aspect O, the color of the subject 2 is measured as lighter than the actual color of the subject 2, and the peripheral reflected light L16 causes flare.
  • in the color measurement method according to the second comparative example, the incident light Lb changes between the case where the subject 2 is absent and the case where the subject 2 is present, so that, unlike the color measurement method according to the first comparative example, there is a problem that the measurement accuracy for the color of the subject 2 is reduced.
  • the multispectral camera referred to here refers to a camera that can generate spectral images in each wavelength range of a spectral filter.
  • the present embodiment described below provides, as an example, a color measurement method that can solve the above problems when a multispectral camera is used.
  • FIG. 1 shows a color measuring device 110 according to this embodiment.
  • the color measurement device 110 includes a light source 112, a housing 114, a calibration member 116, an imaging device 10, and a processing device 90.
  • the light source 112 is, for example, an LED light source, a laser light source, or an incandescent light bulb.
  • the light emitted from the light source 112 is non-polarized.
  • the light source 112 is arranged, for example, at the top inside the housing 114.
  • the housing 114 is configured to cover the imaging space 118.
  • in the imaging space 118, the light source 112, the calibration member 116, the entrance section 10A of the imaging device 10, and the subject 2 are arranged.
  • the inner surface of the housing 114 (that is, the bottom surface 114A, the side surfaces 114B, and the top surface 114C) is a white surface.
  • white is defined as the color perceived by a person looking at the surface of an object when visible light rays of all colors are diffusely reflected by the object.
  • the inner surface of the housing 114 has light diffusing properties to diffuse light.
  • the housing 114 is an example of a "casing" according to the technology of the present disclosure.
  • the calibration member 116 is, for example, a rectangular plate in plan view.
  • the calibration member 116 is arranged on the bottom surface 114A.
  • an example is given in which the calibration member 116 is a plate, but the calibration member 116 may have a shape other than a plate.
  • the calibration member 116 and the housing 114 constitute a housing device 120.
  • the housing device 120 may include a light source 112.
  • the calibration member 116 is an example of a “calibration member” according to the technology of the present disclosure.
  • the housing device 120 is an example of a “casing device” according to the technology of the present disclosure.
  • the calibration member 116 has a background surface 116A.
  • the background surface 116A is a surface facing the imaging device 10 (that is, an upper surface).
  • the subject 2 is placed on the background surface 116A.
  • the subject 2 may be of any kind.
  • the color of the subject 2 may be any color.
  • an example in which the color of the subject 2 is a color other than white will be described.
  • the background surface 116A constitutes the background of the subject 2 when the subject 2 is placed on the calibration member 116.
  • the background surface 116A has a color corresponding to the color of the subject 2, as will be described in detail later.
  • the surfaces of the calibration member 116 other than the background surface 116A may have the same color as the background surface 116A, or may have a different color from the background surface 116A.
  • an example in which the surface of the calibration member 116 other than the background surface 116A has the same color as the background surface 116A will be described.
  • the background surface 116A has a surface roughness corresponding to that of the subject 2.
  • the surface roughness may be adjusted by processing the background surface 116A, or by changing the size of particles forming the background surface 116A. Since the background surface 116A has a surface roughness corresponding to that of the subject 2, the directionality of the light reflected on the background surface 116A can be matched to the directionality of the light reflected on the subject 2.
  • the imaging device 10 is, for example, a multispectral camera. Although an example in which the imaging device 10 is a multispectral camera is given here, the imaging device 10 may be a spectral camera such as a hyperspectral camera. Hereinafter, an example in which the imaging device 10 is a multispectral camera will be described.
  • the imaging device 10 is an example of a "spectral imaging device" according to the technology of the present disclosure.
  • the imaging device 10 includes an optical system 26 and an image sensor 28.
  • the optical system 26 includes a first lens 30, a pupil splitting filter 16, and a second lens 32.
  • the first lens 30, the pupil splitting filter 16, and the second lens 32 are arranged in this order along the optical axis OA of the optical system 26, from the subject 2 side toward the image sensor 28 side.
  • the pupil splitting filter 16 has spectral filters 20A to 20C.
  • Each of the spectral filters 20A to 20C is a bandpass filter that transmits light in a specific wavelength range.
  • the spectral filters 20A to 20C have different wavelength ranges. Specifically, the spectral filter 20A has a first wavelength range λ1, the spectral filter 20B has a second wavelength range λ2, and the spectral filter 20C has a third wavelength range λ3.
  • each of the spectral filters 20A to 20C will be referred to as a "spectral filter 20.”
  • the spectral filter 20 is an example of a “spectral filter” according to the technology of the present disclosure.
  • hereinafter, each of the first wavelength range λ1, the second wavelength range λ2, and the third wavelength range λ3 is referred to as a "wavelength range λ" when it is not necessary to distinguish between them.
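Since each spectral filter 20 passes only its own wavelength range λ, the sensor value for a band is, in effect, the reflected spectrum accumulated over that band. The toy calculation below illustrates this idea; the band edges, the spectra, and the 10 nm sampling are all made-up assumptions, not the device's actual signal model.

```python
import numpy as np

wavelengths = np.arange(400, 1001, 10)  # nm, visible through near-infrared, 10 nm steps

# Hypothetical flat illuminant and a subject whose reflectance rises toward the NIR.
illuminant = np.ones_like(wavelengths, dtype=float)
reflectance = np.clip((wavelengths - 400) / 600.0, 0.1, 0.9)

def band_signal(lo: float, hi: float) -> float:
    """Ideal bandpass filter: accumulate illuminant * reflectance over [lo, hi] nm."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    return float(np.sum((illuminant * reflectance)[mask]) * 10.0)  # x 10 nm spacing

# Three hypothetical bands standing in for the wavelength ranges λ1, λ2, λ3.
s1 = band_signal(450, 500)
s2 = band_signal(550, 600)
s3 = band_signal(800, 850)
print(s1 < s2 < s3)  # reflectance rises with wavelength, so the band signals increase
```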
  • spectral images 72A to 72C corresponding to each wavelength range λ are generated based on a captured image (not shown) obtained by imaging the subject 2.
  • the spectral image 72A is a spectral image corresponding to the first wavelength range λ1, the spectral image 72B is a spectral image corresponding to the second wavelength range λ2, and the spectral image 72C is a spectral image corresponding to the third wavelength range λ3.
  • each of the spectral images 72A to 72C will be referred to as a "spectral image 72.”
  • the imaging device 10 has a zoom function.
  • the angle of view of the imaging device 10 is adjusted by the zoom function.
  • the angle of view of the imaging device 10 is set so that the imaging range of the imaging device 10 is filled with the subject 2 and the calibration member 116.
  • the imaging condition when the subject 2 is imaged by the imaging device 10 is set to a specific condition under which the main component of the light incident on the imaging device 10 is light that is emitted from the light source 112 and reflected by the subject 2 and the calibration member 116.
  • the main component is an example of a "first component" according to the technology of the present disclosure.
  • the specific condition is an example of a "first condition" according to the technology of the present disclosure.
  • the processing device 90 is communicably connected to the imaging device 10.
  • the processing device 90 is, for example, an information processing device such as a personal computer or a server.
  • the processing device 90 includes a display device 108.
  • the display device 108 is, for example, a liquid crystal display or an EL display.
  • the processing device 90 generates a multispectral image 74 based on the plurality of spectral images 72 and displays the generated multispectral image 74 on the display device 108. Furthermore, as will be described in detail later, the processing device 90 measures the color of the subject 2 based on the plurality of spectral images 72, and displays the color measurement result 136, which is the measurement result, on the display device 108.
  • FIG. 2 shows a graph showing an example of the spectral reflectance of the white reference plate 122, the calibration member 116, and the subject 2.
  • the white reference plate 122 has a reflective surface 122A that reflects light.
  • the white reference plate 122 is a white plate material, and the reflective surface 122A is a white surface.
  • the reflective surface 122A is formed in a uniform white color.
  • the background surface 116A is formed of a uniform color.
  • Graph G1 is a graph showing the first spectral reflectance a, which is the spectral reflectance of light reflected by the subject 2.
  • Graph G2 is a graph showing the second spectral reflectance b, which is the spectral reflectance of light reflected by the background surface 116A of the calibration member 116.
  • Graph G3 is a graph showing the third spectral reflectance c, which is the spectral reflectance of the light reflected by the reflective surface 122A of the white reference plate 122.
  • the first spectral reflectance a is an example of a "first spectral reflectance” according to the technology of the present disclosure.
  • the second spectral reflectance b is an example of a "second spectral reflectance” according to the technology of the present disclosure.
  • the third spectral reflectance c is an example of the "third spectral reflectance” according to the technology of the present disclosure.
  • the first spectral reflectance a may be a spectral reflectance based on spectral reflectances measured at multiple locations on the subject 2. In that case, the accuracy of the first spectral reflectance a is improved compared to the case where it is a spectral reflectance measured at a single location on the subject 2.
  • the first spectral reflectance a may be an average value of spectral reflectances measured at a plurality of locations on the subject 2, or may be a representative value (for example, a maximum value, a central value, or a minimum value). Further, the number of the plurality of locations may be any number.
  • the positions of the plurality of places may be any position on the subject 2.
  • the subject 2 may be a plurality of subjects. Each location may be a location on each subject 2.
  • the first spectral reflectance a may be a spectral reflectance measured at one location on the subject 2.
  • the second spectral reflectance b may be a spectral reflectance based on spectral reflectances measured at multiple locations on the background surface 116A.
  • the third spectral reflectance c may be a spectral reflectance based on spectral reflectances measured at multiple locations on the reflective surface 122A.
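The averaging and representative-value options described in the bullets above can be sketched in a few lines. The measurement data and the helper name `aggregate` below are made up for illustration; the only point is how per-location spectral reflectances are combined band by band into one value such as the first spectral reflectance a:

```python
from statistics import mean, median

# Hypothetical measurements: each row is one location on the subject,
# each column is one wavelength band (e.g. bands for ranges λ1, λ2, λ3).
measurements = [
    [0.10, 0.12, 0.45],  # location 1
    [0.08, 0.14, 0.50],  # location 2
    [0.12, 0.10, 0.40],  # location 3
]

def aggregate(rows, how="mean"):
    """Combine per-location reflectances band by band into one spectrum."""
    cols = list(zip(*rows))  # group values belonging to the same band
    if how == "mean":        # average value
        return [mean(c) for c in cols]
    if how == "median":      # a representative (central) value
        return [median(c) for c in cols]
    if how == "max":         # a representative (maximum) value
        return [max(c) for c in cols]
    raise ValueError(how)

a_mean = aggregate(measurements, "mean")
a_max = aggregate(measurements, "max")
```

The same aggregation applies unchanged to the second spectral reflectance b (background surface 116A) and the third spectral reflectance c (reflective surface 122A).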
  • FIG. 2 shows an example of the relationship between the first spectral reflectance a and the second spectral reflectance b for the first wavelength range λ1 and the second wavelength range λ2.
  • the first spectral reflectance a and the second spectral reflectance b in the first wavelength range λ1 may each be an average value or a representative value (for example, a maximum value, a central value, or a minimum value).
  • likewise, the first spectral reflectance a and the second spectral reflectance b in the second wavelength range λ2 may each be an average value or a representative value (for example, a maximum value, a central value, or a minimum value).
  • the second spectral reflectance b corresponds to the first spectral reflectance a.
  • the first spectral reflectance a and the second spectral reflectance b have a relationship with each other in the first wavelength range λ1 and the second wavelength range λ2.
  • the first spectral reflectance a and the second spectral reflectance b may instead have a relationship with each other in the second wavelength range λ2 and the third wavelength range λ3 (see FIG. 1).
  • hereinafter, an example will be described in which the first spectral reflectance a and the second spectral reflectance b have a relationship with each other in the first wavelength range λ1 and the second wavelength range λ2.
  • in the first wavelength range λ1 and the second wavelength range λ2, the difference between the first spectral reflectance a and the second spectral reflectance b is within a first range (not shown).
  • the first range is set, for example, based on the third spectral reflectance c. Specifically, the first range is set as a range of spectral reflectance lower than the third spectral reflectance c.
  • the first range is, for example, a range from a reflectance 0.5 times the first spectral reflectance a to a reflectance twice the first spectral reflectance a. In other words, a reflectance that is 0.5 times the first spectral reflectance a is the lower limit of the first range, and a reflectance that is twice the first spectral reflectance a is the upper limit of the first range.
  • if the lower limit of the first range is set to a reflectance below 0.5 times the first spectral reflectance a, the first spectral reflectance a and the second spectral reflectance b become unrelated to each other.
  • likewise, if the upper limit of the first range is set to a reflectance that exceeds twice the first spectral reflectance a, the first spectral reflectance a and the second spectral reflectance b become unrelated to each other.
  • the first spectral reflectance a and the second spectral reflectance b have a difference.
  • the second spectral reflectance b may be higher than the first spectral reflectance a, or may be lower than the first spectral reflectance a.
  • the difference between the first spectral reflectance a and the second spectral reflectance b is smaller than the difference between the first spectral reflectance a and the third spectral reflectance c.
  • the first spectral reflectance a and the second spectral reflectance b are lower than the third spectral reflectance c.
  • the second spectral reflectance b falls within a second range that is set based on the first spectral reflectance a and the third spectral reflectance c.
  • the second range is a range defined by equation (1).
  • the second range defined by equation (1) is shown as second range R1. a/2 ≤ b ≤ (c+a)/2 … (1)
  • the lower limit of the second range R1 is defined by half the first spectral reflectance a, and the upper limit of the second range R1 is defined by half the sum of the third spectral reflectance c and the first spectral reflectance a.
  • the degree of influence of the second spectral reflectance b on the color measurement error can be reduced to 1/2.
  • for example, when the third spectral reflectance c is 1 and the first spectral reflectance a is 0.1, the lower limit of the second range R1 is 0.05 and the upper limit of the second range R1 is 0.55.
  • the second range is more preferably a range defined by formula (2).
  • the second range defined by equation (2) is shown as second range R2. 3a/4 ≤ b ≤ (c+3a)/4 … (2)
  • the lower limit of the second range R2 is defined by 3/4 of the first spectral reflectance a, and the upper limit of the second range R2 is defined by 1/4 of the sum of the third spectral reflectance c and three times the first spectral reflectance a.
  • the degree of influence of the second spectral reflectance b on color measurement errors can be reduced to 1/4.
  • for example, when the third spectral reflectance c is 1 and the first spectral reflectance a is 0.1, the lower limit of the second range R2 is 0.075 and the upper limit of the second range R2 is 0.325.
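The second ranges R1 and R2 of equations (1) and (2) can be checked numerically. The sketch below reproduces the worked example in the text (c = 1, a = 0.1); the function names are illustrative:

```python
def in_second_range_r1(a, b, c):
    """Equation (1): a/2 <= b <= (c + a)/2."""
    return a / 2 <= b <= (c + a) / 2

def in_second_range_r2(a, b, c):
    """Equation (2): 3a/4 <= b <= (c + 3a)/4 (the narrower, preferred range)."""
    return 3 * a / 4 <= b <= (c + 3 * a) / 4

# Numeric example from the text: third spectral reflectance c = 1,
# first spectral reflectance a = 0.1.
a, c = 0.1, 1.0
r1 = (a / 2, (c + a) / 2)          # bounds of R1
r2 = (3 * a / 4, (c + 3 * a) / 4)  # bounds of R2
```

Note that R2 is contained in R1, so a background color satisfying equation (2) automatically satisfies equation (1).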
  • FIGS. 3 and 4 show graphs schematically showing an example of the second spectral reflectance b.
  • the second spectral reflectance b may be higher in a first near-infrared region of the near-infrared region than in a first visible region of the visible region.
  • the first visible range may be the entire wavelength range of the visible range, or may be a part of the wavelength range.
  • the first near-infrared region may be the entire wavelength region of the near-infrared region, or may be a part of the wavelength region.
  • the visible region and the near-infrared region may be continuous, may be separated, or may partially overlap.
  • in one example, the second spectral reflectance b increases linearly (that is, continuously) as the wavelength becomes longer from a specific wavelength, and has a peak in the first near-infrared region.
  • in another example, the second spectral reflectance b increases nonlinearly (that is, discontinuously) at a specific wavelength and has a peak in the first near-infrared region.
  • when the second spectral reflectance b has a peak in the first near-infrared region, it is suited to a subject 2, such as a plant, that has a higher spectral reflectance in the near-infrared region than in the visible region. That is, for example, the accuracy of measuring the color of such a subject 2 is improved compared to a case where the second spectral reflectance b is the same in the first visible region and the first near-infrared region.
  • FIG. 5 shows an example of the flow of a method for setting the color of the background surface 116A (hereinafter referred to as "color setting method").
  • the imaging device 124 used to image the subject 2 may be a multispectral camera or an RGB camera.
  • the member 128 placed on the back surface of the subject 2 may be of any type.
  • the first spectral reflectance a is measured based on the captured image 126.
  • the first spectral reflectance a may be a spectral reflectance based on spectral reflectances measured at multiple locations on the subject 2. Further, the first spectral reflectance a may be an average value of spectral reflectances measured at a plurality of locations on the subject 2, or may be a representative value (for example, a maximum value, a central value, or a minimum value).
  • a second spectral reflectance b corresponding to the first spectral reflectance a is determined based on the first spectral reflectance a. Then, the color having the second spectral reflectance b is set as the color of the background surface 116A.
  • FIG. 6 shows an example of the color measurement method according to this embodiment.
  • the above-described color measuring device 110 is used in the color measuring method according to this embodiment.
  • the color of the subject 2 is measured in the following manner. First, as shown in aspect D, with the calibration member 116 disposed on the bottom surface 114A of the housing 114, light L1 is irradiated from the light source 112 onto the background surface 116A, and the incident light La including the light L2 reflected from the background surface 116A is imaged by the imaging device 10. As a result, a reference image 130 is obtained. Subsequently, as shown in aspect E, the subject 2 is placed on the background surface 116A, light L1 is irradiated from the light source 112 onto the subject 2, and the incident light La including the light L3 reflected by the subject 2 is imaged by the imaging device 10. As a result, a subject image 132 is obtained.
  • a calibration image 134 is generated by calibrating the subject image 132 based on the reference image 130, and the color of the subject 2 is measured based on the calibration image 134. As a result, a color measurement result 136, which is the result of measuring the color of the subject 2, is obtained.
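The text states that the subject image 132 is calibrated using the reference image 130 to obtain the calibration image 134, but does not spell out the arithmetic. A common flat-field-style approach, shown here only as an assumed sketch, is a per-pixel ratio of the subject image to the reference image, which cancels the shared illumination and background contribution; all names are illustrative:

```python
def calibrate(subject_image, reference_image, eps=1e-9):
    """Per-pixel ratio of the subject image to the reference image.

    `eps` guards against division by zero in dark reference pixels.
    Images are given as nested lists (rows of pixel values for one band).
    """
    return [
        [s / (r + eps) for s, r in zip(s_row, r_row)]
        for s_row, r_row in zip(subject_image, reference_image)
    ]

reference = [[0.5, 0.5], [0.5, 0.5]]   # reference image 130 (one band)
subject   = [[0.25, 0.1], [0.5, 0.4]]  # subject image 132 (same band)
calibrated = calibrate(subject, reference)  # calibration image 134
```

In a multispectral setting this would be applied band by band to each spectral image 72; the reference capture with the calibration member alone is what makes the ratio meaningful.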
  • FIG. 7 shows a specific example of the incident light La that enters the imaging device 10 in the color measurement method according to the present embodiment.
  • when the reference image 130 is obtained, the incident light La includes light L4 that is irradiated from the light source 112 onto the background surface 116A, reflected by the background surface 116A and the top surface 114C, and then reflected again by the background surface 116A.
  • similarly, when the subject image 132 is obtained, the incident light La includes light L5 that is irradiated from the light source 112 onto the subject 2, reflected by the subject 2 and the top surface 114C, and then reflected again by the subject 2.
  • the second spectral reflectance b, which is the spectral reflectance of the background surface 116A, corresponds to the first spectral reflectance a, which is the spectral reflectance of the subject 2. Therefore, the wavelength range of the incident light La is suppressed from changing between the case where the subject 2 is not present and the case where the subject 2 is present. As a result, it is possible to avoid measuring the color of the subject 2 darker than the actual color of the subject 2, as occurs in the color measurement method according to the second comparative example (see aspect N in FIG. 20).
  • when the subject image 132 is obtained, the incident light La includes light L6 that is irradiated onto the subject 2 from the light source 112 and reflected by the subject 2.
  • the incident light La also includes light L7 (hereinafter referred to as "peripheral reflected light L7") that is irradiated from the light source 112 and reflected on the area of the background surface 116A around the subject 2.
  • the second spectral reflectance b, which is the spectral reflectance of the background surface 116A, corresponds to the first spectral reflectance a, which is the spectral reflectance of the subject 2. Therefore, the wavelength range of the peripheral reflected light L7 is suppressed from changing between the case where the subject 2 is not present and the case where the subject 2 is present.
  • as a result, it is possible to avoid the color of the subject 2 being measured lighter than the actual color of the subject 2, and to avoid the amount of flare changing.
  • the imaging device 10 includes a lens device 12 and an imaging device body 14.
  • the lens device 12 includes the pupil splitting filter 16 described above.
  • the imaging device 10 is a multispectral camera that generates and outputs a plurality of spectral images 72A to 72C by capturing light that has been split into a plurality of wavelength ranges λ by the pupil splitting filter 16.
  • the pupil division filter 16 includes a frame 18, spectral filters 20A to 20C, and polarization filters 22A to 22C.
  • the frame 18 has openings 24A to 24C.
  • the openings 24A to 24C are formed in line around the optical axis OA.
  • each of the openings 24A to 24C will be referred to as an "opening 24.”
  • the spectral filters 20A to 20C are provided in the apertures 24A to 24C, respectively, so that they are arranged side by side around the optical axis OA.
  • the polarizing filters 22A to 22C are provided corresponding to the spectral filters 20A to 20C, respectively. Specifically, the polarizing filter 22A is provided in the aperture 24A, and is overlapped with the spectral filter 20A. The polarizing filter 22B is provided in the aperture 24B and overlapped with the spectral filter 20B. The polarizing filter 22C is provided in the aperture 24C and is overlapped with the spectral filter 20C.
  • Each of the polarizing filters 22A to 22C is an optical filter that transmits light vibrating in a specific direction.
  • the polarizing filters 22A to 22C have polarization axes with different polarization angles.
  • the polarizing filter 22A has a first polarization angle θ1, the polarizing filter 22B has a second polarization angle θ2, and the polarizing filter 22C has a third polarization angle θ3.
  • the polarization axis may also be referred to as the transmission axis.
  • the first polarization angle θ1 is set to 0°, the second polarization angle θ2 is set to 45°, and the third polarization angle θ3 is set to 90°.
  • each of the polarizing filters 22A to 22C will be referred to as a "polarizing filter 22." Furthermore, if it is not necessary to separately explain the first polarization angle θ1, the second polarization angle θ2, and the third polarization angle θ3, each of them is referred to as a "polarization angle θ".
  • in the example shown in FIG. 9, the number of apertures 24 is three, corresponding to the number of wavelength ranges λ, but the number of apertures 24 may be equal to or greater than the number of wavelength ranges λ (that is, the number of spectral filters 20). Furthermore, unused apertures 24 among the plurality of apertures 24 may be covered by a shielding member (not shown). Further, in the example shown in FIG. 9, the plurality of spectral filters 20 have mutually different wavelength ranges λ, but the plurality of spectral filters 20 may include spectral filters 20 having the same wavelength range λ.
  • the lens device 12 includes an optical system 26, and the imaging device body 14 includes an image sensor 28.
  • the optical system 26 includes a pupil splitting filter 16, a first lens 30, and a second lens 32.
  • the first lens 30 causes the light emitted from the light source 112 and reflected by the subject 2 to enter the pupil division filter 16 .
  • the second lens 32 forms an image of the light that has passed through the pupil splitting filter 16 onto a light receiving surface 34A of a photoelectric conversion element 34 provided in the image sensor 28.
  • the pupil splitting filter 16 is placed at the pupil position of the optical system 26.
  • the pupil position refers to the aperture surface that limits the brightness of the optical system 26.
  • the pupil position here includes a nearby position, and the nearby position refers to the range from the entrance pupil to the exit pupil.
  • the configuration of the pupil division filter 16 is as described using FIG. 9. In FIG. 10, for convenience, a plurality of spectral filters 20 and a plurality of polarizing filters 22 are shown arranged in a straight line along a direction perpendicular to the optical axis OA.
  • the image sensor 28 includes a photoelectric conversion element 34 and a signal processing circuit 36.
  • the image sensor 28 is, for example, a CMOS image sensor.
  • a CMOS image sensor is exemplified as the image sensor 28, but the technology of the present disclosure is not limited to this.
  • the image sensor 28 may be another type of image sensor, such as a CCD image sensor; the technology of the present disclosure is realized in that case as well.
  • FIG. 10 shows a schematic configuration of the photoelectric conversion element 34.
  • FIG. 11 specifically shows the configuration of a part of the photoelectric conversion element 34.
  • the photoelectric conversion element 34 includes a pixel layer 38, a polarizing filter layer 40, and a spectral filter layer 42. Note that the configuration of the photoelectric conversion element 34 shown in FIG. 11 is an example, and the technology of the present disclosure is applicable even if the photoelectric conversion element 34 does not include the spectral filter layer 42.
  • the pixel layer 38 has a plurality of pixels 44.
  • the plurality of pixels 44 are arranged in a matrix and form a light receiving surface 34A of the photoelectric conversion element 34.
  • Each pixel 44 is a physical pixel having a photodiode (not shown), photoelectrically converts the received light, and outputs an electrical signal according to the amount of received light.
  • the pixel 44 provided in the photoelectric conversion element 34 will be referred to as a "physical pixel 44" in order to distinguish it from the pixel forming a spectral image. Furthermore, the pixels forming the spectral image 72 are referred to as "image pixels.”
  • the photoelectric conversion element 34 outputs the electrical signals output from the plurality of physical pixels 44 to the signal processing circuit 36 as image data.
  • the signal processing circuit 36 digitizes the analog imaging data input from the photoelectric conversion element 34.
  • the image data is image data indicating a captured image 70.
  • a plurality of physical pixels 44 form a plurality of pixel blocks 46.
  • Each pixel block 46 is formed by a total of four physical pixels 44, two in the vertical direction and two in the horizontal direction.
  • in FIG. 10, for convenience, the four physical pixels 44 forming each pixel block 46 are shown arranged in a straight line along the direction perpendicular to the optical axis OA, but the four physical pixels 44 are actually arranged adjacent to each other in the vertical and horizontal directions in the photoelectric conversion element 34 (see FIG. 11).
  • the polarizing filter layer 40 has multiple types of polarizers 48A to 48D.
  • Each polarizer 48A to 48D is an optical filter that transmits light vibrating in a specific direction.
  • the polarizers 48A to 48D have polarization axes with different polarization angles. Specifically, the polarizer 48A has a first polarization angle θ1, the polarizer 48B has a second polarization angle θ2, the polarizer 48C has a third polarization angle θ3, and the polarizer 48D has a fourth polarization angle θ4.
  • the first polarization angle θ1 is set to 0°, the second polarization angle θ2 to 45°, the third polarization angle θ3 to 90°, and the fourth polarization angle θ4 to 135°.
  • each of the polarizers 48A to 48D will be referred to as a "polarizer 48.”
  • the polarizer 48 is an example of a "polarizer” according to the technology of the present disclosure.
  • the first polarization angle θ1, the second polarization angle θ2, the third polarization angle θ3, and the fourth polarization angle θ4 are each referred to as a "polarization angle θ".
  • the spectral filter layer 42 includes a B filter 50A, a G filter 50B, and an R filter 50C.
  • the B filter 50A is a blue band filter that transmits most of the light in the blue wavelength band among the light in the plurality of wavelength bands.
  • the G filter 50B is a green band filter that transmits most of the light in the green wavelength range among the light in the plurality of wavelength ranges.
  • the R filter 50C is a red band filter that transmits most of the light in the red wavelength band among the plurality of wavelength bands.
  • a B filter 50A, a G filter 50B, and an R filter 50C are assigned to each pixel block 46.
  • in FIG. 10, for convenience, the B filter 50A, the G filter 50B, and the R filter 50C are shown arranged in a straight line along the direction orthogonal to the optical axis OA, but as an example they are arranged in a matrix in a predetermined pattern arrangement.
  • as an example of the predetermined pattern arrangement, the B filter 50A, the G filter 50B, and the R filter 50C are arranged in a matrix in a Bayer arrangement.
  • the predetermined pattern arrangement may be an RGB stripe arrangement, an R/G checkered arrangement, an X-Trans (registered trademark) arrangement, a honeycomb arrangement, or the like other than the Bayer arrangement.
  • filters 50 the B filter 50A, the G filter 50B, and the R filter 50C will be referred to as "filters 50", respectively.
  • the imaging device body 14 includes a control driver 52, an input/output I/F 54, a computer 56, and a communication device 58 in addition to the image sensor 28.
  • a signal processing circuit 36, a control driver 52, a computer 56, and a communication device 58 are connected to the input/output I/F 54.
  • the computer 56 has a processor 60, a storage 62, and a RAM 64.
  • the processor 60 controls the entire imaging device 10 .
  • the processor 60 is, for example, an arithmetic processing device including a CPU and a GPU, and the GPU operates under the control of the CPU and is responsible for executing processing regarding images.
  • an arithmetic processing unit including a CPU and a GPU is cited as an example of the processor 60, but this is just an example, and the processor 60 may be one or more CPUs with integrated GPU functions, or one or more CPUs without integrated GPU functions.
  • the processor 60, storage 62, and RAM 64 are connected via a bus 66, and the bus 66 is connected to the input/output I/F 54.
  • the storage 62 is a non-temporary storage medium and stores various parameters and programs.
  • an example of the storage 62 is a flash memory (e.g., EEPROM), but the storage 62 may be another non-volatile storage device such as an HDD (hard disk drive).
  • the RAM 64 temporarily stores various information and is used as a work memory. Examples of the RAM 64 include DRAM and/or SRAM.
  • the processor 60 reads a necessary program from the storage 62 and executes the read program on the RAM 64.
  • Processor 60 controls control driver 52 and signal processing circuit 36 according to a program executed on RAM 64.
  • the control driver 52 controls the photoelectric conversion element 34 under the control of the processor 60.
  • the communication device 58 is connected to the processor 60 via the input/output I/F 54 and the bus 66. Further, the communication device 58 is communicably connected to the processing device 90 by wire or wirelessly. The communication device 58 is in charge of exchanging information with the processing device 90 . For example, the communication device 58 transmits data in response to a request from the processor 60 to the processing device 90. The communication device 58 also receives data transmitted from the processing device 90 and outputs the received data to the processor 60 via the bus 66.
  • a spectral image generation program 80 is stored in the storage 62.
  • the processor 60 reads the spectral image generation program 80 from the storage 62 and executes the read spectral image generation program 80 on the RAM 64.
  • the processor 60 executes a spectral image generation process to generate a plurality of spectral images 72 according to a spectral image generation program 80 executed on the RAM 64 .
  • the spectral image generation process is realized by the processor 60 operating as the output value acquisition section 82 and the interference removal processing section 84 according to the spectral image generation program 80.
  • the output value acquisition unit 82 acquires the output value Y of each physical pixel 44 based on the imaging data.
  • the output value Y of each physical pixel 44 corresponds to the luminance value of each pixel included in the captured image 70 indicated by the captured image data.
  • the output value Y of each physical pixel 44 is a value that includes interference (that is, crosstalk). That is, since light in each of the first wavelength range λ1, the second wavelength range λ2, and the third wavelength range λ3 is incident on each physical pixel 44, the output value Y is a mixture of a value corresponding to the light amount in the first wavelength range λ1, a value corresponding to the light amount in the second wavelength range λ2, and a value corresponding to the light amount in the third wavelength range λ3.
  • in order to obtain the spectral images 72, the processor 60 needs to perform interference removal processing on the output value Y, that is, processing that separates and extracts a value corresponding to each wavelength range λ from the output value Y of each physical pixel 44. Therefore, in this embodiment, the interference removal processing section 84 performs interference removal processing on the output value Y of each physical pixel 44 acquired by the output value acquisition section 82.
  • the output value Y of each physical pixel 44 includes, as components of the output value Y, a brightness value for each polarization angle θ for each of red, green, and blue.
  • the output value Y of each physical pixel 44 is expressed by equation (3).
  • Yθ1_R, Yθ2_R, Yθ3_R, and Yθ4_R are the brightness values of the red components of the output value Y whose polarization angles are the first polarization angle θ1, the second polarization angle θ2, the third polarization angle θ3, and the fourth polarization angle θ4, respectively.
  • Yθ1_G, Yθ2_G, Yθ3_G, and Yθ4_G are the brightness values of the green components of the output value Y whose polarization angles are the first polarization angle θ1, the second polarization angle θ2, the third polarization angle θ3, and the fourth polarization angle θ4, respectively.
  • Yθ1_B, Yθ2_B, Yθ3_B, and Yθ4_B are the brightness values of the blue components of the output value Y whose polarization angles are the first polarization angle θ1, the second polarization angle θ2, the third polarization angle θ3, and the fourth polarization angle θ4, respectively.
  • the pixel value X of each image pixel forming the spectral image 72 includes, as components of the pixel value X, the brightness value Xλ1 of polarized light in the first wavelength range λ1 having the first polarization angle θ1 (hereinafter referred to as "first wavelength range polarized light"), the brightness value Xλ2 of polarized light in the second wavelength range λ2 having the second polarization angle θ2 (hereinafter referred to as "second wavelength range polarized light"), and the brightness value Xλ3 of polarized light in the third wavelength range λ3 having the third polarization angle θ3 (hereinafter referred to as "third wavelength range polarized light").
  • the pixel value X of each image pixel is expressed by equation (4).
  • A is an interference matrix.
  • the interference matrix A (not shown) is a matrix indicating characteristics of interference.
  • the interference matrix A is defined in advance based on a plurality of known values, such as the spectrum of the incident light, the spectral transmittance of the first lens 30, the spectral transmittance of the second lens 32, the spectral transmittances of the plurality of spectral filters 20, and the spectral sensitivity of the image sensor 28.
  • similarly, the interference removal matrix A+ is a matrix defined based on the spectrum of the incident light, the spectral transmittance of the first lens 30, the spectral transmittance of the second lens 32, the spectral transmittances of the plurality of spectral filters 20, the spectral sensitivity of the image sensor 28, and the like.
  • the interference removal matrix A+ is stored in advance in the storage 62.
  • the interference removal processing unit 84 acquires the interference removal matrix A+ stored in the storage 62 and the output value Y of each physical pixel 44 acquired by the output value acquisition unit 82, and outputs the pixel value X of each image pixel from equation (6) based on the acquired interference removal matrix A+ and the output value Y of each physical pixel 44.
  • the pixel value X of each image pixel includes, as components of the pixel value X, the brightness value Xλ1 of the first wavelength range polarized light, the brightness value Xλ2 of the second wavelength range polarized light, and the brightness value Xλ3 of the third wavelength range polarized light.
  • the spectral image 72A of the captured image 70 is an image corresponding to the brightness value Xλ1 of light in the first wavelength range λ1 (that is, an image based on the brightness value Xλ1).
  • the spectral image 72B of the captured image 70 is an image corresponding to the brightness value Xλ2 of light in the second wavelength range λ2 (that is, an image based on the brightness value Xλ2).
  • the spectral image 72C of the captured image 70 is an image corresponding to the brightness value Xλ3 of light in the third wavelength range λ3 (that is, an image based on the brightness value Xλ3).
  • the captured image 70 is separated into the spectral image 72A corresponding to the brightness value Xλ1 of the first wavelength range polarized light, the spectral image 72B corresponding to the brightness value Xλ2 of the second wavelength range polarized light, and the spectral image 72C corresponding to the brightness value Xλ3 of the third wavelength range polarized light. That is, the captured image 70 is separated into spectral images 72 for each wavelength range λ of the plurality of spectral filters 20.
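The interference removal described in the items above can be sketched as follows. This is an illustrative example only, not part of the publication: the matrix values, array shapes, and NumPy usage are assumptions, and the Moore-Penrose pseudo-inverse is one common way to obtain an interference-removal matrix A+ from an interference matrix A.

```python
import numpy as np

# Hypothetical interference matrix A: maps the three per-band brightness
# values X = (Xλ1, Xλ2, Xλ3) of one image pixel to the twelve polarized
# R/G/B output components Y of the corresponding physical pixels.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(12, 3))  # 12 output components, 3 wavelength ranges

# An interference-removal matrix A+ can be taken as the Moore-Penrose
# pseudo-inverse of A; in the publication A+ is precomputed and stored.
A_plus = np.linalg.pinv(A)

# Simulate the output value Y for a known pixel value X, then remove interference.
X_true = np.array([0.8, 0.5, 0.2])
Y = A @ X_true            # the output value mixes all band components
X_rec = A_plus @ Y        # interference removal: X = A+ Y

assert np.allclose(X_rec, X_true)
```

Because Y lies in the column space of A, the pseudo-inverse recovers the band values exactly; with measurement noise it instead yields the least-squares estimate.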
  • the processing device 90 includes a computer 92.
  • Computer 92 includes a processor 94, storage 96, and RAM 98.
  • the processor 94, storage 96, and RAM 98 are realized by the same hardware as the above-described processor 60, storage 62, and RAM 64 (see FIG. 10).
  • a color measurement program 100 is stored in the storage 96.
  • the processor 94 reads the color measurement program 100 from the storage 96 and executes the read color measurement program 100 on the RAM 98.
  • Processor 94 executes color measurement processing to obtain color measurement results 136 according to color measurement program 100 executed on RAM 98 .
  • the color measurement process is realized by the processor 94 operating as an image data acquisition unit 102, a calibration image generation unit 104, and a color derivation unit 106 according to the color measurement program 100.
  • the processing device 90 is an example of a "calibration device” and a "color measurement device” according to the technology of the present disclosure.
  • the color measurement program 100 is an example of a "program" according to the technology of the present disclosure.
  • the imaging device 10 transmits reference image data indicating the reference image 130 to the processing device 90. Furthermore, when the subject image 132 is obtained, the imaging device 10 transmits subject image data indicating the subject image 132 to the processing device 90.
  • the reference image data is an example of "first imaging data” according to the technology of the present disclosure.
  • the subject image data is an example of "second imaging data” according to the technology of the present disclosure.
  • the image data acquisition unit 102 acquires the reference image data received by the processing device 90.
  • the image data acquisition unit 102 then acquires the reference image 130 based on the reference image data.
  • the reference image 130 includes a spectrum image 72A in the first wavelength range λ1 (hereinafter referred to as the "reference spectrum image 72A") and a spectrum image 72B in the second wavelength range λ2 (hereinafter referred to as the "reference spectrum image 72B").
  • hereinafter, the reference spectrum image 72A and the reference spectrum image 72B are collectively referred to as the "reference spectrum image 72."
  • the image data acquisition unit 102 acquires the subject image data received by the processing device 90.
  • the image data acquisition unit 102 then acquires a subject image 132 based on the subject image data.
  • the subject image 132 includes a spectrum image 72A in the first wavelength range λ1 (hereinafter referred to as the "subject spectrum image 72A") and a spectrum image 72B in the second wavelength range λ2 (hereinafter referred to as the "subject spectrum image 72B").
  • hereinafter, the subject spectrum image 72A and the subject spectrum image 72B are collectively referred to as the "subject spectrum image 72."
  • the calibration image generation unit 104 generates a calibration image 134 by calibrating the subject image 132 based on the reference image 130. Specifically, for the first wavelength range λ1, the calibration image generation unit 104 divides the pixel values of the subject spectrum image 72A by the pixel values of the reference spectrum image 72A, thereby generating a calibration spectral image 134A, which is a calibrated spectrum image. Similarly, for the second wavelength range λ2, the calibration image generation unit 104 divides the pixel values of the subject spectrum image 72B by the pixel values of the reference spectrum image 72B, thereby generating a calibration spectral image 134B. In this way, the processing device 90 calibrates the subject image data based on the reference image data.
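The per-band division performed by the calibration image generation unit 104 can be sketched as follows. This is illustrative only; the array shapes, band names, pixel values, and the zero-guarding of the division are assumptions, not taken from the publication.

```python
import numpy as np

# reference: spectral images of the calibration member alone (reference image 130).
# subject:   spectral images captured with the subject present (subject image 132).
reference = {"lambda1": np.full((4, 4), 200.0), "lambda2": np.full((4, 4), 150.0)}
subject   = {"lambda1": np.full((4, 4), 100.0), "lambda2": np.full((4, 4), 120.0)}

# Per-band calibration: divide subject pixel values by reference pixel values
# (zero-valued reference pixels are guarded against and map to 0).
calibrated = {
    band: np.divide(subject[band], reference[band],
                    out=np.zeros_like(subject[band]),
                    where=reference[band] != 0)
    for band in reference
}

assert np.allclose(calibrated["lambda1"], 0.5)   # 100 / 200
assert np.allclose(calibrated["lambda2"], 0.8)   # 120 / 150
```

The division normalizes out the illumination and the optical path, which is why the same calibration member must appear in both captures.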
  • the calibration image generation unit 104 may calibrate only a part of the image area of the subject spectrum image 72. For example, when the processing device 90 receives an instruction from a user specifying an image area in which color is to be measured (hereinafter referred to as a "user instruction"), the calibration image generation unit 104 may perform calibration on the image area of the subject spectrum image 72 corresponding to the user instruction.
  • the calibration image generation unit 104 may also perform calibration on the subject image region of the subject spectrum image 72 (the image region in which the subject 2 appears). Further, the calibration image generation unit 104 may perform calibration on a part of that subject image region.
  • the image regions of the subject spectrum image 72 that have been calibrated by the calibration image generation unit 104 correspond to the calibration spectral image 134A and the calibration spectral image 134B.
  • when the user instruction is accepted by the imaging device 10, the imaging device 10 may transmit, to the processing device 90, reference image data indicating the image area of the reference image 130 corresponding to the user instruction and subject image data indicating the image area of the subject image 132 corresponding to the user instruction.
  • Xλ1 is the pixel value X of each image pixel forming the calibration spectrum image 134A in the first wavelength range λ1.
  • Xλ2 is the pixel value X of each image pixel forming the calibration spectrum image 134B in the second wavelength range λ2.
  • a color measurement result 136 is generated based on the color derived by the color derivation unit 106.
  • the color measurement result 136 generated by the color measurement process described above may be displayed on the display device 108 (see FIG. 1) of the processing device 90. Further, measurement data indicating the color measurement result 136 may be transmitted to an external device (not shown) communicably connected to the processing device 90, and the color measurement result 136 may be used by the external device. Further, the color measurement results 136 may include an image indicating the measured color, or may include numerical values and/or graphs indicating the measured color.
  • FIG. 16 shows an example of the flow of spectral image generation processing according to this embodiment.
  • in step ST10, the output value acquisition unit 82 acquires the output value Y of each physical pixel 44 based on the imaging data output from the image sensor 28 (see the drawings). After the process of step ST10 is executed, the spectral image generation process moves to step ST12.
  • in step ST12, the interference removal processing unit 84 acquires the interference removal matrix A+ stored in the storage 62 and the output value Y of each physical pixel 44 acquired in step ST10, and outputs the pixel value X of each image pixel based on the interference removal matrix A+ and the output value Y of each physical pixel 44 (see FIG. 10).
  • as a result, the captured image 70 is separated into the spectrum image 72A corresponding to the brightness value Xλ1 of the first wavelength range polarized light, the spectrum image 72B corresponding to the brightness value Xλ2 of the second wavelength range polarized light, and the spectrum image 72C corresponding to the brightness value Xλ3 of the third wavelength range polarized light.
  • FIG. 17 shows an example of the flow of color measurement processing according to this embodiment.
  • step ST20 the image data acquisition unit 102 acquires reference image data and subject image data.
  • the image data acquisition unit 102 then acquires a reference image 130 indicated by the reference image data and a subject image 132 indicated by the subject image data.
  • after the process of step ST20 is executed, the color measurement process moves to step ST22.
  • in step ST22, the calibration image generation unit 104 calibrates the subject image 132 based on the reference image 130, using the reference image 130 and the subject image 132 acquired in step ST20. As a result, a calibration spectrum image 134A in the first wavelength range λ1 and a calibration spectrum image 134B in the second wavelength range λ2 are generated.
  • the color measurement process moves to step ST24.
  • in step ST24, the color deriving unit 106 derives the color of the subject 2 based on the calibration spectrum image 134A in the first wavelength range λ1 and the calibration spectrum image 134B in the second wavelength range λ2 generated in step ST22. As a result, a color measurement result 136 is obtained. After the process of step ST24 is executed, the color measurement process ends.
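Steps ST20 to ST24 can be summarized in a short sketch. All names and values are illustrative, and the color derivation in ST24 is deliberately simplified to a per-band mean, since the publication does not fix a specific color formula in this passage.

```python
import numpy as np

def measure_color(reference, subject):
    """Sketch of steps ST20-ST24; the per-band mean is a placeholder
    for whatever color derivation the color deriving unit applies."""
    # ST22: calibrate each band of the subject image against the reference image.
    calibrated = {band: subject[band] / reference[band] for band in reference}
    # ST24: derive a color measurement result from the calibrated bands.
    return {band: float(img.mean()) for band, img in calibrated.items()}

# ST20: the image data would normally be received from the imaging device;
# the arrays below stand in for reference image data and subject image data.
ref = {"lambda1": np.full((2, 2), 200.0), "lambda2": np.full((2, 2), 100.0)}
sub = {"lambda1": np.full((2, 2), 50.0), "lambda2": np.full((2, 2), 80.0)}
result = measure_color(ref, sub)

assert abs(result["lambda1"] - 0.25) < 1e-12
assert abs(result["lambda2"] - 0.80) < 1e-12
```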
  • Step ST20 is an example of a “first acquisition step” and a “second acquisition step” according to the technology of the present disclosure.
  • Step ST22 is an example of a “calibration step” according to the technology of the present disclosure.
  • Step ST24 is an example of a "color measurement step” according to the technology of the present disclosure.
  • the calibration member 116 is used when measuring the color of the subject 2 (see FIG. 1).
  • the calibration member 116 has a background surface 116A that constitutes the background of the subject 2.
  • in the specific wavelength range, the first spectral reflectance a, which is the spectral reflectance of the light reflected by the subject 2, and the second spectral reflectance b, which is the spectral reflectance of the light reflected by the background surface 116A, have a relationship with each other.
  • as a result, the wavelength range of the incident light is suppressed from changing between when the subject 2 is absent and when the subject 2 is present (see FIG. 7).
  • likewise, the wavelength range of the peripheral reflected light L7 is suppressed from changing between when the subject 2 is absent and when the subject 2 is present.
  • the accuracy of measuring the color of the subject 2 can be improved compared to, for example, when the background surface 116A is a white surface.
  • the color measurement process is executed by the processing device 90, but it may also be executed by the imaging device 10. Further, part of the color measurement process may be executed by the imaging device 10, and the remaining process of the color measurement process may be executed by the processing device 90.
  • the housing device 120 is used in such a direction that the imaging device 10 is placed at the top of the housing 114 and the calibration member 116 is placed at the bottom of the housing 114.
  • the orientation may be other than the above.
  • the housing device 120 may be used in an orientation in which the imaging device 10 is disposed at the bottom of the housing 114 and the calibration member 116 is disposed at the top of the housing 114.
  • the housing device 120 may also be used in an orientation in which the imaging device 10 and the calibration member 116 are horizontally opposed.
  • the subject 2 may be fixed to the background surface 116A.
  • a plurality of calibration members 116 having different spectral reflectances may be used for the housing 114.
  • Each calibration member 116 may be replaceable with respect to the housing 114 or may be switchable with respect to the housing 114.
  • a calibration member 116 having a second spectral reflectance b corresponding to the first spectral reflectance a may be selected from among the plurality of calibration members 116.
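One way to choose among such interchangeable calibration members is sketched below. This is illustrative only: the 0.5x-to-2x acceptance band follows the third aspect described later in this document, and the log-ratio tie-break among acceptable members is an assumption of this sketch, not the publication's method.

```python
import math

def select_member(a, candidates):
    """Pick, from the background-surface reflectances of available calibration
    members, the one closest to the subject's first spectral reflectance a
    within the 0.5x-2x band; return None if no member qualifies."""
    in_range = [b for b in candidates if 0.5 * a <= b <= 2.0 * a]
    if not in_range:
        return None  # no member satisfies the relationship for this subject
    # Tie-break by symmetric ratio distance (an illustrative choice).
    return min(in_range, key=lambda b: abs(math.log(b / a)))

# Subject with first spectral reflectance 0.40 in the relevant wavelength range;
# candidate members with the second spectral reflectances listed below.
assert select_member(0.40, [0.05, 0.35, 0.90, 0.15]) == 0.35
assert select_member(0.40, [0.05]) is None
```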
  • the processor 60 of the imaging device 10 may output warning information when the imaging conditions deviate from specific conditions.
  • the warning information may be information for notifying the user that the angle of view should be adjusted.
  • the notification may include at least one of sound notification, vibration notification, and light notification.
  • the processor 60 may determine whether the imaging conditions have deviated from the specific conditions based on a change in the captured image 70 obtained by the imaging device 10.
  • in the above embodiment, the processor 60 is illustrated as the processor of the imaging device 10, but instead of the processor 60, or together with the processor 60, at least one other CPU, at least one GPU, and/or at least one TPU may be used.
  • similarly, the processor 94 is illustrated as the processor of the processing device 90, but instead of the processor 94, or together with the processor 94, at least one other CPU, at least one GPU, and/or at least one TPU may be used.
  • the imaging device 10 has been described using an example in which the spectral image generation program 80 is stored in the storage 62, but the technology of the present disclosure is not limited to this.
  • the spectral image generation program 80 may be stored in a portable non-transitory computer-readable storage medium (hereinafter simply referred to as "non-transitory storage medium") such as an SSD or a USB memory.
  • the spectral image generation program 80 stored in a non-transitory storage medium may be installed on the computer 56 of the imaging device 10.
  • the spectral image generation program 80 may be stored in a storage device such as another computer or a server device connected to the imaging device 10 via a network, and the spectral image generation program 80 may be downloaded in response to a request from the imaging device 10 and installed in the computer 56 of the imaging device 10.
  • it is not necessary to store the entire spectral image generation program 80 in a storage device such as another computer or a server device connected to the imaging device 10, or in the storage 62; only a part of the spectral image generation program 80 may be stored.
  • the processing device 90 has been described using an example in which the color measurement program 100 is stored in the storage 96, but the technology of the present disclosure is not limited to this.
  • the color measurement program 100 may be stored in a non-transitory storage medium.
  • Color measurement program 100 stored on a non-transitory storage medium may be installed on computer 92 of processing device 90.
  • the color measurement program 100 may be stored in a storage device such as another computer or a server device connected to the processing device 90 via a network, and the color measurement program 100 may be downloaded in response to a request from the processing device 90 and installed in the computer 92 of the processing device 90.
  • it is not necessary to store the entire color measurement program 100 in a storage device such as another computer or a server device connected to the processing device 90, or in the storage 96; only a part of the color measurement program 100 may be stored.
  • although the imaging device 10 has a built-in computer 56, the technology of the present disclosure is not limited to this; for example, the computer 56 may be provided outside the imaging device 10.
  • similarly, although the processing device 90 has a built-in computer 92, the technology of the present disclosure is not limited to this; for example, the computer 92 may be provided outside the processing device 90.
  • in the above embodiment, the computer 56 including the processor 60, the storage 62, and the RAM 64 is illustrated as part of the imaging device 10, but the technology of the present disclosure is not limited to this; instead of the computer 56, a device including an ASIC, an FPGA, and/or a PLD may be applied. Further, instead of the computer 56, a combination of a hardware configuration and a software configuration may be used.
  • likewise, the computer 92 including the processor 94, the storage 96, and the RAM 98 is illustrated as part of the processing device 90, but the technology of the present disclosure is not limited to this; instead of the computer 92, a device including an ASIC, an FPGA, and/or a PLD may be applied. Further, instead of the computer 92, a combination of a hardware configuration and a software configuration may be used.
  • various processors can be used as hardware resources for executing the various processes described in the above embodiments.
  • examples of the processor include a CPU, which is a general-purpose processor that functions as a hardware resource executing the various processes by executing software, that is, a program.
  • examples of the processor also include a dedicated electronic circuit such as an FPGA, a PLD, or an ASIC, which is a processor having a circuit configuration designed specifically for executing a specific process.
  • each processor has a built-in memory or is connected to a memory, and each processor uses the memory to execute the various processes.
  • the hardware resource that executes the various processes may be configured with one of these various processors, or with a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs, or a combination of a CPU and an FPGA). Furthermore, the hardware resource that executes the various processes may be a single processor.
  • as an example of configuration with a single processor, there is a form in which one processor is configured by a combination of one or more CPUs and software, and this processor functions as a hardware resource that executes the various processes.
  • a and/or B has the same meaning as “at least one of A and B.” That is, “A and/or B” means that it may be only A, only B, or a combination of A and B. Furthermore, in this specification, even when three or more items are expressed by connecting them with “and/or”, the same concept as “A and/or B" is applied.
  • a color measurement device comprising a processor, wherein the processor: acquires first imaging data obtained by imaging the calibration member with the spectroscopic imaging device; acquires second imaging data obtained by imaging the calibration member and the subject with the spectroscopic imaging device; calibrates the second imaging data based on the first imaging data; and measures the color of the subject based on the calibrated second imaging data.
  • a color measurement method comprising: a first acquisition step of acquiring first imaging data obtained by imaging the calibration member according to claim 1 with the spectroscopic imaging device; a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectroscopic imaging device; a calibration step of calibrating the second imaging data based on the first imaging data; and a color measurement step of measuring the color of the subject based on the calibrated second imaging data.
  • (Appendix 3) a program that causes a computer to execute processing comprising: a first acquisition step of acquiring first imaging data obtained by imaging the calibration member according to claim 1 with the spectroscopic imaging device; a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectroscopic imaging device; a calibration step of calibrating the second imaging data based on the first imaging data; and a color measurement step of measuring the color of the subject based on the calibrated second imaging data.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Spectrometry And Color Measurement (AREA)

Abstract

A member for calibration according to the present invention is used to calibrate a spectroscopic imaging device comprising a spectral filter that has a specific wavelength region. This member for calibration has a background surface that constitutes the background of a subject. In the specific wavelength region, a first spectral reflectance that is the spectral reflectance of light reflected by the subject and a second spectral reflectance that is the spectral reflectance of light reflected by the background surface have a relationship with each other.

Description

Calibration member, housing device, calibration device, calibration method, and program
The technology of the present disclosure relates to a calibration member, a housing device, a calibration device, a calibration method, and a program.
Japanese Unexamined Patent Publication No. 2006-292582 discloses a multiband image processing device that acquires a spectral image of a subject. The multiband image processing device includes an image acquisition unit that acquires an image of the subject captured in a state where a plurality of objects with known spectral characteristics are placed near the subject, and an image processing unit that performs predetermined processing on the image based on spatial variations in the spectral characteristics of the plurality of objects obtained from the image.
Japanese Unexamined Patent Publication No. 2018-009988 discloses a measurement device that evaluates particle characteristics of a painted surface containing a glitter material. The measurement device includes a chromaticity distribution acquisition means that acquires an in-plane chromaticity distribution of the object under at least two illumination conditions or light reception conditions, and a particle characteristic evaluation means that calculates a particle characteristic evaluation value for evaluating the particle characteristics based on the amount of variation in the in-plane chromaticity distribution.
Japanese Unexamined Patent Publication No. 2001-144972 discloses a multispectral image recording/processing device that images and records a subject located in a subject imaging/recording area illuminated by a light source. The multispectral image recording/processing device includes a subject support section, a light source section, an imaging means, a declination multispectral image acquisition section, and a spectral reflectance estimation section. The subject support section supports the subject in the subject imaging/recording area. The light source section is arranged in a desired azimuth direction centered on a predetermined position in a plane including a predetermined position of the subject imaging/recording area, spaced a fixed distance from the predetermined position, and illuminates the subject imaging/recording area. The imaging means is arranged in a desired azimuth direction centered on the predetermined position in the plane including the predetermined position, spaced a fixed distance from the predetermined position, and captures a multiband image of the subject. The declination multispectral image acquisition section obtains a declination multispectral image having the spectral reflectance distribution of the subject, with the light source position and the imaging position as parameters, from a plurality of multiband images captured and recorded under a plurality of imaging/recording conditions in which the light source position determined by the azimuth direction of the light source section and the imaging position determined by the azimuth direction of the imaging means are respectively changed. The spectral reflectance estimation section interpolates or synthesizes the spectral reflectance distribution with the light source position and the imaging position as parameters, and estimates the spectral reflectance distribution of the subject under image reproduction conditions at a desired light source position, number of light sources, or imaging position, using the image data of the declination multispectral image obtained by the declination multispectral image acquisition section.
Japanese Unexamined Patent Publication No. 2010-130157 discloses a multispectral image processing device including a color conversion means that performs color conversion of a multiband image having four or more bands obtained by combining a plurality of input images input by a plurality of image input means having mutually different bands. The color conversion means includes a spectral sensitivity characteristic information correction means that corrects spectral sensitivity characteristic information, which is a spectral sensitivity characteristic of the image input means, based on a ratio of exposure amount information included in the data of the plurality of input images, and a device-independent color image conversion means that converts the multiband image into a device-independent color image using the corrected spectral sensitivity characteristic information and photographing illumination light information, which is the spectral characteristic of the illumination light when the input images were photographed.
One embodiment according to the technology of the present disclosure provides a calibration member, a housing device, a calibration device, a calibration method, and a program capable of improving the measurement accuracy for the color of a subject compared to, for example, a case where the background surface is a white surface.
A first aspect according to the technology of the present disclosure is a calibration member used for calibration of a spectral imaging device including a spectral filter having a specific wavelength range, the calibration member having a background surface that constitutes a background of a subject, wherein, in the wavelength range, a first spectral reflectance, which is the spectral reflectance of light reflected by the subject, and a second spectral reflectance, which is the spectral reflectance of light reflected by the background surface, have a relationship with each other.
A second aspect according to the technology of the present disclosure is the calibration member according to the first aspect, wherein the relationship is a relationship in which the difference between the first spectral reflectance and the second spectral reflectance is within a first range.
A third aspect according to the technology of the present disclosure is the calibration member according to the second aspect, wherein the first range is a range from a reflectance 0.5 times the first spectral reflectance to a reflectance 2 times the first spectral reflectance.
 A fourth aspect according to the technology of the present disclosure is the calibration member according to the second or third aspect, wherein the first range is set based on a third spectral reflectance, which is the spectral reflectance of light reflected by a first reference plate.
 A fifth aspect according to the technology of the present disclosure is the calibration member according to the fourth aspect, wherein the first reference plate has a reflective surface that reflects light, and the reflective surface is a white surface.
 A sixth aspect according to the technology of the present disclosure is the calibration member according to the fourth or fifth aspect, wherein the difference between the first spectral reflectance and the second spectral reflectance is smaller than the difference between the first spectral reflectance and the third spectral reflectance.
 A seventh aspect according to the technology of the present disclosure is the calibration member according to any one of the fourth to sixth aspects, wherein the second spectral reflectance is lower than the third spectral reflectance.
 An eighth aspect according to the technology of the present disclosure is the calibration member according to any one of the fourth to seventh aspects, wherein the first spectral reflectance is lower than the third spectral reflectance.
 A ninth aspect according to the technology of the present disclosure is the calibration member according to any one of the fourth to eighth aspects, wherein the second spectral reflectance falls within a second range of spectral reflectance set based on the first spectral reflectance and the third spectral reflectance.
 A tenth aspect according to the technology of the present disclosure is the calibration member according to the ninth aspect, wherein, where a denotes the first spectral reflectance, b denotes the second spectral reflectance, and c denotes the third spectral reflectance, the second range is the range defined by expression (1).
 a/2 ≤ b ≤ (c + a)/2 … (1)
 An eleventh aspect according to the technology of the present disclosure is the calibration member according to the ninth aspect, wherein, where a denotes the first spectral reflectance, b denotes the second spectral reflectance, and c denotes the third spectral reflectance, the second range is the range defined by expression (2).
 3a/4 ≤ b ≤ (c + 3a)/4 … (2)
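The range conditions recited in the third, tenth, and eleventh aspects are simple interval checks on the reflectances a, b, and c, and can be verified numerically. The sketch below is illustrative only (the function names and sample values are not from the disclosure), and reads the third aspect as constraining b to lie between 0.5 and 2 times a. Note that when c ≥ a, the range of expression (2) is contained in the range of expression (1); that is, expression (2) keeps the background reflectance closer to the subject reflectance.

```python
def in_first_range(a, b):
    """Third aspect (as read here): the background reflectance b lies
    between 0.5x and 2x the subject reflectance a."""
    return 0.5 * a <= b <= 2.0 * a

def in_range_expr1(a, b, c):
    """Tenth aspect, expression (1): a/2 <= b <= (c + a)/2."""
    return a / 2 <= b <= (c + a) / 2

def in_range_expr2(a, b, c):
    """Eleventh aspect, expression (2): 3a/4 <= b <= (c + 3a)/4."""
    return 3 * a / 4 <= b <= (c + 3 * a) / 4

# Illustrative values: subject a = 0.30, white reference plate c = 0.95.
a, c = 0.30, 0.95
for b in (0.10, 0.40, 0.70):
    print(b, in_first_range(a, b), in_range_expr1(a, b, c),
          in_range_expr2(a, b, c))
# b = 0.40 satisfies all three conditions; b = 0.10 and b = 0.70 satisfy none.
```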
 A twelfth aspect according to the technology of the present disclosure is the calibration member according to any one of the first to eleventh aspects, wherein the first spectral reflectance is a spectral reflectance based on spectral reflectances measured at a plurality of locations on the subject.
 A thirteenth aspect according to the technology of the present disclosure is the calibration member according to any one of the fourth to eleventh aspects and the twelfth aspect as dependent on the fourth aspect, wherein the specific wavelength range includes a plurality of wavelength ranges, and, in each wavelength range, the difference between the first spectral reflectance and the second spectral reflectance is smaller than the difference between the first spectral reflectance and the third spectral reflectance.
 A fourteenth aspect according to the technology of the present disclosure is the calibration member according to any one of the first to thirteenth aspects, wherein the background surface has a surface roughness corresponding to that of the subject.
 A fifteenth aspect according to the technology of the present disclosure is the calibration member according to any one of the first to fourteenth aspects, wherein the spectral reflectance of the background surface is higher in a first near-infrared range within the near-infrared region than in a first visible range within the visible region.
 A sixteenth aspect according to the technology of the present disclosure is a housing device comprising the calibration member according to any one of the first to fifteenth aspects, and a housing that covers an imaging space in which the calibration member and the subject are arranged.
 A seventeenth aspect according to the technology of the present disclosure is a calibration device comprising the calibration member according to any one of the first to fifteenth aspects, a spectral imaging device, a light source, and a housing, wherein the housing covers an imaging space in which the calibration member and the subject are arranged, and the imaging condition for imaging the subject with the spectral imaging device is a first condition under which a first component of the incident light entering the spectral imaging device is light that has been emitted from the light source and reflected by the subject and the calibration member.
 An eighteenth aspect according to the technology of the present disclosure is the calibration device according to the seventeenth aspect, further comprising a processor, wherein the processor outputs warning information when the imaging condition deviates from the first condition.
 A nineteenth aspect according to the technology of the present disclosure is a calibration device comprising the calibration member according to any one of the first to fifteenth aspects and a processor, wherein the processor acquires first imaging data obtained by imaging the calibration member with a spectral imaging device, acquires second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device, and calibrates the second imaging data based on the first imaging data.
 A twentieth aspect according to the technology of the present disclosure is a calibration method comprising: a first acquisition step of acquiring first imaging data obtained by imaging, with a spectral imaging device, the calibration member according to any one of the first to fifteenth aspects; a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device; and a calibration step of calibrating the second imaging data based on the first imaging data.
 A twenty-first aspect according to the technology of the present disclosure is a program for causing a computer to execute processing comprising: a first acquisition step of acquiring first imaging data obtained by imaging, with a spectral imaging device, the calibration member according to any one of the first to fifteenth aspects; a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device; and a calibration step of calibrating the second imaging data based on the first imaging data.
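The calibration flow recited in the nineteenth to twenty-first aspects (image the calibration member alone, image the member together with the subject, then calibrate the second imaging data using the first) can be sketched per wavelength range. This is only one plausible reading: it assumes the calibration is a per-pixel normalization that uses a known reflectance of the background surface. The disclosure does not fix the exact computation, and all names and values below are illustrative.

```python
def calibrate_with_background(second_data, first_data, background_reflectance,
                              eps=1e-6):
    """First imaging data: the calibration member alone.  Second imaging
    data: the member with the subject on it.  Normalizing the second by
    the first and scaling by the background surface's known reflectance
    yields an estimate of the subject's reflectance in this band."""
    return [background_reflectance * s / max(f, eps)
            for s, f in zip(second_data, first_data)]

# Background reflectance b = 0.5 in this band; illumination is uneven
# (the second pixel receives only 80% of the light).
b = 0.5
illumination = [1.0, 0.8]
first = [i * b for i in illumination]          # calibration member alone
subject_reflectance = [0.4, 0.4]
second = [i * r for i, r in zip(illumination, subject_reflectance)]
print([round(v, 6) for v in calibrate_with_background(second, first, b)])
# -> [0.4, 0.4]: the illumination unevenness is cancelled
```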
 FIG. 1 is a block diagram showing an example of a color measuring device according to the present embodiment.
 FIG. 2 is a graph showing an example of the first spectral reflectance, the second spectral reflectance, and the third spectral reflectance.
 FIG. 3 is a graph showing a first example of the second spectral reflectance.
 FIG. 4 is a graph showing a second example of the second spectral reflectance.
 FIG. 5 is a block diagram showing an example of a color setting method for setting the color of the background surface.
 FIG. 6 is a block diagram showing an example of the color measurement method according to the present embodiment.
 FIG. 7 is a block diagram showing an example of the aspect of incident light in the color measurement method according to the present embodiment.
 FIG. 8 is a perspective view showing an example of the imaging device according to the present embodiment.
 FIG. 9 is an exploded perspective view showing an example of the pupil division filter.
 FIG. 10 is a block diagram showing an example of the hardware configuration of the imaging device.
 FIG. 11 is an explanatory diagram showing an example of the configuration of the photoelectric conversion element.
 FIG. 12 is a block diagram showing an example of the functional configuration for executing spectral image generation processing.
 FIG. 13 is a block diagram showing an example of the operation of the output value acquisition unit and the interference removal processing unit.
 FIG. 14 is a block diagram showing an example of the hardware configuration of the processing device according to the present embodiment and an example of the functional configuration for executing color measurement processing.
 FIG. 15 is a block diagram showing an example of the operation of the image data acquisition unit, the calibration image generation unit, and the color derivation unit.
 FIG. 16 is a flowchart showing an example of the flow of spectral image generation processing.
 FIG. 17 is a flowchart showing an example of the flow of color measurement processing.
 FIG. 18 is a block diagram showing an example of a color measurement method according to a first comparative example.
 FIG. 19 is a block diagram showing an example of a color measurement method according to a second comparative example.
 FIG. 20 is a block diagram showing an example of the aspect of incident light in the color measurement method according to the second comparative example.
 Hereinafter, an example of embodiments of a calibration member, a housing device, a calibration device, a calibration method, and a program according to the technology of the present disclosure will be described with reference to the accompanying drawings.
 First, terms used in the following description will be explained.
 RGB is an abbreviation for "Red Green Blue". LED is an abbreviation for "Light Emitting Diode". EL is an abbreviation for "Electro Luminescence". CMOS is an abbreviation for "Complementary Metal Oxide Semiconductor". CCD is an abbreviation for "Charge Coupled Device". I/F is an abbreviation for "Interface". RAM is an abbreviation for "Random Access Memory". CPU is an abbreviation for "Central Processing Unit". GPU is an abbreviation for "Graphics Processing Unit". EEPROM is an abbreviation for "Electrically Erasable and Programmable Read Only Memory". HDD is an abbreviation for "Hard Disk Drive". TPU is an abbreviation for "Tensor Processing Unit". SSD is an abbreviation for "Solid State Drive". USB is an abbreviation for "Universal Serial Bus". ASIC is an abbreviation for "Application Specific Integrated Circuit". FPGA is an abbreviation for "Field-Programmable Gate Array". PLD is an abbreviation for "Programmable Logic Device". SoC is an abbreviation for "System-on-a-Chip". IC is an abbreviation for "Integrated Circuit".
 In the description herein, "uniform" refers not only to perfect uniformity but also to uniformity within a margin of error generally tolerated in the technical field to which the technology of the present disclosure belongs, to an extent that does not run counter to the spirit of the technology of the present disclosure. Likewise, "same", "orthogonal", and "straight line" each refer not only to the exact sense of the term but also include a margin of error generally tolerated in that technical field, to an extent that does not run counter to the spirit of the technology of the present disclosure.
 First, comparative examples of the present embodiment will be described.
 FIG. 18 shows an example of a color measurement method according to a first comparative example. The imaging device 200 used in the color measurement method according to the first comparative example is an RGB camera, that is, a camera capable of generating wavelength-range images in each of the red, green, and blue wavelength ranges.
 In the color measurement method shown in FIG. 18, the color of the subject 2 is measured as follows. First, as shown in aspect I, light L8 is emitted from the light source 202 onto a white reference plate 204 (hereinafter referred to as the "white reference plate 204"), and incident light Lb including light L9 reflected by the white reference plate 204 is captured by the imaging device 200, yielding a reference image 220. Subsequently, as shown in aspect J, light L8 is emitted from the light source 202 onto the subject 2, and incident light Lb including light L10 reflected by the subject 2 is captured by the imaging device 200, yielding a subject image 222.
 Then, in a processing device 210 communicably connected to the imaging device 200, the subject image 222 is calibrated based on the reference image 220 to generate a calibration image 224, and the color of the subject 2 is measured based on the calibration image 224. This yields a color measurement result 226, which is the result of measuring the color of the subject 2.
 In the color measurement method according to the first comparative example, even when, for example, the light L8 emitted from the light source 202 is uneven or the amount of light at the periphery of the imaging device 200 decreases, calibrating the subject image 222 based on the reference image 220 allows the color of the subject 2 to be measured with the influence of the unevenness of the light L8, the peripheral light falloff of the imaging device 200, and the like eliminated.
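The reference-image calibration used in the first comparative example is commonly implemented as a per-pixel division of the subject image by the white-reference image, which cancels illumination unevenness and peripheral light falloff. A minimal sketch under that assumption (the names and values are illustrative, not from the disclosure):

```python
def calibrate(subject_img, reference_img, eps=1e-6):
    """Divide the subject image by the white-reference image pixel by
    pixel; this cancels illumination unevenness, so the result
    approximates the subject's reflectance in each band."""
    return [s / max(r, eps) for s, r in zip(subject_img, reference_img)]

# One image row. The right pixel receives only half the illumination.
illumination = [1.0, 0.5]
true_reflectance = [0.4, 0.4]
reference = [i * 1.0 for i in illumination]             # white plate ~ 1.0
subject = [i * r for i, r in zip(illumination, true_reflectance)]
print(calibrate(subject, reference))  # -> [0.4, 0.4]: unevenness removed
```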
 However, the color measurement method according to the first comparative example assumes that the subject 2 is imaged in an ideal imaging environment; a real imaging environment is affected by disturbance light and the like other than the light L8 emitted from the light source 202.
 FIG. 19 shows an example of a color measurement method according to a second comparative example. In the color measurement method according to the second comparative example, a housing 206 is used to prevent disturbance light and the like from affecting the imaging environment. The housing 206 is configured to cover an imaging space 208, in which the light source 202, the entrance section 200A of the imaging device 200, and the subject 2 are arranged. The inner surfaces of the housing 206 (that is, the bottom surface 206A, the side surfaces 206B, and the top surface 206C) are white surfaces.
 In the color measurement method according to the second comparative example, the color of the subject 2 is measured as follows. First, as shown in aspect K, light L8 is emitted from the light source 202 onto the bottom surface 206A, and incident light Lb including light L11 reflected by the bottom surface 206A is captured by the imaging device 200, yielding a reference image 220. Subsequently, as shown in aspect L, the subject 2 is placed on the bottom surface 206A, light L8 is emitted from the light source 202 onto the subject 2, and incident light Lb including light L12 reflected by the subject 2 is captured by the imaging device 200, yielding a subject image 222.
 Then, in the processing device 210 communicably connected to the imaging device 200, the subject image 222 is calibrated based on the reference image 220 to generate a calibration image 224, and the color of the subject 2 is measured based on the calibration image 224. This yields a color measurement result 226, which is the result of measuring the color of the subject 2.
 FIG. 20 shows the problems with the color measurement method according to the second comparative example. In this method, as shown in aspect M, when the reference image 220 is obtained, the incident light Lb includes light L13 that is emitted from the light source 202 onto the bottom surface 206A, reflected by the bottom surface 206A and the top surface 206C, and then reflected again by the bottom surface 206A. On the other hand, as shown in aspect N, when the subject image 222 is obtained, the incident light Lb includes light L14 that is emitted from the light source 202 onto the subject 2, reflected by the subject 2 and the top surface 206C, and then reflected again by the subject 2 (hereinafter referred to as the "re-reflected light L14"). Since the re-reflected light L14 has been reflected twice by the subject 2, in the case of aspect N the color of the subject 2 is measured as darker than its actual color.
 Furthermore, in the color measurement method according to the second comparative example, as shown in aspect O, when the subject image 222 is obtained, the incident light Lb includes, in addition to light L15 emitted from the light source 202 onto the subject 2 and reflected by the subject 2, light L16 reflected by the region of the bottom surface 206A surrounding the subject 2 (hereinafter referred to as the "peripheral reflected light L16"). Consequently, in the case of aspect O, the inclusion of the peripheral reflected light L16 in the incident light Lb causes the color of the subject 2 to be measured as lighter than its actual color, or causes flare.
 Thus, in the color measurement method according to the second comparative example, the incident light Lb changes between when the subject 2 is absent and when it is present, so the measurement accuracy for the color of the subject 2 is lower than in the color measurement method according to the first comparative example.
 As a result of intensive study of multispectral cameras as an alternative to RGB cameras, the inventors confirmed that multispectral cameras suffer from the same problem as the RGB camera described above. A multispectral camera here refers to a camera capable of generating a spectral image for each wavelength range of a spectral filter. The present embodiment described below provides, as an example, a color measurement method that can solve the above problem when a multispectral camera is used.
 Next, the present embodiment will be described.
 FIG. 1 shows a color measuring device 110 according to the present embodiment. The color measuring device 110 comprises a light source 112, a housing 114, a calibration member 116, an imaging device 10, and a processing device 90.
 The light source 112 is, for example, an LED light source, a laser light source, or an incandescent light bulb. The light emitted from the light source 112 is unpolarized. As an example, the light source 112 is arranged at the upper part of the interior of the housing 114.
 The housing 114 is configured to cover an imaging space 118, in which the light source 112, the calibration member 116, the entrance section 10A of the imaging device 10, and the subject 2 are arranged. The inner surfaces of the housing 114 (that is, the bottom surface 114A, the side surfaces 114B, and the top surface 114C) are white surfaces. In this specification, white is defined as the color perceived by a person looking at the surface of an object when visible light of all colors is diffusely reflected by the object. The inner surfaces of the housing 114 have a light-diffusing property. The housing 114 is an example of a "housing" according to the technology of the present disclosure.
 The calibration member 116 is, as an example, a plate that is rectangular in plan view, and is arranged on the bottom surface 114A. Although an example in which the calibration member 116 is a plate is given here, the calibration member 116 may have a shape other than a plate. The calibration member 116 and the housing 114 constitute a housing device 120, which may also include the light source 112. The calibration member 116 is an example of a "calibration member" according to the technology of the present disclosure, and the housing device 120 is an example of a "housing device" according to the technology of the present disclosure.
 The calibration member 116 has a background surface 116A, which is the surface facing the imaging device 10 (that is, the upper surface). The subject 2 is placed on the background surface 116A. The subject 2 may be anything, and its color may be any color. In the following description, the color of the subject 2 is assumed, by way of example, to be a color other than white.
 The background surface 116A constitutes the background of the subject 2 when the subject 2 is placed on the calibration member 116. As will be described in detail later, the background surface 116A has a color corresponding to the color of the subject 2. The surfaces of the calibration member 116 other than the background surface 116A may have the same color as the background surface 116A or a different color; in the following description, they are assumed, by way of example, to have the same color as the background surface 116A.
 The background surface 116A also has a surface roughness corresponding to that of the subject 2. The surface roughness may be adjusted by machining the background surface 116A or by changing the size of the particles forming the background surface 116A. Because the background surface 116A has a surface roughness corresponding to that of the subject 2, the directionality of the light reflected by the background surface 116A can be matched to the directionality of the light reflected by the subject 2.
 The imaging device 10 is, as an example, a multispectral camera. Although an example in which the imaging device 10 is a multispectral camera is given here, the imaging device 10 may be another spectral camera such as a hyperspectral camera. In the following description, the imaging device 10 is assumed to be a multispectral camera. The imaging device 10 is an example of a "spectral imaging device" according to the technology of the present disclosure.
 The imaging device 10 comprises an optical system 26 and an image sensor 28. The optical system 26 comprises a first lens 30, a pupil division filter 16, and a second lens 32, which are arranged along the optical axis OA of the optical system 26 in that order from the subject 2 side toward the image sensor 28 side.
 The pupil division filter 16 has spectral filters 20A to 20C. Each of the spectral filters 20A to 20C is a bandpass filter that transmits light in a specific wavelength range, and the spectral filters 20A to 20C have mutually different wavelength ranges. Specifically, the spectral filter 20A has a first wavelength range λ1, the spectral filter 20B has a second wavelength range λ2, and the spectral filter 20C has a third wavelength range λ3. The spectral filter is an example of a "spectral filter" according to the technology of the present disclosure.
 Hereinafter, when it is not necessary to distinguish among the spectral filters 20A to 20C, each is referred to as a "spectral filter 20". The spectral filter 20 is an example of a "spectral filter" according to the technology of the present disclosure. Likewise, when it is not necessary to distinguish among the first wavelength range λ1, the second wavelength range λ2, and the third wavelength range λ3, each is referred to as a "wavelength range λ".
 As will be described in detail later, the imaging device 10 generates spectral images 72A to 72C corresponding to the respective wavelength ranges λ based on a captured image (not shown) obtained by imaging the subject 2. The spectral image 72A corresponds to the first wavelength range λ1, the spectral image 72B corresponds to the second wavelength range λ2, and the spectral image 72C corresponds to the third wavelength range λ3. Hereinafter, when it is not necessary to distinguish among the spectral images 72A to 72C, each is referred to as a "spectral image 72".
 In the present embodiment, a case in which three spectral images 72 are generated based on light split into three wavelength ranges λ is described as an example. However, three wavelength ranges λ are merely an example, and the technology is applicable to any two or more wavelength ranges λ.
 The imaging device 10 has a zoom function. When the subject 2 is imaged by the imaging device 10, the angle of view of the imaging device 10 is adjusted by the zoom function. The angle of view is set such that the imaging range of the imaging device 10 is filled by the subject 2 and the calibration member 116. By adjusting the angle of view in this way, the imaging condition when the subject 2 is imaged by the imaging device 10 is set to a specific condition under which the main component of the light incident on the imaging device 10 (hereinafter referred to as "incident light") is light emitted from the light source 112 and reflected by the subject 2 and the calibration member 116. The main component is an example of a "first component" according to the technology of the present disclosure. The specific condition is an example of a "first condition" according to the technology of the present disclosure.
 The processing device 90 is communicably connected to the imaging device 10. The processing device 90 is, for example, an information processing device such as a personal computer or a server. The processing device 90 includes a display device 108, which is, for example, a liquid crystal display or an EL display. The processing device 90 generates a multispectral image 74 based on the plurality of spectral images 72 and displays the generated multispectral image 74 on the display device 108. As will be described in detail later, the processing device 90 also measures the color of the subject 2 based on the plurality of spectral images 72 and displays the color measurement result 136, which is the result of the measurement, on the display device 108.
 FIG. 2 is a graph showing an example of the spectral reflectances of the white reference plate 122, the calibration member 116, and the subject 2. The white reference plate 122 has a reflective surface 122A that reflects light. The white reference plate 122 is a white plate, and the reflective surface 122A is a white surface formed in a uniform white color. Similarly, the background surface 116A is formed in a uniform color.
 Graph G1 shows the first spectral reflectance a, which is the spectral reflectance of light reflected by the subject 2. Graph G2 shows the second spectral reflectance b, which is the spectral reflectance of light reflected by the background surface 116A of the calibration member 116. Graph G3 shows the third spectral reflectance c, which is the spectral reflectance of light reflected by the reflective surface 122A of the white reference plate 122. The first spectral reflectance a, the second spectral reflectance b, and the third spectral reflectance c are examples of the "first spectral reflectance", the "second spectral reflectance", and the "third spectral reflectance" according to the technology of the present disclosure, respectively.
 For example, the first spectral reflectance a may be a spectral reflectance based on spectral reflectances measured at a plurality of locations on the subject 2. In that case, the accuracy of the first spectral reflectance a is improved compared to a case where the first spectral reflectance a is measured at only one location on the subject 2. The first spectral reflectance a may be an average value of the spectral reflectances measured at the plurality of locations, or a representative value (for example, the maximum, central, or minimum value). The number of locations may be any number, and the locations may be anywhere on the subject 2. Further, the subject 2 may be a plurality of subjects, and each location may be a location on a different subject 2. Note that the first spectral reflectance a may instead be a spectral reflectance measured at a single location on the subject 2.
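The aggregation described above (an average value or a representative value over several measured locations) can be sketched as follows. This is a minimal illustration; the function name and the exact set of representative values are assumptions, not part of the embodiment.

```python
from statistics import mean, median

def aggregate_reflectance(spectra, mode="mean"):
    """Combine spectral reflectances measured at several locations.

    spectra: list of per-location spectra, each a list of per-band
    reflectances (same band order for every location).
    mode: "mean" for the average value, or "max"/"median"/"min" for a
    representative value.
    """
    funcs = {"mean": mean, "median": median, "max": max, "min": min}
    agg = funcs[mode]
    # zip(*spectra) groups the measurements band by band
    return [agg(band) for band in zip(*spectra)]

# Three measured locations, two wavelength bands:
measured = [[0.08, 0.40], [0.10, 0.50], [0.12, 0.60]]
avg_spectrum = aggregate_reflectance(measured)            # average value
rep_spectrum = aggregate_reflectance(measured, "median")  # representative value
```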
 Further, for example, the second spectral reflectance b may be a spectral reflectance based on spectral reflectances measured at a plurality of locations on the background surface 116A. Similarly, the third spectral reflectance c may be a spectral reflectance based on spectral reflectances measured at a plurality of locations on the reflective surface 122A.
 FIG. 2 shows an example of the relationship between the first spectral reflectance a and the second spectral reflectance b in the first wavelength range λ1 and the second wavelength range λ2. In each of the first wavelength range λ1 and the second wavelength range λ2, the first spectral reflectance a may be an average value or a representative value (for example, the maximum, central, or minimum value), and the same applies to the second spectral reflectance b.
 Since the background surface 116A has a color corresponding to the color of the subject 2, the second spectral reflectance b corresponds to the first spectral reflectance a. Specifically, in the example shown in FIG. 2, the first spectral reflectance a and the second spectral reflectance b are related to each other in the first wavelength range λ1, and likewise in the second wavelength range λ2.
 Here, an example is given in which the first spectral reflectance a and the second spectral reflectance b are related to each other in the first wavelength range λ1 and the second wavelength range λ2; however, the first spectral reflectance a and the second spectral reflectance b may instead be related to each other in the second wavelength range λ2 and the third wavelength range λ3 (see FIG. 1). The following description assumes that the first spectral reflectance a and the second spectral reflectance b are related to each other in the first wavelength range λ1 and the second wavelength range λ2.
 As an example of the relationship between the first spectral reflectance a and the second spectral reflectance b in the first wavelength range λ1, the difference between the first spectral reflectance a and the second spectral reflectance b is within a first range (not shown). Similarly, in the second wavelength range λ2, the difference between the first spectral reflectance a and the second spectral reflectance b is within the first range.
 The first range is set, for example, based on the third spectral reflectance c. Specifically, the first range is set within a range of spectral reflectance lower than the third spectral reflectance c. The first range is, for example, the range from 0.5 times to 2 times the first spectral reflectance a. That is, 0.5 times the first spectral reflectance a is the lower limit of the first range, and 2 times the first spectral reflectance a is the upper limit of the first range. For example, if the lower limit of the first range were set below 0.5 times the first spectral reflectance a, the first spectral reflectance a and the second spectral reflectance b would no longer be related. Likewise, if the upper limit of the first range were set above 2 times the first spectral reflectance a, the first spectral reflectance a and the second spectral reflectance b would no longer be related.
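The first-range condition above can be written as a small predicate. This is a hypothetical sketch; the 0.5x and 2x factors are the example given in the text, so they are kept as defaults.

```python
def within_first_range(a, b, low_factor=0.5, high_factor=2.0):
    """Return True if the background reflectance b is related to the
    subject reflectance a in one band, i.e. 0.5*a <= b <= 2*a by the
    example factors from the text."""
    return low_factor * a <= b <= high_factor * a

# With a = 0.1, the first range is [0.05, 0.2]:
ok = within_first_range(0.1, 0.15)       # inside the range
too_high = within_first_range(0.1, 0.25) # above 2*a, not related
```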
 Further, in the first wavelength range λ1 and the second wavelength range λ2, the first spectral reflectance a and the second spectral reflectance b differ from each other. In these wavelength ranges, the second spectral reflectance b may be higher or lower than the first spectral reflectance a. In the first wavelength range λ1 and the second wavelength range λ2, the difference between the first spectral reflectance a and the second spectral reflectance b is smaller than the difference between the first spectral reflectance a and the third spectral reflectance c. Furthermore, in these wavelength ranges, the first spectral reflectance a and the second spectral reflectance b are both lower than the third spectral reflectance c.
 Further, in the first wavelength range λ1 and the second wavelength range λ2, the second spectral reflectance b falls within a second range of spectral reflectance set based on the first spectral reflectance a and the third spectral reflectance c. For example, the second range is the range defined by expression (1). In FIG. 2, the second range defined by expression (1) is shown as a second range R1.
 a/2 ≦ b ≦ (c+a)/2 ... (1)
 According to expression (1), the lower limit of the second range R1 is defined as half the first spectral reflectance a, and the upper limit of the second range R1 is defined as half the sum of the third spectral reflectance c and the first spectral reflectance a. In this case, compared to a case where the lower limit of the second range R1 is defined by the first spectral reflectance a itself and the upper limit is defined by the sum of the third spectral reflectance c and the first spectral reflectance a, the influence of the second spectral reflectance b on the color measurement error can be halved. For example, when the third spectral reflectance c is 1 and the first spectral reflectance a is 0.1, the lower limit of the second range R1 is 0.05 and the upper limit of the second range R1 is 0.55.
 Further, for example, the second range is more preferably the range defined by expression (2). In FIG. 2, the second range defined by expression (2) is shown as a second range R2.
 3a/4 ≦ b ≦ (c+3a)/4 ... (2)
 According to expression (2), the lower limit of the second range R2 is defined as 3/4 of the first spectral reflectance a, and the upper limit of the second range R2 is defined as 1/4 of the sum of the third spectral reflectance c and three times the first spectral reflectance a. In this case, compared to a case where the lower limit of the second range R2 is defined by the first spectral reflectance a itself and the upper limit is defined by the sum of the third spectral reflectance c and the first spectral reflectance a, the influence of the second spectral reflectance b on the color measurement error can be reduced to 1/4. For example, when the third spectral reflectance c is 1 and the first spectral reflectance a is 0.1, the lower limit of the second range R2 is 0.075 and the upper limit of the second range R2 is 0.325.
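The bounds of expressions (1) and (2) can be computed directly. The sketch below (function names are illustrative only) reproduces the worked examples with c = 1 and a = 0.1.

```python
def second_range_r1(a, c):
    # Expression (1): a/2 <= b <= (c + a)/2
    return a / 2, (c + a) / 2

def second_range_r2(a, c):
    # Expression (2): 3a/4 <= b <= (c + 3a)/4
    return 3 * a / 4, (c + 3 * a) / 4

# With c = 1 and a = 0.1:
r1 = second_range_r1(0.1, 1.0)  # approximately (0.05, 0.55)
r2 = second_range_r2(0.1, 1.0)  # approximately (0.075, 0.325)
```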
 FIGS. 3 and 4 are graphs schematically showing examples of the second spectral reflectance b. As shown in graph G4 in FIG. 3 and graph G5 in FIG. 4 as examples, the second spectral reflectance b may be higher in a first near-infrared region of the near-infrared region than in a first visible region of the visible region.
 The first visible region may be the entire visible wavelength region or a part of it. Similarly, the first near-infrared region may be the entire near-infrared wavelength region or a part of it. The visible region and the near-infrared region may be contiguous, separated, or partially overlapping.
 In the example shown in FIG. 3, the second spectral reflectance b increases linearly (that is, continuously) as the wavelength increases beyond a specific wavelength, and has a peak in the first near-infrared region. In the example shown in FIG. 4, the second spectral reflectance b increases nonlinearly (that is, discontinuously) at a specific wavelength and has a peak in the first near-infrared region.
 As shown in FIGS. 3 and 4, when the second spectral reflectance b has a peak in the first near-infrared region, the accuracy of measuring the color of the subject 2 can be improved in the case where the subject 2 is, like a plant, a subject that exhibits a higher spectral reflectance in the near-infrared region than in the visible region. That is, the color of the subject 2 is measured more accurately than, for example, in a case where the second spectral reflectance b is the same in the first visible region and the first near-infrared region.
 FIG. 5 shows an example of the flow of a method for setting the color of the background surface 116A (hereinafter referred to as the "color setting method"). In the color setting method shown in FIG. 5, first, as shown in step A, the subject 2 is imaged by an imaging device 124, whereby a captured image 126 is obtained. The imaging device 124 used to image the subject 2 may be a multispectral camera or an RGB camera. The member 128 placed behind the subject 2 may be of any type.
 Subsequently, as shown in step B, the first spectral reflectance a is measured based on the captured image 126. The first spectral reflectance a may be a spectral reflectance based on spectral reflectances measured at a plurality of locations on the subject 2, and may be an average value of the spectral reflectances measured at the plurality of locations or a representative value (for example, the maximum, central, or minimum value).
 Subsequently, as shown in step C, a second spectral reflectance b corresponding to the first spectral reflectance a is determined based on the first spectral reflectance a, and the color having the second spectral reflectance b is set as the color of the background surface 116A.
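One way to make step C concrete is to choose, in each band, the midpoint of the preferred second range R2 of expression (2). This choice is an assumption for illustration only; the embodiment requires b to lie within the permitted range but does not prescribe how b is picked inside it.

```python
def target_background_reflectance(a, c):
    """Pick a per-band target reflectance b for the background surface
    116A, given the subject reflectance a and white-reference
    reflectance c. Uses the midpoint of R2: 3a/4 <= b <= (c + 3a)/4
    (an illustrative choice, not prescribed by the text)."""
    lo, hi = 3 * a / 4, (c + 3 * a) / 4
    return (lo + hi) / 2

# Subject reflectance a and white-reference reflectance c per band:
a_spec = [0.10, 0.45]
c_spec = [1.00, 0.98]
b_spec = [target_background_reflectance(a, c) for a, c in zip(a_spec, c_spec)]
```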
 FIG. 6 shows an example of the color measurement method according to the present embodiment. The color measuring device 110 described above is used in the color measurement method according to the present embodiment.
 In the color measurement method according to the present embodiment, the color of the subject 2 is measured as follows. First, as shown in aspect D, with the calibration member 116 placed on the bottom surface 114A of the housing 114, light L1 is emitted from the light source 112 onto the background surface 116A, and the incident light La, which includes the light L2 reflected by the background surface 116A, is captured by the imaging device 10, whereby a reference image 130 is obtained. Subsequently, as shown in aspect E, the subject 2 is placed on the background surface 116A, light L1 is emitted from the light source 112 onto the subject 2, and the incident light La, which includes the light L3 reflected by the subject 2, is captured by the imaging device 10, whereby a subject image 132 is obtained.
 Then, in the processing device 90 communicably connected to the imaging device 10, the subject image 132 is calibrated based on the reference image 130 to generate a calibration image 134, and the color of the subject 2 is measured based on the calibration image 134. As a result, a color measurement result 136, which is the result of measuring the color of the subject 2, is obtained.
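The excerpt states that the subject image is calibrated using the reference image but does not give the calibration formula. A common form of such reference-based calibration, shown here purely as an assumed sketch, divides each band of the subject image 132 by the reference image 130 and scales by the known background reflectance b of the calibration member 116.

```python
def calibrate(subject_px, reference_px, b):
    """Estimate per-band subject reflectance from raw pixel values.

    subject_px: pixel values from the subject image 132 (one per band)
    reference_px: pixel values at the same position in the reference
        image 130, captured with only the background surface 116A
    b: known spectral reflectance of the background surface per band

    Assumes a linear sensor response, so subject_px / reference_px
    equals the ratio of subject reflectance to background reflectance.
    """
    return [s / r * bk for s, r, bk in zip(subject_px, reference_px, b)]

# If the background (reflectance 0.2) records 800 counts in a band and
# the subject records 400 counts, the subject reflectance is 0.1:
refl = calibrate([400, 900], [800, 1800], [0.2, 0.3])
```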
 FIG. 7 shows a specific example of the incident light La entering the imaging device 10 in the color measurement method according to the present embodiment. When the reference image 130 is obtained, as shown in aspect F, the incident light La includes light L4 that is emitted from the light source 112 onto the background surface 116A, reflected by the background surface 116A and the top surface 114C, and then reflected again by the background surface 116A. Similarly, when the subject image 132 is obtained, as shown in aspect G, the incident light La includes light L5 that is emitted from the light source 112 onto the subject 2, reflected by the subject 2 and the top surface 114C, and then reflected again by the subject 2.
 Here, the second spectral reflectance b, which is the spectral reflectance of the background surface 116A, corresponds to the first spectral reflectance a, which is the spectral reflectance of the subject 2. This suppresses changes in the wavelength range of the incident light La between the case where the subject 2 is absent and the case where the subject 2 is present. As a result, it is possible to avoid the color of the subject 2 being measured darker than its actual color, as occurs in the color measurement method according to the second comparative example (see aspect N in FIG. 20).
 Further, in the color measurement method according to the present embodiment, as shown in aspect H, when the subject image 132 is obtained, the incident light La includes, in addition to the light L6 emitted from the light source 112 onto the subject 2 and reflected by the subject 2, light L7 emitted from the light source 112 toward the subject 2 and reflected by the region of the background surface 116A surrounding the subject 2 (hereinafter referred to as "peripheral reflected light L7").
 Here, as described above, the second spectral reflectance b, which is the spectral reflectance of the background surface 116A, corresponds to the first spectral reflectance a, which is the spectral reflectance of the subject 2. This suppresses changes in the wavelength range of the peripheral reflected light L7 between the case where the subject 2 is absent and the case where the subject 2 is present. As a result, it is possible to avoid the color of the subject 2 being measured lighter than its actual color, or the amount of flare changing, as occurs in the color measurement method according to the second comparative example (see aspect O in FIG. 20).
 Next, the imaging device 10 according to the present embodiment will be described in detail.
 As shown in FIG. 8 as an example, the imaging device 10 includes a lens device 12 and an imaging device body 14. The lens device 12 has the pupil division filter 16 described above. The imaging device 10 is a multispectral camera that generates and outputs the plurality of spectral images 72A to 72C by capturing light split into a plurality of wavelength ranges λ by the pupil division filter 16.
 As shown in FIG. 9 as an example, the pupil division filter 16 includes a frame 18, the spectral filters 20A to 20C, and polarizing filters 22A to 22C.
 The frame 18 has apertures 24A to 24C, which are formed side by side around the optical axis OA. Hereinafter, when it is not necessary to distinguish among the apertures 24A to 24C, each is referred to as an "aperture 24". The spectral filters 20A to 20C are provided in the apertures 24A to 24C, respectively, and are thus arranged side by side around the optical axis OA.
 The polarizing filters 22A to 22C are provided corresponding to the spectral filters 20A to 20C, respectively. Specifically, the polarizing filter 22A is provided in the aperture 24A and superimposed on the spectral filter 20A, the polarizing filter 22B is provided in the aperture 24B and superimposed on the spectral filter 20B, and the polarizing filter 22C is provided in the aperture 24C and superimposed on the spectral filter 20C.
 Each of the polarizing filters 22A to 22C is an optical filter that transmits light vibrating in a specific direction. The polarizing filters 22A to 22C have polarization axes with mutually different polarization angles. Specifically, the polarizing filter 22A has a first polarization angle α1, the polarizing filter 22B has a second polarization angle α2, and the polarizing filter 22C has a third polarization angle α3. The polarization axis may also be referred to as the transmission axis. As an example, the first polarization angle α1 is set to 0°, the second polarization angle α2 is set to 45°, and the third polarization angle α3 is set to 90°.
 Hereinafter, when it is not necessary to distinguish among the polarizing filters 22A to 22C, each is referred to as a "polarizing filter 22". Likewise, when it is not necessary to distinguish among the first polarization angle α1, the second polarization angle α2, and the third polarization angle α3, each is referred to as a "polarization angle α".
 Note that in the example shown in FIG. 9, the number of apertures 24 is three, corresponding to the number of wavelength ranges λ; however, the number of apertures 24 may be greater than the number of wavelength ranges λ (that is, the number of spectral filters 20). Any unused apertures 24 may be closed by a shielding member (not shown). Further, although the plurality of spectral filters 20 in the example shown in FIG. 9 have mutually different wavelength ranges λ, the plurality of spectral filters 20 may include spectral filters 20 having the same wavelength range λ.
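This excerpt does not describe how the three polarization-coded spectral channels are later separated. In systems of this kind, one common approach (stated here as an assumption, not as the embodiment's method) models each sensor-side polarizer at angle φ as passing channel j, tagged at angle αj, with the Malus's-law weight cos²(αj − φ), and then inverts the resulting 3x3 linear system per pixel:

```python
import math

def malus_matrix(sensor_angles, filter_angles):
    """M[i][j]: fraction of channel j (tagged at filter_angles[j]) that
    passes a sensor-side polarizer at sensor_angles[i], per Malus's law
    cos^2 of the angle difference. Angles in degrees."""
    return [[math.cos(math.radians(s - f)) ** 2 for f in filter_angles]
            for s in sensor_angles]

def solve3(M, y):
    """Solve the 3x3 linear system M x = y by Cramer's rule."""
    def det3(A):
        return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
              - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
              + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
    d = det3(M)
    out = []
    for col in range(3):
        N = [row[:] for row in M]
        for r in range(3):
            N[r][col] = y[r]
        out.append(det3(N) / d)
    return out

angles = [0.0, 45.0, 90.0]  # α1, α2, α3 from the text
M = malus_matrix(angles, angles)
# For one pixel position: mixed intensities seen behind the 0°/45°/90°
# sensor polarizers, for assumed true per-band intensities [3, 1, 2]:
mixed = [sum(M[i][j] * s for j, s in enumerate([3, 1, 2])) for i in range(3)]
bands = solve3(M, mixed)  # recovers approximately [3, 1, 2]
```

With the 0°/45°/90° angles above, the mixing matrix is well conditioned (determinant 0.5), which is one reason such evenly spaced polarization angles are a natural choice.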
 As shown in FIG. 10 as an example, the lens device 12 includes the optical system 26, and the imaging device body 14 includes the image sensor 28. The optical system 26 has the pupil division filter 16, the first lens 30, and the second lens 32.
 The first lens 30 causes the light emitted from the light source 112 and reflected by the subject 2 to enter the pupil division filter 16. The second lens 32 forms an image of the light transmitted through the pupil division filter 16 on a light receiving surface 34A of a photoelectric conversion element 34 provided in the image sensor 28.
 The pupil-splitting filter 16 is disposed at the pupil position of the optical system 26. The pupil position refers to the aperture surface that limits the brightness of the optical system 26. The pupil position here also includes nearby positions, where a nearby position refers to a position within the range from the entrance pupil to the exit pupil. The configuration of the pupil-splitting filter 16 is as described with reference to FIG. 9. In FIG. 10, for convenience, the plurality of spectral filters 20 and the plurality of polarizing filters 22 are shown arranged in a straight line along a direction orthogonal to the optical axis OA.
 The image sensor 28 includes the photoelectric conversion element 34 and a signal processing circuit 36. The image sensor 28 is, for example, a CMOS image sensor. Although a CMOS image sensor is exemplified as the image sensor 28 in this embodiment, the technology of the present disclosure is not limited to this; for example, the technology of the present disclosure also holds when the image sensor 28 is another type of image sensor, such as a CCD image sensor.
 As an example, FIG. 10 shows a schematic configuration of the photoelectric conversion element 34, and FIG. 11 specifically shows the configuration of a part of the photoelectric conversion element 34. The photoelectric conversion element 34 includes a pixel layer 38, a polarizing filter layer 40, and a spectral filter layer 42. Note that the configuration of the photoelectric conversion element 34 shown in FIG. 11 is an example, and the technology of the present disclosure also holds even if the photoelectric conversion element 34 does not include the spectral filter layer 42.
 The pixel layer 38 has a plurality of pixels 44. The plurality of pixels 44 are arranged in a matrix and form the light-receiving surface 34A of the photoelectric conversion element 34. Each pixel 44 is a physical pixel having a photodiode (not shown); it photoelectrically converts the received light and outputs an electrical signal corresponding to the amount of received light.
 Hereinafter, the pixels 44 provided in the photoelectric conversion element 34 are referred to as "physical pixels 44" to distinguish them from the pixels forming a spectral image. The pixels forming a spectral image 72 are referred to as "image pixels."
 The photoelectric conversion element 34 outputs the electrical signals output from the plurality of physical pixels 44 to the signal processing circuit 36 as imaging data. The signal processing circuit 36 digitizes the analog imaging data input from the photoelectric conversion element 34. The imaging data is image data representing a captured image 70.
 The plurality of physical pixels 44 form a plurality of pixel blocks 46. Each pixel block 46 is formed by a total of four physical pixels 44, two vertically and two horizontally. In FIG. 10, for convenience, the four physical pixels 44 forming each pixel block 46 are shown arranged in a straight line along the direction orthogonal to the optical axis OA, but the four physical pixels 44 are actually arranged adjacent to one another in the vertical and horizontal directions of the photoelectric conversion element 34 (see FIG. 11).
 The polarizing filter layer 40 has a plurality of types of polarizers 48A to 48D. Each of the polarizers 48A to 48D is an optical filter that transmits light vibrating in a specific direction. The polarizers 48A to 48D have polarization axes with mutually different polarization angles. Specifically, the polarizer 48A has a first polarization angle θ1, the polarizer 48B has a second polarization angle θ2, the polarizer 48C has a third polarization angle θ3, and the polarizer 48D has a fourth polarization angle θ4. As an example, the first polarization angle θ1 is set to 0°, the second polarization angle θ2 is set to 45°, the third polarization angle θ3 is set to 90°, and the fourth polarization angle θ4 is set to 135°.
 Hereinafter, when the polarizers 48A to 48D need not be distinguished, each of the polarizers 48A to 48D is referred to as a "polarizer 48." The polarizer 48 is an example of a "polarizer" according to the technology of the present disclosure. Similarly, when the first polarization angle θ1, the second polarization angle θ2, the third polarization angle θ3, and the fourth polarization angle θ4 need not be distinguished, each of them is referred to as a "polarization angle θ."
 The spectral filter layer 42 includes a B filter 50A, a G filter 50B, and an R filter 50C. The B filter 50A is a blue-range filter that, among light in the plurality of wavelength ranges, transmits light in the blue wavelength range the most. The G filter 50B is a green-range filter that transmits light in the green wavelength range the most. The R filter 50C is a red-range filter that transmits light in the red wavelength range the most. A B filter 50A, a G filter 50B, and an R filter 50C are assigned to each pixel block 46.
 In FIG. 10, for convenience, the B filter 50A, the G filter 50B, and the R filter 50C are shown arranged in a straight line along the direction orthogonal to the optical axis OA; as shown by way of example in FIG. 11, however, the B filter 50A, the G filter 50B, and the R filter 50C are arranged in a matrix in a predetermined pattern arrangement. In the example shown in FIG. 11, the B filter 50A, the G filter 50B, and the R filter 50C are arranged in a matrix in a Bayer arrangement, as an example of the predetermined pattern arrangement. Note that the predetermined pattern arrangement may be, instead of the Bayer arrangement, an RGB stripe arrangement, an R/G checkered arrangement, an X-Trans (registered trademark) arrangement, a honeycomb arrangement, or the like.
 Hereinafter, when the B filter 50A, the G filter 50B, and the R filter 50C need not be distinguished, each of them is referred to as a "filter 50."
 As shown by way of example in FIG. 10, the imaging device body 14 includes, in addition to the image sensor 28, a control driver 52, an input/output I/F 54, a computer 56, and a communication device 58. The signal processing circuit 36, the control driver 52, the computer 56, and the communication device 58 are connected to the input/output I/F 54.
 The computer 56 has a processor 60, a storage 62, and a RAM 64. The processor 60 controls the entire imaging device 10. The processor 60 is, for example, an arithmetic processing device including a CPU and a GPU; the GPU operates under the control of the CPU and is responsible for executing processing related to images. Although an arithmetic processing device including a CPU and a GPU is cited here as an example of the processor 60, this is merely an example; the processor 60 may be one or more CPUs with integrated GPU functionality, or one or more CPUs without integrated GPU functionality.
 The processor 60, the storage 62, and the RAM 64 are connected via a bus 66, and the bus 66 is connected to the input/output I/F 54. The storage 62 is a non-transitory storage medium and stores various parameters and various programs. For example, the storage 62 is a flash memory (e.g., an EEPROM). However, this is merely an example; an HDD or the like may be applied as the storage 62 together with the flash memory. The RAM 64 temporarily stores various information and is used as a work memory. Examples of the RAM 64 include a DRAM and/or an SRAM.
 The processor 60 reads a necessary program from the storage 62 and executes the read program on the RAM 64. The processor 60 controls the control driver 52 and the signal processing circuit 36 in accordance with the program executed on the RAM 64. The control driver 52 controls the photoelectric conversion element 34 under the control of the processor 60.
 The communication device 58 is connected to the processor 60 via the input/output I/F 54 and the bus 66, and is communicably connected to the processing device 90 by wire or wirelessly. The communication device 58 manages the exchange of information with the processing device 90. For example, the communication device 58 transmits data requested by the processor 60 to the processing device 90. The communication device 58 also receives data transmitted from the processing device 90 and outputs the received data to the processor 60 via the bus 66.
 As shown by way of example in FIG. 12, a spectral image generation program 80 is stored in the storage 62. The processor 60 reads the spectral image generation program 80 from the storage 62 and executes the read spectral image generation program 80 on the RAM 64. In accordance with the spectral image generation program 80 executed on the RAM 64, the processor 60 executes spectral image generation processing for generating a plurality of spectral images 72. The spectral image generation processing is realized by the processor 60 operating as an output value acquisition unit 82 and an interference removal processing unit 84 in accordance with the spectral image generation program 80.
 As shown by way of example in FIG. 13, when the imaging data output from the image sensor 28 is input to the processor 60, the output value acquisition unit 82 acquires the output value Y of each physical pixel 44 based on the imaging data. The output value Y of each physical pixel 44 corresponds to the luminance value of the corresponding pixel in the captured image 70 represented by the imaging data.
 Here, the output value Y of each physical pixel 44 is a value that includes interference (that is, crosstalk). That is, since light in each of the first wavelength range λ1, the second wavelength range λ2, and the third wavelength range λ3 is incident on each physical pixel 44, the output value Y is a mixture of a value corresponding to the amount of light in the first wavelength range λ1, a value corresponding to the amount of light in the second wavelength range λ2, and a value corresponding to the amount of light in the third wavelength range λ3.
 In order to obtain the spectral images 72, the processor 60 needs to perform, on the output value Y of each physical pixel 44, processing that separates and extracts from the output value Y the value corresponding to each wavelength range λ, that is, interference removal processing that removes the interference. Therefore, in this embodiment, in order to obtain the spectral images 72, the interference removal processing unit 84 executes the interference removal processing on the output value Y of each physical pixel 44 acquired by the output value acquisition unit 82.
 Here, the interference removal processing will be explained. The output value Y of each physical pixel 44 includes, as its components, the luminance values for each polarization angle θ for red, green, and blue. The output value Y of each physical pixel 44 is expressed by equation (3):

 Y = (Yθ1_R, Yθ2_R, Yθ3_R, Yθ4_R, Yθ1_G, Yθ2_G, Yθ3_G, Yθ4_G, Yθ1_B, Yθ2_B, Yθ3_B, Yθ4_B)^T ...(3)
 Here, Yθ1_R is the luminance value of the red component of the output value Y whose polarization angle is the first polarization angle θ1, Yθ2_R is the luminance value of the red component whose polarization angle is the second polarization angle θ2, Yθ3_R is the luminance value of the red component whose polarization angle is the third polarization angle θ3, and Yθ4_R is the luminance value of the red component whose polarization angle is the fourth polarization angle θ4.
 Similarly, Yθ1_G is the luminance value of the green component of the output value Y whose polarization angle is the first polarization angle θ1, Yθ2_G is the luminance value of the green component whose polarization angle is the second polarization angle θ2, Yθ3_G is the luminance value of the green component whose polarization angle is the third polarization angle θ3, and Yθ4_G is the luminance value of the green component whose polarization angle is the fourth polarization angle θ4.
 Likewise, Yθ1_B is the luminance value of the blue component of the output value Y whose polarization angle is the first polarization angle θ1, Yθ2_B is the luminance value of the blue component whose polarization angle is the second polarization angle θ2, Yθ3_B is the luminance value of the blue component whose polarization angle is the third polarization angle θ3, and Yθ4_B is the luminance value of the blue component whose polarization angle is the fourth polarization angle θ4.
 The pixel value X of each image pixel forming the spectral image 72 includes, as its components, the luminance value Xλ1 of polarized light in the first wavelength range λ1 having the first polarization angle α1 (hereinafter referred to as "first-wavelength-range polarized light"), the luminance value Xλ2 of polarized light in the second wavelength range λ2 having the second polarization angle α2 (hereinafter referred to as "second-wavelength-range polarized light"), and the luminance value Xλ3 of polarized light in the third wavelength range λ3 having the third polarization angle α3 (hereinafter referred to as "third-wavelength-range polarized light"). The pixel value X of each image pixel is expressed by equation (4):

 X = (Xλ1, Xλ2, Xλ3)^T ...(4)
 The output value Y of each physical pixel 44 is expressed by equation (5):

 Y = AX ...(5)
 In equation (5), A is an interference matrix. The interference matrix A (not shown) is a matrix representing the characteristics of the interference. The interference matrix A is defined in advance based on a plurality of known values, such as the spectrum of the incident light, the spectral transmittance of the first lens 30, the spectral transmittance of the second lens 32, the spectral transmittances of the plurality of spectral filters 20, and the spectral sensitivity of the image sensor 28.
 When the interference removal matrix, which is a generalized inverse of the interference matrix A, is denoted by A+, the pixel value X of each image pixel is expressed by equation (6):

 X = A+Y ...(6)
 Like the interference matrix A, the interference removal matrix A+ is also a matrix defined based on the spectrum of the incident light, the spectral transmittance of the first lens 30, the spectral transmittance of the second lens 32, the spectral transmittances of the plurality of spectral filters 20, the spectral sensitivity of the image sensor 28, and the like. The interference removal matrix A+ is stored in advance in the storage 62.
 The interference removal processing unit 84 acquires the interference removal matrix A+ stored in the storage 62 and the output value Y of each physical pixel 44 acquired by the output value acquisition unit 82, and outputs the pixel value X of each image pixel according to equation (6), based on the acquired interference removal matrix A+ and the output value Y of each physical pixel 44.
 Here, as described above, the pixel value X of each image pixel includes, as its components, the luminance value Xλ1 of the first-wavelength-range polarized light, the luminance value Xλ2 of the second-wavelength-range polarized light, and the luminance value Xλ3 of the third-wavelength-range polarized light.
 The spectral image 72A of the captured image 70 is an image corresponding to the luminance value Xλ1 of light in the first wavelength range λ1 (that is, an image based on the luminance value Xλ1). The spectral image 72B of the captured image 70 is an image corresponding to the luminance value Xλ2 of light in the second wavelength range λ2 (that is, an image based on the luminance value Xλ2). The spectral image 72C of the captured image 70 is an image corresponding to the luminance value Xλ3 of light in the third wavelength range λ3 (that is, an image based on the luminance value Xλ3).
 In this way, by the interference removal processing executed by the interference removal processing unit 84, the captured image 70 is separated into the spectral image 72A corresponding to the luminance value Xλ1 of the first-wavelength-range polarized light, the spectral image 72B corresponding to the luminance value Xλ2 of the second-wavelength-range polarized light, and the spectral image 72C corresponding to the luminance value Xλ3 of the third-wavelength-range polarized light. That is, the captured image 70 is separated into spectral images 72 for the respective wavelength ranges λ of the plurality of spectral filters 20.
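 As an illustrative sketch (not part of this disclosure), the interference removal of equation (6) amounts to multiplying each pixel's crosstalk-mixed output vector Y by a generalized (Moore-Penrose) inverse of the interference matrix A. The matrix values below are hypothetical stand-ins; in practice A would be determined from known quantities such as lens transmittances, spectral-filter transmittances, and sensor spectral sensitivity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interference matrix A (12 output components x 3 wavelength components).
A = rng.uniform(0.1, 1.0, size=(12, 3))

# Interference removal matrix A+ : the Moore-Penrose generalized inverse of A.
A_plus = np.linalg.pinv(A)

# True per-pixel spectral components (X_lambda1, X_lambda2, X_lambda3), hypothetical values.
X_true = np.array([0.8, 0.5, 0.2])

# Sensor output Y = A X (equation (5)): a crosstalk-mixed 12-component vector.
Y = A @ X_true

# Recover the pixel value X = A+ Y (equation (6)).
X_rec = A_plus @ Y
```

 With noise-free Y, X_rec equals X_true exactly; with noisy measurements, the pseudoinverse yields the least-squares estimate of the spectral components.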
 Next, the processing device 90 according to this embodiment will be described in detail.
 As shown by way of example in FIG. 14, the processing device 90 includes a computer 92. The computer 92 includes a processor 94, a storage 96, and a RAM 98. The processor 94, the storage 96, and the RAM 98 are realized by hardware similar to the processor 60, the storage 62, and the RAM 64 described above (see FIG. 10).
 A color measurement program 100 is stored in the storage 96. The processor 94 reads the color measurement program 100 from the storage 96 and executes the read color measurement program 100 on the RAM 98. In accordance with the color measurement program 100 executed on the RAM 98, the processor 94 executes color measurement processing for obtaining a color measurement result 136. The color measurement processing is realized by the processor 94 operating as an image data acquisition unit 102, a calibration image generation unit 104, and a color derivation unit 106 in accordance with the color measurement program 100. The processing device 90 is an example of a "calibration device" and a "color measurement device" according to the technology of the present disclosure. The color measurement program 100 is an example of a "program" according to the technology of the present disclosure.
 As shown by way of example in FIG. 15, when a reference image 130 is obtained, the imaging device 10 transmits reference image data representing the reference image 130 to the processing device 90. Similarly, when a subject image 132 is obtained, the imaging device 10 transmits subject image data representing the subject image 132 to the processing device 90. The reference image data is an example of "first imaging data" according to the technology of the present disclosure. The subject image data is an example of "second imaging data" according to the technology of the present disclosure.
 The image data acquisition unit 102 acquires the reference image data received by the processing device 90, and acquires the reference image 130 based on the reference image data. The reference image 130 includes a spectral image 72A in the first wavelength range λ1 (hereinafter referred to as "reference spectral image 72A") and a spectral image 72B in the second wavelength range λ2 (hereinafter referred to as "reference spectral image 72B"). Hereinafter, when the reference spectral image 72A and the reference spectral image 72B need not be distinguished, each of them is referred to as a "reference spectral image 72."
 The image data acquisition unit 102 also acquires the subject image data received by the processing device 90, and acquires the subject image 132 based on the subject image data. The subject image 132 includes a spectral image 72A in the first wavelength range λ1 (hereinafter referred to as "subject spectral image 72A") and a spectral image 72B in the second wavelength range λ2 (hereinafter referred to as "subject spectral image 72B"). Hereinafter, when the subject spectral image 72A and the subject spectral image 72B need not be distinguished, each of them is referred to as a "subject spectral image 72."
 The calibration image generation unit 104 generates a calibration image 134 by calibrating the subject image 132 based on the reference image 130. Specifically, for the first wavelength range λ1, the calibration image generation unit 104 divides the pixel values of the subject spectral image 72A by the pixel values of the reference spectral image 72A, thereby generating a calibration spectral image 134A, which is a calibrated spectral image. Similarly, for the second wavelength range λ2, the calibration image generation unit 104 divides the pixel values of the subject spectral image 72B by the pixel values of the reference spectral image 72B, thereby generating a calibration spectral image 134B, which is a calibrated spectral image. In this way, the processing device 90 calibrates the subject image data based on the reference image data.
 Note that the calibration image generation unit 104 may perform the calibration on a partial image region of the subject spectral image 72. For example, when the processing device 90 receives an instruction from a user specifying an image region whose color is to be measured (hereinafter referred to as a "user instruction"), the calibration image generation unit 104 may perform the calibration on the image region of the subject spectral image 72 corresponding to the user instruction.
 Further, for example, when an image region including the subject 2 as an image (hereinafter referred to as a "subject image region") is extracted by performing image processing on the subject image 132, the calibration image generation unit 104 may perform the calibration on the subject image region of the subject spectral image 72, or on a partial image region of the subject image region. The image regions of the subject spectral image 72 calibrated by the calibration image generation unit 104 correspond to the calibration spectral image 134A and the calibration spectral image 134B.
 Further, when a user instruction is received by the imaging device 10, the imaging device 10 may transmit, to the processing device 90, reference image data representing the image region of the reference image 130 corresponding to the user instruction and subject image data representing the image region of the subject image 132 corresponding to the user instruction.
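 As an illustrative sketch (not part of this disclosure), the per-wavelength calibration performed by the calibration image generation unit 104 is an element-wise division of the subject spectral image by the reference spectral image. The small arrays below are hypothetical stand-ins for spectral images, and the epsilon guard against division by zero is an added assumption:

```python
import numpy as np

def calibrate(subject_img, reference_img, eps=1e-12):
    """Divide subject pixel values by reference pixel values for one wavelength range."""
    return subject_img / np.maximum(reference_img, eps)

# Hypothetical 2x2 spectral images for the first wavelength range.
subject_l1 = np.array([[0.4, 0.6],
                       [0.2, 0.8]])
reference_l1 = np.array([[0.8, 0.8],
                         [0.8, 0.8]])

# Calibrated spectral image: 0.5, 0.75, 0.25, 1.0.
calibrated_l1 = calibrate(subject_l1, reference_l1)
```

 Restricting the calibration to a specified image region would amount to applying the same division to a slice or mask of these arrays.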
 The color derivation unit 106 derives the color of the subject 2 based on the calibration spectral image 134A in the first wavelength range λ1 and the calibration spectral image 134B in the second wavelength range λ2 generated by the calibration image generation unit 104. Specifically, the color derivation unit 106 derives the color CL of the subject 2 for each image pixel based on equation (7):

 CL = (Xλ1 - Xλ2) ÷ (Xλ1 + Xλ2) ...(7)
 Here, Xλ1 is the pixel value X of each image pixel forming the calibration spectral image 134A in the first wavelength range λ1, and Xλ2 is the pixel value X of each image pixel forming the calibration spectral image 134B in the second wavelength range λ2. The color measurement result 136 is then generated based on the color derived by the color derivation unit 106.
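 As an illustrative sketch (not part of this disclosure), equation (7) is a per-pixel normalized difference of the two calibrated wavelength-range images; the pixel values used here are hypothetical:

```python
import numpy as np

def derive_color(x_l1, x_l2):
    """CL = (X_lambda1 - X_lambda2) / (X_lambda1 + X_lambda2), per image pixel (equation (7))."""
    return (x_l1 - x_l2) / (x_l1 + x_l2)

x_l1 = np.array([0.75, 0.5])   # calibrated spectral image 134A pixel values (hypothetical)
x_l2 = np.array([0.25, 0.5])   # calibrated spectral image 134B pixel values (hypothetical)

cl = derive_color(x_l1, x_l2)  # -> [0.5, 0.0]
```

 The result lies in the range -1 to 1 for non-negative pixel values, with 0 indicating equal luminance in the two wavelength ranges.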
 以上説明した色測定処理で生成された色測定結果136は、処理装置90の表示装置108(図1参照)に表示されてもよい。また、色測定結果136を示す測定データが、処理装置90に通信可能に接続された外部装置(図示省略)に対して送信され、外部装置で色測定結果136が利用されてもよい。また、色測定結果136は、測定した色を示す画像を含んでいてもよいし、測定した色を示す数値及び/又はグラフ等を含んでいてもよい。 The color measurement result 136 generated by the color measurement process described above may be displayed on the display device 108 (see FIG. 1) of the processing device 90. Further, measurement data indicating the color measurement result 136 may be transmitted to an external device (not shown) communicably connected to the processing device 90, and the color measurement result 136 may be used by the external device. Further, the color measurement results 136 may include an image indicating the measured color, or may include numerical values and/or graphs indicating the measured color.
 次に、本実施形態の作用について説明する。先ず、本実施形態に係るスペクトル画像生成処理について説明する。図16には、本実施形態に係るスペクトル画像生成処理の流れの一例を示す。 Next, the operation of this embodiment will be explained. First, spectral image generation processing according to this embodiment will be explained. FIG. 16 shows an example of the flow of spectral image generation processing according to this embodiment.
 図16に示すスペクトル画像生成処理では、先ず、ステップST10で、出力値取得部82は、イメージセンサ28から出力された撮像データに基づいて、各物理画素44の出力値Yを取得する(図10参照)。ステップST10の処理が実行された後、スペクトル画像生成処理は、ステップST12へ移行する。 In the spectral image generation process shown in FIG. 16, first, in step ST10, the output value acquisition unit 82 acquires the output value Y of each physical pixel 44 based on the imaging data output from the image sensor 28 (see FIG. 10). After the process of step ST10 is executed, the spectral image generation process moves to step ST12.
 ステップST12で、混信除去処理部84は、ストレージ62に記憶されている混信除去行列A+と、ステップST10で取得された各物理画素44の出力値Yとを取得し、取得した混信除去行列A+と各物理画素44の出力値Yとに基づいて、各画像画素の画素値Xを出力する(図10参照)。ステップST12で混信除去処理が実行されることにより、撮像画像70が、第1波長域偏光の輝度値Xλ1に対応するスペクトル画像72Aと、第2波長域偏光の輝度値Xλ2に対応するスペクトル画像72Bと、第3波長域偏光の輝度値Xλ3に対応するスペクトル画像72Cとに分離される。ステップST12の処理が実行された後、スペクトル画像生成処理は終了する。 In step ST12, the interference removal processing unit 84 acquires the interference removal matrix A+ stored in the storage 62 and the output value Y of each physical pixel 44 acquired in step ST10, and outputs the pixel value X of each image pixel based on the acquired interference removal matrix A+ and the output values Y of the physical pixels 44 (see FIG. 10). By executing the interference removal process in step ST12, the captured image 70 is separated into a spectral image 72A corresponding to the luminance value Xλ1 of the first-wavelength-range polarized light, a spectral image 72B corresponding to the luminance value Xλ2 of the second-wavelength-range polarized light, and a spectral image 72C corresponding to the luminance value Xλ3 of the third-wavelength-range polarized light. After the process of step ST12 is executed, the spectral image generation process ends.
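Step ST12 amounts to a per-pixel linear unmixing: the stacked outputs Y of the physical pixels 44 are multiplied by the interference removal matrix A+ (a pseudo-inverse of the channel mixing matrix) to recover the pixel values X for the three wavelength ranges. A minimal sketch, assuming a hypothetical 3×3 mixing matrix A whose numeric values are illustrative only and not taken from the embodiment:

```python
import numpy as np

# Hypothetical mixing matrix A: entry (i, j) is the sensitivity of physical
# channel i to wavelength-range component j (illustrative values only).
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

A_PINV = np.linalg.pinv(A)  # interference removal matrix A+

def remove_interference(y: np.ndarray) -> np.ndarray:
    """Recover the component values X = (Xl1, Xl2, Xl3) at one pixel
    location from the physical-channel outputs Y via X = A+ @ Y."""
    return A_PINV @ np.asarray(y, dtype=np.float64)
```

Because the illustrative A here is square and well-conditioned, A+ coincides with the ordinary inverse; the pseudo-inverse form also covers the over-determined case in which there are more physical channels than wavelength ranges.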
 続いて、本実施形態に係る色測定処理について説明する。図17には、本実施形態に係る色測定処理の流れの一例を示す。 Next, color measurement processing according to this embodiment will be explained. FIG. 17 shows an example of the flow of color measurement processing according to this embodiment.
 図17に示す色測定処理では、先ず、ステップST20で、画像データ取得部102は、基準画像データ及び被写体画像データを取得する。そして、画像データ取得部102は、基準画像データが示す基準画像130及び被写体画像データが示す被写体画像132を取得する。ステップST20の処理が実行された後、色測定処理は、ステップST22へ移行する。 In the color measurement process shown in FIG. 17, first, in step ST20, the image data acquisition unit 102 acquires reference image data and subject image data. The image data acquisition unit 102 then acquires a reference image 130 indicated by the reference image data and a subject image 132 indicated by the subject image data. After the process of step ST20 is executed, the color measurement process moves to step ST22.
 ステップST22で、校正画像生成部104は、ステップST20で取得された基準画像130及び被写体画像132について、基準画像130に基づいて被写体画像132に対する校正を行うことにより、校正画像134(一例として、第1波長域λ1の校正スペクトル画像134A及び第2波長域λ2の校正スペクトル画像134B)を生成する。ステップST22の処理が実行された後、色測定処理は、ステップST24へ移行する。 In step ST22, the calibration image generation unit 104 performs calibration on the subject image 132 based on the reference image 130, using the reference image 130 and the subject image 132 acquired in step ST20, thereby generating calibration images 134 (as an example, the calibration spectral image 134A in the first wavelength range λ1 and the calibration spectral image 134B in the second wavelength range λ2). After the process of step ST22 is executed, the color measurement process moves to step ST24.
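The text does not spell out the arithmetic of step ST22. A common choice for this kind of reference-based calibration is a flat-field ratio, dividing each wavelength band of the subject image by the corresponding band of the reference image of the calibration member; the sketch below assumes that ratio form (function names hypothetical, zero-division guard an added assumption):

```python
import numpy as np

def calibrate_band(subject_band: np.ndarray, reference_band: np.ndarray) -> np.ndarray:
    """One calibrated spectral image (e.g. 134A) from a subject-image band
    and the matching reference-image band, assuming a flat-field-style
    ratio correction."""
    sub = np.asarray(subject_band, dtype=np.float64)
    ref = np.asarray(reference_band, dtype=np.float64)
    safe_ref = np.where(ref == 0.0, 1.0, ref)  # avoid division by zero
    return sub / safe_ref
```

Under this assumption, applying `calibrate_band` once per wavelength range would yield the pair of calibrated images consumed in step ST24.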
 ステップST24で、色導出部106は、ステップST22で生成された第1波長域λ1の校正スペクトル画像134Aと、第2波長域λ2の校正スペクトル画像134Bとに基づいて、被写体2の色を導出する。これにより、色測定結果136が得られる。ステップST24の処理が実行された後、色測定処理は終了する。 In step ST24, the color derivation unit 106 derives the color of the subject 2 based on the calibration spectral image 134A in the first wavelength range λ1 and the calibration spectral image 134B in the second wavelength range λ2 generated in step ST22. As a result, the color measurement result 136 is obtained. After the process of step ST24 is executed, the color measurement process ends.
 なお、上述の処理装置90の作用として説明した校正方法及び色測定方法は、本開示の技術に係る「校正方法」及び「色測定方法」の一例である。ステップST20は、本開示の技術に係る「第1取得工程」及び「第2取得工程」の一例である。ステップST22は、本開示の技術に係る「校正工程」の一例である。ステップST24は、本開示の技術に係る「色測定工程」の一例である。 Note that the calibration method and color measurement method described as the functions of the processing device 90 described above are examples of the "calibration method" and the "color measurement method" according to the technology of the present disclosure. Step ST20 is an example of a "first acquisition step" and a "second acquisition step" according to the technology of the present disclosure. Step ST22 is an example of a "calibration step" according to the technology of the present disclosure. Step ST24 is an example of a "color measurement step" according to the technology of the present disclosure.
 以上詳述した通り、本実施形態では、被写体2の色を測定する場合に、校正用部材116が用いられる(図1参照)。校正用部材116は、被写体2の背景を構成する背景面116Aを有する。各波長域λにおいて、被写体2で反射した光の分光反射率である第1分光反射率aと、背景面116Aで反射した光の分光反射率である第2分光反射率bとは、互いに関係性を有する(図2参照)。 As detailed above, in this embodiment, the calibration member 116 is used when measuring the color of the subject 2 (see FIG. 1). The calibration member 116 has a background surface 116A that constitutes the background of the subject 2. In each wavelength range λ, the first spectral reflectance a, which is the spectral reflectance of the light reflected by the subject 2, and the second spectral reflectance b, which is the spectral reflectance of the light reflected by the background surface 116A, have a relationship with each other (see FIG. 2).
 したがって、被写体2がない場合と被写体2がある場合とで入射光の波長域が変化することが抑制される(図7参照)。これにより、第2比較例に係る色測定方法(図20参照)のように、実際の被写体2の色に比して被写体2の色が濃く測定されることを回避することができる。また、被写体2がない場合と被写体2がある場合とで周辺反射光L7の波長域が変化することが抑制される。これにより、第2比較例に係る色測定方法のように、実際の被写体2の色に比して被写体2の色が薄く測定されたり、フレアの量が変化したりすることを回避することができる。この結果、例えば、背景面116Aが白色面である場合に比して、被写体2の色に対する測定精度を向上させることができる。 Therefore, the wavelength range of the incident light is prevented from changing between when the subject 2 is absent and when the subject 2 is present (see FIG. 7). This makes it possible to avoid the color of the subject 2 being measured darker than its actual color, as occurs in the color measurement method according to the second comparative example (see FIG. 20). In addition, the wavelength range of the peripheral reflected light L7 is prevented from changing between when the subject 2 is absent and when the subject 2 is present. This makes it possible to avoid the color of the subject 2 being measured lighter than its actual color, or the amount of flare changing, as occurs in the color measurement method according to the second comparative example. As a result, the accuracy of measuring the color of the subject 2 can be improved compared with, for example, a case where the background surface 116A is a white surface.
 なお、上記実施形態では、色測定処理は、処理装置90で実行されるが、撮像装置10で実行されてもよい。また、色測定処理のうちの一部の処理が撮像装置10で実行され、色測定処理のうちの残りの処理が処理装置90で実行されてもよい。 Note that in the above embodiment, the color measurement process is executed by the processing device 90, but it may also be executed by the imaging device 10. Further, part of the color measurement process may be executed by the imaging device 10, and the remaining process of the color measurement process may be executed by the processing device 90.
 また、上記実施形態では、筐体114の上部に撮像装置10が配置され、筐体114の下部に校正用部材116が配置される向きで筐体装置120が使用されるが、筐体装置120の向きは、上記以外でもよい。例えば、筐体114の下部に撮像装置10が配置され、筐体114の上部に校正用部材116が配置される向きで筐体装置120が使用されてもよく、撮像装置10及び校正用部材116が水平方向に対向する向きで筐体装置120が使用されてもよい。 Further, in the above embodiment, the housing device 120 is used in an orientation in which the imaging device 10 is disposed at the top of the housing 114 and the calibration member 116 is disposed at the bottom of the housing 114; however, the housing device 120 may be used in other orientations. For example, the housing device 120 may be used in an orientation in which the imaging device 10 is disposed at the bottom of the housing 114 and the calibration member 116 is disposed at the top of the housing 114, or in an orientation in which the imaging device 10 and the calibration member 116 face each other in the horizontal direction.
 また、例えば、筐体114の上部に校正用部材116が配置される向き、又は、撮像装置10及び校正用部材116が水平方向に対向する向きで筐体装置120が使用される場合に、被写体2は、背景面116Aに固定されてもよい。 Furthermore, for example, when the housing device 120 is used in a direction in which the calibration member 116 is arranged on the top of the housing 114 or in a direction in which the imaging device 10 and the calibration member 116 face each other in the horizontal direction, the subject 2 may be fixed to the background surface 116A.
 また、上記実施形態では、筐体114に対して分光反射率が異なる複数の校正用部材116が用いられてもよい。各校正用部材116は、筐体114に対して交換可能でもよく、筐体114に対して切替可能でもよい。そして、複数の校正用部材116の中から、第1分光反射率aに対応する第2分光反射率bを有する校正用部材116が選択されてもよい。 Furthermore, in the above embodiment, a plurality of calibration members 116 having different spectral reflectances may be used for the housing 114. Each calibration member 116 may be replaceable with respect to the housing 114 or may be switchable with respect to the housing 114. Then, a calibration member 116 having a second spectral reflectance b corresponding to the first spectral reflectance a may be selected from among the plurality of calibration members 116.
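Choosing, from several calibration members 116, one whose second spectral reflectance b "corresponds to" the first spectral reflectance a can be made concrete using the range of claim 10, a/2 ≤ b ≤ (c + a)/2, checked in every wavelength range. Pairing this selection step with the claim-10 range is an assumption for illustration, and all names below are hypothetical:

```python
def select_member(a, c, candidates):
    """Return the id of the first candidate calibration member whose
    reflectance b satisfies a/2 <= b <= (c + a)/2 in every wavelength
    range, or None if no candidate qualifies.

    a: per-wavelength-range subject reflectances (first spectral reflectance).
    c: per-wavelength-range reference-plate reflectances (third spectral reflectance).
    candidates: mapping of member id -> per-range reflectances b.
    """
    for member_id, b in candidates.items():
        if all(ai / 2 <= bi <= (ci + ai) / 2 for ai, bi, ci in zip(a, b, c)):
            return member_id
    return None
```

A stricter variant would substitute the claim-11 bounds, 3a/4 ≤ b ≤ (c + 3a)/4, in the same comparison.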
 また、上記実施形態では、撮像装置10のプロセッサ60は、撮像条件が特定条件を外れた場合に警告情報を出力してもよい。警告情報は、ユーザに対して画角を合わせる旨の報知を行うための情報でもよい。報知は、音による報知、振動による報知、及び光による報知の少なくともいずれかを含んでいてもよい。警告情報が出力されることにより、ユーザ等に撮像条件が特定条件を外れたことを認識させることができる。 Furthermore, in the embodiment described above, the processor 60 of the imaging device 10 may output warning information when the imaging conditions deviate from specific conditions. The warning information may be information for notifying the user that the angle of view should be adjusted. The notification may include at least one of sound notification, vibration notification, and light notification. By outputting the warning information, it is possible to make the user or the like recognize that the imaging conditions have deviated from the specific conditions.
 また、撮像装置10の画角を調節する場合に、撮像装置10の撮像範囲内の一部に基準板(図示省略)を配置しておき、被写体2及び/又は校正用部材116が配置された場合に、プロセッサ60は、撮像装置10で得られた撮像画像70の変化に基づいて撮像条件が特定条件を外れたか否かを判定してもよい。 Further, when adjusting the angle of view of the imaging device 10, a reference plate (not shown) may be placed in a part of the imaging range of the imaging device 10, and, when the subject 2 and/or the calibration member 116 are placed, the processor 60 may determine whether the imaging conditions have deviated from the specific conditions based on a change in the captured image 70 obtained by the imaging device 10.
 また、上記実施形態では、撮像装置10について、プロセッサ60を例示したが、プロセッサ60に代えて、又は、プロセッサ60と共に、他の少なくとも1つのCPU、少なくとも1つのGPU、及び/又は、少なくとも1つのTPUを用いるようにしてもよい。 Further, in the above embodiment, the processor 60 is illustrated in the imaging device 10, but instead of the processor 60, or together with the processor 60, at least one other CPU, at least one GPU, and/or at least one TPU may also be used.
 また、上記実施形態では、処理装置90について、プロセッサ94を例示したが、プロセッサ94に代えて、又は、プロセッサ94と共に、他の少なくとも1つのCPU、少なくとも1つのGPU、及び/又は、少なくとも1つのTPUを用いるようにしてもよい。 Further, in the above embodiment, the processor 94 is illustrated as an example of the processing device 90, but instead of the processor 94, or together with the processor 94, at least one other CPU, at least one GPU, and/or at least one TPU may also be used.
 また、上記実施形態では、撮像装置10について、ストレージ62にスペクトル画像生成プログラム80が記憶されている形態例を挙げて説明したが、本開示の技術はこれに限定されない。例えば、スペクトル画像生成プログラム80がSSD又はUSBメモリなどの可搬型の非一時的なコンピュータ読取可能な記憶媒体(以下、単に「非一時的記憶媒体」と称する)に記憶されていてもよい。非一時的記憶媒体に記憶されているスペクトル画像生成プログラム80は、撮像装置10のコンピュータ56にインストールされてもよい。 Further, in the above embodiment, the imaging device 10 has been described using an example in which the spectral image generation program 80 is stored in the storage 62, but the technology of the present disclosure is not limited to this. For example, the spectral image generation program 80 may be stored in a portable non-transitory computer-readable storage medium (hereinafter simply referred to as "non-transitory storage medium") such as an SSD or a USB memory. The spectral image generation program 80 stored in a non-transitory storage medium may be installed on the computer 56 of the imaging device 10.
 また、ネットワークを介して撮像装置10に接続される他のコンピュータ又はサーバ装置等の記憶装置にスペクトル画像生成プログラム80を記憶させておき、撮像装置10の要求に応じてスペクトル画像生成プログラム80がダウンロードされ、撮像装置10のコンピュータ56にインストールされてもよい。 Further, the spectral image generation program 80 may be stored in a storage device such as another computer or a server device connected to the imaging device 10 via a network, and the spectral image generation program 80 may be downloaded and installed on the computer 56 of the imaging device 10 in response to a request from the imaging device 10.
 また、撮像装置10に接続される他のコンピュータ又はサーバ装置等の記憶装置、又はストレージ62にスペクトル画像生成プログラム80の全てを記憶させておく必要はなく、スペクトル画像生成プログラム80の一部を記憶させておいてもよい。 Further, it is not necessary to store the entire spectral image generation program 80 in the storage 62 or in a storage device such as another computer or server device connected to the imaging device 10; only a part of the spectral image generation program 80 may be stored.
 また、上記実施形態では、処理装置90について、ストレージ96に色測定プログラム100が記憶されている形態例を挙げて説明したが、本開示の技術はこれに限定されない。例えば、色測定プログラム100が非一時的記憶媒体に記憶されていてもよい。非一時的記憶媒体に記憶されている色測定プログラム100は、処理装置90のコンピュータ92にインストールされてもよい。 Furthermore, in the above embodiment, the processing device 90 has been described using an example in which the color measurement program 100 is stored in the storage 96, but the technology of the present disclosure is not limited to this. For example, the color measurement program 100 may be stored in a non-transitory storage medium. Color measurement program 100 stored on a non-transitory storage medium may be installed on computer 92 of processing device 90.
 また、ネットワークを介して処理装置90に接続される他のコンピュータ又はサーバ装置等の記憶装置に色測定プログラム100を記憶させておき、処理装置90の要求に応じて色測定プログラム100がダウンロードされ、処理装置90のコンピュータ92にインストールされてもよい。 Further, the color measurement program 100 may be stored in a storage device such as another computer or a server device connected to the processing device 90 via a network, and the color measurement program 100 may be downloaded and installed on the computer 92 of the processing device 90 in response to a request from the processing device 90.
 また、処理装置90に接続される他のコンピュータ又はサーバ装置等の記憶装置、又はストレージ96に色測定プログラム100の全てを記憶させておく必要はなく、色測定プログラム100の一部を記憶させておいてもよい。 Further, it is not necessary to store the entire color measurement program 100 in the storage 96 or in a storage device such as another computer or server device connected to the processing device 90; only a part of the color measurement program 100 may be stored.
 また、撮像装置10には、コンピュータ56が内蔵されているが、本開示の技術はこれに限定されず、例えば、コンピュータ56が撮像装置10の外部に設けられるようにしてもよい。 Further, although the imaging device 10 has a built-in computer 56, the technology of the present disclosure is not limited to this, and for example, the computer 56 may be provided outside the imaging device 10.
 また、処理装置90には、コンピュータ92が内蔵されているが、本開示の技術はこれに限定されず、例えば、コンピュータ92が処理装置90の外部に設けられるようにしてもよい。 Further, although the processing device 90 has a built-in computer 92, the technology of the present disclosure is not limited to this, and for example, the computer 92 may be provided outside the processing device 90.
 また、上記実施形態では、撮像装置10について、プロセッサ60、ストレージ62、及びRAM64を含むコンピュータ56が例示されているが、本開示の技術はこれに限定されず、コンピュータ56に代えて、ASIC、FPGA、及び/又はPLDを含むデバイスを適用してもよい。また、コンピュータ56に代えて、ハードウェア構成及びソフトウェア構成の組み合わせを用いてもよい。 Further, in the above embodiment, the computer 56 including the processor 60, the storage 62, and the RAM 64 is illustrated for the imaging device 10, but the technology of the present disclosure is not limited to this; a device including an ASIC, an FPGA, and/or a PLD may be applied instead of the computer 56. A combination of a hardware configuration and a software configuration may also be used instead of the computer 56.
 また、上記実施形態では、処理装置90について、プロセッサ94、ストレージ96、及びRAM98を含むコンピュータ92が例示されているが、本開示の技術はこれに限定されず、コンピュータ92に代えて、ASIC、FPGA、及び/又はPLDを含むデバイスを適用してもよい。また、コンピュータ92に代えて、ハードウェア構成及びソフトウェア構成の組み合わせを用いてもよい。 Further, in the above embodiment, the computer 92 including the processor 94, the storage 96, and the RAM 98 is illustrated for the processing device 90, but the technology of the present disclosure is not limited to this; a device including an ASIC, an FPGA, and/or a PLD may be applied instead of the computer 92. A combination of a hardware configuration and a software configuration may also be used instead of the computer 92.
 また、上記実施形態で説明した各種処理を実行するハードウェア資源としては、次に示す各種のプロセッサを用いることができる。プロセッサとしては、例えば、ソフトウェア、すなわち、プログラムを実行することで、各種処理を実行するハードウェア資源として機能する汎用的なプロセッサであるCPUが挙げられる。また、プロセッサとしては、例えば、FPGA、PLD、又はASICなどの特定の処理を実行させるために専用に設計された回路構成を有するプロセッサである専用電子回路が挙げられる。何れのプロセッサにもメモリが内蔵又は接続されており、何れのプロセッサもメモリを使用することで各種処理を実行する。 Additionally, the following various processors can be used as hardware resources for executing the various processes described in the above embodiments. Examples of the processor include a CPU, which is a general-purpose processor that functions as a hardware resource that executes various processes by executing software, that is, a program. Examples of the processor include a dedicated electronic circuit such as an FPGA, a PLD, or an ASIC, which is a processor having a circuit configuration specifically designed to execute a specific process. Each processor has a built-in memory or is connected to it, and each processor uses the memory to perform various processes.
 各種処理を実行するハードウェア資源は、これらの各種のプロセッサのうちの1つで構成されてもよいし、同種または異種の2つ以上のプロセッサの組み合わせ(例えば、複数のFPGAの組み合わせ、又はCPUとFPGAとの組み合わせ)で構成されてもよい。また、各種処理を実行するハードウェア資源は1つのプロセッサであってもよい。 Hardware resources that execute various processes may be configured with one of these various processors, or a combination of two or more processors of the same type or different types (for example, a combination of multiple FPGAs, or a CPU and FPGA). Furthermore, the hardware resource that executes various processes may be one processor.
 1つのプロセッサで構成する例としては、第1に、1つ以上のCPUとソフトウェアの組み合わせで1つのプロセッサを構成し、このプロセッサが、各種処理を実行するハードウェア資源として機能する形態がある。第2に、SoCなどに代表されるように、各種処理を実行する複数のハードウェア資源を含むシステム全体の機能を1つのICチップで実現するプロセッサを使用する形態がある。このように、各種処理は、ハードウェア資源として、上記各種のプロセッサの1つ以上を用いて実現される。 As an example of a configuration using one processor, firstly, one processor is configured by a combination of one or more CPUs and software, and this processor functions as a hardware resource that executes various processes. Second, there is a form of using a processor, as typified by an SoC, in which a single IC chip realizes the functions of an entire system including a plurality of hardware resources that execute various processes. In this way, various types of processing are realized using one or more of the various types of processors described above as hardware resources.
 更に、これらの各種のプロセッサのハードウェア的な構造としては、より具体的には、半導体素子などの回路素子を組み合わせた電子回路を用いることができる。また、上記の各処理はあくまでも一例である。したがって、主旨を逸脱しない範囲内において不要なステップを削除したり、新たなステップを追加したり、処理順序を入れ替えたりしてもよいことは言うまでもない。 Furthermore, as the hardware structure of these various processors, more specifically, an electronic circuit combining circuit elements such as semiconductor elements can be used. The processes described above are merely examples; it therefore goes without saying that unnecessary steps may be deleted, new steps may be added, and the processing order may be rearranged without departing from the gist.
 以上に示した記載内容及び図示内容は、本開示の技術に係る部分についての詳細な説明であり、本開示の技術の一例に過ぎない。例えば、上記の構成、機能、作用、及び効果に関する説明は、本開示の技術に係る部分の構成、機能、作用、及び効果の一例に関する説明である。よって、本開示の技術の主旨を逸脱しない範囲内において、以上に示した記載内容及び図示内容に対して、不要な部分を削除したり、新たな要素を追加したり、置き換えたりしてもよいことは言うまでもない。また、錯綜を回避し、本開示の技術に係る部分の理解を容易にするために、以上に示した記載内容及び図示内容では、本開示の技術の実施を可能にする上で特に説明を要しない技術常識等に関する説明は省略されている。 The descriptions and illustrations shown above are detailed explanations of the parts related to the technology of the present disclosure and are merely examples of the technology of the present disclosure. For example, the above descriptions of configurations, functions, operations, and effects are descriptions of examples of the configurations, functions, operations, and effects of the parts related to the technology of the present disclosure. Therefore, it goes without saying that unnecessary parts may be deleted, and new elements may be added or substituted, in the descriptions and illustrations shown above without departing from the gist of the technology of the present disclosure. In addition, in order to avoid complication and to facilitate understanding of the parts related to the technology of the present disclosure, explanations of common technical knowledge and the like that do not require particular description to enable implementation of the technology of the present disclosure are omitted from the descriptions and illustrations shown above.
 本明細書において、「A及び/又はB」は、「A及びBのうちの少なくとも1つ」と同義である。つまり、「A及び/又はB」は、Aだけであってもよいし、Bだけであってもよいし、A及びBの組み合わせであってもよい、という意味である。また、本明細書において、3つ以上の事柄を「及び/又は」で結び付けて表現する場合も、「A及び/又はB」と同様の考え方が適用される。 In this specification, "A and/or B" has the same meaning as "at least one of A and B." That is, "A and/or B" means that it may be only A, only B, or a combination of A and B. Furthermore, in this specification, even when three or more items are expressed by connecting them with "and/or", the same concept as "A and/or B" is applied.
 本明細書に記載された全ての文献、特許出願及び技術規格は、個々の文献、特許出願及び技術規格が参照により取り込まれることが具体的かつ個々に記された場合と同程度に、本明細書中に参照により取り込まれる。 All documents, patent applications, and technical standards described in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, and technical standard were specifically and individually indicated to be incorporated by reference.
 以上の実施形態に関し、さらに以下の付記を開示する。 Regarding the above embodiments, the following additional notes are further disclosed.
 (付記1)
 請求項1に記載の校正用部材と、プロセッサとを備え、
 前記プロセッサは、
 前記校正用部材が前記分光撮像装置によって撮像されることにより得られた第1撮像データを取得し、
 前記校正用部材及び被写体が前記分光撮像装置によって撮像されることにより得られた第2撮像データを取得し、
 前記第1撮像データに基づいて前記第2撮像データに対する校正を行い、
 前記校正が行われた前記第2撮像データに基づいて前記被写体の色を測定する
 色測定装置。
 (付記2)
 請求項1に記載の校正用部材が前記分光撮像装置によって撮像されることにより得られた第1撮像データを取得する第1取得工程と、
 前記校正用部材及び前記被写体が前記分光撮像装置によって撮像されることにより得られた第2撮像データを取得する第2取得工程と、
 前記第1撮像データに基づいて前記第2撮像データに対する校正を行う校正工程と、
 前記校正が行われた前記第2撮像データに基づいて前記被写体の色を測定する色測定工程と、
 を備える色測定方法。
 (付記3)
 請求項1に記載の校正用部材が前記分光撮像装置によって撮像されることにより得られた第1撮像データを取得する第1取得工程と、
 前記校正用部材及び前記被写体が前記分光撮像装置によって撮像されることにより得られた第2撮像データを取得する第2取得工程と、
 前記第1撮像データに基づいて前記第2撮像データに対する校正を行う校正工程と、
 前記校正が行われた前記第2撮像データに基づいて前記被写体の色を測定する色測定工程と、
 を含む処理をコンピュータに実行させるためのプログラム。 A program for causing a computer to execute processing including the above steps.
(Additional note 1)
comprising the calibration member according to claim 1 and a processor,
The processor includes:
obtaining first imaging data obtained by imaging the calibration member by the spectroscopic imaging device;
acquiring second imaging data obtained by imaging the calibration member and the subject by the spectroscopic imaging device;
calibrating the second image data based on the first image data;
A color measurement device that measures the color of the subject based on the second image data that has been calibrated.
(Additional note 2)
a first acquisition step of acquiring first imaging data obtained by imaging the calibration member according to claim 1 with the spectroscopic imaging device;
a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectroscopic imaging device;
a calibration step of calibrating the second image data based on the first image data;
a color measurement step of measuring the color of the subject based on the second image data that has been calibrated;
A color measurement method comprising:
(Appendix 3)
a first acquisition step of acquiring first imaging data obtained by imaging the calibration member according to claim 1 with the spectroscopic imaging device;
a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectroscopic imaging device;
a calibration step of calibrating the second image data based on the first image data;
a color measurement step of measuring the color of the subject based on the second image data that has been calibrated;
A program for causing a computer to execute processing including the above steps.

Claims (21)

  1.  特定の波長域を有する分光フィルタを備えた分光撮像装置の校正に用いられる校正用部材であって、
     前記校正用部材は、被写体の背景を構成する背景面を有し、
     前記波長域において、前記被写体で反射した光の分光反射率である第1分光反射率と、前記背景面で反射した光の分光反射率である第2分光反射率とは、互いに関係性を有する
     校正用部材。
    A calibration member used for calibrating a spectral imaging device equipped with a spectral filter having a specific wavelength range,
    The calibration member has a background surface that constitutes the background of the subject,
    In the wavelength range, a first spectral reflectance, which is the spectral reflectance of light reflected by the subject, and a second spectral reflectance, which is the spectral reflectance of light reflected by the background surface, have a relationship with each other.
  2.  前記関係性は、前記第1分光反射率と前記第2分光反射率との差が第1範囲内である関係性である
     請求項1に記載の校正用部材。
    The calibration member according to claim 1, wherein the relationship is such that a difference between the first spectral reflectance and the second spectral reflectance is within a first range.
  3.  前記第1範囲は、前記第1分光反射率に対して0.5倍の反射率から2倍の反射率までの範囲である
     請求項2に記載の校正用部材。
    The calibration member according to claim 2, wherein the first range is from a reflectance 0.5 times the first spectral reflectance to a reflectance twice the first spectral reflectance.
  4.  前記第1範囲は、第1基準板で反射した光の分光反射率である第3分光反射率に基づいて設定される
     請求項2又は請求項3に記載の校正用部材。
    The calibration member according to claim 2 or 3, wherein the first range is set based on a third spectral reflectance that is a spectral reflectance of light reflected by the first reference plate.
  5.  前記第1基準板は、前記光を反射する反射面を有し、
     前記反射面は、白色面である
     請求項4に記載の校正用部材。
    The first reference plate has a reflective surface that reflects the light,
    The calibration member according to claim 4, wherein the reflective surface is a white surface.
  6.  前記第1分光反射率と前記第2分光反射率との差は、前記第1分光反射率と前記第3分光反射率との差よりも小さい
     請求項4又は請求項5に記載の校正用部材。
    The calibration member according to claim 4 or 5, wherein the difference between the first spectral reflectance and the second spectral reflectance is smaller than the difference between the first spectral reflectance and the third spectral reflectance.
  7.  前記第2分光反射率は、前記第3分光反射率よりも低い
     請求項4から請求項6の何れか一項に記載の校正用部材。
    The calibration member according to any one of claims 4 to 6, wherein the second spectral reflectance is lower than the third spectral reflectance.
  8.  前記第1分光反射率は、前記第3分光反射率よりも低い
     請求項4から請求項7の何れか一項に記載の校正用部材。
    The calibration member according to any one of claims 4 to 7, wherein the first spectral reflectance is lower than the third spectral reflectance.
  9.  前記第2分光反射率は、前記第1分光反射率及び前記第3分光反射率に基づいて設定された分光反射率の第2範囲に収まる
     請求項4から請求項8の何れか一項に記載の校正用部材。
    The calibration member according to any one of claims 4 to 8, wherein the second spectral reflectance falls within a second range of spectral reflectance set based on the first spectral reflectance and the third spectral reflectance.
  10.  前記第1分光反射率をaとし、前記第2分光反射率をbとし、前記第3分光反射率をcとした場合に、
     前記第2範囲は、式(1)によって規定される範囲である
     a/2≦b≦(c+a)/2・・・(1)
     請求項9に記載の校正用部材。
    When the first spectral reflectance is a, the second spectral reflectance is b, and the third spectral reflectance is c,
    The second range is a range defined by formula (1): a/2≦b≦(c+a)/2 (1)
    The calibration member according to claim 9.
  11.  前記第1分光反射率をaとし、前記第2分光反射率をbとし、前記第3分光反射率をcとした場合に、
     前記第2範囲は、式(2)によって規定される範囲である
     3a/4≦b≦(c+3a)/4・・・(2)
     請求項9に記載の校正用部材。
    When the first spectral reflectance is a, the second spectral reflectance is b, and the third spectral reflectance is c,
    The second range is defined by formula (2): 3a/4≦b≦(c+3a)/4 (2)
    The calibration member according to claim 9.
  12.  前記第1分光反射率は、前記被写体の複数の箇所に対して測定された分光反射率に基づく分光反射率である
     請求項1から請求項11の何れか一項に記載の校正用部材。
    The calibration member according to any one of claims 1 to 11, wherein the first spectral reflectance is a spectral reflectance based on spectral reflectances measured at a plurality of locations on the subject.
  13.  前記特定の波長域は、複数の波長域を含み、
     各前記波長域において、前記第1分光反射率と前記第2分光反射率との差は、前記第1分光反射率と前記第3分光反射率との差よりも小さい
     請求項4から請求項11、及び請求項4に従属する請求項12の何れか一項に記載の校正用部材。
    The calibration member according to any one of claims 4 to 11, or according to claim 12 when dependent on claim 4, wherein the specific wavelength range includes a plurality of wavelength ranges, and in each of the wavelength ranges, the difference between the first spectral reflectance and the second spectral reflectance is smaller than the difference between the first spectral reflectance and the third spectral reflectance.
  14.  前記背景面は、前記被写体と対応する表面粗さを有する
     請求項1から請求項13の何れか一項に記載の校正用部材。
    The calibration member according to any one of claims 1 to 13, wherein the background surface has a surface roughness corresponding to that of the subject.
  15.  前記背景面の分光反射率は、可視域のうちの第1可視域よりも近赤外域のうちの第1近赤外域の方が高い
     請求項1から請求項14の何れか一項に記載の校正用部材。
    The calibration member according to any one of claims 1 to 14, wherein the spectral reflectance of the background surface is higher in a first near-infrared region of the near-infrared range than in a first visible region of the visible range.
  16.  請求項1から請求項15の何れか一項に記載の校正用部材と、
     前記校正用部材及び前記被写体が配置された撮像空間を覆う筐体と、
     を備える筐体装置。
    A calibration member according to any one of claims 1 to 15,
    a housing that covers an imaging space in which the calibration member and the subject are arranged;
    A housing device comprising:
  17.  請求項1から請求項15の何れか一項に記載の校正用部材と、前記分光撮像装置と、光源と、筐体とを備え、
     前記筐体は、前記校正用部材及び前記被写体が配置された撮像空間を覆っており、
     前記被写体を前記分光撮像装置で撮像する場合の撮像条件は、前記分光撮像装置に入射する入射光のうちの第1成分が、前記光源から照射され前記被写体及び前記校正用部材で反射した光となる第1条件である
     校正装置。
    The calibration member according to any one of claims 1 to 15, the spectral imaging device, a light source, and a housing,
    The housing covers an imaging space in which the calibration member and the subject are arranged,
    An imaging condition for imaging the subject with the spectral imaging device is a first condition in which a first component of the incident light entering the spectral imaging device is light emitted from the light source and reflected by the subject and the calibration member.
  18.  プロセッサを備え、
     前記プロセッサは、前記撮像条件が前記第1条件を外れた場合に警告情報を出力する
     請求項17に記載の校正装置。
    Equipped with a processor,
    The calibration device according to claim 17, wherein the processor outputs warning information when the imaging condition deviates from the first condition.
  19.  A calibration device comprising the calibration member according to any one of claims 1 to 15 and a processor,
     wherein the processor:
     acquires first imaging data obtained by imaging the calibration member with the spectral imaging device;
     acquires second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device; and
     calibrates the second imaging data based on the first imaging data.
  20.  A calibration method comprising:
     a first acquisition step of acquiring first imaging data obtained by imaging the calibration member according to any one of claims 1 to 15 with the spectral imaging device;
     a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device; and
     a calibration step of calibrating the second imaging data based on the first imaging data.
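The calibration step of claim 20 amounts to converting raw spectral counts from the second capture into reflectance estimates using the capture of the calibration member, whose reflectance is known. The following is a minimal illustrative sketch only, not the patented method: the function name, the per-band averaging over the reference cube, and the `known_reflectance` parameter are assumptions introduced for this example.

```python
import numpy as np

def calibrate(first, second, known_reflectance=1.0):
    """Estimate scene reflectance from a reference capture.

    first  -- (H, W, B) cube: the calibration member alone (reference).
    second -- (H, W, B) cube: the calibration member plus the subject.
    known_reflectance -- reflectance of the calibration member, per the
                         member's specification (scalar or length-B array).
    """
    eps = 1e-9  # guard against division by zero in dark bands
    # Per-band reference signal level from the calibration-member capture.
    reference = first.mean(axis=(0, 1))  # shape (B,)
    # Observed signal relative to the reference, scaled by the member's
    # known reflectance, gives the estimated reflectance of the scene.
    return second * (known_reflectance / (reference + eps))

# Synthetic example: reference reads 100 counts per band, the subject
# reads 50 counts, and the member's reflectance is 0.9, so the subject's
# estimated reflectance is 50 * 0.9 / 100 = 0.45 in every band.
first = np.full((2, 2, 3), 100.0)
second = np.full((2, 2, 3), 50.0)
out = calibrate(first, second, known_reflectance=0.9)
```

Averaging over the whole reference cube assumes spatially uniform illumination; a per-pixel reference (omitting the mean) would instead correct spatial non-uniformity as in flat-field correction.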
  21.  A program for causing a computer to execute a process comprising:
     a first acquisition step of acquiring first imaging data obtained by imaging the calibration member according to any one of claims 1 to 15 with the spectral imaging device;
     a second acquisition step of acquiring second imaging data obtained by imaging the calibration member and the subject with the spectral imaging device; and
     a calibration step of calibrating the second imaging data based on the first imaging data.
PCT/JP2023/017201 2022-08-29 2023-05-02 Member for calibration, housing device, calibration device, calibration method, and program WO2024047944A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-136090 2022-08-29
JP2022136090 2022-08-29

Publications (1)

Publication Number Publication Date
WO2024047944A1 true WO2024047944A1 (en) 2024-03-07

Family

ID=90099135

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/017201 WO2024047944A1 (en) 2022-08-29 2023-05-02 Member for calibration, housing device, calibration device, calibration method, and program

Country Status (1)

Country Link
WO (1) WO2024047944A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000214103A (en) * 1999-01-22 2000-08-04 Toshiba Eng Co Ltd Defect-inspecting device
JP2003214951A (en) * 2002-01-28 2003-07-30 Matsushita Electric Works Ltd Spectrometric measuring device and method
US20110149109A1 (en) * 2009-12-21 2011-06-23 Electronics And Telecommunications Research Institute Apparatus and method for converting color of taken image
WO2017183567A1 (en) * 2016-04-19 2017-10-26 コニカミノルタ株式会社 Calibration system, calibration method, and calibration program
JP2021113744A (en) * 2020-01-20 2021-08-05 株式会社コンセプトアートテクノロジーズ Imaging system
WO2022163671A1 (en) * 2021-01-29 2022-08-04 富士フイルム株式会社 Data processing device, method, and program, optical element, imaging optical system, and imaging device


Similar Documents

Publication Publication Date Title
CN106456070B (en) Image forming apparatus and method
US9638575B2 (en) Measuring apparatus, measuring system, and measuring method
JP4235252B2 (en) Image processing device
CN108780142A (en) 3D imaging systems and method
US10438365B2 (en) Imaging device, subject information acquisition method, and computer program
US9188841B2 (en) Imaging device
WO2022163671A1 (en) Data processing device, method, and program, optical element, imaging optical system, and imaging device
JP2018040976A (en) Filter, image capturing device, and image capturing system
JP6225519B2 (en) Measuring apparatus and measuring method
JP6341335B2 (en) Two-dimensional color measuring device
WO2024047944A1 (en) Member for calibration, housing device, calibration device, calibration method, and program
US20230230345A1 (en) Image analysis method, image analysis device, program, and recording medium
JP6931401B2 (en) Imaging equipment and image processing equipment
JP2017058559A (en) Optical device and imaging device
JP2015513824A (en) Imaging
EP4086597A1 (en) Imaging unit and measurement device
US11982899B2 (en) Image processing device, imaging device, image processing method, and image processing program
WO2024024174A1 (en) Lens device, multispectral camera, control device, control method, and program
WO2023157396A1 (en) Lens device, information processing device, program, and method for manufacturing imaging device
WO2023145188A1 (en) Imaging device, processing device, processing method, program, and optical filter
WO2023007965A1 (en) Image-capturing method and program
JP5761762B2 (en) Spectral reflectance measuring apparatus and spectral reflectance measuring method
US20230082539A1 (en) Illuminant correction in an imaging system
WO2024042783A1 (en) Image processing device, image processing method, and program
WO2022234753A1 (en) Solid-state imaging device and electronic apparatus

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23859715

Country of ref document: EP

Kind code of ref document: A1