WO2024121983A1 - Data processing device, data processing method, and data processing program - Google Patents
Data processing device, data processing method, and data processing program
- Publication number
- WO2024121983A1 (PCT application PCT/JP2022/045130; application number JP2022045130W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- imaging
- infrared
- coefficient
- scanning mirror
- areas
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64G—COSMONAUTICS; VEHICLES OR EQUIPMENT THEREFOR
- B64G1/00—Cosmonautic vehicles
- B64G1/10—Artificial satellites; Systems of such satellites; Interplanetary vehicles
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64G—COSMONAUTICS; VEHICLES OR EQUIPMENT THEREFOR
- B64G1/00—Cosmonautic vehicles
- B64G1/22—Parts of, or equipment specially adapted for fitting in or to, cosmonautic vehicles
- B64G1/66—Arrangements or adaptations of apparatus or instruments, not otherwise provided for
Definitions
- This disclosure relates to calibrating the infrared emissivity of a scanning mirror installed in an imaging device in space.
- The characteristics of an imaging device are prone to change due to changes in its environment. In order to perform appropriate correction processing and obtain higher-quality captured images, it is necessary to accurately grasp the characteristics of the imaging device at the time of image capture. For this reason, the characterization of the imaging device is updated by capturing an image of a specific object while the imaging device is in operation, or by using a special imaging technique.
- In an imaging device equipped with a scanning mirror, the observed radiance changes depending on the angle of the scanning mirror when an infrared image is captured. This is because the infrared emissivity of the coating applied to the surface of the scanning mirror depends on that angle. Therefore, in order to accurately estimate the infrared radiance of a subject captured by the imaging device, the infrared emissivity of the scanning mirror must be properly calibrated with respect to the angle of the scanning mirror.
- A technology related to this disclosure is disclosed in Non-Patent Document 1.
- The infrared emissivity of a coated scanning mirror depends on the angle of incidence of light on the scanning mirror. It has been confirmed that the infrared emissivity of a scanning mirror installed in an imaging device mounted on an artificial satellite changes due to the change in environment between the ground before launch and space after launch. In other words, the infrared emissivity of a scanning mirror in space after launch does not match the infrared emissivity measured on the ground before launch. Infrared emissivity is used to calculate the infrared radiance of a subject. For this reason, if the infrared emissivity of the scanning mirror after launch is not accurately known, differences in the angle of the scanning mirror during imaging cause errors in the calculated infrared radiance.
- The primary objective of this disclosure is to reduce the errors contained in such infrared radiance.
- The data processing device according to this disclosure comprises: an imaging area designation unit that designates, as two imaging areas to be imaged by an imaging device in outer space in which one or more scanning mirrors and a line sensor are provided on the same optical path, two areas that are estimated to have a uniform brightness temperature in outer space, that have known infrared radiance values, and for which the rotation angle of at least one of the one or more scanning mirrors differs for each imaging; an image acquisition unit that acquires two captured images obtained by the imaging device capturing images of the two imaging areas; and a monitor data acquisition unit that acquires monitor data including angle data indicating the rotation angle of each scanning mirror at each time of imaging the two imaging areas.
- The data processing device further comprises a coefficient calculation unit that calculates a coefficient of an infrared emissivity function for each of the one or more scanning mirrors using the two captured images and the monitor data.
- FIG. 1 is a diagram showing an example of a system configuration according to a first embodiment.
- FIG. 2 is a diagram showing an example of the internal configuration of an artificial satellite according to the first embodiment.
- FIG. 3 is a diagram showing an example of the internal configuration of an imaging device according to the first embodiment.
- FIG. 4 is a diagram showing an example of the hardware configuration of the emissivity calibration device according to the first embodiment.
- FIG. 5 is a diagram showing an example of the functional configuration of the emissivity calibration device according to the first embodiment.
- FIG. 6 is a flowchart showing an example of the operation of the emissivity calibration device according to the first embodiment.
- FIG. 7 and FIG. 8 are diagrams showing examples of designating imaging regions according to the first embodiment.
- FIG. 1 shows an example of a system configuration according to the present embodiment.
- In FIG. 1, the emissivity calibration device 100 is placed on the ground, while the imaging device 200 is mounted on an artificial satellite 300.
- In other words, the imaging device 200 exists in outer space.
- FIG. 2 shows an outline of the internal configuration of the satellite 300.
- The artificial satellite 300 includes an imaging device 200, an imaging control device 400, and a monitor device 500.
- FIG. 2 shows only the internal configuration necessary for explaining this embodiment; mechanisms for controlling the operation of the artificial satellite 300 and the like are omitted.
- The imaging control device 400 controls the imaging performed by the imaging device 200.
- The monitor device 500 monitors the state of the scanning mirrors mounted in the imaging device 200.
- In FIG. 2, the imaging device 200, the imaging control device 400, and the monitor device 500 are shown as separate devices, but they may be integrated into a single device.
- Fig. 3 shows an outline of the internal structure of the imaging device 200. Note that Fig. 3 shows only the internal configuration necessary for explaining this embodiment.
- The imaging device 200 includes one or more scanning mirrors and a line sensor on the same optical path. Each scanning mirror is a one-axis driven scanning mirror. FIG. 3 shows an example in which the imaging device 200 includes a scanning mirror β, which is a primary scanning mirror, and a scanning mirror α, which is a secondary scanning mirror.
- The scanning mirror α and the scanning mirror β can each rotate around a rotation axis. In other words, the rotation angles of the scanning mirror α and the scanning mirror β can be changed.
- The rotation angle of a scanning mirror is the amount of displacement (angle) from the reference angle (reference position) of that scanning mirror.
- The rotation axes of the scanning mirror α and the scanning mirror β do not have to be parallel. For example, they may be perpendicular.
- The line sensor captures an image of a region in outer space that is designated as an imaging region by the emissivity calibration device 100.
- The arrows shown in FIG. 3 indicate optical paths.
- The monitor device 500 shown in FIG. 2 monitors, as the state of each scanning mirror, the rotation angle and temperature of each scanning mirror during imaging.
- As mentioned above, the infrared emissivity of the scanning mirror before launch does not match that after launch. If the infrared emissivity of the scanning mirror after launch is not accurately known, differences in the angle of the scanning mirror during imaging cause errors in the calculated infrared radiance of the subject. For this reason, in this embodiment, the emissivity calibration device 100 designates two areas for which the rotation angle of the scanning mirror differs for each image capture, causes the imaging device 200 to capture images of the two areas, and estimates the infrared emissivity of the scanning mirror from the difference between the two captured images, based on the difference in the rotation angle of the scanning mirror.
- More specifically, the emissivity calibration device 100 designates two imaging regions to be imaged by the imaging device 200 and transmits imaging region data 150 indicating the two imaging regions to the imaging control device 400, as shown in FIG. 2. The method of designating the imaging regions is described in detail later.
- The imaging control device 400 transfers the imaging region data 150 to the imaging device 200.
- The imaging device 200 then captures images of the two imaging regions specified by the imaging region data 150 and transmits the two captured images 250 to the emissivity calibration device 100.
- The monitor device 500 monitors the rotation angle and temperature of each scanning mirror during imaging and transmits monitor data 550 indicating the monitoring results to the emissivity calibration device 100.
- The monitor data 550 includes angle data and temperature data of each scanning mirror for each imaging time.
- The emissivity calibration device 100 then uses the two captured images 250 and the monitor data 550 to estimate the infrared emissivity of each scanning mirror based on the difference in the rotation angle of the scanning mirror.
- FIG. 4 shows an example of the hardware configuration of the emissivity calibration device 100 according to the present embodiment.
- FIG. 5 shows an example of the functional configuration of the emissivity calibration device 100 according to this embodiment.
- The emissivity calibration device 100 is a computer.
- The emissivity calibration device 100 corresponds to a data processing device.
- The operation procedure of the emissivity calibration device 100 corresponds to a data processing method.
- A program that realizes the operation of the emissivity calibration device 100 corresponds to a data processing program.
- The emissivity calibration device 100 includes, as hardware, a processor 901, a main storage device 902, an auxiliary storage device 903, a communication device 904, and an input/output device 905.
- The emissivity calibration device 100 includes, as its functional configuration, an imaging area designation unit 101, an image acquisition unit 102, a monitor data acquisition unit 103, a coefficient calculation unit 104, a coefficient output unit 105, an imaging area database 106, and a device characteristics database 107.
- The functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 are realized by, for example, a program.
- The auxiliary storage device 903 stores programs for implementing the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105. These programs are loaded from the auxiliary storage device 903 into the main storage device 902. The processor 901 then executes these programs to perform the operations of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105, which are described later.
- FIG. 4 schematically illustrates a state in which the processor 901 is executing a program that realizes the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105, i.e., a state in which the processor 901 operates as the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105.
- The imaging area database 106 and the device characteristics database 107 shown in FIG. 4 are realized in the main storage device 902 and/or the auxiliary storage device 903.
- The imaging area designation unit 101 designates two imaging areas.
- The imaging areas are areas to be imaged by the imaging device 200.
- The two imaging areas are treated as a set.
- The imaging area designation unit 101 designates a plurality of area sets. Specifically, the imaging area designation unit 101 designates, as the two imaging areas, two areas that are estimated to have a uniform brightness temperature in outer space, that have known infrared radiance values, and for which the rotation angle of each scanning mirror differs for each imaging. For example, the imaging area designation unit 101 designates two deep space areas as the imaging areas. Furthermore, the imaging area designation unit 101 may designate two areas located symmetrically with respect to an arbitrary area in outer space as the imaging areas.
- For example, the imaging area designation unit 101 may designate two deep space regions that are symmetrically positioned with respect to an arbitrary region in outer space as the imaging areas.
- A deep space region is a region of space that is more than 2 million kilometers away from the Earth.
- The imaging area designation unit 101 transmits imaging area data 150 indicating the two imaging areas to the imaging control device 400 via the communication device 904.
- The imaging area designation unit 101 also stores the imaging area data 150 in the imaging area database 106.
- The process performed by the imaging area designation unit 101 corresponds to an imaging area designation process.
- The image acquisition unit 102 acquires the captured images 250 from the imaging device 200 via the communication device 904.
- A captured image 250 is an image obtained by capturing an area designated as an imaging area by the imaging area designation unit 101.
- The image acquisition unit 102 acquires the captured images 250 of a plurality of area sets.
- The image acquisition unit 102 transfers the acquired captured images 250 to the coefficient calculation unit 104.
- The process performed by the image acquisition unit 102 corresponds to an image acquisition process.
- The monitor data acquisition unit 103 acquires the monitor data 550 from the monitor device 500 via the communication device 904.
- The monitor data acquisition unit 103 acquires the monitor data 550 of a plurality of area sets.
- That is, the monitor data acquisition unit 103 acquires the monitor data 550 obtained in the imaging of the plurality of area sets.
- The monitor data acquisition unit 103 transfers the acquired monitor data 550 to the coefficient calculation unit 104.
- The monitor data 550 includes angle data and temperature data of each scanning mirror at each time when the two imaging areas designated by the imaging area designation unit 101 are captured. As an example, assume that the imaging device 200 has two scanning mirrors (scanning mirror α and scanning mirror β) as shown in FIG. 3.
- In this case, the monitor data acquisition unit 103 acquires (x1, y1, Tα,1, Tβ,1, x2, y2, Tα,2, Tβ,2) as the angle data and temperature data corresponding to a pair of captured images (N1, N2), where xi and yi are the rotation angles of the scanning mirror α and the scanning mirror β, and Tα,i and Tβ,i are their temperatures, at the time the captured image Ni is captured.
- The pair of captured images (N1, N2) may be captured at the same time or at different times.
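- For illustration only, the monitor data for one pair of captured images might be organized as in the sketch below; the dataclass and field names are hypothetical and are not part of the patent, which only specifies that rotation angles and temperatures of each scanning mirror are recorded for each image capture.

```python
from dataclasses import dataclass

@dataclass
class MirrorState:
    """State of the two scanning mirrors at one image capture (hypothetical container)."""
    x: float        # rotation angle of scanning mirror alpha
    y: float        # rotation angle of scanning mirror beta
    t_alpha: float  # temperature of scanning mirror alpha [K]
    t_beta: float   # temperature of scanning mirror beta [K]

@dataclass
class MonitorData550:
    """Monitor data 550 for one area set: one MirrorState per captured image (N1, N2)."""
    state_n1: MirrorState
    state_n2: MirrorState

# Example values are made up purely for illustration.
monitor = MonitorData550(MirrorState(10.0, -5.0, 251.3, 250.8),
                         MirrorState(-10.0, 5.0, 251.5, 250.9))
```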
- The process performed by the monitor data acquisition unit 103 corresponds to a monitor data acquisition process.
- The coefficient calculation unit 104 acquires the two paired captured images 250 from the image acquisition unit 102.
- The coefficient calculation unit 104 also acquires the corresponding monitor data 550 from the monitor data acquisition unit 103.
- Furthermore, the coefficient calculation unit 104 acquires device characteristic data 160 from the device characteristics database 107.
- The device characteristic data 160 is data indicating the characteristics of the imaging device 200. Specifically, the device characteristic data 160 indicates an infrared wavelength and infrared calibration coefficients for each channel of the line sensor of the imaging device 200.
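- Similarly, a per-channel record of the device characteristic data 160 could look like the following sketch. It assumes only what the text states (an infrared wavelength and infrared calibration coefficients per line-sensor channel, the latter corresponding to the coefficients q, m, and b introduced later); the names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ChannelCharacteristics:
    """Device characteristic data 160 for one line-sensor channel (illustrative)."""
    wavelength_m: float  # centre wavelength of the infrared channel [m]
    q: float             # quadratic infrared calibration coefficient
    m: float             # linear infrared calibration coefficient
    b: float             # offset infrared calibration coefficient

# Hypothetical 10.8 micrometre channel with made-up calibration coefficients.
channel = ChannelCharacteristics(wavelength_m=10.8e-6, q=1.2e-7, m=3.4e-3, b=-0.05)
```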
- The coefficient calculation unit 104 calculates coefficients 170 of an infrared emissivity function for each scanning mirror using the captured images 250, the monitor data 550, and the device characteristic data 160.
- Specifically, the coefficient calculation unit 104 calculates coefficients 170 of the infrared emissivity function that reduce the difference in infrared radiance between the two captured images 250. The coefficients 170 of the infrared emissivity function of every scanning mirror provided on the same optical path of the imaging device 200 are calculated by the same physical model, using the two captured images 250, the monitor data 550 including the rotation angle data and temperature data of each scanning mirror for each image capture, and the device characteristic data 160. The coefficient calculation unit 104 then outputs the calculated coefficients 170 of the infrared emissivity function of each scanning mirror to the coefficient output unit 105. The coefficients of the infrared emissivity function and the physical model are described in detail later. The process performed by the coefficient calculation unit 104 corresponds to a coefficient calculation process.
- The coefficient output unit 105 outputs the coefficients 170 of the infrared emissivity function of each scanning mirror using the input/output device 905. For example, the coefficient output unit 105 displays the coefficients 170 of the infrared emissivity function of each scanning mirror on a display included in the input/output device 905.
- The imaging area database 106 stores the imaging area data 150.
- The device characteristics database 107 stores the device characteristic data 160.
- The imaging area designation unit 101 designates two imaging areas. As described above, the imaging area designation unit 101 designates, as imaging areas, two areas that are estimated to have a uniform brightness temperature in outer space, that have known infrared radiance values, and for which the rotation angle of at least one scanning mirror differs each time imaging is performed. The imaging area designation unit 101 designates a plurality of such area sets.
- FIG. 7 and FIG. 8 show examples of a method for designating imaging areas by the imaging area designation unit 101.
- In FIG. 7 and FIG. 8, each rectangle around the Earth represents a deep space region.
- As shown in FIG. 7, the imaging area designation unit 101 may designate, for example, two deep space regions located symmetrically in the X-axis direction with respect to the center line of the Earth in the Y-axis direction as the imaging areas.
- Alternatively, as shown in FIG. 8, the imaging area designation unit 101 may designate, for example, two deep space regions located symmetrically in the Y-axis direction with respect to the center line of the Earth in the X-axis direction as the imaging areas.
- However, the method of designating the imaging areas by the imaging area designation unit 101 is not limited to the methods of FIG. 7 and FIG. 8.
- For example, the imaging area designation unit 101 may designate two deep space areas that are not symmetrical in either the X-axis direction or the Y-axis direction as the imaging areas.
- Alternatively, any deep space area around the Earth and any deep space area not around the Earth may be designated as the two imaging areas.
- Two deep space areas not around the Earth may also be designated as the imaging areas.
- Furthermore, the imaging area designation unit 101 may designate an area other than a deep space area as an imaging area as long as it satisfies the following conditions: the area is estimated to have a uniform brightness temperature, its infrared radiance value is known, and the rotation angle of each scanning mirror differs each time imaging is performed.
- For example, the imaging area designation unit 101 can designate the imaging areas according to an instruction from a user (hereinafter also referred to as a designer) of the emissivity calibration device 100.
- The imaging area designation unit 101 may also designate the imaging areas by analyzing images previously captured by the imaging device 200 and searching for areas that are estimated to have a uniform brightness temperature, that have known infrared radiance values, and for which the rotation angle of each scanning mirror differs for each imaging.
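- As one way of making the designation criteria concrete, the sketch below filters candidate area pairs. The thresholds, field names, and the uniformity test (spread of the estimated brightness temperature) are assumptions made for illustration; the patent only states the three conditions in words.

```python
def is_valid_area_pair(area_a, area_b, uniformity_threshold_k=0.1):
    """Check the three designation conditions for a candidate pair of imaging areas.
    Each area is a dict with hypothetical fields: 'bt_std' (spread of the estimated
    brightness temperature [K]), 'known_radiance' (bool) and 'mirror_angles'
    (tuple of planned scanning-mirror rotation angles for the capture)."""
    uniform = (area_a["bt_std"] <= uniformity_threshold_k
               and area_b["bt_std"] <= uniformity_threshold_k)
    known = area_a["known_radiance"] and area_b["known_radiance"]
    angles_differ = area_a["mirror_angles"] != area_b["mirror_angles"]
    return uniform and known and angles_differ
```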
- The image acquisition unit 102 acquires the captured images 250 from the imaging device 200.
- A captured image 250 acquired by the image acquisition unit 102 may be a luminance image, or may be a set of data required to generate a luminance image.
- The monitor data acquisition unit 103 acquires the monitor data 550 from the monitor device 500. That is, the monitor data acquisition unit 103 acquires the angle data and temperature data of each scanning mirror at the time when each captured image 250 acquired in STEP 2 was captured.
- The monitor data acquisition unit 103 acquires the monitor data 550 for the imaging of a plurality of area sets.
- Likewise, the image acquisition unit 102 acquires the captured images 250 of a plurality of area sets.
- The coefficient calculation unit 104 calculates the coefficients 170 of the infrared emissivity function using the captured images 250, the monitor data 550, and the device characteristic data 160.
- Specifically, the captured images 250, the monitor data 550, and the device characteristic data 160 are input to a physical model M that represents the relationship between the optical path of the imaging device 200 and the infrared radiance, and the coefficients 170 of the infrared emissivity function of each scanning mirror are calculated for all scanning mirrors from the same physical model M.
- The physical model M is an equation that mathematically represents the physical relationships related to the infrared radiance in the imaging device 200. Details of the physical model M are described later.
- The coefficient output unit 105 outputs the coefficients 170 of the infrared emissivity function.
- The user (designer) of the emissivity calibration device 100 checks the coefficients 170 of the infrared emissivity function output from the coefficient output unit 105 and judges whether, for example, the effect of reducing luminance errors related to the angle dependency of the scanning mirror has been obtained. Specifically, the designer calculates the infrared radiance of the captured images 250 using the coefficients 170 of the infrared emissivity functions of all the scanning mirrors. The designer then compares the infrared radiance between the captured images 250 of the two regions of each region set and judges whether the luminance errors due to differences in the rotation angles of the scanning mirror have been reduced.
- If the errors have been sufficiently reduced, the designer can calibrate the infrared radiance using the output coefficients 170 of the infrared emissivity function.
- Otherwise, the designer instructs the emissivity calibration device 100 to, for example, recalculate the coefficients 170 of the infrared emissivity function.
- In that case, the emissivity calibration device 100 repeats the processes from STEP 4 onward. In STEP 4, the coefficient calculation unit 104 then recalculates the coefficients 170 of the infrared emissivity function using new captured images 250 that have not yet been used in the calculation and the corresponding unused monitor data 550.
- Note that the imaging area designation unit 101 may designate only one set of imaging areas, the image acquisition unit 102 may acquire only the captured images 250 of that set of imaging areas, and the monitor data acquisition unit 103 may acquire only the monitor data 550 of that set of imaging areas. In this case, if the designer issues an instruction to recalculate the coefficients 170 of the infrared emissivity function, the processes from STEP 1 onward are repeated.
- As described above, the coefficient calculation unit 104 inputs the captured images 250, the monitor data 550, and the device characteristic data 160 to the physical model M that represents the relationship between the optical path of the imaging device 200 and the infrared radiance, and calculates the coefficients of the infrared emissivity functions of all scanning mirrors on the same optical path.
- Here, the coefficients refer to the internal parameters of the infrared emissivity function. The technique disclosed in Reference 1 or Reference 2 is used as the method for determining these internal parameters.
- More specifically, the coefficient calculation unit 104 calculates the coefficients 170 of the infrared emissivity function in the following manner.
- The description proceeds on the assumption that the imaging device 200 includes a scanning mirror β, which is a primary scanning mirror, and a scanning mirror α, which is a secondary scanning mirror, as shown in FIG. 3.
- θα represents the angle of incidence of light incident on the scanning mirror α.
- θα depends on the rotation angle x of the scanning mirror α and is therefore a function of x, written θα(x).
- The internal parameters of θα(x) depend on the hardware.
- θβ represents the angle of incidence of light incident on the scanning mirror β.
- θβ depends on the rotation angle y of the scanning mirror β and is therefore a function of y, written θβ(y).
- The internal parameters of θβ(y) depend on the hardware.
- C represents the digital output of the line sensor (detection element).
- q, m, and b are the infrared calibration coefficients of the line sensor.
- q, m, and b are included in the device characteristic data 160.
- The temperature of the scanning mirror α is represented as Tα.
- The temperature of the scanning mirror β is represented as Tβ.
- The infrared radiance of the object to be observed is represented as R.
- The blackbody radiance of the scanning mirror α is represented as Rα(Tα), which is a function of Tα; its value is calculated using the infrared wavelength included in the device characteristic data 160.
- The blackbody radiance of the scanning mirror β is represented as Rβ(Tβ), which is a function of Tβ; its value is calculated using the infrared wavelength included in the device characteristic data 160.
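- For reference, a blackbody radiance value for a mirror temperature can be evaluated with Planck's law at the channel wavelength. The sketch below assumes this standard formula; the patent itself only states that the value is computed from the infrared wavelength in the device characteristic data 160.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant [J*s]
C0 = 2.99792458e8    # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def blackbody_spectral_radiance(wavelength_m: float, temperature_k: float) -> float:
    """Planck spectral radiance B(lambda, T) in W * m^-2 * sr^-1 * m^-1."""
    numerator = 2.0 * H * C0**2 / wavelength_m**5
    denominator = math.exp(H * C0 / (wavelength_m * KB * temperature_k)) - 1.0
    return numerator / denominator

# Example: radiance of a mirror at 250 K evaluated at a 10.8 micrometre infrared channel.
r_alpha = blackbody_spectral_radiance(10.8e-6, 250.0)
```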
- The observed infrared radiance (hereinafter also referred to as the observed radiance) is expressed by the physical model M of the following equations (1), (2), and (3), where εα(θα) and εβ(θβ) denote the infrared emissivity functions of the scanning mirror α and the scanning mirror β, respectively.
- Rd = M   Equation (1)
- Rd = qC² + mC + b   Equation (2)
- M = (1 − εα(θα))(1 − εβ(θβ))R + εα(θα)Rα(Tα) + εβ(θβ)(1 − εα(θα))Rβ(Tβ)   Equation (3)
- In equation (3), the scanning mirror α is the secondary scanning mirror and the scanning mirror β is the primary scanning mirror.
- The optical path inside the imaging device 200 is configured as follows: subject (light source) → primary scanning mirror β → secondary scanning mirror α → line sensor.
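- The following sketch evaluates equations (2) and (3) directly. The variable names are illustrative, and the emissivities εα(θα) and εβ(θβ) are passed in as already-evaluated numbers rather than being computed from a particular emissivity function.

```python
def observed_radiance(c, q, m, b):
    """Equation (2): convert the line-sensor digital output C into the radiance Rd."""
    return q * c**2 + m * c + b

def model_radiance(r_subject, eps_alpha, eps_beta, r_bb_alpha, r_bb_beta):
    """Equation (3): radiance M reaching the sensor along the path
    subject -> primary mirror beta -> secondary mirror alpha -> line sensor.
    eps_alpha = epsilon_alpha(theta_alpha(x)), eps_beta = epsilon_beta(theta_beta(y));
    r_bb_alpha and r_bb_beta are the blackbody radiances Ralpha(Talpha) and Rbeta(Tbeta)."""
    return ((1.0 - eps_alpha) * (1.0 - eps_beta) * r_subject
            + eps_alpha * r_bb_alpha
            + eps_beta * (1.0 - eps_alpha) * r_bb_beta)
```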
- The rotation angle of the scanning mirror α when capturing an image i (infrared radiance: Ri) is denoted as xi, and the temperature of the scanning mirror α at that time is denoted as Tα,i.
- The rotation angle of the scanning mirror β when capturing the image i is denoted as yi, and the temperature of the scanning mirror β at that time is denoted as Tβ,i.
- θα,i denotes the angle of incidence of light incident on the scanning mirror α when the captured image i is captured.
- θα,i depends on the rotation angle xi of the scanning mirror α and is therefore a function of xi, written θα,i(xi). Furthermore, θβ,i denotes the angle of incidence of light incident on the scanning mirror β when the captured image i is captured. θβ,i depends on the rotation angle yi of the scanning mirror β and is therefore a function of yi, written θβ,i(yi).
- Similarly, the rotation angle of the scanning mirror α when capturing an image j (infrared radiance: Rj) is denoted as xj, and the temperature of the scanning mirror α at that time is denoted as Tα,j.
- The rotation angle of the scanning mirror β when capturing the image j is denoted as yj, and the temperature of the scanning mirror β at that time is denoted as Tβ,j.
- θα,j denotes the angle of incidence of light incident on the scanning mirror α when the captured image j is captured.
- θα,j depends on the rotation angle xj of the scanning mirror α and is therefore a function of xj, written θα,j(xj).
- θβ,j denotes the angle of incidence of light incident on the scanning mirror β when the captured image j is captured.
- θβ,j depends on the rotation angle yj of the scanning mirror β and is therefore a function of yj, written θβ,j(yj).
- For each region set (i, j), the difference in the observed radiance and the difference in the model radiance between the two captured images are expressed by equations (10) and (11), and the difference between equations (10) and (11) is expressed by equation (12).
- ΔRk = Rd,i − Rd,j   Equation (10)
- ΔMk = Mi − Mj   Equation (11)
- Ek = ΔRk − ΔMk   Equation (12)
- The subscript k of ΔRk, ΔMk, and Ek is the region set number; the region set number k is uniquely determined for each region set (i, j).
- The value of Ek for each region set k indicates the difference between the observed radiance difference and the theoretical difference calculated by the physical model M from the rotation angles and temperatures observed by the monitor device 500.
- The smaller the magnitude of this difference Ek, the smaller the difference in infrared radiance between the imaging region i and the imaging region j of the region set k.
- The coefficient calculation unit 104 calculates the coefficients a0, a1, a2, b0, b1, and b2 of the infrared emissivity functions εα(θα) and εβ(θβ) that minimize the magnitude of this difference Ek.
- In other words, the coefficient calculation unit 104 calculates the coefficients 170 of the infrared emissivity functions that minimize the magnitude of the difference in infrared radiance between the imaging region i and the imaging region j.
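- A minimal sketch of the per-set residual Ek of equation (12), together with one possible aggregate objective (a sum of squared residuals), is shown below. The quadratic form assumed for the emissivity functions is an assumption made for illustration; the patent names three coefficients per mirror (a0, a1, a2 and b0, b1, b2) but does not state the functional form, and the field names of the observation records are hypothetical.

```python
def emissivity(theta, c0, c1, c2):
    """Illustrative emissivity function; the quadratic dependence on theta is an assumption."""
    return c0 + c1 * theta + c2 * theta**2

def residual_for_set(params, obs_i, obs_j):
    """Equation (12): E_k = (Rd_i - Rd_j) - (M_i - M_j) for one region set (i, j).
    Each obs_* dict holds the sensor radiance 'rd', the incidence angles 'theta_alpha'
    and 'theta_beta', the blackbody radiances 'r_bb_alpha' and 'r_bb_beta', and the
    known subject radiance 'r' (hypothetical field names)."""
    a0, a1, a2, b0, b1, b2 = params

    def model(o):
        # Equation (3), cf. the earlier sketch of the physical model M.
        ea = emissivity(o["theta_alpha"], a0, a1, a2)
        eb = emissivity(o["theta_beta"], b0, b1, b2)
        return ((1 - ea) * (1 - eb) * o["r"]
                + ea * o["r_bb_alpha"]
                + eb * (1 - ea) * o["r_bb_beta"])

    delta_r = obs_i["rd"] - obs_j["rd"]      # Equation (10)
    delta_m = model(obs_i) - model(obs_j)    # Equation (11)
    return delta_r - delta_m                 # Equation (12)

def objective(params, region_sets):
    """One possible choice of the aggregate function: the sum of squared E_k."""
    return sum(residual_for_set(params, oi, oj) ** 2 for oi, oj in region_sets)
```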
- The coefficient calculation unit 104 can use an optimization algorithm to calculate the coefficients 170 of the infrared emissivity function.
- For example, the coefficient calculation unit 104 calculates the coefficients 170 of the infrared emissivity function using an optimization algorithm as follows.
- The coefficient calculation unit 104 uses formula (13) below to calculate the coefficients 170 of the infrared emissivity function.
- In formula (13), N is the total number of region sets,
- F is a function arbitrarily designed by the designer,
- and k is the region set number.
- A natural number from 1 to N is assigned to k.
- The coefficient calculation unit 104 obtains an optimal solution of equation (15) by applying equation (14) and equation (15) to "CMA-ES" in the above-mentioned Reference 1 or "(μ/μw, λ)-CMA-ES" in Reference 2. Alternatively, the coefficient calculation unit 104 may obtain an optimal solution of equation (15) by applying equation (14) and equation (15) to an algorithm that includes part or all of "CMA-ES" or "(μ/μw, λ)-CMA-ES".
- The optimized value of the parameter x in equation (15) gives the coefficients 170 of the infrared emissivity function.
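- Since equations (13) to (15) are not reproduced in this text, the sketch below simply minimizes the illustrative objective from the previous sketch with the third-party Python `cma` package. The use of that package, the starting point, and the step size are assumptions; References 1 and 2 should be consulted for the exact CMA-ES formulation.

```python
import cma  # third-party CMA-ES package; its use here is an assumption, not part of the patent

def fit_emissivity_coefficients(region_sets, x0=None, sigma0=0.1):
    """Minimize the aggregate objective over (a0, a1, a2, b0, b1, b2) with CMA-ES.
    `objective` and `region_sets` follow the illustrative structures sketched above."""
    if x0 is None:
        x0 = [0.02, 0.0, 0.0, 0.02, 0.0, 0.0]  # hypothetical starting coefficients
    es = cma.CMAEvolutionStrategy(x0, sigma0)
    es.optimize(lambda params: objective(params, region_sets))
    return es.result.xbest  # optimized coefficients 170 of the emissivity functions
```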
- Note that, when one of the two mirrors on the optical path is a fixed reflecting mirror rather than a rotating scanning mirror, the coefficient calculation unit 104 treats the value of the rotation angle (x or y) corresponding to that reflecting mirror in the physical model M as a constant when performing the calculation. Furthermore, if one of the two scanning mirrors does not exist, the coefficient calculation unit 104 sets the values of the emissivity function and the blackbody radiance of the non-existent scanning mirror to 0 and performs the calculation.
- As described above, the emissivity calibration device 100 calculates the coefficients of the infrared emissivity functions that minimize the magnitude of the difference in infrared radiance between the imaging region i and the imaging region j of each region set k.
- The designer can calibrate the infrared emissivity using the calculated coefficients of the infrared emissivity functions. Therefore, according to this embodiment, it is possible to reduce errors in the infrared radiance even in images captured by the imaging device after launch.
- Furthermore, the infrared emissivity of the scanning mirror at the time of launch may not match the infrared emissivity a long time after launch. According to this embodiment, it is also possible to reduce the error in infrared radiance caused by such a mismatch in infrared emissivity due to aging.
- In the imaging device 200, the scanning mirrors and the line sensor are on the same optical path.
- For this reason, the coefficients of the infrared emissivity functions of all the scanning mirrors need to be calculated from the same physical model.
- In this embodiment, the emissivity calibration device 100 calculates the infrared emissivity coefficients of all scanning mirrors by applying the same physical model M. Therefore, according to this embodiment, the infrared emissivities of all scanning mirrors can be calibrated for a single system.
- The processor 901 shown in FIG. 4 is an integrated circuit (IC) that performs processing.
- The processor 901 is a central processing unit (CPU), a digital signal processor (DSP), or the like.
- The main storage device 902 shown in FIG. 4 is a RAM (Random Access Memory).
- The auxiliary storage device 903 shown in FIG. 4 is a ROM (Read Only Memory), a flash memory, a hard disk drive (HDD), or the like.
- The communication device 904 shown in FIG. 4 is an electronic circuit that performs data communication processing.
- The communication device 904 is, for example, a communication chip or a NIC (Network Interface Card).
- The input/output device 905 shown in FIG. 4 is, for example, a mouse, a keyboard, and a display.
- The auxiliary storage device 903 also stores an OS (Operating System). At least a part of the OS is executed by the processor 901.
- The processor 901 executes at least a part of the OS while executing the programs that realize the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105.
- By executing the OS, the processor 901 performs task management, memory management, file management, communication control, and the like.
- At least one of information, data, signal values, and variable values indicating the results of processing by the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 is stored in at least one of the main memory device 902, the auxiliary memory device 903, the register within the processor 901, and the cache memory.
- The programs that realize the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 may be stored in a portable recording medium such as a magnetic disk, a flexible disk, an optical disc, a compact disc, a Blu-ray (registered trademark) disc, or a DVD. The portable recording medium storing these programs may then be distributed.
- The "unit" of each of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 may be read as "circuit", "step", "procedure", "process", or "circuitry".
- The emissivity calibration device 100 may also be realized by a processing circuit.
- The processing circuit is, for example, a logic IC (Integrated Circuit), a GA (Gate Array), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
- In this case, the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 are each realized as part of the processing circuit.
- In this specification, the higher-level concept of a processor and a processing circuit is called "processing circuitry"; that is, a processor and a processing circuit are each specific examples of "processing circuitry".
- 100: Emissivity calibration device, 101: Imaging area designation unit, 102: Image acquisition unit, 103: Monitor data acquisition unit, 104: Coefficient calculation unit, 105: Coefficient output unit, 106: Imaging area database, 107: Device characteristics database, 150: Imaging area data, 160: Device characteristic data, 170: Coefficients of infrared emissivity function, 200: Imaging device, 250: Captured image, 300: Artificial satellite, 400: Imaging control device, 500: Monitor device, 550: Monitor data, 901: Processor, 902: Main storage device, 903: Auxiliary storage device, 904: Communication device, 905: Input/output device.
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- Physics & Mathematics (AREA)
- Astronomy & Astrophysics (AREA)
- General Physics & Mathematics (AREA)
- Radiation Pyrometers (AREA)
Abstract
An imaging area designation unit (101) designates, as two imaging areas to be imaged by an imaging device that is in outer space and in which one or more scanning mirrors and a line sensor are provided on the same optical path, two areas of outer space that are presumed to have a uniform brightness temperature, that have known infrared radiance values, and for which the rotation angle of at least one of the one or more scanning mirrors differs each time image capture is performed. An image acquisition unit (102) acquires two captured images obtained as a result of the imaging device capturing the two imaging areas. A monitor data acquisition unit (103) acquires monitor data that includes angle data indicating the rotation angle of each scanning mirror each time the two imaging areas are captured. A coefficient calculation unit (104) uses the two captured images and the monitor data to calculate a coefficient of an infrared emissivity function for each of the one or more scanning mirrors.
Description
本開示は、宇宙空間にある撮像装置に設けられた走査鏡の赤外放射率の校正に関する。
This disclosure relates to calibrating the infrared emissivity of a scanning mirror installed in an imaging device in space.
撮像装置では、環境の変化による特性変化が生じやすい。適切な補正処理を行ない、より高品質な撮像画像を得るためには、撮像時点での撮像装置の特性を正確に把握する必要がある。このため、撮像装置を稼動させながら特定の対象を撮像する、あるいは、特殊な撮像手法を用いることで、撮像装置の特性評価の更新が行われる。
走査鏡が設けられた撮像装置では、赤外画像の撮像時の走査鏡の角度によって観測放射輝度に変化が生じる。これは、走査鏡の表面に施されたコーティングの赤外放射率が角度に依存するためである。このため、撮像装置が撮像した被写体の赤外放射輝度を正確に推定するには、走査鏡の赤外放射率を走査鏡の角度に対して適切に校正する必要がある。 The characteristics of an imaging device are prone to change due to changes in the environment. In order to perform appropriate correction processing and obtain higher quality captured images, it is necessary to accurately grasp the characteristics of the imaging device at the time of capturing the image. For this reason, the characteristic evaluation of the imaging device is updated by capturing an image of a specific object while the imaging device is in operation, or by using a special imaging technique.
In an imaging device equipped with a scanning mirror, the observed radiance changes depending on the angle of the scanning mirror when capturing an infrared image. This is because the infrared emissivity of the coating applied to the surface of the scanning mirror depends on the angle. Therefore, in order to accurately estimate the infrared radiance of a subject captured by the imaging device, it is necessary to properly calibrate the infrared emissivity of the scanning mirror to the angle of the scanning mirror.
走査鏡が設けられた撮像装置では、赤外画像の撮像時の走査鏡の角度によって観測放射輝度に変化が生じる。これは、走査鏡の表面に施されたコーティングの赤外放射率が角度に依存するためである。このため、撮像装置が撮像した被写体の赤外放射輝度を正確に推定するには、走査鏡の赤外放射率を走査鏡の角度に対して適切に校正する必要がある。 The characteristics of an imaging device are prone to change due to changes in the environment. In order to perform appropriate correction processing and obtain higher quality captured images, it is necessary to accurately grasp the characteristics of the imaging device at the time of capturing the image. For this reason, the characteristic evaluation of the imaging device is updated by capturing an image of a specific object while the imaging device is in operation, or by using a special imaging technique.
In an imaging device equipped with a scanning mirror, the observed radiance changes depending on the angle of the scanning mirror when capturing an infrared image. This is because the infrared emissivity of the coating applied to the surface of the scanning mirror depends on the angle. Therefore, in order to accurately estimate the infrared radiance of a subject captured by the imaging device, it is necessary to properly calibrate the infrared emissivity of the scanning mirror to the angle of the scanning mirror.
なお、本開示に関連する技術として、非特許文献1に開示の技術がある。
Note that a technology related to this disclosure is disclosed in Non-Patent Document 1.
コーティングが施された走査鏡の赤外放射率は、走査鏡への光の入射角度に依存する。人工衛星に搭載された撮像装置に設けられた走査鏡の赤外放射率は、打上げ前の地上と打上げ後の宇宙との環境の変化によって変化することが確認されている。つまり、打上げ後の宇宙空間にある走査鏡の赤外放射率は、打ち上げ前に地上で計測された赤外放射率と一致しない。赤外放射率は被写体の赤外放射輝度の算出に用いられる。このため、打ち上げ後の走査鏡の赤外放射率を正確に把握できていないと、撮像時の走査鏡の角度の違いによって、算出された赤外放射輝度に誤差が生じる。
The infrared emissivity of a coated scanning mirror depends on the angle of incidence of light on the scanning mirror. It has been confirmed that the infrared emissivity of a scanning mirror installed in an imaging device mounted on a satellite changes due to changes in the environment between on the ground before launch and in space after launch. In other words, the infrared emissivity of a scanning mirror in space after launch will not match the infrared emissivity measured on the ground before launch. Infrared emissivity is used to calculate the infrared radiance of a subject. For this reason, if the infrared emissivity of the scanning mirror after launch is not accurately known, differences in the angle of the scanning mirror during imaging will cause errors in the calculated infrared radiance.
本開示は、このような赤外放射輝度に含まれる誤差を低減することを主な目的とする。
The primary objective of this disclosure is to reduce the errors contained in such infrared radiance.
本開示に係るデータ処理装置は、
1つ以上の走査鏡とラインセンサとが同一光路上に設けられた宇宙空間にある撮像装置に撮像させる2つの撮像領域として、前記宇宙空間において一様な輝度温度を有すると推定され、赤外放射輝度値が既知であり、前記1つ以上の走査鏡のうちの少なくとも1つの走査鏡の撮像時ごとの回転角度が異なる2つの領域を指定する撮像領域指定部と、
前記撮像装置が前記2つの撮像領域を撮像して得られた2つの撮像画像を取得する画像取得部と、
各走査鏡の前記2つの撮像領域の撮像での撮影時ごとの回転角度が示される角度データが含まれるモニタデータを取得するモニタデータ取得部と、
前記2つの撮像画像と、前記モニタデータとを用いて、前記1つ以上の走査鏡の各走査鏡について赤外放射率関数の係数を算出する係数算出部とを有する。 The data processing device according to the present disclosure comprises:
an imaging area designation unit that designates two imaging areas to be imaged by an imaging device in outer space in which one or more scanning mirrors and a line sensor are provided on the same optical path, the imaging areas being estimated to have a uniform radiance temperature in the outer space, having known infrared radiance values, and having different rotation angles for each imaging of at least one of the one or more scanning mirrors;
an image acquisition unit that acquires two captured images obtained by the imaging device capturing images of the two imaging regions;
a monitor data acquisition unit that acquires monitor data including angle data indicating a rotation angle of each scanning mirror at each time of imaging the two imaging areas;
The imaging device further comprises a coefficient calculation section which calculates a coefficient of an infrared emissivity function for each of the one or more scanning mirrors using the two captured images and the monitor data.
1つ以上の走査鏡とラインセンサとが同一光路上に設けられた宇宙空間にある撮像装置に撮像させる2つの撮像領域として、前記宇宙空間において一様な輝度温度を有すると推定され、赤外放射輝度値が既知であり、前記1つ以上の走査鏡のうちの少なくとも1つの走査鏡の撮像時ごとの回転角度が異なる2つの領域を指定する撮像領域指定部と、
前記撮像装置が前記2つの撮像領域を撮像して得られた2つの撮像画像を取得する画像取得部と、
各走査鏡の前記2つの撮像領域の撮像での撮影時ごとの回転角度が示される角度データが含まれるモニタデータを取得するモニタデータ取得部と、
前記2つの撮像画像と、前記モニタデータとを用いて、前記1つ以上の走査鏡の各走査鏡について赤外放射率関数の係数を算出する係数算出部とを有する。 The data processing device according to the present disclosure comprises:
an imaging area designation unit that designates two imaging areas to be imaged by an imaging device in outer space in which one or more scanning mirrors and a line sensor are provided on the same optical path, the imaging areas being estimated to have a uniform radiance temperature in the outer space, having known infrared radiance values, and having different rotation angles for each imaging of at least one of the one or more scanning mirrors;
an image acquisition unit that acquires two captured images obtained by the imaging device capturing images of the two imaging regions;
a monitor data acquisition unit that acquires monitor data including angle data indicating a rotation angle of each scanning mirror at each time of imaging the two imaging areas;
The imaging device further comprises a coefficient calculation section which calculates a coefficient of an infrared emissivity function for each of the one or more scanning mirrors using the two captured images and the monitor data.
本開示によれば、赤外放射輝度に含まれる誤差を低減することができる。
According to this disclosure, it is possible to reduce errors contained in infrared radiance.
実施の形態1.
以下、実施の形態を、図を用いて説明する。 Embodiment 1.
Hereinafter, an embodiment will be described with reference to the drawings.
以下、実施の形態を、図を用いて説明する。 Embodiment 1.
Hereinafter, an embodiment will be described with reference to the drawings.
***構成の説明***
図1は、本実施の形態に係るシステム構成例を示す。
図1において、放射率校正装置100は地上に配置されている。一方、撮像装置200は人工衛星300に搭載されている。つまり、撮像装置200は宇宙空間に存在する。 ***Configuration Description***
FIG. 1 shows an example of a system configuration according to the present embodiment.
1, theemissivity calibration device 100 is placed on the ground, while the image capturing device 200 is mounted on an artificial satellite 300. In other words, the image capturing device 200 exists in outer space.
図1は、本実施の形態に係るシステム構成例を示す。
図1において、放射率校正装置100は地上に配置されている。一方、撮像装置200は人工衛星300に搭載されている。つまり、撮像装置200は宇宙空間に存在する。 ***Configuration Description***
FIG. 1 shows an example of a system configuration according to the present embodiment.
1, the
図2は、人工衛星300の内部構成の概要を示す。
人工衛星300では、撮像装置200と撮像制御装置400とモニタ装置500が存在する。
なお、図2では、本実施の形態を説明するのに必要な内部構成のみを示している。つまり、図2では、人工衛星300の運行を制御する機構等の図示は省略している。
図2において、撮像制御装置400は、撮像装置200の撮像の制御を行う。
モニタ装置500は、撮像装置200に搭載されている走査鏡の状態をモニタする。
なお、図2では、撮像装置200と撮像制御装置400とモニタ装置500を別個の装置としているが、撮像装置200と撮像制御装置400とモニタ装置500が一体となっていてもよい。 FIG. 2 shows an outline of the internal configuration of thesatellite 300.
Theartificial satellite 300 includes an image capturing device 200 , an image capturing control device 400 , and a monitor device 500 .
2 shows only the internal configuration necessary for explaining this embodiment, i.e., the mechanism for controlling the operation of theartificial satellite 300 and the like are omitted.
In FIG. 2, animaging control device 400 controls the imaging of the imaging device 200 .
Themonitor device 500 monitors the state of the scanning mirror mounted in the imaging device 200 .
In FIG. 2, the image capturingdevice 200, the image capturing control device 400, and the monitor device 500 are shown as separate devices, but the image capturing device 200, the image capturing control device 400, and the monitor device 500 may be integrated.
人工衛星300では、撮像装置200と撮像制御装置400とモニタ装置500が存在する。
なお、図2では、本実施の形態を説明するのに必要な内部構成のみを示している。つまり、図2では、人工衛星300の運行を制御する機構等の図示は省略している。
図2において、撮像制御装置400は、撮像装置200の撮像の制御を行う。
モニタ装置500は、撮像装置200に搭載されている走査鏡の状態をモニタする。
なお、図2では、撮像装置200と撮像制御装置400とモニタ装置500を別個の装置としているが、撮像装置200と撮像制御装置400とモニタ装置500が一体となっていてもよい。 FIG. 2 shows an outline of the internal configuration of the
The
2 shows only the internal configuration necessary for explaining this embodiment, i.e., the mechanism for controlling the operation of the
In FIG. 2, an
The
In FIG. 2, the image capturing
次に、撮像装置200の内部構成例を概説する。
図3は、撮像装置200の内部構造の概要を示す。なお、図3は、本実施の形態を説明するのに必要な内部構成のみを示している。
図3に示すように、撮像装置200は、同一光路上に1つ以上の走査鏡とラインセンサを備える。各走査鏡は1軸駆動式の走査鏡である。
図3では、撮像装置200が1次走査鏡である走査鏡βと2次走査鏡である走査鏡αを備える例を示す。走査鏡αと走査鏡βは回転軸を中心として回転することができる。つまり、走査鏡αと走査鏡βは回転角度を変化させることができる。走査鏡の回転角度は、走査鏡の基準角(基準位置)からの変位量(角度)である。走査鏡αと走査鏡βの回転軸は平行でなくても良い。例えば、垂直であっても良い。
ラインセンサは、放射率校正装置100により撮像領域として指定された宇宙空間の領域を撮像する。
なお、図3に示す矢印は光路を示す。
図2に示すモニタ装置500は、各走査鏡の状態として、撮像時の各走査鏡の回転角度と温度をモニタする。 Next, an example of the internal configuration of theimaging device 200 will be outlined.
Fig. 3 shows an outline of the internal structure of theimaging device 200. Note that Fig. 3 shows only the internal configuration necessary for explaining this embodiment.
3, theimaging device 200 includes one or more scanning mirrors and a line sensor on the same optical path. Each scanning mirror is a one-axis driven scanning mirror.
3 shows an example in which theimaging device 200 includes a scanning mirror β, which is a primary scanning mirror, and a scanning mirror α, which is a secondary scanning mirror. The scanning mirror α and the scanning mirror β can rotate around a rotation axis. In other words, the rotation angles of the scanning mirror α and the scanning mirror β can be changed. The rotation angle of the scanning mirror is the amount of displacement (angle) from the reference angle (reference position) of the scanning mirror. The rotation axes of the scanning mirror α and the scanning mirror β do not have to be parallel. For example, they may be perpendicular.
The line sensor captures an image of a region in outer space that is designated as an imaging region by theemissivity calibration device 100 .
The arrows shown in FIG. 3 indicate optical paths.
Themonitor device 500 shown in FIG. 2 monitors the rotation angle and temperature of each scanning mirror during imaging as the state of each scanning mirror.
図3は、撮像装置200の内部構造の概要を示す。なお、図3は、本実施の形態を説明するのに必要な内部構成のみを示している。
図3に示すように、撮像装置200は、同一光路上に1つ以上の走査鏡とラインセンサを備える。各走査鏡は1軸駆動式の走査鏡である。
図3では、撮像装置200が1次走査鏡である走査鏡βと2次走査鏡である走査鏡αを備える例を示す。走査鏡αと走査鏡βは回転軸を中心として回転することができる。つまり、走査鏡αと走査鏡βは回転角度を変化させることができる。走査鏡の回転角度は、走査鏡の基準角(基準位置)からの変位量(角度)である。走査鏡αと走査鏡βの回転軸は平行でなくても良い。例えば、垂直であっても良い。
ラインセンサは、放射率校正装置100により撮像領域として指定された宇宙空間の領域を撮像する。
なお、図3に示す矢印は光路を示す。
図2に示すモニタ装置500は、各走査鏡の状態として、撮像時の各走査鏡の回転角度と温度をモニタする。 Next, an example of the internal configuration of the
Fig. 3 shows an outline of the internal structure of the
3, the
3 shows an example in which the
The line sensor captures an image of a region in outer space that is designated as an imaging region by the
The arrows shown in FIG. 3 indicate optical paths.
The
前述したように、打上げ前の走査鏡の赤外放射率と打ち上げ後の赤外放射率とは一致しない。そして、打ち上げ後の走査鏡の赤外放射率を正確に把握できていないと、撮像時の走査鏡の角度の違いによって、被写体の赤外放射輝度に誤差が生じる。
このため、本実施の形態では、放射率校正装置100が、撮像時ごとの走査鏡の回転角度が異なる2つの領域を指定し、撮像装置200に当該2つの領域を撮像させ、2つの撮像画像の差分から、走査鏡の回転角度の違いに基づき走査鏡の赤外放射率の推定を行う。 As mentioned above, the infrared emissivity of the scanning mirror before launch does not match that after launch. If the infrared emissivity of the scanning mirror after launch is not accurately known, differences in the angle of the scanning mirror during imaging will cause errors in the infrared radiance of the subject.
For this reason, in this embodiment, theemissivity calibration device 100 specifies two areas in which the rotation angle of the scanning mirror differs for each image capture, causes the imaging device 200 to capture images of the two areas, and estimates the infrared emissivity of the scanning mirror based on the difference in the rotation angle of the scanning mirror from the difference between the two captured images.
このため、本実施の形態では、放射率校正装置100が、撮像時ごとの走査鏡の回転角度が異なる2つの領域を指定し、撮像装置200に当該2つの領域を撮像させ、2つの撮像画像の差分から、走査鏡の回転角度の違いに基づき走査鏡の赤外放射率の推定を行う。 As mentioned above, the infrared emissivity of the scanning mirror before launch does not match that after launch. If the infrared emissivity of the scanning mirror after launch is not accurately known, differences in the angle of the scanning mirror during imaging will cause errors in the infrared radiance of the subject.
For this reason, in this embodiment, the
より具体的には、放射率校正装置100は、撮像装置200に撮像させる2つの撮像領域を指定し、図2に示すように、2つの撮像領域が示される撮像領域データ150を撮像制御装置400に送信する。撮像領域の指定方法の詳細は後述する。
撮像制御装置400は、撮像領域データ150を撮像装置200に転送する。
そして、撮像装置200は、撮像領域データ150で指定された2つの撮像領域を撮像し、2つの撮像画像250を放射率校正装置100に送信する。
また、モニタ装置500は、撮像時に各走査鏡の回転角度と温度をモニタし、モニタ結果が示されるモニタデータ550を放射率校正装置100に送信する。モニタデータ550には、各走査鏡の撮影時ごとの角度データと温度データが含まれる。
そして、放射率校正装置100は、2つの撮像画像250とモニタデータ550を用いて、走査鏡の回転角度の違いに基づき走査鏡の赤外放射率の推定を行う。 More specifically, theemissivity calibration device 100 specifies two imaging regions to be imaged by the imaging device 200, and transmits imaging region data 150 indicating the two imaging regions to the imaging control device 400, as shown in Fig. 2. The method of specifying the imaging regions will be described in detail later.
Theimaging control device 400 transfers the imaging area data 150 to the imaging device 200 .
Then, theimaging device 200 captures images of the two imaging regions specified by the imaging region data 150 , and transmits the two captured images 250 to the emissivity calibration device 100 .
Furthermore, themonitor device 500 monitors the rotation angle and temperature of each scanning mirror during imaging, and transmits monitor data 550 indicating the monitoring results to the emissivity calibration device 100. The monitor data 550 includes angle data and temperature data of each scanning mirror at each imaging time.
Then, theemissivity calibration device 100 uses the two captured images 250 and the monitor data 550 to estimate the infrared emissivity of the scanning mirror based on the difference in the rotation angle of the scanning mirror.
撮像制御装置400は、撮像領域データ150を撮像装置200に転送する。
そして、撮像装置200は、撮像領域データ150で指定された2つの撮像領域を撮像し、2つの撮像画像250を放射率校正装置100に送信する。
また、モニタ装置500は、撮像時に各走査鏡の回転角度と温度をモニタし、モニタ結果が示されるモニタデータ550を放射率校正装置100に送信する。モニタデータ550には、各走査鏡の撮影時ごとの角度データと温度データが含まれる。
そして、放射率校正装置100は、2つの撮像画像250とモニタデータ550を用いて、走査鏡の回転角度の違いに基づき走査鏡の赤外放射率の推定を行う。 More specifically, the
The
Then, the
Furthermore, the
Then, the
図4は、本実施の形態に係る放射率校正装置100のハードウェア構成例を示す。
また、図5は、本実施の形態に係る放射率校正装置100の機能構成例を示す。
放射率校正装置100はコンピュータである。放射率校正装置100はデータ処理装置に相当する。また、放射率校正装置100の動作手順は、データ処理方法に相当する。また、放射率校正装置100の動作を実現するプログラムは、データ処理プログラムに相当する。 FIG. 4 shows an example of the hardware configuration of theemissivity calibration device 100 according to the present embodiment.
FIG. 5 shows an example of the functional configuration of theemissivity calibration device 100 according to this embodiment.
Theemissivity calibration device 100 is a computer. The emissivity calibration device 100 corresponds to a data processing device. Moreover, the operation procedure of the emissivity calibration device 100 corresponds to a data processing method. Moreover, a program that realizes the operation of the emissivity calibration device 100 corresponds to a data processing program.
As shown in FIG. 4, the emissivity calibration device 100 includes, as hardware, a processor 901, a main storage device 902, an auxiliary storage device 903, a communication device 904, and an input/output device 905.
As shown in FIG. 5, the emissivity calibration device 100 includes, as its functional configuration, an imaging area designation unit 101, an image acquisition unit 102, a monitor data acquisition unit 103, a coefficient calculation unit 104, a coefficient output unit 105, an imaging area database 106, and a device characteristic database 107. The functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 are realized by, for example, a program.
The auxiliary storage device 903 stores programs that implement the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105.
These programs are loaded from the auxiliary storage device 903 into the main storage device 902. The processor 901 then executes these programs to perform the operations of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105, which are described later.
FIG. 4 schematically illustrates a state in which the processor 901 is executing the programs that realize the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105, that is, a state in which the processor 901 operates as the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105.
The imaging area database 106 and the device characteristic database 107 shown in FIG. 4 are realized in the main storage device 902 and/or the auxiliary storage device 903.
In FIG. 5, the imaging area designation unit 101 designates two imaging areas. The imaging areas are areas to be imaged by the imaging device 200. The two imaging areas are treated as one set. The imaging area designation unit 101 designates a plurality of area sets.
Specifically, the imaging area designation unit 101 designates, as the two imaging areas, two areas that are estimated to have a uniform brightness temperature in outer space, that have known infrared radiance values, and for which the rotation angle of each scanning mirror differs between image captures.
For example, the imaging area designation unit 101 designates two deep space areas as the imaging areas.
The imaging area designation unit 101 may also designate, as the imaging areas, two areas located at positions symmetric with respect to an arbitrary area in outer space.
The imaging area designation unit 101 may also designate, as the imaging areas, two deep space areas located at positions symmetric with respect to an arbitrary area in outer space.
A deep space area is a region of space whose distance from the Earth is 2 million kilometers or more.
The imaging area designation unit 101 transmits imaging area data 150 indicating the two imaging areas to the imaging control device 400 via the communication device 904.
The imaging area designation unit 101 also stores the imaging area data 150 in the coefficient output unit 105.
The process performed by the imaging area designation unit 101 corresponds to an imaging area designation process.
The image acquisition unit 102 acquires the captured images 250 from the imaging device 200 via the communication device 904. A captured image 250 is an image obtained by imaging an area designated as an imaging area by the imaging area designation unit 101. The image acquisition unit 102 acquires the captured images 250 of a plurality of area sets.
The image acquisition unit 102 transfers the acquired captured images 250 to the coefficient calculation unit 104.
The process performed by the image acquisition unit 102 corresponds to an image acquisition process.
The monitor data acquisition unit 103 acquires the monitor data 550 from the monitor device 500 via the communication device 904. The monitor data acquisition unit 103 acquires the monitor data 550 for the imaging of a plurality of area sets.
The monitor data acquisition unit 103 transfers the acquired monitor data 550 to the coefficient calculation unit 104. The monitor data 550 includes the angle data and temperature data of each scanning mirror at each imaging time of the two imaging areas designated by the imaging area designation unit 101.
For example, as shown in FIG. 2, assume that the imaging device 200 has two scanning mirrors (scanning mirror α and scanning mirror β). In this case, let θ1 be the rotation angle of scanning mirror α, φ1 the rotation angle of scanning mirror β, T1 the temperature of scanning mirror α, and τ1 the temperature of scanning mirror β when captured image N1 is captured. Similarly, let θ2 be the rotation angle of scanning mirror α, φ2 the rotation angle of scanning mirror β, T2 the temperature of scanning mirror α, and τ2 the temperature of scanning mirror β when captured image N2 is captured.
In this case, the monitor data acquisition unit 103 acquires (θ1, φ1, T1, τ1, θ2, φ2, T2, τ2) as the angle data and temperature data corresponding to one set of captured images (N1, N2). Note that the set of captured images (N1, N2) may be captured at the same time or at different times.
The process performed by the monitor data acquisition unit 103 corresponds to a monitor data acquisition process.
The coefficient calculation unit 104 acquires a pair of captured images 250 from the image acquisition unit 102. The coefficient calculation unit 104 also acquires the corresponding monitor data 550 from the monitor data acquisition unit 103.
Furthermore, the coefficient calculation unit 104 acquires device characteristic data 160 from the device characteristic database 107. The device characteristic data 160 is data indicating the characteristics of the imaging device 200. Specifically, the device characteristic data 160 indicates the infrared wavelength and the infrared calibration coefficients for each channel of the line sensor of the imaging device 200.
The coefficient calculation unit 104 calculates coefficients 170 of an infrared emissivity function for each scanning mirror using the captured images 250, the monitor data 550, and the device characteristic data 160. Specifically, it calculates coefficients 170 of the infrared emissivity functions that reduce the difference in infrared radiance between the two captured images 250. The coefficients 170 of the infrared emissivity functions of all the scanning mirrors provided on the same optical path of the imaging device 200 are all calculated by the same physical model, using the two captured images 250, the monitor data 550 including the rotation angle data and temperature data of the scanning mirrors for each image capture, and the device characteristic data 160.
The coefficient calculation unit 104 then outputs the calculated coefficients 170 of the infrared emissivity function of each scanning mirror to the coefficient output unit 105.
The coefficients of the infrared emissivity functions and the physical model will be described in detail later.
The process performed by the coefficient calculation unit 104 corresponds to a coefficient calculation process.
The coefficient output unit 105 outputs the coefficient 170 of the infrared emissivity function of each scanning mirror using the input/output device 905. For example, the coefficient output unit 105 displays the coefficient 170 of the infrared emissivity function of each scanning mirror on a display included in the input/output device 905.
The imaging area database 106 stores the imaging area data 150.
The device characteristic database 107 stores the device characteristic data 160.
*** Operation Description ***
Next, an example of the operation of the emissivity calibration device 100 according to this embodiment will be described with reference to FIG. 6.
In STEP 1, the imaging area designation unit 101 designates two imaging areas.
As described above, the imaging area designation unit 101 designates, as the imaging areas, two areas that are estimated to have a uniform brightness temperature in outer space, that have known infrared radiance values, and for which the rotation angle of at least one scanning mirror differs between image captures.
The imaging area designation unit 101 designates a plurality of area sets.
FIG. 7 shows an example of how the imaging area designation unit 101 designates the imaging areas.
In FIG. 7, each rectangle around the Earth represents a deep space area.
The imaging area designation unit 101 may, for example, designate as the imaging areas two deep space areas located at positions symmetric in the X-axis direction with respect to the center line of the Earth in the Y-axis direction.
As shown in FIG. 8, the imaging area designation unit 101 may also, for example, designate as the imaging areas two deep space areas located at positions symmetric in the Y-axis direction with respect to the center line of the Earth in the X-axis direction.
When two deep space areas at symmetric positions are imaged as in FIG. 7 and FIG. 8, parameters other than the rotation angle and temperature of the scanning mirrors at the time of imaging are considered to be common between the captured images. Therefore, when two deep space areas at symmetric positions are imaged as in FIG. 7 and FIG. 8, the calculation accuracy of the coefficients of the infrared emissivity functions by the monitor data acquisition unit 103 is considered to improve.
However, the method by which the imaging area designation unit 101 designates the imaging areas is not limited to the methods of FIG. 7 and FIG. 8. For example, as shown in FIG. 9, the imaging area designation unit 101 may designate as the imaging areas two deep space areas that are not at symmetric positions in either the X-axis direction or the Y-axis direction. The imaging area designation unit 101 may also designate, as the two imaging areas, a deep space area around the Earth and a deep space area not around the Earth, or two deep space areas not around the Earth.
The imaging area designation unit 101 may also designate areas other than deep space areas as the imaging areas, as long as they satisfy the conditions of being estimated to have a uniform brightness temperature, having known infrared radiance values, and having a different rotation angle of each scanning mirror for each image capture.
The imaging area designation unit 101 can designate the imaging areas in accordance with instructions from a user (hereinafter also referred to as a designer) of the emissivity calibration device 100. The imaging area designation unit 101 may also designate the imaging areas by analyzing images previously captured by the imaging device 200 and searching for areas that are estimated to have a uniform brightness temperature, have known infrared radiance values, and have a different rotation angle of each scanning mirror for each image capture.
In STEP 2, the image acquisition unit 102 acquires the captured images 250 from the imaging device 200.
In this embodiment, a captured image 250 acquired by the image acquisition unit 102 may be a luminance image or a set of data required to generate a luminance image.
In STEP 3, the monitor data acquisition unit 103 acquires the monitor data 550 from the monitor device 500. That is, the monitor data acquisition unit 103 acquires the angle data and temperature data of each scanning mirror at the time the captured images 250 acquired in STEP 2 were captured.
The monitor data acquisition unit 103 acquires the monitor data 550 for the imaging of a plurality of area sets.
The image acquisition unit 102 acquires the captured images 250 of a plurality of area sets.
In STEP 4, the coefficient calculation unit 104 calculates the coefficients 170 of the infrared emissivity functions using the captured images 250, the monitor data 550, and the device characteristic data 160.
In this embodiment, the captured images 250, the monitor data 550, and the device characteristic data 160 are given as inputs to a physical model M that represents the relationship between the optical path of the imaging device 200 and the infrared radiance, and the coefficients 170 of the infrared emissivity function of each scanning mirror are calculated for all of the scanning mirrors from the same physical model M. The physical model M is an equation that mathematically expresses the physical relationships concerning infrared radiance in the imaging device 200. Details of the physical model M will be described later.
Finally, in STEP 5, the coefficient output unit 105 outputs the coefficient 170 of the infrared emissivity function.
The user (designer) of the emissivity calibration device 100 checks the coefficients 170 of the infrared emissivity functions output from the coefficient output unit 105 and judges, for example, whether the effect of reducing luminance errors related to the angle dependence of the scanning mirrors has been obtained. Specifically, the designer calculates the infrared radiance of the captured images 250 using the coefficients 170 of the infrared emissivity functions of all the scanning mirrors. The designer then compares the infrared radiance between the captured images 250 of the two areas of each area set, and judges whether the luminance errors due to the difference in the rotation angles of the scanning mirrors have been reduced.
If it is judged that the luminance error reduction effect has been obtained, the designer can calibrate the infrared radiance using the output coefficients 170 of the infrared emissivity functions.
On the other hand, if it is judged that the luminance error reduction effect has not been obtained, the designer, for example, instructs the emissivity calibration device 100 to recalculate the coefficients 170 of the infrared emissivity functions. When the designer instructs recalculation of the coefficients 170 of the infrared emissivity functions, the emissivity calibration device 100 repeats the processes from STEP 4 onward.
In this case, in STEP 4, the coefficient calculation unit 104 recalculates the coefficients 170 of the infrared emissivity functions using new captured images 250 that have not been used in the calculation and the corresponding unused monitor data 550.
Alternatively, the imaging area designation unit 101 may designate only one set of imaging areas, the image acquisition unit 102 may acquire only the captured images 250 of the one set of imaging areas, and the monitor data acquisition unit 103 may acquire only the monitor data 550 of the one set of imaging areas.
In this case, when the designer instructs recalculation of the coefficients 170 of the infrared emissivity functions, the processes from STEP 1 onward are repeated.
Here, the calculation of the coefficients 170 of the infrared emissivity functions performed by the coefficient calculation unit 104 in STEP 4 will be described in detail.
The coefficient calculation unit 104 gives the captured images 250, the monitor data 550, and the device characteristic data 160 as inputs to the physical model M that represents the relationship between the optical path of the imaging device 200 and the infrared radiance, and calculates the coefficients of the infrared emissivity functions of all the scanning mirrors on the same optical path. Here, the coefficients refer to the internal parameters of the infrared emissivity functions. The technique disclosed in Reference 1 or Reference 2 is used as the method for determining these internal parameters.
[Reference 1]: Adapting Arbitrary Mutation Distributions in Evolution Strategies: The Covariance Matrix Adaptation, Proceedings of IEEE International Conference on Evolutionary Computation, 1996, pp. 312-317.
[Reference 2]: Evolution Strategies, Handbook of Computational Intelligence, J. Kacprzyk and W. Pedrycz, editors, pp. 871-898, Springer, 2015.
More specifically, the coefficient calculation unit 104 calculates the coefficients 170 of the infrared emissivity functions in the following manner.
In the following, the description proceeds on the assumption that the imaging device 200 includes a scanning mirror β, which is a primary scanning mirror, and a scanning mirror α, which is a secondary scanning mirror, as shown in FIG. 3.
In this embodiment, the infrared emissivity function of scanning mirror α is defined as εα(θα) = a0 + a1·θα + a2·θα^2.
Here, θα denotes the incidence angle of the light incident on scanning mirror α. Since θα depends on the rotation angle x of scanning mirror α, it is a function of x (θα(x)). The internal parameters of θα(x) depend on the hardware.
Similarly, the infrared emissivity function of scanning mirror β is defined as εβ(θβ) = b0 + b1·θβ + b2·θβ^2.
Here, θβ denotes the incidence angle of the light incident on scanning mirror β. Since θβ depends on the rotation angle y of scanning mirror β, it is a function of y (θβ(y)). The internal parameters of θβ(y) depend on the hardware.
Let C denote the digital output of the line sensor (detection element).
The observed radiance of the line sensor (detection element) is expressed as Rd = q·C^2 + m·C + b.
Here, q, m, and b are the infrared calibration coefficients of the line sensor. q, m, and b are included in the device characteristic data 160.
Let Tα denote the temperature of scanning mirror α and Tβ the temperature of scanning mirror β.
Let R denote the infrared radiance of the observed object.
Let Rα(Tα) denote the blackbody radiance of scanning mirror α. Rα(Tα) is a function of Tα, and its value is calculated using the infrared wavelength value included in the device characteristic data 160.
Let Rβ(Tβ) denote the blackbody radiance of scanning mirror β. Rβ(Tβ) is a function of Tβ, and its value is calculated using the infrared wavelength value included in the device characteristic data 160.
From the above, in the imaging device 200 having two scanning mirrors (scanning mirror α and scanning mirror β) and a line sensor on the same optical path, the infrared radiance (hereinafter also referred to as the observed radiance) is expressed by the physical model M of the following equations (1), (2), and (3).
Rd = M   Equation (1)
Rd = q·C^2 + m·C + b   Equation (2)
M = (1 - εα(θα))·(1 - εβ(θβ))·R + εα(θα)·Rα(Tα) + εβ(θβ)·(1 - εα(θα))·Rβ(Tβ)   Equation (3)
As mentioned above, scanning mirror α is the secondary scanning mirror and scanning mirror β is the primary scanning mirror.
The optical path inside the imaging device 200 is configured as follows:
subject (light source) → primary scanning mirror β → secondary scanning mirror α → line sensor
Next, consider one set of two imaging areas imaged by the imaging device 200.
Let xi denote the rotation angle of scanning mirror α and Tα,i the temperature of scanning mirror α when a captured image i (infrared radiance: Ri) is captured. Let yi denote the rotation angle of scanning mirror β and Tβ,i the temperature of scanning mirror β when the captured image i is captured.
θα,i denotes the incidence angle of the light incident on scanning mirror α when the captured image i is captured. Since θα,i depends on the rotation angle xi of scanning mirror α, it is a function of xi (θα,i(xi)). Similarly, θβ,i denotes the incidence angle of the light incident on scanning mirror β when the captured image i is captured. Since θβ,i depends on the rotation angle yi of scanning mirror β, it is a function of yi (θβ,i(yi)).
In the same way, let xj denote the rotation angle of scanning mirror α and Tα,j the temperature of scanning mirror α when a captured image j (infrared radiance: Rj) is captured. Let yj denote the rotation angle of scanning mirror β and Tβ,j the temperature of scanning mirror β when the captured image j is captured.
θα,j denotes the incidence angle of the light incident on scanning mirror α when the captured image j is captured. Since θα,j depends on the rotation angle xj of scanning mirror α, it is a function of xj (θα,j(xj)). Similarly, θβ,j denotes the incidence angle of the light incident on scanning mirror β when the captured image j is captured. Since θβ,j depends on the rotation angle yj of scanning mirror β, it is a function of yj (θβ,j(yj)).
In this case, the relational expressions of the infrared radiance for the pair (i, j) of imaging areas can be written as the following equations (4) to (9).
Rd,i = Mi   Equation (4)
Rd,i = q·Ci^2 + m·Ci + b   Equation (5)
Mi = (1 - εα(θα,i))·(1 - εβ(θβ,i))·Ri + εα(θα,i)·Rα(Tα,i) + εβ(θβ,i)·(1 - εα(θα,i))·Rβ(Tβ,i)   Equation (6)
Rd,j = Mj   Equation (7)
Rd,j = q·Cj^2 + m·Cj + b   Equation (8)
Mj = (1 - εα(θα,j))·(1 - εβ(θβ,j))·Rj + εα(θα,j)·Rα(Tα,j) + εβ(θβ,j)·(1 - εα(θα,j))·Rβ(Tβ,j)   Equation (9)
Here, the subscripts i and j of θ, T, R, and M are numbers corresponding to the imaging areas.
Next, the difference ΔRk in the observed radiance and the difference ΔMk in the radiance input to the detector between imaging area i and imaging area j are expressed by equations (10) and (11), respectively. The difference between equation (10) and equation (11) is expressed by equation (12).
ΔRk = Rd,i - Rd,j   Equation (10)
ΔMk = Mi - Mj   Equation (11)
Ek = ΔRk - ΔMk   Equation (12)
Here, the subscript k of ΔRk, ΔMk, and Ek is the number of the area set. A set number k is uniquely assigned to each area set (i, j).
The value of Ek for each area set k indicates the difference between the theoretical value of the observed radiance calculated by the physical model M and the value calculated from the rotation angles and temperatures observed by the monitor device 500. The smaller the magnitude of this difference Ek, the smaller the difference in infrared radiance between imaging area i and imaging area j of area set k.
The coefficient calculation unit 104 calculates the coefficients a0, a1, a2, b0, b1, and b2 of the infrared emissivity functions εα(θα) and εβ(θβ) that minimize the magnitude of this difference Ek. In other words, the coefficient calculation unit 104 calculates the coefficients 170 of the infrared emissivity functions that minimize the magnitude of the difference in infrared radiance between imaging area i and imaging area j.
The coefficient calculation unit 104 can calculate the coefficients 170 of the infrared emissivity functions using an optimization algorithm, as follows.
The coefficient calculation unit 104 uses the following equation (13) to calculate the coefficients 170 of the infrared emissivity functions.
F = F({Ek | 1 ≤ k ≤ N})   Equation (13)
Here, N is the total number of area sets, F is a function designed arbitrarily by the designer, and k is the number of an area set. A natural number from 1 to N is assigned to k.
Consider finding the coefficients a0, a1, a2, b0, b1, and b2 of the infrared emissivity functions εα(θα) and εβ(θβ) that minimize F defined in equation (13).
Here, the objective function f is defined as F, as in equation (14).
f = F   Equation (14)
The parameter x to be optimized is as shown in equation (15).
The coefficient calculation unit 104 applies equations (14) and (15) to the "CMA-ES" of Reference 1 above or the "(μ/μw, λ)-CMA-ES" of Reference 2, and obtains the optimal solution of equation (15). The coefficient calculation unit 104 may also apply equations (14) and (15) to an algorithm that includes part or all of "CMA-ES" or "(μ/μw, λ)-CMA-ES" and obtain the optimal solution of equation (15).
The optimized values of the parameter x of equation (15) are the coefficients 170 of the infrared emissivity functions.
The "f" in "an evolutionary computation method that performs a multi-point search using a normal distribution for the optimization (continuous optimization) of a function f: X → R defined on a subset X ⊆ R^d of a real vector space" described in Chapter 1 of Reference 3 below corresponds to equation (14) above. This "f" is also referred to in "STEP 2: objective values f(xi) of the solution candidates" described in Section 2.1 of Reference 3.
Furthermore, equation (15) above may be optimized by the algorithm described in Section 2.1 of Reference 3.
[Reference 3]: Continuous Optimization by Evolution Strategies: Design Principles and Theoretical Foundations of CMA-ES, Yohei Akimoto, System/Control/Information, Vol. 60, No. 7, pp. 292-297, 2016.
If one of the two scanning mirrors is a reflecting mirror that has no rotation axis, the coefficient calculation unit 104 performs the calculation by treating the value of the rotation angle (x or y) of the scanning mirror corresponding to that reflecting mirror in the physical model M as a constant.
If one of the two scanning mirrors does not exist, the coefficient calculation unit 104 performs the calculation by setting the emissivity function and the blackbody radiance of the non-existent scanning mirror to 0.
*** Description of Effects of the Embodiment ***
In this embodiment, the emissivity calibration device 100 calculates the coefficients of the infrared emissivity functions that minimize the magnitude of the difference in infrared radiance between imaging area i and imaging area j of area set k. The designer can calibrate the infrared emissivity using the calculated coefficients of the infrared emissivity functions. Therefore, according to this embodiment, errors in the infrared radiance can be reduced even for images captured by the imaging device after launch.
Furthermore, in this embodiment, in addition to the error in infrared radiance due to the change in infrared emissivity of the scanning mirror caused by the change in the environment before and after launch, it is also possible to reduce the error in infrared radiance due to the change in infrared emissivity of the scanning mirror caused by aging. In other words, due to aging, the infrared emissivity of the satellite at the time of launch may not match the infrared emissivity a long time after launch. According to this embodiment, it is also possible to reduce the error in infrared radiance caused by such a mismatch in infrared emissivity due to aging.
In the imaging device 200, the scanning mirrors and the line sensor are on the same optical path. In order to treat all the scanning mirrors and the line sensor on the same optical path as one system and to calculate, for that one system, the coefficients of the infrared emissivity function of each scanning mirror using a physical model, the coefficients of the infrared emissivity functions of all the scanning mirrors need to be calculated from the same physical model.
In this embodiment, the emissivity calibration device 100 calculates the infrared emissivity coefficients of all the scanning mirrors by applying the same physical model M. Therefore, according to this embodiment, the infrared emissivities of all the scanning mirrors can be calibrated with respect to one system.
*** Supplementary Description of the Hardware Configuration ***
Finally, a supplementary description of the hardware configuration of the emissivity calibration device 100 is given.
The processor 901 shown in FIG. 4 is an IC (Integrated Circuit) that performs processing.
The processor 901 is a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or the like.
The main storage device 902 shown in FIG. 4 is a RAM (Random Access Memory).
The auxiliary storage device 903 shown in FIG. 4 is a ROM (Read Only Memory), a flash memory, an HDD (Hard Disk Drive), or the like.
The communication device 904 shown in FIG. 4 is an electronic circuit that executes data communication processing.
The communication device 904 is, for example, a communication chip or an NIC (Network Interface Card).
The input/output device 905 shown in FIG. 4 is, for example, a mouse, a keyboard, and a display.
The auxiliary storage device 903 also stores an OS (Operating System).
At least a part of the OS is executed by the processor 901.
While executing at least a part of the OS, the processor 901 executes the programs that realize the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105.
By executing the OS, the processor 901 performs task management, memory management, file management, communication control, and the like.
At least one of information, data, signal values, and variable values indicating the results of the processing of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 is stored in at least one of the main storage device 902, the auxiliary storage device 903, and a register and a cache memory in the processor 901.
The programs that realize the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 may be stored in a portable recording medium such as a magnetic disk, a flexible disk, an optical disk, a compact disc, a Blu-ray (registered trademark) disc, or a DVD. A portable recording medium storing the programs that realize the functions of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 may then be distributed.
そして、OSの少なくとも一部がプロセッサ901により実行される。
プロセッサ901はOSの少なくとも一部を実行しながら、撮像領域指定部101、画像取得部102、モニタデータ取得部103、係数算出部104及び係数出力部105の機能を実現するプログラムを実行する。
プロセッサ901がOSを実行することで、タスク管理、メモリ管理、ファイル管理、通信制御等が行われる。
また、撮像領域指定部101、画像取得部102、モニタデータ取得部103、係数算出部104及び係数出力部105の処理の結果を示す情報、データ、信号値及び変数値の少なくともいずれかが、主記憶装置902、補助記憶装置903、プロセッサ901内のレジスタ及びキャッシュメモリの少なくともいずれかに記憶される。
また、撮像領域指定部101、画像取得部102、モニタデータ取得部103、係数算出部104及び係数出力部105の機能を実現するプログラムは、磁気ディスク、フレキシブルディスク、光ディスク、コンパクトディスク、ブルーレイ(登録商標)ディスク、DVD等の可搬記録媒体に格納されていてもよい。そして、撮像領域指定部101、画像取得部102、モニタデータ取得部103、係数算出部104及び係数出力部105の機能を実現するプログラムが格納された可搬記録媒体を流通させてもよい。 The
At least a part of the OS is executed by the
The
The
In addition, at least one of information, data, signal values, and variable values indicating the results of processing by the imaging
Furthermore, the programs for realizing the functions of the imaging
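For readers who find a structural sketch helpful, the following Python skeleton (again not part of the disclosure) shows one way the five functional units named above could be organized as a single program executed by the processor 901. All class names, method names, and data shapes are hypothetical.

```python
# Hypothetical skeleton of the five functional units as one program;
# the names and data shapes are illustrative, not taken from the disclosure.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class MonitorData:
    angles: Dict[str, List[float]]        # rotation angle per mirror, per imaging
    temperatures: Dict[str, List[float]]  # optional temperature per mirror

class EmissivityCalibrationProgram:
    def designate_imaging_areas(self) -> Tuple[str, str]:
        """Imaging area designation unit: choose two areas with uniform
        brightness temperature and differing mirror rotation angles."""
        return ("deep_space_area_A", "deep_space_area_B")

    def acquire_images(self, areas):
        """Image acquisition unit: obtain the two captured images."""
        ...

    def acquire_monitor_data(self, areas) -> MonitorData:
        """Monitor data acquisition unit: obtain angle (and temperature) data."""
        ...

    def calculate_coefficients(self, images, monitor: MonitorData):
        """Coefficient calculation unit: fit the infrared emissivity function."""
        ...

    def output_coefficients(self, coefficients) -> None:
        """Coefficient output unit: store or transmit the calculated coefficients."""
        ...
```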
In addition, the "unit" in at least any of the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 may be read as "circuit", "step", "procedure", "process", or "circuitry".
Furthermore, the emissivity calibration device 100 may be realized by a processing circuit. The processing circuit is, for example, a logic IC (Integrated Circuit), a GA (Gate Array), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
In this case, the imaging area designation unit 101, the image acquisition unit 102, the monitor data acquisition unit 103, the coefficient calculation unit 104, and the coefficient output unit 105 are each realized as part of the processing circuit.
In this specification, the generic concept covering both a processor and a processing circuit is referred to as "processing circuitry."
That is, a processor and a processing circuit are each specific examples of "processing circuitry."
The above-described embodiments are essentially preferred examples, and are not intended to limit the scope of the present disclosure, the scope of application of the present disclosure, or the scope of use of the present disclosure. The above-described embodiments can be modified in various ways as necessary.
100: Emissivity calibration device, 101: Imaging area designation unit, 102: Image acquisition unit, 103: Monitor data acquisition unit, 104: Coefficient calculation unit, 105: Coefficient output unit, 106: Imaging area database, 107: Device characteristics database, 150: Imaging area data, 160: Device characteristics data, 170: Coefficients of the infrared emissivity function, 200: Imaging device, 250: Captured image, 300: Satellite, 400: Imaging control device, 500: Monitor device, 550: Monitor data, 901: Processor, 902: Main storage device, 903: Auxiliary storage device, 904: Communication device, 905: Input/output device.
Claims (12)
- A data processing device comprising:
an imaging area designation unit that designates, as two imaging areas to be imaged by an imaging device located in outer space in which one or more scanning mirrors and a line sensor are provided on the same optical path, two areas that are estimated to have a uniform brightness temperature in the outer space, whose infrared radiance values are known, and for which the rotation angle of at least one of the one or more scanning mirrors differs at each time of imaging;
an image acquisition unit that acquires two captured images obtained by the imaging device imaging the two imaging areas;
a monitor data acquisition unit that acquires monitor data including angle data indicating the rotation angle of each scanning mirror at each time of imaging of the two imaging areas; and
a coefficient calculation unit that calculates coefficients of an infrared emissivity function for each of the one or more scanning mirrors using the two captured images and the monitor data.
- The data processing device according to claim 1, wherein the coefficient calculation unit calculates the coefficients of the infrared emissivity function using the rotation angle of each scanning mirror at each time of imaging of the two imaging areas and the infrared radiance between the two captured images.
- The data processing device according to claim 1, wherein the coefficient calculation unit calculates coefficients that minimize the difference in infrared radiance between the two captured images.
- The data processing device according to claim 1, wherein the imaging area designation unit designates two deep space areas as the two imaging areas.
- The data processing device according to claim 1, wherein the imaging area designation unit designates, as the two imaging areas, two areas located at symmetric positions with respect to an arbitrary area in the outer space.
- The data processing device according to claim 1, wherein two or more scanning mirrors and the line sensor are provided on the same optical path in the imaging device, and the coefficient calculation unit applies the same physical model to the two or more scanning mirrors to calculate the coefficients of the infrared emissivity function for each of the two or more scanning mirrors.
- The data processing device according to claim 1, wherein the coefficient calculation unit calculates the coefficients of the infrared emissivity function using an optimization algorithm.
- The data processing device according to claim 1, wherein the monitor data acquisition unit acquires monitor data including temperature data indicating the temperature of each scanning mirror at each time of imaging of the two imaging areas.
- The data processing device according to claim 1, wherein the coefficient calculation unit calculates the coefficients of the infrared emissivity function using device characteristic data indicating an infrared wavelength and an infrared calibration coefficient for each channel of the line sensor.
- The data processing device according to claim 1, wherein the image acquisition unit acquires the two captured images from an imaging device provided with one or more single-axis-driven scanning mirrors.
- A data processing method comprising:
designating, by a computer, as two imaging areas to be imaged by an imaging device located in outer space in which one or more scanning mirrors and a line sensor are provided on the same optical path, two areas that are estimated to have a uniform brightness temperature in the outer space, whose infrared radiance values are known, and for which the rotation angle of at least one of the one or more scanning mirrors differs at each time of imaging;
acquiring, by the computer, two captured images obtained by the imaging device imaging the two imaging areas;
acquiring, by the computer, monitor data including angle data indicating the rotation angle of each scanning mirror at each time of imaging of the two imaging areas; and
calculating, by the computer, coefficients of an infrared emissivity function for each of the one or more scanning mirrors using the two captured images and the monitor data.
- A data processing program that causes a computer to execute:
an imaging area designation process of designating, as two imaging areas to be imaged by an imaging device located in outer space in which one or more scanning mirrors and a line sensor are provided on the same optical path, two areas that are estimated to have a uniform brightness temperature in the outer space, whose infrared radiance values are known, and for which the rotation angle of at least one of the one or more scanning mirrors differs at each time of imaging;
an image acquisition process of acquiring two captured images obtained by the imaging device imaging the two imaging areas;
a monitor data acquisition process of acquiring monitor data including angle data indicating the rotation angle of each scanning mirror at each time of imaging of the two imaging areas; and
a coefficient calculation process of calculating coefficients of an infrared emissivity function for each of the one or more scanning mirrors using the two captured images and the monitor data.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2024510325A JP7486696B1 (en) | 2022-12-07 | 2022-12-07 | Data processing device, data processing method and data processing program |
PCT/JP2022/045130 WO2024121983A1 (en) | 2022-12-07 | 2022-12-07 | Data processing device, data processing method, and data processing program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2022/045130 WO2024121983A1 (en) | 2022-12-07 | 2022-12-07 | Data processing device, data processing method, and data processing program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024121983A1 true WO2024121983A1 (en) | 2024-06-13 |
Family
ID=91067374
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/045130 WO2024121983A1 (en) | 2022-12-07 | 2022-12-07 | Data processing device, data processing method, and data processing program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7486696B1 (en) |
WO (1) | WO2024121983A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02130436A (en) * | 1988-11-10 | 1990-05-18 | Nec Corp | Calibration for infrared radiation meter |
JPH06307980A (en) * | 1993-04-21 | 1994-11-04 | Nikon Corp | System test method for optical sensor |
CN109737987A (en) * | 2018-12-29 | 2019-05-10 | 中国科学院长春光学精密机械与物理研究所 | Infrared radiometric calibration system on a kind of how photosynthetic in-orbit star of diameter space camera at a gulp |
US20220026280A1 (en) * | 2020-07-24 | 2022-01-27 | Raytheon Company | Radiometric calibration of detector |
US20220094834A1 (en) * | 2020-09-18 | 2022-03-24 | Raytheon Company | On-board light source calibration |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6307980B2 (en) | 2014-03-31 | 2018-04-11 | 株式会社ソシオネクスト | Differential amplifier circuit and semiconductor integrated circuit |
- 2022
- 2022-12-07 JP JP2024510325A patent/JP7486696B1/en active Active
- 2022-12-07 WO PCT/JP2022/045130 patent/WO2024121983A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
JP7486696B1 (en) | 2024-05-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22967833, Country of ref document: EP, Kind code of ref document: A1 |