WO2020022436A1 - Restoration function adjustment system, data restoration device, restoration function adjustment method, restoration function generation method, and computer program


Info

Publication number
WO2020022436A1
Authority
WO
WIPO (PCT)
Application number
PCT/JP2019/029230
Other languages
French (fr)
Japanese (ja)
Inventor
将晃 飯山
美濃 導彦
秀一 笠原
哲希 柴田
敦史 橋本
暢之 平原
Original Assignee
国立大学法人京都大学 (Kyoto University)
Application filed by 国立大学法人京都大学 (Kyoto University)
Publication of WO2020022436A1 publication Critical patent/WO2020022436A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387 Composing, repositioning or otherwise geometrically modifying originals

Definitions

  • the present invention relates to a technique for adjusting a function for repairing a missing part of data.
  • Sea surface temperature maps are used in a variety of marine-related industries, such as fisheries and marine transport, in addition to weather forecasting. Fisheries and marine transportation in particular require sea surface temperature maps with high real-time performance and high clarity.
  • Satellites measure the temperature of the sea surface by detecting infrared radiation emitted from the sea surface.
  • infrared rays cannot be detected from a portion of the sea surface covered by clouds, so the temperature of that portion cannot be measured.
  • Non-Patent Document 1: data assimilation
  • Non-Patent Document 2: inpainting
  • Data assimilation is often used in the meteorological and marine fields. However, it requires an enormous amount of data and considerable computation time, and therefore lacks real-time performance.
  • Inpainting requires less computation than data assimilation. However, it requires complete correct data.
  • the sea surface is often covered with clouds
  • the data obtained by artificial satellites using the above-described method is therefore often unsuitable as correct answer data, so it takes a long time to collect the correct answer data.
  • the present invention has been made in view of the above problems, and its object is to make it possible to collect correct answer data more easily and to repair missing data while maintaining the real-time performance of a conventional repair method such as inpainting.
  • a repair function adjustment system according to one embodiment of the present invention adjusts a function used for repairing missing data; it generates second data by further deleting part of first data that has an originally missing portion.
  • the system includes a selection unit for selecting the first data from the plurality of data stored in the storage unit, and the deletion unit deletes part of the first data selected by the selection unit, thereby generating the second data.
  • the selecting means randomly selects the first data
  • the deletion means generates the second data by deleting a randomly selected part of the first data.
  • according to this configuration, correct answer data can be collected more easily than before, and missing data can be repaired while maintaining the real-time performance of a conventional repair method such as inpainting.
  • FIG. 1 is a diagram illustrating an example of a network system including the sea surface temperature map providing device.
  • FIG. 2 is a diagram illustrating an example of the hardware configuration of the sea surface temperature map providing device.
  • FIG. 3 is a diagram illustrating an example of the configuration of the map generation program.
  • FIG. 4 is a diagram illustrating an example of a sea surface temperature map.
  • FIG. 5 is a diagram illustrating an example of the positional relationship between a region and an area.
  • FIG. 6 is a diagram illustrating an example of a patch.
  • FIG. 7 is a diagram illustrating an example of the learning procedure.
  • FIG. 8 is a diagram illustrating an example of a first mask filter.
  • FIG. 9 is a diagram illustrating an example of a restored sea surface temperature map.
  • FIG. 10 is a flowchart illustrating an example of the flow of processing by the map generation program.
  • FIG. 11 is a diagram for explaining an example of a comparative experiment.
  • FIG. 12 is a diagram illustrating an example of three-dimensional data used as learning data.
  • FIG. 13 is a diagram for explaining an example of a comparative experiment.
  • FIG. 14 is a diagram illustrating an example of a method of acquiring the assimilation image 5L.
  • FIG. 15 is a diagram illustrating a modification of the learning procedure.
  • FIG. 16 is a diagram illustrating a modification of the learning procedure.
  • FIG. 1 is a diagram showing an example of a network system including the sea surface temperature map providing device 1.
  • FIG. 2 is a diagram illustrating an example of a hardware configuration of the sea surface temperature map providing device 1.
  • FIG. 3 is a diagram illustrating an example of the configuration of the map generation program 18.
  • the sea surface temperature map providing device 1 shown in FIG. 1 is a device that provides a sea surface temperature map, which is an image representing the distribution of sea surface temperature, by restoring (restoring and estimating) an unknown temperature portion.
  • as the sea surface temperature map providing device 1, a desktop computer, a notebook computer, a server, or the like is used.
  • a notebook computer is used as an example.
  • the sea surface temperature map providing apparatus 1 includes a processor 10, a RAM (Random Access Memory) 11, a ROM (Read Only Memory) 12, an auxiliary storage device 13, a liquid crystal display 14, a communication interface 15, a keyboard 16, and a pointing device 17.
  • the ROM 12 or the auxiliary storage device 13 stores a map generation program 18 (see FIG. 3) for providing a sea surface temperature map.
  • the map generation program 18 is loaded into the RAM 11.
  • the processor 10 is a processor such as a GPU (Graphics Processing Unit), an MPU (Micro Processing Unit), or a CPU (Central Processing Unit), and executes the map generation program 18 loaded in the RAM 11.
  • the processor 10, the RAM 11, and the ROM 12 may be configured as one chip.
  • the liquid crystal display 14 displays a screen for inputting a command or data or a generated sea surface temperature map.
  • the communication interface 15 is a device for communicating with another device via a LAN (Local Area Network) or the Internet.
  • LAN Local Area Network
  • NIC Network Interface Card
  • the keyboard 16 and the pointing device 17 are input devices for an operator to input commands or data.
  • the map generation program 18 can improve the clarity of the inpainting result as compared with the related art while maintaining the real-time performance of inpainting.
  • this mechanism will be described.
  • the portion covered by cloud is referred to as the "shielded portion",
  • the portion not covered by cloud is referred to as the "unshielded portion".
  • Parts other than the ocean (mainly land parts) are referred to as “land parts”.
  • the map generation program 18 is a program for realizing the learning device 2 and the estimator 3 shown in FIG.
  • the map generation program 18 includes, as software modules for realizing the learning device 2, a learning data generation unit 201, a learning data storage unit 202, a correct answer data selection unit 203, a mask storage unit 204, a first mask processing unit 205, a generator 206, a second mask processing unit 207, a discriminator 208, a first learning unit 211, a second learning unit 212, and the like. As software modules for realizing the estimator 3, it includes a sea surface temperature estimating unit 301, a patch merging unit 302, and the like. In addition, it includes a learned model storage unit 221.
  • the learning device 2 learns a standard for estimating the sea surface temperature of the shielded portion (hereinafter referred to as an “estimation standard”) by analyzing a past sea surface temperature map.
  • the estimator 3 estimates the sea surface temperature of the shielded portion based on the estimation criterion learned by the learning device 2.
  • details of each of the learning device 2 and the estimator 3 will be sequentially described.
  • FIG. 4 is a diagram illustrating an example of the sea surface temperature map 4.
  • FIG. 5 is a diagram illustrating an example of the positional relationship between the area 8A and the area 8B.
  • FIG. 6 is a diagram illustrating an example of the patch 5.
  • the sea surface temperature map providing device 1 receives a plurality of sea surface temperature maps 4 (41, 42, 43, ...) as shown in FIG. 4.
  • An infrared ray radiated from the sea surface of the area 8A is detected by an artificial satellite to measure the temperature of the sea surface of the area 8A, and the temperature distribution is represented by an image, whereby the sea surface temperature map 4 is generated.
  • the area 8A is divided into (Ha × Va) areas 8B as shown in FIG. 5.
  • Each sea surface temperature map 4 is assigned a sequence number "1", "2", "3", and so on.
  • the gradation value of the pixel of the sea surface temperature map 4 is smaller as it is brighter and larger as it is darker.
  • the sea surface temperature map 4 is a grayscale or color image.
  • a white portion is a shielding portion, and the gradation value of each pixel in the shielding portion is “0”.
  • the black portion is the land portion, and the gradation value of each pixel in the land portion is “255”.
  • a portion that is neither a shielded portion nor a land portion is a non-shielded portion.
  • in the case of a color image, a pure white portion is a shielding portion, and the red, green, and blue gradation values of each pixel of the shielding portion are all "0".
  • the black portion is the land portion, and the tone values of red, green, and blue of each pixel in the land portion are all “255”.
  • a portion that is neither a shielded portion nor a land portion, that is, a color portion is a non-shielded portion.
  • the learning data generation unit 201 generates learning data as follows. It extracts the image of each area 8B of each sea surface temperature map 4 as a patch 5, as shown in FIG. 6. Then, each patch 5 is stored in the learning data storage unit 202 in association with the position of the area 8B corresponding to the patch 5 and the sequence number of the original sea surface temperature map 4. Each patch 5 is used as learning data.
  • for example, the patch 5 of the Hb-th and Vb-th area 8B from the left of the sea surface temperature map 4 with sequence number "d" is stored in the learning data storage unit 202 in association with {(Hb, Vb), d}.
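As an illustrative sketch only (not part of the original disclosure), the division of a map into patches keyed by {(Hb, Vb), d} might look as follows in Python/NumPy; `extract_patches` and its parameters are hypothetical names:

```python
import numpy as np

def extract_patches(sst_map, d, patch_h, patch_w):
    """Divide one sea surface temperature map (sequence number d) into
    patches, one per area 8B, keyed by ((Hb, Vb), d)."""
    H, W = sst_map.shape
    store = {}
    for vb in range(H // patch_h):        # vertical index of area 8B
        for hb in range(W // patch_w):    # horizontal index of area 8B
            patch = sst_map[vb * patch_h:(vb + 1) * patch_h,
                            hb * patch_w:(hb + 1) * patch_w]
            store[((hb, vb), d)] = patch
    return store

# A toy 4x6 map split into 2x3 patches yields 4 keyed patches.
toy = np.arange(24).reshape(4, 6)
patches = extract_patches(toy, d=1, patch_h=2, patch_w=3)
```

Each patch remains addressable later by its area position and the sequence number of the map it came from.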
  • the patch 5 includes at least one of the unshielded portion 5a, the shielded portion 5b, and the land portion 5c.
  • the patch 51 includes an unshielded portion 5a and a shielded portion 5b among three types of portions.
  • the patch 52 includes only the non-shielding portion 5a among the three types of portions.
  • the patch 53 includes all three types of parts.
  • a case where the area of each patch 5 is equal and the area is Sg will be described as an example.
  • in FIG. 6, a black outline representing the boundary is provided to make the boundaries between the types of portions in the patch 5 easy to understand. However, such contour lines are not actually displayed in the patch 5. The same applies to the images shown in FIG. 7 and later figures.
  • FIG. 7 is a diagram illustrating an example of a learning procedure.
  • FIG. 8 is a diagram illustrating an example of the first mask filter 5W.
  • the correct answer data selection unit 203 to the second learning unit 212 generate a learned model using two indices, the reconstruction error and the adversarial error, in the procedure shown in FIG. 7.
  • a plurality of first mask filters 5W having different patterns as shown in FIG. 8 are stored in the mask storage unit 204 in advance.
  • the first mask filter 5W is a filter having the same size (number of pixels) as the correct image 5T, and the value of each pixel is either “0” or “1”.
  • the hatched portion of the first mask filter 5W is a portion to be lost, and the value of each pixel is “0”. Other parts are parts to be maintained, and the value of each pixel is “1”.
  • the ratio of the hatched portion to the entire first mask filter 5W is approximately 10 to 30%.
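A first mask filter 5W with roughly 10 to 30% of its pixels set to "0" could, for illustration, be generated as follows (a NumPy sketch; the function name and ratio parameter are assumptions, not from the disclosure):

```python
import numpy as np

def make_first_mask(shape, missing_ratio, rng):
    """Binary first mask filter 5W: value 0 marks a pixel to be lost,
    value 1 marks a pixel to be maintained. Roughly `missing_ratio`
    of all pixels are set to 0."""
    return (rng.random(shape) >= missing_ratio).astype(np.uint8)

rng = np.random.default_rng(0)
mask = make_first_mask((64, 64), missing_ratio=0.2, rng=rng)
ratio = 1.0 - mask.mean()   # fraction of pixels marked for deletion
```

With 64 × 64 pixels, the realized missing ratio stays close to the requested 20%, within the 10–30% band described above.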
  • a patch 5 may include an image of a land portion. Therefore, the correct answer data selection unit 203 may search for patches in which the ratio of the area of the shielding portion 5b to the entire area excluding the land portion is less than the predetermined ratio Rs. That is, if the area of the land portion is Sc, it may search for patches 5 that satisfy Sb / (Sg - Sc) < Rs. Further, a patch 5 that includes only the land portion 5c among the three types of portions may be excluded from the search.
  • the correct answer data selecting unit 203 selects the found patch 5, that is, the patch 5 in which the ratio of the area of the shielding portion to the entire area is less than the predetermined ratio Rs, as the correct answer image 5T (# 701).
  • the correct answer image 5T is used as correct answer data.
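The selection criterion Sb / (Sg - Sc) < Rs can be sketched as follows (illustrative Python only; the gradation conventions follow the grayscale example above, and all names are hypothetical):

```python
import numpy as np

SHIELD_VALUE = 0    # gradation value of shielded (cloud) pixels
LAND_VALUE = 255    # gradation value of land pixels

def is_correct_image(patch, rs):
    """Select patch 5 as a correct image 5T when the shielded area Sb,
    relative to the non-land area (Sg - Sc), is below the ratio Rs."""
    sg = patch.size                                    # whole area Sg
    sc = int(np.count_nonzero(patch == LAND_VALUE))    # land area Sc
    if sg == sc:                  # land-only patch: excluded from search
        return False
    sb = int(np.count_nonzero(patch == SHIELD_VALUE))  # shielded area Sb
    return sb / (sg - sc) < rs

patch = np.full((8, 8), 128, dtype=np.uint8)  # all sea, no cloud
patch[0, :4] = 0                              # 4 shielded pixels
patch[7, :] = 255                             # 8 land pixels
```

Here Sb/(Sg − Sc) = 4/56 ≈ 0.071, so the patch qualifies for Rs = 0.1 but not for Rs = 0.05.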
  • the first mask processing unit 205 randomly selects one first mask filter 5W for each correct image 5T (#702), and generates an input image 5M by masking the correct image 5T with the selected first mask filter 5W (#703). For example, when the first mask filter 5W3 is selected for the correct image 5T1, the input image 5M is generated by masking the correct image 5T1 with the first mask filter 5W3.
  • the first mask processing unit 205 generates the input image 5M according to the following equation (1), where ⊙ denotes the element-wise product: X̂n = Mrand ⊙ Xn ... (1)
  • Xn is a vector indicating the gradation value of each pixel of the n-th correct image 5T.
  • X̂n is a vector indicating the gradation value of each pixel of the input image 5M obtained by masking the n-th correct image 5T with the first mask filter 5W.
  • Mrand is a vector indicating a binary value of each pixel of the first mask filter 5W to be applied to the n-th correct image 5T.
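Equation (1), the element-wise masking of the correct image by the first mask filter, can be illustrated as follows (a NumPy sketch with hypothetical names):

```python
import numpy as np

def apply_first_mask(x, m_rand):
    """Equation (1): x_hat = m_rand * x (element-wise product).
    Pixels where m_rand == 0 get gradation value 0, i.e. they are
    artificially 'lost', mimicking a shielded portion."""
    return m_rand * x

x = np.array([10, 200, 128, 77], dtype=np.uint8)  # correct image 5T (flattened)
m = np.array([1, 0, 1, 0], dtype=np.uint8)        # first mask filter 5W
x_hat = apply_first_mask(x, m)                    # input image 5M
```

The surviving pixels are unchanged, while the masked ones are driven to the shielding value "0".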
  • alternatively, a random-pattern image may be generated individually for each correct answer image 5T and used as the first mask filter 5W.
  • when the correct answer image 5T is a grayscale image of 256 gradations, the gradation value of each pixel in the portion to be masked is rewritten to "0", whereby the input image 5M is generated.
  • when the correct answer image 5T is a color image, the gradation values of red, green, and blue of each pixel in this portion are rewritten to "0". Thereby, the input image 5M is generated.
  • hereinafter, the portion of the sea surface in the input image 5M that is covered by cloud or masked is referred to as the "missing portion 5d".
  • the generator 206 generates a restored image 5R by restoring the missing portion 5d in the input image 5M by inpainting (# 704).
  • a known algorithm such as deep learning is used as the inpainting algorithm.
  • for example, the algorithm described in "Image denoising and inpainting with deep neural networks", Xie, J., Xu, L., and Chen, E., NIPS 2012, is used.
  • the restored image 5R is generated by substituting a vector indicating the gradation value of each pixel of the input image 5M into the generate function G, which employs a known algorithm. For example, in the case of the k-th input image 5M, the restored image is generated by substituting X̂k.
  • the land portion 5c need not be restored. Alternatively, the land portion 5c is not used in each of steps #706, #707, and #708 described below.
  • the values of the parameters of the generate function G are adjusted so that the average value of the difference between the N sets of the correct images 5T and the repaired images 5R is minimized. That is, learning is performed so that the reconstruction error Lrec in the following equation (2) is minimized.
  • the reconstruction error may be commonly referred to as “MSE” or “mean squared error”.
  • the correct image 5T includes a portion of the sea surface whose temperature is unknown (that is, the shielded portion 5b), whereas the restored image 5R does not include such a portion. It is not preferable to include such a portion in the learning.
  • Mxn is a second mask filter 5U indicating the occluded portion 5b in the n-th correct image 5T. Specifically, the value of each pixel in the shielding portion 5b is “0”, and the value of each pixel in the other portions is “1”.
  • the binary value of the i-th pixel of Mxn can be expressed by the following equation (4).
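A reconstruction error that excludes the shielded portion 5b by means of the second mask, in the spirit of the description above, might be sketched as follows (illustrative only; the exact form of equations (2) to (4) is not reproduced in this text, so this is an assumption):

```python
import numpy as np

def masked_reconstruction_error(x, g_xhat, m_x):
    """Mean squared error between correct image x and restored image
    g_xhat, evaluated only where the second mask m_x is 1, i.e. the
    shielded portion 5b of the correct image is excluded."""
    diff = m_x * (x - g_xhat)
    return float(np.sum(diff ** 2) / np.sum(m_x))

x      = np.array([1.0, 2.0, 3.0, 4.0])  # correct image 5T (flattened)
g_xhat = np.array([1.0, 5.0, 3.0, 6.0])  # restored image 5R
m_x    = np.array([1.0, 1.0, 1.0, 0.0])  # 0 marks a shielded pixel
err = masked_reconstruction_error(x, g_xhat, m_x)
```

The last pixel differs by 2 but is shielded in the correct image, so it does not contribute; the error comes only from the second pixel.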
  • the present embodiment solves this problem by combining the learning method based on the reconstruction error with the learning method based on the adversarial error.
  • learning based on the adversarial error is generally called a "GAN (Generative Adversarial Network)".
  • the second mask processing unit 207 generates a fake image 5F by deleting the pixels in the restored image 5R at the same positions as the shielded portion 5b of the correct image 5T from which the restored image 5R originates (#705).
  • the fake image 5F is generated by masking the restored image 5R with the second mask filter 5U of the original correct image 5T according to the following equation (5).
  • Zn is a vector indicating the gradation value of each pixel of the n-th fake image 5F.
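Equation (5), which masks the restored image with the second mask filter 5U to obtain the fake image 5F, can be illustrated as follows (a NumPy sketch with hypothetical names):

```python
import numpy as np

def make_fake_image(restored, m_x):
    """Equation (5): z = m_x * G(x_hat). Pixels at the positions of the
    shielded portion 5b of the original correct image are deleted
    (set to 0) from the restored image 5R, yielding the fake image 5F."""
    return m_x * restored

restored = np.array([12.0, 34.0, 56.0, 78.0])  # restored image 5R
m_x      = np.array([1.0, 0.0, 1.0, 1.0])      # second mask filter 5U
fake = make_fake_image(restored, m_x)          # fake image 5F
```

The fake image therefore has exactly the same missing pattern as the correct image it will be compared against.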
  • the discriminator 208 discriminates each of the N correct images 5T and the N fake images 5F as real or fake by a known algorithm (#706). That is, by substituting the vector indicating the gradation value of each pixel of the correct image 5T or the fake image 5F into the discriminate function D, which employs a known algorithm (for example, the algorithm described in "Generative Adversarial Nets", Ian J. Goodfellow et al., NIPS 2014), the image is discriminated as real or fake. For example, when discriminating the k-th correct image 5T, Xk is substituted; when discriminating the k-th fake image 5F, Zk is substituted. In the present embodiment, the output value 6A of the discriminate function D is "1" when the image is discriminated as real and "0" when it is discriminated as fake.
  • the first learning unit 211 adjusts each parameter of the generate function G so that the value of the following equation (6) is minimized (# 707).
  • αLrec + (1 - α)Ladv ... (6) Here, "α" is a weight parameter, and 0 ≤ α ≤ 1. "Ladv" is calculated by the following equation (7).
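The weighted combination of equation (6) can be illustrated as follows (a Python sketch; the variable names and example values are hypothetical):

```python
def combined_generator_loss(l_rec, l_adv, alpha):
    """Equation (6): the generator is trained to minimise a weighted
    sum of the reconstruction error and the adversarial error,
    with weight parameter 0 <= alpha <= 1."""
    assert 0.0 <= alpha <= 1.0
    return alpha * l_rec + (1.0 - alpha) * l_adv

loss = combined_generator_loss(l_rec=0.8, l_adv=0.4, alpha=0.75)
```

A larger α emphasises faithful reconstruction; a smaller α emphasises fooling the discriminator.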
  • the second learning unit 212 adjusts each parameter of the discriminate function D so that the value of the following equation (8) is minimized (# 708).
  • the correct answer data selecting unit 203 to the second learning unit 212 repeatedly execute the processes of #701 to #708 until the correct answer rate Rt of the discriminator 208 approaches 50%. That is, the above-described processing is repeatedly performed until the absolute value of (Rt - 0.5) becomes equal to or less than a predetermined value (for example, "0.03").
  • the values of the parameters of the generate function G at that time are stored in the learned model storage unit 221 as the learned model 7.
  • the generate function G may be stored in the learned model storage unit 221.
  • the correct answer rate Rt may be calculated based on the results of a predetermined number of the most recent discriminations. However, if the predetermined number is too small, the correct answer rate Rt may accidentally approach 50%; it is therefore desirable to set the predetermined number sufficiently large (for example, at least 10).
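The stopping criterion, |Rt − 0.5| less than or equal to a predetermined value evaluated over a window of recent discriminations, might be sketched as follows (illustrative only; the window size and tolerance are example values, not from the disclosure):

```python
from collections import deque

def should_stop(recent_results, window, tol):
    """Stop training when the discriminator's correct answer rate Rt
    over the most recent `window` judgements satisfies |Rt - 0.5| <= tol.
    Each entry is 1 (judged correctly) or 0 (fooled)."""
    if len(recent_results) < window:
        return False          # too few samples: Rt would be unreliable
    rt = sum(recent_results) / len(recent_results)
    return abs(rt - 0.5) <= tol

# 6 correct judgements out of the last 10 -> Rt = 0.6
history = deque([1, 0, 1, 0, 1, 0, 1, 0, 1, 1], maxlen=10)
stop = should_stop(history, window=10, tol=0.1)
```

Requiring a minimum window guards against the accidental 50% mentioned above.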
  • FIG. 9 is a diagram illustrating an example of the restored sea surface temperature map 4S.
  • the estimator 3 restores the sea surface temperature map 4 as follows.
  • the sea surface temperature estimating unit 301 divides the sea surface temperature map 4 into patches 5, one per area 8B (see FIG. 5), and substitutes each patch 5 into the generate function Gres of the following equation (9), thereby generating a repair patch 5S for each patch 5: Gres(X(h, v)) = M(h, v) ⊙ X(h, v) + (1 - M(h, v)) ⊙ G(X(h, v)) ... (9), where ⊙ denotes the element-wise product.
  • Gres (X (h, v) ) is a vector indicating the tone value of each pixel of the repair patch 5S in the h-th and v-th area 8B from the left.
  • X (h, v) is a vector indicating the gradation value of each pixel of the patch 5 in the h-th and v-th area 8B from the left.
  • G (X (h, v) ) is obtained by substituting X (h, v) into the generate function G.
  • "(1 - M(h, v))" is a vector representing each pixel of the shielding portion 5b of the patch 5 in the h-th and v-th area 8B from the left by "1", and each pixel of the other portions by "0".
  • "M(h, v)" is a vector representing each pixel of the shielding portion 5b of the patch 5 in the h-th and v-th area 8B from the left by "0", and each pixel of the other portions by "1".
  • in this manner, the image of the occluded portion 5b in the patch 5 is restored by the generate function G and the learned model 7, and the repair patch 5S is generated by combining the restored image with the remaining portion (that is, the portion that is not lost) of the original image.
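The composition of the repair patch 5S, keeping unshielded pixels from the original patch and taking shielded pixels from the generator output, can be illustrated as follows (a NumPy sketch with hypothetical names):

```python
import numpy as np

def restore_patch(x, g_x, m):
    """The repair patch 5S keeps the original pixels where m == 1
    (unshielded) and takes the generator's output where m == 0
    (the shielded portion 5b): Gres(x) = m*x + (1 - m)*G(x)."""
    return m * x + (1.0 - m) * g_x

x   = np.array([20.0, 0.0, 30.0])   # patch 5: middle pixel shielded
g_x = np.array([21.0, 25.0, 29.0])  # generator output G(x)
m   = np.array([1.0, 0.0, 1.0])     # 0 marks the shielded pixel
patch_5s = restore_patch(x, g_x, m)
```

Only the shielded pixel is replaced by the generator's estimate; measured pixels pass through unchanged.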
  • the patch merging unit 302 generates the repaired sea surface temperature map 4S as shown in FIG. 9 by arranging and merging the repair patches 5S of each area 8B based on their respective positions.
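The merging of repair patches 5S into the repaired sea surface temperature map 4S based on their positions can be sketched as follows (illustrative Python only; the function and parameter names are assumptions):

```python
import numpy as np

def merge_patches(patches, ha, va, ph, pw):
    """Arrange the repair patches 5S on a (va*ph) x (ha*pw) canvas
    according to their area positions (hb, vb) to form the repaired
    sea surface temperature map 4S."""
    canvas = np.zeros((va * ph, ha * pw))
    for (hb, vb), patch in patches.items():
        canvas[vb * ph:(vb + 1) * ph, hb * pw:(hb + 1) * pw] = patch
    return canvas

patches = {(0, 0): np.full((2, 2), 1.0), (1, 0): np.full((2, 2), 2.0),
           (0, 1): np.full((2, 2), 3.0), (1, 1): np.full((2, 2), 4.0)}
map_4s = merge_patches(patches, ha=2, va=2, ph=2, pw=2)
```

Each patch lands back in the grid cell it was originally extracted from, reassembling the full map.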
  • the restored sea surface temperature map 4S is displayed on the liquid crystal display 14, transmitted to another device by the communication interface 15, or stored in the auxiliary storage device 13.
  • FIG. 10 is a flowchart illustrating an example of the flow of processing by the map generation program 18.
  • the sea surface temperature map providing device 1 executes the process of the procedure shown in FIG.
  • the sea surface temperature map providing device 1 divides each of these sea surface temperature maps 4 into patches 5, one per area 8B, and stores them as learning data (#101 in FIG. 10).
  • the sea surface temperature map providing device 1 randomly selects N correct images 5T regardless of the area 8B and the date and time (# 102).
  • the input image 5M is generated by randomly applying the first mask filter 5W one by one for each of the selected correct images 5T (# 103).
  • a restored image 5R is generated by restoring each input image 5M by the generator 206 (# 104).
  • the sea surface temperature map providing device 1 masks each repaired image 5R using the second mask filter 5U representing the shielded portion 5b of the original correct image 5T (# 105). That is, the pixel located at the same position as the shielding portion 5b is made white. Thereby, N fake images 5F are obtained. The authenticity of each of the N correct images 5T and the N fake images 5F is determined by the discriminator 208 (# 106).
  • the sea surface temperature map providing apparatus 1 trains the generator 206 by adjusting each parameter of the generate function G so that the value of equation (6) is minimized (#107). Further, it trains the discriminator 208 by adjusting each parameter of the discriminate function D so that the value of equation (8) is minimized (#108).
  • the sea surface temperature map providing device 1 repeatedly executes the processes of steps # 102 to # 108.
  • the sea surface temperature map providing device 1 stores the current values of the parameters of the generate function G as the learned model 7 (# 110).
  • the sea surface temperature map providing device 1 uses the generate function G and the learned model 7 to patch the patches 5 in each area 8B of the sea surface temperature map 4. Repair (# 112).
  • the restoration patch 5S is generated by synthesizing the portion of the shielded portion 5b in the restored image and the portion other than the shielded portion 5b in the original image (# 113).
  • a repaired sea surface temperature map 4S is generated by arranging and merging the repaired patches 5S of each area 8B based on their respective positions (# 114), and is output or stored (# 115).
  • the correct image 5T including the missing part can be used as learning data. Therefore, even a learning device that employs a learning-based algorithm, which normally requires a large number of loss-free learning data, can generate a learned model for inpainting more easily than before. Furthermore, by combining learning based on the reconstruction error with learning based on the adversarial error, the learned model can be adjusted so as to obtain a restoration result with higher clarity than before while maintaining real-time performance.
  • FIG. 11 is a diagram for explaining an example of a comparative experiment.
  • the sea surface temperature map providing device 1 acquires 500 images each showing the distribution of the sea surface temperature at one specific time every day for 500 days in the area 8A as the sea surface temperature map 4 (see FIG. 4).
  • the sea surface temperature map providing device 1 extracts patches of 64 × 64 pixels from each sea surface temperature map 4 as patches 5 and uses them as learning data. If the size of the sea surface temperature map 4 is about 5000 × 6000 dots, about 7,300 patches 5 are extracted from each sea surface temperature map 4.
  • the sea surface temperature map providing device 1 generates the learned model 7 by executing the above-described processing.
  • when the learned model 7 is used, a missing portion is restored as in the restored image 5E5. The restored image 5E5 has reduced smoothness and higher clarity than the restored image 5E3, and reproduces the correct image more accurately than the restored image 5E4.
  • FIG. 12 is a diagram illustrating an example of three-dimensional data used as learning data.
  • FIG. 13 is a diagram for explaining an example of a comparative experiment.
  • a learned model is generated by using plane data, that is, two-dimensional data, as learning data.
  • alternatively, a learned model may be generated by using three-dimensional data in which patches 5 for k consecutive days (for example, three days) in the same area 8B are arranged in time series, as shown in FIG. 12.
  • in this case, the sea surface temperature maps 4 for the latest (k - 1) days are input to the sea surface temperature map providing device 1 in addition to the sea surface temperature map 4 to be restored.
  • using this learned model makes it possible to restore the sea surface temperature map 4 more reliably than with a learned model based on two-dimensional data, even when most of the map is missing.
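Arranging patches of the same area 8B for k consecutive days in time series, as described above, can be sketched as follows (illustrative only; the array layout and names are assumptions):

```python
import numpy as np

def stack_time_series(daily_patches, k):
    """Stack the patches of one area 8B for the k most recent
    consecutive days into a single (k, H, W) array, ordered oldest
    to newest, for use as three-dimensional learning data."""
    assert len(daily_patches) >= k
    return np.stack(daily_patches[-k:], axis=0)

# Five days of 4x4 patches for one area 8B (toy values 0..4)
days = [np.full((4, 4), float(d)) for d in range(5)]
sample = stack_time_series(days, k=3)  # the last three days
```

The resulting 3-D sample gives the model temporal context in addition to the spatial pattern of a single day.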
  • the restored image 5P3 is restored using a learned model generated only by learning based on the reconstruction error, and the restored image 5P4 is restored using a learned model generated only by learning based on the adversarial error.
  • the sea surface temperature map providing device 1 selects the patch 5 used as the correct image 5T regardless of the area 8B, and generates the learned model 7 common to all the areas 8B.
  • alternatively, the patch 5 used as the correct answer image 5T may be selected for each area 8B, and the learned model 7 may be generated for each area 8B.
  • FIG. 14 is a diagram illustrating an example of a method of acquiring the assimilation image 5L.
  • FIG. 15 is a diagram illustrating a modification of the learning procedure.
  • FIG. 16 is a diagram illustrating a modification of the learning procedure.
  • in step #706, the discriminator 208 discriminates each of the correct images 5T and the fake images 5F as real or fake.
  • the assimilated image 5L may be used instead of the correct answer image 5T. That is, each of the assimilated images 5L and the fake images 5F may be discriminated as real or fake.
  • the sea surface temperature can be estimated by incorporating measured data (actual observation values) of various items into the physical simulation for the sea surface temperature.
  • however, data assimilation requires an enormous amount of data and a considerable amount of computation time, and therefore lacks real-time performance. Thus, if data assimilation were applied to the inference phase, it would take considerable time to obtain the inference result.
  • the assimilation image 5L is an image generated by data assimilation, and is prepared, for example, as follows.
  • the learning data generation unit 201 of the learning device 2 obtains past estimation results from an existing data assimilation system that estimates sea surface temperatures by data assimilation.
  • the data assimilation maps 4L1, 4L2, 4L3, ... corresponding to the sea surface temperature maps 41, 42, 43, ... are input to the sea surface temperature map providing device 1 via a communication line or a recording medium.
  • the data assimilation map 4L is an image that estimates the temperature of the sea surface in the area 8A by data assimilation and shows the estimated temperature distribution. Unlike the sea surface temperature map 4, the data assimilation map 4L does not have a clouded portion.
  • the data assimilation map 4L can be generated by a known method, for example, the method described in Norihisa Usui et al., "Four-dimensional variational ocean reanalysis: a 30-year high-resolution dataset in the western North Pacific (FORA-WNP30)", Journal of Oceanography, Vol. 73, pp. 205-233, 2017.
  • Each data assimilation map 4L is assigned a sequence number "1", "2", "3", and so on. It is assumed that the data assimilation map 4L is corrected to the same resolution and gradation as the sea surface temperature map 4 before being input to the sea surface temperature map providing device 1. Alternatively, the sea surface temperature map 4 may be corrected to the same resolution and gradation as the data assimilation map 4L.
  • the learning data generation unit 201 extracts the image of each area 8B of each data assimilation map 4L as an assimilated image 5L. Then, each assimilated image 5L is stored in the assimilated image storage unit 209 in association with the position of the area 8B corresponding to the assimilated image 5L and the sequence number of the original data assimilation map 4L. For example, the assimilated image 5L of the Hb-th and Vb-th area 8B from the left of the data assimilation map 4L with sequence number "d" is stored in the unit 209 in association with {(Hb, Vb), d}.
  • the assimilation image 5L includes at least one of a non-shielded portion and a land portion. As described above, since the clouded portion is not included in the data assimilation map 4L, the assimilated image 5L does not include the occluded portion.
  • in this modification, the correct answer data selection unit 203 to the second learning unit 212 generate a learned model according to the procedure shown in FIG. 15.
  • Steps # 721 to # 724 are the same as steps # 701 to # 704 in FIG. The process corresponding to step # 705 is skipped.
  • The discriminator 208 discriminates each of the N assimilated images 5L and the N restored images 5R as either genuine or fake using a known algorithm (#726). That is, the discrimination is performed by substituting a vector indicating the gradation value of each pixel of the assimilated image 5L or the restored image 5R into the discriminant function D. In the process of step #726, as in the process of step #706, the discriminant function D outputs "1" as the output value 6A when the image is discriminated as genuine, and "0" when it is discriminated as fake.
  • The discriminator 208 discriminates each of the correct answer image 5T and the fake image 5F as either genuine or fake.
  • The assimilated image 5L and the restored image 5R are both discriminated as genuine or fake without being masked by the second mask filter 5U. According to this method, not only the portions not hidden by clouds (the unshielded portions) but also the portions hidden by clouds (the shielded portions) can be used for discrimination.
  • Steps # 727 to # 728 are the same as steps # 707 to # 708. However, the output value 6A obtained in step # 726 is used instead of the output value 6A obtained in step # 706.
  • Steps #721 to #728 are repeatedly executed until the correct answer rate Rt of the discriminator 208 approaches 50%, that is, until the absolute value of (Rt - 0.5) becomes equal to or less than a predetermined value (for example, "0.03").
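The stopping criterion above can be sketched as follows. `train_step` is a hypothetical stand-in for one pass of steps #721 to #728 that returns the discriminator's correct answer rate Rt; the threshold 0.03 is the example value given in the text.

```python
def train_until_balanced(train_step, threshold=0.03, max_iters=1000):
    # Repeat one pass of steps #721 to #728 until the discriminator's correct
    # answer rate Rt satisfies |Rt - 0.5| <= threshold, i.e. it approaches 50%.
    for i in range(1, max_iters + 1):
        rt = train_step()
        if abs(rt - 0.5) <= threshold:
            return i, rt
    return max_iters, rt

# Toy stand-in: the discriminator starts nearly perfect and drifts toward
# chance as the generate function G improves.
rates = iter([0.95, 0.80, 0.61, 0.52])
iters, final_rt = train_until_balanced(lambda: next(rates))
```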
  • the values of the parameters of the generate function G at that time are stored in the learned model storage unit 221 as the learned model 7.
  • the generate function G may be stored in the learned model storage unit 221.
  • Alternatively, a learned model can be generated according to the procedure shown in FIG. That is, as in the example illustrated in FIG. 7, the second mask processing unit 207 generates the fake image 5F by masking the restored image 5R with the second mask filter 5U (#725). The third mask processing unit 210 generates a masked assimilated image 5H by masking the assimilated image 5L with the second mask filter 5U. Then, the discriminator 208 discriminates each of the fake image 5F and the masked assimilated image 5H as either genuine or fake.
  • processing after step # 727 is the same as the processing after step # 707 in FIG.
  • the method of generating a learned model by using three-dimensional data can be applied to the second modification.
  • each function shown in FIG. 3 is realized by executing the map generation program 18 by the processor 10.
  • all or some of the functions may be realized by a hardware module such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit).
  • both the learning device 2 and the estimator 3 are provided in the sea surface temperature map providing device 1, but they may be provided in separate devices.
  • For example, the learned model 7 is generated by the learning device 2 and distributed to one or a plurality of estimating devices via a communication line or a portable recording medium. Each estimating device then restores the input sea surface temperature map 4 using the learned model 7.
  • the generator 206, the discriminator 208, the first learning unit 211, and the second learning unit 212 may be provided in a cloud server.
  • The first mask processing unit 205 transmits the input image 5M to the cloud server and instructs the cloud server to repair the input image 5M.
  • The cloud server generates the restored image 5R by repairing the input image 5M, in the same manner as the generator 206. It then transmits the restored image 5R to the sea surface temperature map providing device 1.
  • Upon receiving the restored image 5R, the second mask processing unit 207 generates a fake image 5F by masking the restored image 5R with the second mask filter 5U. The correct image 5T and the fake image 5F are then transmitted to the cloud server, and the cloud server is instructed to perform machine learning.
  • The cloud server discriminates between the fake image 5F and the correct image 5T, in the same manner as the discriminator 208. Then, in the same manner as the first learning unit 211, the generate function G is adjusted based on equation (6), and, in the same manner as the second learning unit 212, the discriminant function D is adjusted based on equation (8).
  • The first learning unit 211 adjusts the parameters of the generate function G by machine learning combining the reconstruction error and the adversarial error.
  • the adjustment may be performed by machine learning using only the reconstruction error. That is, the parameters of the generate function G may be adjusted so that the expression (3) is minimized.
  • the sea surface temperature map providing device 1 is used to repair a partially missing sea surface temperature map, but can be used to repair other data.
  • it may be used to restore a depth image obtained by a depth camera or sound transmitted via an unstable communication line.
  • it may be used to restore cosmic ray observation data.
  • the configuration of the whole or each part of the sea surface temperature map providing device 1, the content of the processing, the order of the processing, and the like can be appropriately changed according to the gist of the present invention.


Abstract

The present invention makes it possible to acquire correct data and restore lost data more easily than before while maintaining the real-time characteristics of conventional restoration methods. A sea surface temperature map provision device generates an input image 5M by causing further loss in a correct image 5T having a portion that was originally lost (#703). A restored image 5R is generated by restoring the input image 5M using a generate function G (#704). The generate function G is adjusted on the basis of the portions of the correct image 5T and the restored image 5R other than their respective lost portions (#707).

Description

Restoration function adjustment system, data restoration device, restoration function adjustment method, restoration function generation method, and computer program
The present invention relates to a technique for adjusting a function for repairing a missing part of data.
Conventionally, techniques have been proposed for measuring the temperature of the sea surface with an artificial satellite such as a meteorological satellite and generating an image representing the temperature distribution. Such an image may be called a "sea surface temperature map", a "sea surface temperature distribution map", a "sea surface temperature satellite image", or the like.
Sea surface temperature maps are used in weather forecasting and in a variety of marine-related industries such as fisheries and marine transport. Particularly in fisheries and marine transport, sea surface temperature maps with both high real-time performance and high clarity are required.
A satellite measures the temperature of the sea surface by detecting infrared radiation emitted from it. However, with this method, infrared rays cannot be detected from portions of the sea surface covered by clouds, so the temperature of those portions cannot be measured.
It is therefore conceivable to complete the sea surface temperature map by estimating the temperature of such portions. Methods for estimating the temperature include data assimilation (Non-Patent Document 1) and inpainting (Non-Patent Document 2).
Data assimilation is often used in the meteorological and oceanographic fields. However, data assimilation requires a huge amount of data and the calculation takes considerable time, so it lacks real-time performance.
Inpainting requires less computation than data assimilation. However, inpainting requires complete correct data.
However, since the sea surface is often covered with clouds, the data obtained by satellite measurement as described above is often unsuitable as correct data. Collecting correct data therefore takes a long time.
In view of these problems, an object of the present invention is to make it possible to collect correct data and repair missing data more easily than before, while maintaining the real-time performance of conventional repair methods such as inpainting.
A repair function adjustment system according to one aspect of the present invention adjusts a function used for repairing data having a missing portion, and includes: deletion means for generating second data by further deleting part of first data having an originally missing portion; repair means for generating third data by repairing the second data with the function; and adjustment means for adjusting the function based on the portions of the first data and the third data other than their respective missing portions.
Preferably, the system includes selection means for selecting the first data from a plurality of data stored in storage means, and the deletion means generates the second data by deleting part of the first data selected by the selection means.
Further, the selection means may select the first data at random, and the deletion means may generate the second data by deleting a randomly selected part of the first data (Supplementary Note 1).
According to the present invention, correct data can be collected and missing data can be repaired more easily than before, while maintaining the real-time performance of conventional repair methods such as inpainting.
Further, according to the inventions of claims 2, 13, 15, 17, and 19, the function can be adjusted so as to obtain repair results with higher clarity than before while maintaining the real-time performance of conventional repair methods.
FIG. 1 is a diagram illustrating an example of a network system including a sea surface temperature map providing device.
FIG. 2 is a diagram illustrating an example of the hardware configuration of the sea surface temperature map providing device.
FIG. 3 is a diagram illustrating an example of the configuration of a map generation program.
FIG. 4 is a diagram illustrating an example of a sea surface temperature map.
FIG. 5 is a diagram illustrating an example of the positional relationship between a region and its areas.
FIG. 6 is a diagram illustrating an example of patches.
FIG. 7 is a diagram illustrating an example of the learning procedure.
FIG. 8 is a diagram illustrating an example of a first mask filter.
FIG. 9 is a diagram illustrating an example of a repaired sea surface temperature map.
FIG. 10 is a flowchart explaining an example of the flow of processing by the map generation program.
FIG. 11 is a diagram illustrating an example of a comparative experiment.
FIG. 12 is a diagram illustrating an example of three-dimensional data used as learning data.
FIG. 13 is a diagram for explaining an example of a comparative experiment.
FIG. 14 is a diagram illustrating an example of a method of acquiring the assimilated image 5L.
FIG. 15 is a diagram illustrating a modification of the learning procedure.
FIG. 16 is a diagram illustrating a modification of the learning procedure.
FIG. 1 is a diagram showing an example of a network system including the sea surface temperature map providing device 1. FIG. 2 is a diagram illustrating an example of the hardware configuration of the sea surface temperature map providing device 1. FIG. 3 is a diagram illustrating an example of the configuration of the map generation program 18.
The sea surface temperature map providing device 1 shown in FIG. 1 is a device that provides a sea surface temperature map, which is an image representing the distribution of sea surface temperature, after repairing (restoring, estimating) portions whose temperature is unknown. A desktop computer, a notebook computer, a server, or the like may be used as the sea surface temperature map providing device 1. Hereinafter, a case where a notebook computer is used will be described as an example.
As shown in FIG. 2, the sea surface temperature map providing device 1 includes a processor 10, a RAM (Random Access Memory) 11, a ROM (Read Only Memory) 12, an auxiliary storage device 13, a liquid crystal display 14, a communication interface 15, a keyboard 16, a pointing device 17, and the like.
The ROM 12 or the auxiliary storage device 13 stores a map generation program 18 (see FIG. 3) for providing a sea surface temperature map. The map generation program 18 is loaded into the RAM 11.
The processor 10 is a processor such as a GPU (Graphics Processing Unit), an MPU (Micro Processing Unit), or a CPU (Central Processing Unit), and executes the map generation program 18 loaded into the RAM 11. Note that the processor 10, the RAM 11, and the ROM 12 may be configured as a single chip.
The liquid crystal display 14 displays a screen for inputting commands or data, a generated sea surface temperature map, and the like.
The communication interface 15 is a device for communicating with other devices via a LAN (Local Area Network), the Internet, or the like. A wireless LAN card, an NIC (Network Interface Card), or the like is used as the communication interface 15.
The keyboard 16 and the pointing device 17 are input devices for an operator to input commands, data, and the like.
As described above, a satellite can measure the temperature of the portions of the sea surface not covered by clouds, but cannot measure the temperature of the portions covered by clouds. Conventionally, data assimilation or inpainting has therefore been used to estimate the temperature of such portions.
However, data assimilation requires a huge amount of data and lacks real-time performance, while inpainting lacks clarity.
The map generation program 18, however, can improve the clarity of inpainting over conventional methods while maintaining the real-time performance of inpainting. This mechanism is described below. In the following, a portion of the sea surface covered by clouds is referred to as a "shielded portion", and a portion not covered by clouds is referred to as an "unshielded portion". Portions other than the ocean (mainly land) are referred to as "land portions".
The map generation program 18 is a program for realizing the learning device 2, the estimator 3, and the like shown in FIG. 3.
The map generation program 18 includes, as software modules for realizing the learning device 2, a learning data generation unit 201, a learning data storage unit 202, a correct data selection unit 203, a mask storage unit 204, a first mask processing unit 205, a generator 206, a second mask processing unit 207, a discriminator 208, a first learning unit 211, a second learning unit 212, and the like. It further includes, as software modules for realizing the estimator 3, a sea surface temperature estimation unit 301, a patch merging unit 302, and the like. In addition, a learned model storage unit 221 is included.
The learning device 2 learns a criterion for estimating the sea surface temperature of shielded portions (hereinafter referred to as the "estimation criterion") by analyzing past sea surface temperature maps. The estimator 3 estimates the sea surface temperature of shielded portions based on the estimation criterion learned by the learning device 2. The details of the learning device 2 and the estimator 3 are described in turn below.
[Mechanism of the learning device 2]
[Preparation of learning data]
FIG. 4 is a diagram illustrating an example of the sea surface temperature map 4. FIG. 5 is a diagram illustrating an example of the positional relationship between the area 8A and the areas 8B. FIG. 6 is a diagram illustrating an example of the patches 5.
A plurality of sea surface temperature maps 4 (41, 42, 43, ...), each representing the distribution of sea surface temperature in a specific area 8A at a different date and time as shown in FIG. 4, are input to the sea surface temperature map providing device 1. A sea surface temperature map 4 is generated by detecting, with a satellite, infrared radiation emitted from the sea surface of the area 8A, measuring the sea surface temperature of the area 8A, and representing the temperature distribution as an image. As shown in FIG. 5, the area 8A is divided into (Ha × Va) areas 8B.
Each sea surface temperature map 4 is assigned a sequence number "1", "2", "3", and so on, in order from the oldest date and time.
In the present embodiment, the gradation value of a pixel of the sea surface temperature map 4 is smaller the brighter the pixel is, and larger the darker it is.
The sea surface temperature map 4 is a grayscale or color image. When the sea surface temperature map 4 is a 256-gradation grayscale image, the pure white portions are the shielded portions, and the gradation value of each pixel in a shielded portion is "0". The pure black portions are the land portions, and the gradation value of each pixel in a land portion is "255". Portions that are neither shielded portions nor land portions are the unshielded portions.
When the sea surface temperature map 4 is a 256-gradation color image, the pure white portions are the shielded portions, and the red, green, and blue gradation values of each pixel in a shielded portion are all "0". The pure black portions are the land portions, and the red, green, and blue gradation values of each pixel in a land portion are all "255". Portions that are neither shielded portions nor land portions, that is, the colored portions, are the unshielded portions.
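The pixel conventions above can be sketched as follows; this is a minimal illustration, and the function names are hypothetical.

```python
def classify_pixel_gray(v):
    # 256-gradation grayscale convention of the sea surface temperature map 4:
    # pure white (0) = shielded (cloud), pure black (255) = land, otherwise sea.
    if v == 0:
        return "shielded"
    if v == 255:
        return "land"
    return "unshielded"

def classify_pixel_rgb(r, g, b):
    # The same convention for a 256-gradation color image.
    if (r, g, b) == (0, 0, 0):
        return "shielded"
    if (r, g, b) == (255, 255, 255):
        return "land"
    return "unshielded"
```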
The learning data generation unit 201 generates learning data as follows. It extracts the image of each area 8B of each sea surface temperature map 4, as shown in FIG. 6, as a patch 5. It then stores each patch 5 in the learning data storage unit 202 in association with the position of the area 8B corresponding to that patch 5 and the sequence number of the original sea surface temperature map 4. Each patch 5 is used as learning data.
According to the processing of the learning data generation unit 201, for example, the patch 5 of the Hb-th area from the left and the Vb-th area from the top of the sea surface temperature map 4 with sequence number "d" is stored in the learning data storage unit 202 in association with {(Hb, Vb), d}.
A patch 5 includes at least one of an unshielded portion 5a, a shielded portion 5b, and a land portion 5c. For example, the patch 51 includes, of the three types of portions, the unshielded portion 5a and the shielded portion 5b. The patch 52 includes only the unshielded portion 5a. The patch 53 includes all three types of portions. Hereinafter, a case where all the patches 5 have the same area Sg will be described as an example.
Note that, in FIG. 6, black contour lines are drawn to make the boundaries between the types of portions in the patches 5 easy to see. In reality, such contour lines do not appear in the patches 5. The same applies to the images shown later in FIG. 7 and FIG. 8.
[Learning algorithm]
FIG. 7 is a diagram illustrating an example of the learning procedure. FIG. 8 is a diagram illustrating an example of the first mask filters 5W.
The correct data selection unit 203 through the second learning unit 212 generate a learned model using two indices, the reconstruction error and the adversarial error, according to the procedure shown in FIG. 7.
A plurality of first mask filters 5W having different patterns, as shown in FIG. 8, are stored in advance in the mask storage unit 204. A first mask filter 5W is a filter having the same size (number of pixels) as the correct image 5T, and the value of each of its pixels is either "0" or "1". The hatched portions of the first mask filter 5W are portions to be deleted, and the value of each of their pixels is "0". The other portions are portions to be kept, and the value of each of their pixels is "1". The ratio of the hatched portions to the entire first mask filter 5W is approximately 10 to 30%.
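One way to realize such binary mask filters is sketched below. This assumes a single rectangular hatch covering roughly the target fraction of pixels, which is only one of many possible patterns; the function name is hypothetical.

```python
import random

def make_first_mask(h, w, hatch_ratio, rng):
    # Build a binary mask the size of the correct image 5T: pixels valued 0
    # are to be deleted, pixels valued 1 are to be kept. Here the hatch is a
    # single randomly placed square covering roughly hatch_ratio of the pixels.
    mask = [[1] * w for _ in range(h)]
    bh = max(1, round(h * hatch_ratio ** 0.5))
    bw = max(1, round(w * hatch_ratio ** 0.5))
    top = rng.randrange(h - bh + 1)
    left = rng.randrange(w - bw + 1)
    for r in range(top, top + bh):
        for c in range(left, left + bw):
            mask[r][c] = 0
    return mask

mask = make_first_mask(16, 16, 0.25, random.Random(0))
deleted = sum(row.count(0) for row in mask)  # 64 of 256 pixels, i.e. 25%
```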
The correct data selection unit 203 searches the patches 5 stored in the learning data storage unit 202 for those in which the ratio of the area of the shielded portion to the total area Sg is less than a predetermined ratio Rs. That is, if the area of the shielded portion is Sb, it randomly searches for N patches 5 satisfying Sb / Sg < Rs, regardless of area 8B, date, or time. For example, N = 30.
A patch 5 may include an image of a land portion. The correct data selection unit 203 may therefore search for patches in which the ratio of the area of the shielded portion 5b to the total area excluding the land portion is less than the predetermined ratio Rs. That is, if the area of the land portion is Sc, it may search for patches 5 satisfying Sb / (Sg - Sc) < Rs. Patches 5, such as the patch 53, that include only the land portion 5c among the three types of portions may be excluded from the search.
The correct data selection unit 203 then selects the found patches 5, that is, the patches 5 in which the ratio of the area of the shielded portion to the total area is less than the predetermined ratio Rs, as correct images 5T (#701). The correct images 5T are used as correct data.
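The selection criterion can be sketched as follows; Rs = 0.3 is an assumed example value, since the embodiment only states that Rs is predetermined.

```python
def is_correct_candidate(sg, sb, sc=0.0, rs=0.3):
    # A patch qualifies as a correct image 5T when the shielded area Sb,
    # relative to the sea area (Sg, or Sg - Sc when the land area Sc is
    # excluded), is below the ratio Rs. Land-only patches are excluded.
    sea = sg - sc
    if sea <= 0:
        return False
    return sb / sea < rs

# With Sg = 100: 20% shielded qualifies; 35% does not; 20 shielded with
# 50 land pixels (40% of the sea) does not; a land-only patch never does.
```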
The first mask processing unit 205 randomly selects one first mask filter 5W for each correct image 5T (#702), and generates an input image 5M by masking the correct image 5T with the selected first mask filter 5W (#703). For example, when the first mask filter 5W3 is selected for the correct image 5T1, the input image 5M is generated by masking the correct image 5T1 with the first mask filter 5W3.
Specifically, the first mask processing unit 205 generates the input image 5M according to the following equation (1).
$\hat{X}_n = X_n \odot M_{rand} \qquad (1)$
where $\odot$ denotes the element-wise product.
Here, $X_n$ is a vector indicating the gradation value of each pixel of the n-th correct image 5T. $\hat{X}_n$ is a vector indicating the gradation value of each pixel of the input image 5M obtained by masking the n-th correct image 5T with the first mask filter 5W. $M_{rand}$ is a vector indicating the binary value of each pixel of the first mask filter 5W applied to the n-th correct image 5T.
Instead of preparing the first mask filters 5W in advance, an image with a random pattern may be generated as the first mask filter 5W, one for each correct image 5T.
According to the processing of the first mask processing unit 205, if the correct image 5T is a 256-gradation grayscale image, the gradation value of each pixel of the correct image 5T that overlaps a hatched portion of the first mask filter 5W is rewritten to "0", thereby generating the input image 5M. If the correct image 5T is a 256-gradation color image, the red, green, and blue gradation values of each such pixel are rewritten to "0", thereby generating the input image 5M.
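Equation (1) reduces to an element-wise multiplication of the image by the binary mask, as sketched below; this is a minimal grayscale illustration, and the function name is hypothetical.

```python
def apply_mask(image, mask):
    # Equation (1) as an element-wise product: pixels of the correct image 5T
    # falling on the hatched (0) part of the first mask filter 5W become 0,
    # producing the input image 5M; pixels on the kept (1) part are unchanged.
    return [[px * m for px, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

correct_image = [[10, 20], [30, 40]]   # tiny grayscale "correct image 5T"
mask_filter = [[1, 0], [1, 1]]         # tiny "first mask filter 5W"
input_image = apply_mask(correct_image, mask_filter)
```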
Hereinafter, the portions of the sea surface in the input image 5M that are covered by clouds or masked are referred to as the "missing portion 5d".
The generator 206 generates a restored image 5R by repairing the missing portion 5d in the input image 5M by inpainting (#704). A known algorithm such as deep learning is used as the inpainting algorithm. For example, the algorithm described in "Image denoising and inpainting with deep neural networks", Xie, J., Xu, L., and Chen, E., NIPS 2012, may be used.
That is, the restored image 5R is generated by substituting a vector indicating the gradation value of each pixel of the input image 5M into a generate function G that employs a known algorithm. For example, for the k-th input image 5M, it is generated by substituting $\hat{X}_k$. However, the land portion 5c need not be repaired. Alternatively, the land portion 5c is simply not used in the processing of steps #707, #706, and #708 described later.
Incidentally, according to a conventional machine learning algorithm, the values of the parameters of the generate function G are adjusted so that, for example, the average difference between the N pairs of correct images 5T and restored images 5R is minimized. That is, learning is performed so that the reconstruction error $L_{rec}$ of the following equation (2) is minimized. The reconstruction error is also commonly called the "MSE" or "mean squared error".
$L_{rec} = \frac{1}{N} \sum_{n=1}^{N} \left\| X_n - G(\hat{X}_n) \right\|^2 \qquad (2)$
Here, $\| A \|^2$ denotes the squared norm of $A$.
However, while the correct image 5T contains portions of the sea surface whose temperature is unknown (that is, the shielded portion 5b), the restored image 5R contains no such portions. It is not preferable to include such portions in the learning.
In view of this problem, in the present embodiment, learning is performed using the following equation (3), a modification of equation (2), so that such portions are ignored. The specific learning method is described later.
Lrec = (1/N) Σn=1..N || mn ⊙ (Xn - G(X^n)) ||²  … (3)
("⊙" denotes the pixel-wise product.)
"mn" is the second mask filter 5U, which indicates the shielded portion 5b in the n-th correct image 5T. Specifically, each pixel in the shielded portion 5b has the value "0", and each pixel elsewhere has the value "1". The binary value of the i-th pixel of mn can be expressed by the following equation (4).
mn,i = 0 (if the i-th pixel belongs to the shielded portion 5b),  mn,i = 1 (otherwise)  … (4)
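For illustration, the masked reconstruction error of equations (3) and (4) can be sketched as follows (a minimal NumPy sketch; the function name, array names, and the 4 × 4 toy patch are illustrative assumptions, not part of the embodiment):

```python
import numpy as np

def masked_rec_error(x_true, x_restored, mask):
    # mask follows equation (4): 1 at observed (non-shielded) pixels,
    # 0 at cloud-shielded pixels, so shielded pixels contribute nothing.
    diff = mask * (x_true - x_restored)
    return np.mean(diff ** 2)

# Toy 4x4 patch whose correct image has two cloud-shielded pixels.
x_true = np.arange(16.0).reshape(4, 4)
mask = np.ones((4, 4))
mask[0, 0] = mask[3, 3] = 0.0        # shielded portion 5b
x_restored = x_true.copy()
x_restored[0, 0] = 99.0              # disagreement under a cloud is ignored
assert masked_rec_error(x_true, x_restored, mask) == 0.0
```

Only the non-shielded pixels of the correct image 5T constrain the generator, which is the point of replacing equation (2) with equation (3).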
If learning is performed based on equation (3), a learned model can be obtained with less correct data than when learning by data assimilation. Even if this learned model is simply applied to the estimator 3, it can, to some extent, repair a sea surface temperature map in which part of the sea surface is covered by clouds. With this learned model, however, the repaired portion is overly smooth and does not meet the needs of fisheries, marine transport, and the like.
Therefore, the present embodiment solves this problem by combining the learning method based on the reconstruction error with a learning method based on an adversarial error. Learning based on an adversarial error is generally known as a "GAN" (Generative Adversarial Network).
The second mask processing unit 207 generates a fake image 5F by deleting, from the restored image 5R, the pixels at the same positions as the shielded portion 5b of the correct image 5T from which that restored image 5R was derived (#705). Specifically, the fake image 5F is generated by masking the restored image 5R with the second mask filter 5U of the original correct image 5T according to the following equation (5).
Zn = mn ⊙ G(X^n)  … (5)
("⊙" denotes the pixel-wise product.)
"Zn" is a vector indicating the gradation value of each pixel of the n-th fake image 5F.
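This masking step can be sketched as follows (NumPy; names are assumptions, and shielded pixels are blanked to zero here, whereas the embodiment describes making them pure white, which depends on the gradation convention):

```python
import numpy as np

def make_fake(restored, mask):
    # Equation (5)-style masking: re-apply the second mask filter 5U so the
    # fake image 5F carries the same occlusion pattern as the correct image.
    return mask * restored

restored = np.full((4, 4), 20.0)     # fully repaired patch, no gaps left
mask = np.ones((4, 4))
mask[2, :] = 0.0                     # cloud band in the original correct image
fake = make_fake(restored, mask)
assert (fake[2, :] == 0.0).all()     # blanked where the correct image is shielded
assert (fake[0, :] == 20.0).all()    # repaired values kept elsewhere
```

Because the fake image 5F and the correct image 5T then share the same occlusion pattern, the discriminator cannot tell them apart merely by the shape of the missing region.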
The discriminator 208 classifies each of the N correct images 5T and the N fake images 5F as either real or fake by a known algorithm (#706). That is, each correct image 5T or fake image 5F is classified as real or fake by substituting the vector indicating the gradation value of each of its pixels into the discriminate function D, which employs a known algorithm (for example, the algorithm described in "Generative adversarial network", Ian J. Goodfellow et al., NIPS 2014). For example, Xk is substituted to classify the k-th correct image 5T, and Zk is substituted to classify the k-th fake image 5F. In the present embodiment, the discriminate function D outputs "1" as the output value 6A when the image is classified as real, and "0" when it is classified as fake.
The first learning unit 211 adjusts each parameter of the generate function G so that the value of the following equation (6) is minimized (#707).
αLrec + (1 - α)Ladv  … (6)
"α" is a weight parameter, where 0 ≤ α ≤ 1. "Ladv" is the adversarial error, calculated by the following equation (7).
Ladv = -(1/N) Σn=1..N log D(Zn)  … (7)
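The weighting of equation (6) can be sketched as follows (function and variable names are assumptions; the two inputs stand in for the reconstruction error of equation (3) and the adversarial error of equation (7)):

```python
def generator_loss(l_rec, l_adv, alpha):
    # Equation (6): alpha trades reconstruction accuracy against sharpness.
    assert 0.0 <= alpha <= 1.0
    return alpha * l_rec + (1.0 - alpha) * l_adv

# alpha = 1 reduces to pure reconstruction learning (smooth output);
# alpha = 0 reduces to pure adversarial learning (sharp but less faithful).
assert generator_loss(2.0, 8.0, 1.0) == 2.0
assert generator_loss(2.0, 8.0, 0.0) == 8.0
assert generator_loss(2.0, 8.0, 0.5) == 5.0
```

Intermediate values of α blend the two behaviors, which is how the embodiment keeps the repaired region both faithful and sharp.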
After the learning by the first learning unit 211 is completed, the second learning unit 212 adjusts each parameter of the discriminate function D so that the value of the following equation (8) is minimized (#708).
LD = -(1/N) Σn=1..N [ log D(Xn) + log(1 - D(Zn)) ]  … (8)
The units from the correct data selection unit 203 through the second learning unit 212 repeatedly execute the processes of #701 to #708 until the correct answer rate Rt of the discriminator 208 approaches 50%. That is, the above processes are repeated until the absolute value of (Rt - 0.5) becomes equal to or less than a predetermined value (for example, "0.03").
Then, when the correct answer rate Rt approaches 50%, the values of the parameters of the generate function G at that time are stored in the learned model storage unit 221 as the learned model 7. Alternatively, the generate function G itself may be stored in the learned model storage unit 221.
Note that the correct answer rate Rt may be calculated from the results of a predetermined number of the most recent classifications. If this number is too small, however, the correct answer rate Rt may approach 50% by chance; it is therefore desirable to set it so that the above processes must be repeated several times (for example, at least 10 times).
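The stopping test described above can be sketched as follows (the threshold 0.03 and the minimum of 10 classifications are the example values from the text; names are assumptions):

```python
def should_stop(recent_correct, min_count=10, tolerance=0.03):
    # recent_correct: True/False results of the discriminator's most recent
    # real/fake classifications. Stop once the correct answer rate Rt over
    # at least min_count classifications is within tolerance of 0.5.
    if len(recent_correct) < min_count:
        return False
    rt = sum(recent_correct) / len(recent_correct)
    return abs(rt - 0.5) <= tolerance

assert not should_stop([True] * 5)        # window still too small
assert not should_stop([True] * 10)       # Rt = 1.0: discriminator too strong
assert should_stop([True, False] * 5)     # Rt = 0.5: chance level, stop
```

An Rt of 50% means the discriminator can no longer distinguish fakes from real images better than chance, which is the usual GAN convergence signal.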
[Mechanism of the estimator 3]
FIG. 9 is a diagram illustrating an example of the repaired sea surface temperature map 4S.
After the learning is completed, when a sea surface temperature map 4 is input to the sea surface temperature map providing device 1, the estimator 3 repairs the sea surface temperature map 4 as follows.
The sea surface temperature estimation unit 301 divides the sea surface temperature map 4 into patches 5, one per area 8B (see FIG. 5), and generates a repair patch 5S, a repaired version of each patch 5, by substituting the patch 5 into the generate function Gres of the following equation (9).
Gres(X(h,v)) = (1 - m(h,v)) ⊙ G(X(h,v)) + m(h,v) ⊙ X(h,v)  … (9)
Gres(X(h,v)) is a vector indicating the gradation value of each pixel of the repair patch 5S for the area 8B that is h-th from the left and v-th from the top. "X(h,v)" is a vector indicating the gradation value of each pixel of the patch 5 for that area 8B. "G(X(h,v))" is the result of substituting X(h,v) into the generate function G. "(1 - m(h,v))" is a vector in which each pixel of the shielded portion 5b of the patch 5 for that area 8B is represented by "1" and each pixel elsewhere by "0". "m(h,v)" is a vector in which each pixel of the shielded portion 5b of that patch 5 is represented by "0" and each pixel elsewhere by "1".
That is, according to equation (9), the image of the shielded portion 5b in the patch 5 is repaired by the generate function G and the learned model 7, and the repair patch 5S is generated by combining the repaired image with the original image of the remaining (that is, non-missing) portion.
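As a sketch of equation (9) (NumPy; names assumed), the observed gradation values pass through unchanged and only the shielded pixels are filled from the generator output:

```python
import numpy as np

def compose_repair(x, g_of_x, mask):
    # Equation (9): (1 - m) * G(X) + m * X, with mask m equal to 0 at the
    # shielded portion 5b and 1 everywhere else.
    return (1.0 - mask) * g_of_x + mask * x

x = np.array([[10.0, 0.0], [12.0, 13.0]])   # 0.0 stands for a shielded pixel
mask = np.array([[1.0, 0.0], [1.0, 1.0]])
g_of_x = np.full((2, 2), 11.0)              # generator's repaired values
out = compose_repair(x, g_of_x, mask)
assert out[0, 1] == 11.0    # shielded pixel taken from the generator
assert out[0, 0] == 10.0    # observed pixel kept from the original patch
```

This composition guarantees that the repair never alters the temperatures that were actually observed.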
The patch merging unit 302 generates the repaired sea surface temperature map 4S shown in FIG. 9 by arranging and merging the repair patches 5S of the areas 8B based on their respective positions.
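Merging the repair patches 5S back into one map can be sketched as follows (NumPy; the 2 × 2 patch size and grid layout are illustrative assumptions):

```python
import numpy as np

def merge_patches(patches, rows, cols, size):
    # patches: dict keyed by (h, v), the h-th area 8B from the left and the
    # v-th from the top, each value a (size x size) repair patch 5S.
    out = np.zeros((rows * size, cols * size))
    for (h, v), p in patches.items():
        out[v * size:(v + 1) * size, h * size:(h + 1) * size] = p
    return out

size = 2
patches = {(h, v): np.full((size, size), float(10 * v + h))
           for h in range(3) for v in range(2)}
merged = merge_patches(patches, rows=2, cols=3, size=size)
assert merged.shape == (4, 6)
assert merged[2, 4] == 12.0   # patch at h=2, v=1 lands in the right place
```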
The repaired sea surface temperature map 4S is displayed on the liquid crystal display 14, transmitted to another device via the communication interface 15, or stored in the auxiliary storage device 13.
FIG. 10 is a flowchart illustrating an example of the flow of processing by the map generation program 18.
Next, the overall processing flow of the sea surface temperature map providing device 1 will be described with reference to the flowchart.
The sea surface temperature map providing device 1 executes the procedure shown in FIG. 10 based on the map generation program 18.
When a plurality of sea surface temperature maps 4 are input, the sea surface temperature map providing device 1 divides each of them into patches 5, one per area 8B, and stores the patches as learning data (#101 in FIG. 10).
The sea surface temperature map providing device 1 randomly selects N correct images 5T, regardless of area 8B or date and time (#102). It generates the input images 5M by randomly applying one first mask filter 5W to each selected correct image 5T (#103). It then generates the restored images 5R by repairing each input image 5M with the generator 206 (#104).
Further, the sea surface temperature map providing device 1 masks each restored image 5R using the second mask filter 5U representing the shielded portion 5b of the original correct image 5T (#105); that is, the pixels at the same positions as the shielded portion 5b are made pure white. N fake images 5F are thereby obtained. The discriminator 208 then judges whether each of the N correct images 5T and the N fake images 5F is real or fake (#106).
The sea surface temperature map providing device 1 then trains the generator 206 by adjusting the parameters of the generate function G so that the value of equation (6) is minimized (#107). Further, it trains the discriminator 208 by adjusting the parameters of the discriminate function D so that the value of equation (8) is minimized (#108).
Until the correct answer rate Rt approaches 50% (No in #109), the sea surface temperature map providing device 1 repeatedly executes the processes of steps #102 to #108.
When the correct answer rate Rt approaches 50% (Yes in #109), the sea surface temperature map providing device 1 stores the current values of the parameters of the generate function G as the learned model 7 (#110).
Thereafter, when a sea surface temperature map 4 is input as a repair target (#111), the sea surface temperature map providing device 1 repairs the patch 5 of each area 8B of the sea surface temperature map 4 using the generate function G and the learned model 7 (#112). It generates a repair patch 5S by combining the shielded portion 5b of the repaired image with the portion of the original image other than the shielded portion 5b (#113). It then generates the repaired sea surface temperature map 4S by arranging and merging the repair patches 5S of the areas 8B based on their respective positions (#114), and outputs or stores it (#115).
According to the present embodiment, even a correct image 5T that includes a missing portion can be used as learning data. Therefore, even with a learner that employs a learning base (learning algorithm) that would otherwise require a large amount of non-missing learning data, a learned model for inpainting can be generated more easily than before. Furthermore, by combining learning based on the reconstruction error with learning based on the adversarial error, the learned model can be adjusted so as to obtain repair results that are sharper than before while maintaining real-time performance.
[Usage example of the sea surface temperature map providing device 1 and comparative experiment]
FIG. 11 is a diagram for explaining an example of a comparative experiment.
Next, an example of how the sea surface temperature map providing device 1 is used and the results of a comparative experiment will be described.
Japan and its neighboring areas form the region 8A (see FIG. 5). The sea surface temperature map providing device 1 acquires, as the sea surface temperature maps 4 (see FIG. 4), 500 images each showing the distribution of the sea surface temperature in the region 8A at one specific time of day, one image per day for 500 days.
The sea surface temperature map providing device 1 extracts 64 × 64 pixel patches from each sea surface temperature map 4 as the patches 5 and uses them as learning data. If each sea surface temperature map 4 is about 5000 × 6000 dots, about 7,300 patches 5 are extracted from each map.
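The figure of about 7,300 patches per map is consistent with tiling a roughly 5000 × 6000 dot map with non-overlapping 64 × 64 patches (a back-of-the-envelope check; the embodiment does not state the exact tiling scheme):

```python
width, height, patch = 5000, 6000, 64
patches_per_map = (width // patch) * (height // patch)
assert patches_per_map == 78 * 93 == 7254        # on the order of 7,300
total_patches = 500 * patches_per_map            # over all 500 maps
assert total_patches == 3_627_000
```

The 500 maps thus yield several million patches, which is why no-loss training data need not be collected separately.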
The sea surface temperature map providing device 1 then generates the learned model 7 by executing the processing described above. When this learned model 7 is used to repair a repair target image 5E2, obtained by intentionally deleting part of a non-missing correct image 5E1 as shown in FIG. 11, the result is the repaired image 5E5.
In contrast, when the repair target image 5E2 is repaired by a learned model generated by applying only learning based on the reconstruction error, out of the two learning methods, the result is the repaired image 5E3. Likewise, when the repair target image 5E2 is repaired by a learned model generated by applying only learning based on the adversarial error, the result is the repaired image 5E4.
As can be seen by comparing the repaired image 5E5 with the repaired images 5E3 and 5E4, the repaired image 5E5 is less smoothed and sharper than the repaired image 5E3, and reproduces the correct image more accurately than the repaired image 5E4.
[First modification]
FIG. 12 is a diagram illustrating an example of three-dimensional data used as learning data. FIG. 13 is a diagram for explaining an example of a comparative experiment.
In the embodiment described above, the learned model was generated using planar data, that is, two-dimensional data, as the learning data.
However, a learned model may also be generated using three-dimensional data in which the patches 5 of the same area 8B for k consecutive days (for example, three days) are arranged in time series, as shown in FIG. 12.
When repairing with a learned model generated by this method, the sea surface temperature maps 4 for the most recent (k - 1) days must be input to the sea surface temperature map providing device 1 in addition to the sea surface temperature map 4 to be repaired. In return, with this learned model, the sea surface temperature map 4 can be repaired more reliably than with a learned model based on two-dimensional data, even when most of the map is missing.
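Building one three-dimensional training sample from k consecutive days can be sketched as follows (NumPy; k = 3 as in the text, other names assumed):

```python
import numpy as np

def stack_days(daily_patches):
    # daily_patches: k patches of the same area 8B for consecutive days,
    # ordered by date; stacked along a new leading time axis.
    return np.stack(daily_patches, axis=0)

k = 3
days = [np.full((64, 64), float(d)) for d in range(k)]
sample = stack_days(days)
assert sample.shape == (3, 64, 64)
assert (sample[1] == 1.0).all()   # the middle day keeps its own values
```

The generator then sees the same sea area under different cloud patterns, so pixels hidden on one day are usually visible on an adjacent day.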
For example, even when the image 5P1 of a certain area 8B is largely missing, as in the repair target image 5P2 shown in FIG. 13, it can be repaired as in the repaired image 5P5. The repaired image 5P3 was produced by a learned model generated only by learning based on the reconstruction error, and the repaired image 5P4 by a learned model generated only by learning based on the adversarial error.
At a given location, the presence or absence of clouds changes more markedly than the sea surface temperature does. That is, the sea surface temperature changes little over a few days, but the presence or absence of clouds changes markedly. Therefore, according to this modification, images useful as learning data can be obtained more reliably, and the sea surface temperature can be estimated with greater certainty.
In the embodiment described above, the sea surface temperature map providing device 1 selected the patches 5 used as the correct images 5T regardless of area 8B, and generated a single learned model 7 common to all areas 8B. However, the patches 5 used as the correct images 5T may instead be selected per area 8B, and a learned model 7 may be generated for each area 8B.
[Second modification]
FIG. 14 is a diagram illustrating an example of a method of acquiring the assimilation images 5L. FIG. 15 is a diagram illustrating a modification of the learning procedure. FIG. 16 is a diagram illustrating another modification of the learning procedure.
In the embodiment described above, as shown in FIG. 7, the discriminator 208 classified each of the correct images 5T and the fake images 5F as either real or fake in step #706. However, assimilation images 5L may be used instead of the correct images 5T; that is, each of the assimilation images 5L and the fake images 5F may be classified as either real or fake.
With data assimilation, the sea surface temperature can be estimated by incorporating measured data (actual observation values) of various items into a physical simulation of the sea surface temperature. However, as stated in the section "Problems to be solved by the invention", data assimilation requires an enormous amount of data, and the computation takes considerable time; it therefore lacks real-time performance. Consequently, if data assimilation is applied in the inference phase, obtaining an inference result takes considerable time.
On the other hand, because data assimilation is based on a physical simulation, it is less susceptible to noise caused by atmospheric or sea surface conditions than generating a sea surface temperature map by detecting infrared rays with an artificial satellite, and a clearer map can be generated.
Therefore, in the second modification, results obtained by data assimilation are used in the learning phase.
An assimilation image 5L is an image generated by data assimilation, and is prepared, for example, as follows.
The learning data generation unit 201 of the learner 2 (see FIG. 3) acquires past estimation results from an existing data assimilation system that estimates sea surface temperatures by data assimilation.
Specifically, as shown in FIG. 14, data assimilation maps 4L1, 4L2, 4L3, ..., having the same dates and times as the sea surface temperature maps 41, 42, 43, ..., respectively, are input as the data assimilation maps 4L from the data assimilation system to the sea surface temperature map providing device 1 via a communication line or a recording medium.
A data assimilation map 4L is an image that represents the distribution of the sea surface temperature in the region 8A as estimated by data assimilation. Unlike the sea surface temperature map 4, it has no cloud-covered portion. The data assimilation map 4L can be generated by a known method, for example the method described in Norihisa Usui et al., "Four-dimensional variational ocean reanalysis: a 30-year high-resolution dataset in the western North Pacific (FORA-WNP30)", Journal of Oceanography, Vol. 73, No. 2, pp. 205-233, 2016.
Each data assimilation map 4L is assigned a sequence number "1", "2", "3", ..., in order from the oldest date and time. Note that the data assimilation maps 4L are corrected to the same resolution and gradation as the sea surface temperature maps 4 before being input to the sea surface temperature map providing device 1; alternatively, the sea surface temperature maps 4 may be corrected to the same resolution and gradation as the data assimilation maps 4L.
The learning data generation unit 201 extracts the image of each area 8B of each data assimilation map 4L as an assimilation image 5L. It then stores each assimilation image 5L in the assimilation image storage unit 209 in association with the position of its area 8B and the sequence number of the original data assimilation map 4L. For example, the assimilation image 5L of the area 8B that is Hb-th from the left and Vb-th from the top of the data assimilation map 4L with sequence number "d" is stored in the assimilation image storage unit 209 in association with {(Hb, Vb), d}.
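The association described above can be sketched as a mapping keyed by area position and sequence number (the key layout and names are assumptions):

```python
# Each assimilation image 5L is stored under ((Hb, Vb), d): the position of
# its area 8B in the map grid plus the sequence number d of its data
# assimilation map 4L.
store = {}

def put_assimilated(store, hb, vb, d, image):
    store[((hb, vb), d)] = image

put_assimilated(store, 3, 5, 1, "patch-A")   # hypothetical payloads
put_assimilated(store, 3, 5, 2, "patch-B")
assert store[((3, 5), 1)] == "patch-A"
assert len(store) == 2
```

Keying on both position and sequence number lets the learner retrieve the assimilation image that matches a given patch 5 in both place and time.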
Note that an assimilation image 5L includes at least one of a non-shielded portion and a land portion. As described above, since no cloud-covered portion is included in the data assimilation map 4L, the assimilation image 5L includes no shielded portion.
The units from the correct data selection unit 203 through the second learning unit 212 generate the learned model according to the procedure shown in FIG. 15.
Steps #721 to #724 are the same as steps #701 to #704 in FIG. 7. The process corresponding to step #705 is skipped.
The discriminator 208 classifies each of the N assimilation images 5L and the N restored images 5R as either real or fake by a known algorithm (#726). That is, the classification is made by substituting a vector indicating the gradation value of each pixel of the assimilation image 5L or the restored image 5R into the discriminate function D. In step #726, as in step #706, the discriminate function D outputs "1" as the output value 6A when the image is classified as real, and "0" when it is classified as fake.
In the example shown in FIG. 7, the discriminator 208 classified the correct images 5T and the fake images 5F as either real or fake. In the example shown in FIG. 15, by contrast, the assimilation images 5L and the restored images 5R are classified as either real or fake without either being masked by the second mask filter 5U. With this method, not only the portions not hidden by clouds (non-shielded portions) but also the portions hidden by clouds (shielded portions) can be used for the classification.
Steps #727 and #728 are the same as steps #707 and #708, except that the output values 6A obtained in step #726 are used instead of those obtained in step #706.
The processes of #721 to #728 are repeatedly executed until the correct answer rate Rt of the discriminator 208 approaches 50%, that is, until the absolute value of (Rt - 0.5) becomes equal to or less than a predetermined value (for example, "0.03").
Then, when the correct answer rate Rt approaches 50%, the values of the parameters of the generate function G at that time are stored in the learned model storage unit 221 as the learned model 7. Alternatively, the generate function G itself may be stored in the learned model storage unit 221.
Alternatively, a learned model can be generated according to the procedure shown in FIG. 16. That is, as in the example shown in FIG. 7, the second mask processing unit 207 generates the fake images 5F by masking the restored images 5R with the second mask filter 5U (#725). The third mask processing unit 210 generates mask assimilation images 5H by masking the assimilation images 5L with the second mask filter 5U. The discriminator 208 then classifies the fake images 5F and the mask assimilation images 5H as either real or fake.
Note that the processing from step #727 onward is the same as in FIG. 15.
The method of generating a learned model from three-dimensional data can also be applied to this second modification.
[Others]
In the embodiment described above, each function shown in FIG. 3 was realized by executing the map generation program 18 on the processor 10. However, all or some of the functions may be realized by a hardware module such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit).
In the embodiment described above, both the learner 2 and the estimator 3 were provided in the sea surface temperature map providing device 1, but they may be provided in separate devices. For example, a learning device generates the learned model 7 with the learner 2 and distributes it to one or more estimation devices via a communication line or a portable recording medium. Each estimation device then repairs input sea surface temperature maps 4 using the learned model 7.
In the embodiment described above, the case where all components of the learner 2 shown in FIG. 3, from the learning data generation unit 201 to the second learning unit 212, are provided in the sea surface temperature map providing device 1 was described as an example; however, some of them may reside in another device. For example, the generator 206, the discriminator 208, the first learning unit 211, and the second learning unit 212 may be provided in a cloud server.
In this case, after generating the input images 5M, the first mask processing unit 205 transmits them to the cloud server and instructs the cloud server to repair them.
 Like the generator 206, the cloud server generates the restored image 5R by restoring the input image 5M, and then transmits the restored image 5R to the sea surface temperature map providing device 1.
 Upon receiving the restored image 5R, the second mask processing unit 207 generates a fake image 5F by masking the restored image 5R with the second mask filter 5U. It then transmits the correct image 5T and the fake image 5F to the cloud server and instructs the cloud server to perform machine learning.
 Like the discriminator 208, the cloud server distinguishes whether each of the fake image 5F and the correct image 5T is real or fake. Then, like the first learning unit 211, it adjusts the generate function G based on equation (6), and like the second learning unit 212, it adjusts the discriminate function D based on equation (8).
 In the present embodiment, the first learning unit 211 adjusts the parameters of the generate function G by machine learning that combines the reconstruction error and the adversarial error. However, if a high degree of smoothness is acceptable, the adjustment may be performed by machine learning using only the reconstruction error; that is, the parameters of the generate function G may be adjusted so that equation (3) is minimized.
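The trade-off contrasted above can be sketched as follows. This is not the embodiment's equations (3) and (6) themselves; the pixel values, the weight `lam`, and the stand-in discriminator score are assumptions, and the two loss functions merely mirror the reconstruction-only versus combined objective in spirit.

```python
import math

def reconstruction_error(restored, correct, mask):
    """Mean squared error over pixels that are present in the correct
    data (mask[i] == 1), i.e. outside the original missing portion."""
    terms = [(r - c) ** 2 for r, c, m in zip(restored, correct, mask) if m == 1]
    return sum(terms) / len(terms)

def adversarial_error(disc_score_on_fake):
    """Penalty that grows as the discriminator becomes confident the
    restored data is fake (a score near 0 means 'fake')."""
    return -math.log(max(disc_score_on_fake, 1e-12))

def combined_loss(restored, correct, mask, disc_score, lam=0.01):
    # Reconstruction term plus a weighted adversarial term: trading some
    # pixel-wise fidelity for output that better fools the discriminator.
    return reconstruction_error(restored, correct, mask) + lam * adversarial_error(disc_score)

correct = [20.0, 21.0, 19.5, 20.5]
mask = [1, 1, 0, 1]               # the third pixel is originally missing
restored = [20.1, 20.9, 19.0, 20.4]

rec_only = reconstruction_error(restored, correct, mask)  # reconstruction-only objective
both = combined_loss(restored, correct, mask, disc_score=0.8)
print(rec_only, both)
```

Minimizing `reconstruction_error` alone tends to produce smooth (averaged) restorations; the adversarial term pushes the generator toward outputs the discriminator accepts as real.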
 In the present embodiment, the sea surface temperature map providing device 1 is used to repair a partially missing sea surface temperature map, but it can also be used to repair other data. For example, it may be used to restore a depth image obtained by a depth camera or audio transmitted via an unstable communication line, or to restore cosmic ray observation data.
 In addition, the overall configuration of the sea surface temperature map providing device 1, the configuration of each part, the content of the processing, and the order of the processing can be changed as appropriate within the spirit of the present invention.
1 Sea surface temperature map providing device (repair function adjustment system, data restoration device)
201 Learning data generation unit (storage means)
203 Correct data selection unit (selection means)
205 First mask processing unit (deletion means)
206 Generator (repair means)
207 Second mask processing unit (fake data generation means)
208 Discriminator (classification means)
211 First learning unit (adjustment means)
212 Second learning unit (second adjustment means)
3 Estimator (repair means)
5F Fake image (fourth data)
5H Mask assimilation image (fifth data)
5M Input image (second data)
5L Assimilation image (sixth data)
5R Restored image (third data)
5T Correct image (first data)
6A Output value (result)
8A Region (sea surface)
8B Area
D Discriminate function (second function)
G Generate function (function)

Claims (19)

  1.  A repair function adjustment system that adjusts a function used to repair data having a missing part, comprising:
     deletion means for generating second data by further deleting part of first data having an original missing portion, which is a portion that is missing from the beginning;
     repair means for generating third data by repairing the second data with the function; and
     adjustment means for adjusting the function based on portions of each of the first data and the third data other than the original missing portion.
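The three means recited in claim 1 — further delete, repair, then adjust using only the originally present portion — can be illustrated with a deliberately tiny sketch. The one-parameter "repair function", the mask layout, and the learning rate are assumptions for illustration, not the claimed implementation (which would use a learned generate function G).

```python
# First data: an image with an original missing portion (None values).
first = [20.0, None, 21.0, 19.5, None, 20.5]
orig_missing = [v is None for v in first]

# Deletion means: further delete an originally present value -> second data.
second = list(first)
second[3] = None  # artificial, additional deletion

# Repair means: a toy one-parameter "function" that fills every missing
# value with the parameter.
def repair(data, param):
    return [param if v is None else v for v in data]

# Adjustment means: gradient descent on the squared error computed ONLY
# over positions outside the original missing portion (the condition in
# claim 1). The original missing positions have no ground truth, so they
# cannot contribute to the error.
param = 0.0
for _ in range(100):
    third = repair(second, param)
    grad = sum(
        2 * (t - f)
        for t, f, om in zip(third, first, orig_missing)
        if not om
    )
    param -= 0.1 * grad

print(round(param, 3))  # the parameter moves toward the deleted value 19.5
```

The key point of the claim is visible in the loop: the error is measured where the first data actually has values, so the artificially deleted region supplies supervision while the original missing region supplies none.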
  2.  The repair function adjustment system according to claim 1, further comprising selection means for selecting the first data from among a plurality of data stored in storage means,
     wherein the deletion means generates the second data by deleting part of the first data selected by the selection means.
  3.  The repair function adjustment system according to claim 1 or claim 2, further comprising:
     fake data generation means for generating fourth data by deleting, from the third data, a portion at the same position as the original missing portion of the first data from which the third data is derived;
     classification means for classifying each of the first data and the fourth data as either real or fake based on a second function; and
     second adjustment means for adjusting the second function according to a result of the classification means classifying each of the first data and the fourth data;
     wherein the adjustment means adjusts the function based on the result.
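The classification by the second function and its adjustment according to the classification result can be sketched with a logistic stand-in. The summary feature, the learning rate, and the sample values below are illustrative assumptions, not the claimed discriminate function D; the sketch only shows the adversarial pattern of driving real data toward "real" and fake data toward "fake".

```python
import math

def D(data, w, b):
    """Toy second function: a logistic score computed from one summary
    feature (the mean value, centered at an assumed reference of 20.0).
    Scores above 0.5 mean 'real', below 0.5 mean 'fake'."""
    feature = sum(data) / len(data) - 20.0
    return 1.0 / (1.0 + math.exp(-(w * feature + b)))

# First data (real) and fourth data (fake): here the fake sample is a
# restored image whose poorly restored value makes it distinguishable.
real = [20.0, 21.0, 19.5, 20.5]
fake = [20.1, 20.9, 17.0, 20.4]

# Second adjustment means: logistic-regression updates so that
# D(real) -> 1 and D(fake) -> 0, i.e. the second function is adjusted
# according to the result of classifying each sample.
w, b = 0.0, 0.0
for _ in range(2000):
    for data, label in ((real, 1.0), (fake, 0.0)):
        feature = sum(data) / len(data) - 20.0
        err = D(data, w, b) - label
        w -= 0.5 * err * feature
        b -= 0.5 * err

print(D(real, w, b) > 0.5, D(fake, w, b) < 0.5)
```

In the adversarial setup of the claims, this adjustment of D alternates with adjustment of the generate function, which in turn is trained to make its restorations score as "real".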
  4.  The repair function adjustment system according to claim 3, wherein the selection means, the deletion means, the repair means, the fake data generation means, the classification means, the adjustment means, and the second adjustment means each repeatedly execute their respective processes, one at a time, until the accuracy with which the classification means classifies each of the first data and the fourth data falls within a predetermined range.
  5.  The repair function adjustment system according to any one of claims 1 to 4, further comprising:
     fake data generation means for generating fourth data by deleting, from the third data, a portion at the same position as the original missing portion of the first data from which the third data is derived;
     classification means for classifying each of the fourth data and fifth data as either real or fake based on a second function; and
     second adjustment means for adjusting the second function according to a result of the classification means classifying each of the fourth data and the fifth data;
     wherein the adjustment means adjusts the function based on the result,
     the fifth data is data that is generated by data assimilation and represents a specific object, with a portion at the same position as in the fourth data deleted, and
     the first data is data representing the specific object generated by a method other than the data assimilation.
  6.  The repair function adjustment system according to claim 5, wherein the selection means, the deletion means, the repair means, the fake data generation means, the classification means, the adjustment means, and the second adjustment means each repeatedly execute their respective processes, one at a time, until the accuracy with which the classification means classifies each of the fourth data and the fifth data falls within a predetermined range.
  7.  The repair function adjustment system according to claim 1 or claim 2, further comprising:
     classification means for classifying each of the third data and sixth data as either real or fake based on a second function; and
     second adjustment means for adjusting the second function according to a result of the classification means classifying each of the third data and the sixth data;
     wherein the adjustment means adjusts the function based on the result,
     the sixth data is data that is generated by data assimilation and represents a specific object, and
     the first data is data representing the specific object generated by a method other than the data assimilation.
  8.  The repair function adjustment system according to claim 7, wherein the selection means, the deletion means, the repair means, the classification means, the adjustment means, and the second adjustment means each repeatedly execute their respective processes, one at a time, until the accuracy with which the classification means classifies each of the third data and the sixth data falls within a predetermined range.
  9.  The repair function adjustment system according to any one of claims 1 to 8, wherein the first data is an image representing a distribution of sea surface temperature.
  10.  The repair function adjustment system according to claim 9, wherein the image is an image of each of one or more areas selected from among a plurality of areas into which the sea surface is divided.
  11.  The repair function adjustment system according to any one of claims 1 to 8, wherein the first data is an image representing a distribution of sea surface temperature measured at each of S (S > 3) mutually different times.
  12.  The repair function adjustment system according to claim 11, wherein the sea surface is divided into a plurality of areas, and the images are a plurality of sets of images, each being an image of one of the plurality of areas at one of the S times.
  13.  The repair function adjustment system according to claim 11, wherein the sea surface is divided into a plurality of areas, and the images are a plurality of sets of images, each set consisting of images of one of the plurality of areas at each of T consecutive times (S > T > 1) among the S times.
  14.  A data restoration device comprising repair means for repairing data to be repaired using the function adjusted by the repair function adjustment system according to any one of claims 1 to 13.
  15.  Data repaired by the data restoration device according to claim 14.
  16.  A repair function adjustment method for adjusting a function used to repair data having a missing part, comprising:
     a first step of generating second data by further deleting part of first data having a missing portion that is missing from the beginning;
     a second step of generating third data by repairing the second data with the function; and
     a third step of adjusting the function based on portions of each of the first data and the third data other than the missing portion.
  17.  A repair function generation method for generating a repair function used to repair data having a missing part, comprising:
     a first step of generating second data by further deleting part of first data having a missing portion that is missing from the beginning;
     a second step of generating third data by repairing the second data with a first function;
     a third step of adjusting the first function based on portions of each of the first data and the third data other than the missing portion; and
     a fourth step of storing, in storage means, the first function or the values of the parameters of the first function as the repair function or the values of the parameters of the repair function.
  18.  A computer program used by a computer that adjusts a function used to repair data having a missing part, the computer program causing the computer to execute:
     a deletion process of generating second data by further deleting part of first data having a missing portion that is missing from the beginning;
     a repair process of generating third data by repairing the second data with the function; and
     an adjustment process of adjusting the function based on portions of each of the first data and the third data other than the missing portion.
  19.  A computer program used by a computer that assists in adjusting a function used by repair means to repair data having a missing part, the computer program causing the computer to execute:
     a deletion process of generating second data by further deleting part of first data having a missing portion that is missing from the beginning;
     a repair process of generating third data by causing the repair means to repair the second data using the function; and
     an adjustment control process of controlling adjustment means so that an adjustment process of adjusting the function based on portions of each of the first data and the third data other than the missing portion is executed.
PCT/JP2019/029230 2018-07-25 2019-07-25 Restoration function adjustment system, data restoration device, restoration function adjustment method, restoration function generation method, and computer program WO2020022436A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018139601A JP2021177262A (en) 2018-07-25 2018-07-25 Restoration function adjustment system, data restoration device, restoration function adjustment method, restoration function generation method, and computer program
JP2018-139601 2018-07-25

Publications (1)

Publication Number Publication Date
WO2020022436A1

Family

ID=69181767

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/029230 WO2020022436A1 (en) 2018-07-25 2019-07-25 Restoration function adjustment system, data restoration device, restoration function adjustment method, restoration function generation method, and computer program

Country Status (2)

Country Link
JP (1) JP2021177262A (en)
WO (1) WO2020022436A1 (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001197355A (en) * 2000-01-13 2001-07-19 Minolta Co Ltd Digital image pickup device and image restoring method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHIBATA, SATOKI ET AL., "Restoration of Ocean Temperature Images by Learning Based Inpainting and Optical Flow", 29 January 2016 (2016-01-29), pages 1-26, XP055681086, Retrieved from the Internet <URL:http://www.mm.media.kyoto-u.ac.jp/wp-content/uploads/2014/07/2015-b-shibata.pdf> *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640076A (en) * 2020-05-29 2020-09-08 北京金山云网络技术有限公司 Image completion method and device and electronic equipment
CN111640076B (en) * 2020-05-29 2023-10-10 北京金山云网络技术有限公司 Image complement method and device and electronic equipment

Also Published As

Publication number Publication date
JP2021177262A (en) 2021-11-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19840816

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19840816

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP