WO2020066755A1 - Learning model generation device, ground object change determination device, learning model generation method, ground object change determination method, and computer-readable recording medium - Google Patents

Learning model generation device, ground object change determination device, learning model generation method, ground object change determination method, and computer-readable recording medium

Info

Publication number
WO2020066755A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
difference
change
pair
image
Prior art date
Application number
PCT/JP2019/036411
Other languages
French (fr)
Japanese (ja)
Inventor
喜宏 山下
Original Assignee
Necソリューションイノベータ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Necソリューションイノベータ株式会社
Priority to JP2020548522A (JP7294678B2)
Publication of WO2020066755A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04 - Interpretation of pictures
    • G01C 11/06 - Interpretation of pictures by comparison of two or more pictures of the same area
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis

Definitions

  • The present invention relates to a feature change determination device and a feature change determination method for determining a change in a feature after a certain period of time has elapsed, and further relates to a computer-readable recording medium that stores a program for realizing these.
  • the present invention also relates to a learning model generation device and a learning model generation method for generating a learning model used for these.
  • Patent Documents 1 and 2 disclose devices for determining such a change in a feature. Specifically, the device disclosed in Patent Document 1 first obtains two ortho images obtained by photographing a specific area from the sky. One of the ortho images is photographed at a first time point and the other at a second time point. Further, each ortho image holds three-dimensional data for each pixel.
  • The device disclosed in Patent Document 1 compares the two ortho images and obtains, for each corresponding pixel, the difference in color or tone between the two images and the difference in height included in the three-dimensional data. The device then determines, from the obtained differences, a change of a house, which is the feature. This determination is performed by applying the color or tone difference and the height difference to rules created in advance; for example, when the height difference is large, it is determined that the house has been lost, and when the color difference is large, it is determined that the house has been renovated.
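  • For illustration, the rule-based style of determination described above can be sketched as follows; the threshold values and label names are assumptions made for this sketch and are not taken from Patent Document 1.

```python
# Illustrative sketch of the rule-based determination described for Patent Document 1.
# The thresholds (2.0 m, 60) are assumptions for illustration only.

def rule_based_change(height_diff_m: float, color_diff: float,
                      height_threshold: float = 2.0,
                      color_threshold: float = 60.0) -> str:
    """Classify a house change from per-pixel height and color differences."""
    if abs(height_diff_m) >= height_threshold:
        # A large height difference is taken to mean the house was lost.
        return "lost"
    if color_diff >= color_threshold:
        # A large color difference with little height change suggests renovation.
        return "renovated"
    return "no change"

print(rule_based_change(height_diff_m=-5.3, color_diff=10.0))  # -> lost
print(rule_based_change(height_diff_m=0.2, color_diff=85.0))   # -> renovated
```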
  • The device disclosed in Patent Document 2 specifies, when the change determination target is a house, an area other than houses (a building-independent area) using an infrared photograph and map data, and can execute the determination of the feature change while excluding the specified building-independent area. According to the device disclosed in Patent Document 2, it is possible to avoid erroneously detecting objects other than buildings, and determination accuracy can be improved.
  • However, the device disclosed in Patent Document 1 has the following problems.
  • The first problem is that, because the first-time image and the second-time image on which the two ortho images are based were photographed under different conditions such as the photographing date and time and the weather, a difference in color tone occurs even in portions where the feature has not changed; as a result, it is determined that there is a change even though there is none.
  • The second problem is that, due to the above-described difference in photographing conditions, differences arise between the two ortho images in terms of noise and the apparent area of features, and these differences also lead to a determination that there is a change even though there is none.
  • The third problem is that changes in height and color outside the house are extracted as changes of the house, resulting in erroneous determinations.
  • The first and second problems may be mitigated to some extent by setting a stricter threshold for the comparison, but in that case portions that have actually changed may no longer be determined as changed.
  • An example of an object of the present invention is to solve the above problems and to provide a learning model generation device, a feature change determination device, a learning model generation method, a feature change determination method, and a computer-readable recording medium that can suppress the influence of external factors and improve determination accuracy when determining a change of a feature over time.
  • A learning model generation device according to one aspect of the present invention includes: an image acquisition unit that acquires a pair image obtained by photographing a specific area from the sky at different times; a three-dimensional data acquisition unit that acquires, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; a difference data creation unit that obtains, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creates difference data including the obtained differences; a label acquisition unit that acquires, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and a learning model generation unit that generates a learning model specifying the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
  • A feature change determination device according to one aspect of the present invention is a device for determining a change in a feature in a specific area, and includes: an image acquisition unit that acquires a pair image obtained by photographing the specific area from the sky at different times; a three-dimensional data acquisition unit that acquires, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; a difference data creation unit that obtains, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creates difference data including the obtained differences; and a determination unit that applies the difference data to a learning model, obtained by machine learning the relationship between the difference data of pair images and the change of the feature, to determine the change of the feature in the specific area.
  • A learning model generation method according to one aspect of the present invention includes the steps of: (a) acquiring a pair image obtained by photographing a specific area from the sky at different times; (b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; (c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences; (d) acquiring, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and (e) generating a learning model that specifies the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
  • A feature change determination method according to one aspect of the present invention is a method for determining a change in a feature in a specific area, and includes the steps of: (a) acquiring a pair image obtained by photographing the specific area from the sky at different times; (b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; (c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences; and (d) determining the change of the feature in the specific area by applying the difference data to a learning model obtained by machine learning the relationship between the difference data of pair images and the change of the feature.
  • A first computer-readable recording medium according to one aspect of the present invention records a program including instructions that cause a computer to execute the steps of: (a) acquiring a pair image obtained by photographing a specific area from the sky at different times; (b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; (c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences; (d) acquiring, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and (e) generating a learning model that specifies the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
  • A second computer-readable recording medium according to one aspect of the present invention records a program for causing a computer to determine a change in a feature in a specific area, the program including instructions that cause the computer to execute the steps of: (a) acquiring a pair image obtained by photographing the specific area from the sky at different times; (b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; (c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences; and (d) determining the change of the feature in the specific area by applying the difference data to a learning model obtained by machine learning the relationship between the difference data of pair images and the change of the feature.
  • FIG. 1 is a block diagram illustrating a configuration of a learning model generation device according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating an example of a pair image and three-dimensional data used in the present embodiment.
  • FIG. 3 is a diagram showing an example of difference data created in the present embodiment.
  • FIG. 4 is an explanatory diagram illustrating the contents of a label obtained in the embodiment of the present invention.
  • FIG. 5 is a diagram showing Variation 1 of label setting used in the embodiment of the present invention.
  • FIG. 6 is a diagram showing Variation 2 of setting a label used in the embodiment of the present invention.
  • FIG. 7 is a diagram showing Variation 3 of setting a label used in the embodiment of the present invention.
  • FIG. 8 is a flowchart showing the operation of the learning model generation device according to the embodiment of the present invention.
  • FIG. 9 is a block diagram illustrating a configuration of the feature change determination device according to the embodiment of the present invention.
  • FIG. 10 is a block diagram specifically illustrating a configuration of the feature change determination device according to the embodiment of the present invention.
  • FIG. 11 is a diagram illustrating output example 1 of the feature change determination device according to the embodiment of the present invention.
  • FIG. 12 is a diagram illustrating output example 2 of the feature change determination device according to the embodiment of the present invention.
  • FIG. 13 is a diagram illustrating output example 3 of the feature change determination device according to the embodiment of the present invention.
  • FIG. 14 is a flowchart showing the operation of the feature change determination device according to the embodiment of the present invention.
  • FIG. 15 is a block diagram illustrating a configuration of a modification of the feature change determination device according to the embodiment of the present invention.
  • FIG. 16 is a block diagram illustrating an example of a computer that implements the learning model generation device and the feature change determination device according to the embodiment of the present invention.
  • FIG. 1 is a block diagram illustrating a configuration of a learning model generation device according to an embodiment of the present invention.
  • the learning model generation device 10 is a device that generates a learning model used for determining a change in a feature.
  • The learning model generation device 10 includes an image acquisition unit 11, a three-dimensional data acquisition unit 12, a difference data creation unit 13, a label acquisition unit 14, and a learning model generation unit 15.
  • the image acquisition unit 11 acquires a pair image obtained by photographing a specific area from the sky at different times.
  • The three-dimensional data acquisition unit 12 acquires, for each of the paired images, three-dimensional data that holds, for each pixel, information on the height of a feature that is a subject.
  • the difference data creation unit 13 finds a difference in pixel value and a difference in height held as three-dimensional data for each pair of corresponding pixels of the paired image, and creates difference data including the found difference.
  • the label acquiring unit 14 acquires, for each pair of one or more pixels corresponding to the paired image, a label indicating a change of a feature set for each pair.
  • the learning model generation unit 15 generates a learning model that specifies the relationship between the difference data and the change of the feature by machine learning the difference data and the label.
  • the relationship between the difference data obtained from the pair image and the change of the feature is machine-learned, and a learning model is generated. Therefore, if this learning model is used, as described later, when determining a change in a feature over time, the influence of an external factor can be suppressed to improve the determination accuracy.
  • The functions of the learning model generation device 10 according to the present embodiment will now be described more specifically with reference to FIGS. 2 to 7. In the following description, the feature is assumed to be a building, but the type of feature is not particularly limited in the present embodiment. In addition to buildings, the ground, forests, roads, infrastructure structures, vehicles, and all other things on the ground can be cited as features.
  • FIG. 2 is a diagram showing an example of a pair image and three-dimensional data used in the present embodiment.
  • The image acquisition unit 11 acquires, for example, the pair image shown in FIG. 2, and the three-dimensional data acquisition unit 12 acquires, for example, the three-dimensional data shown in FIG. 2.
  • images A and B are used as a pair image.
  • Each of the images A and B is an ortho image obtained by performing digital processing on an image obtained by shooting from the sky.
  • the photographing time of the image A is a first time
  • the photographing time of the image B is a second time after the first time.
  • In the example of FIG. 2, the three-dimensional data holds a height for each pixel of images A and B. The three-dimensional data is, for example, a DSM (Digital Surface Model). In FIG. 2, the vertical and horizontal axes of each matrix represent pixel positions, and the numerical values in the matrix indicate heights in meters. Pixels located at the same coordinates correspond to each other.
  • FIG. 3 is a diagram showing an example of difference data created in the present embodiment.
  • The difference data creation unit 13 obtains, for each pair of corresponding pixels of image A and image B, the difference of each of the R value, the G value, and the B value as the difference between pixel values. In the present embodiment the pixel value difference is the difference of each of the R, G, and B values, but the pixel value difference may be set according to the color space used. The difference data creation unit 13 also subtracts the height data held for each pixel of image B from the height data held for each pixel of image A to obtain a height difference for each corresponding pixel; these per-pixel differences are collectively referred to below as the "height difference". The difference data, which consists of the R difference, the G difference, the B difference, and the height difference, is also referred to as "four-channel data".
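  • As a concrete illustration of the four-channel data described above, the following is a minimal sketch in Python, assuming the pair images are co-registered H×W×3 RGB arrays and the height data are H×W DSM arrays; the function name and array layout are assumptions, not part of the patent.

```python
import numpy as np

def make_difference_data(image_a: np.ndarray, image_b: np.ndarray,
                         dsm_a: np.ndarray, dsm_b: np.ndarray) -> np.ndarray:
    """Create 4-channel difference data (R, G, B and height differences).

    image_a, image_b: (H, W, 3) RGB ortho images taken at the first and second times.
    dsm_a, dsm_b:     (H, W) height data (e.g. DSM) aligned pixel-for-pixel with the images.
    Returns an (H, W, 4) float array: channels 0-2 are the per-pixel R/G/B differences,
    channel 3 is the height difference (image A minus image B, as in the text above).
    """
    rgb_diff = image_a.astype(np.float32) - image_b.astype(np.float32)   # (H, W, 3)
    height_diff = (dsm_a - dsm_b).astype(np.float32)[..., np.newaxis]    # (H, W, 1)
    return np.concatenate([rgb_diff, height_diff], axis=-1)              # (H, W, 4)

# Toy usage with random data standing in for ortho images and DSMs.
h, w = 64, 64
img_a = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
img_b = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
dsm_a = np.random.rand(h, w).astype(np.float32) * 10.0
dsm_b = np.random.rand(h, w).astype(np.float32) * 10.0
four_channel = make_difference_data(img_a, img_b, dsm_a, dsm_b)
print(four_channel.shape)  # (64, 64, 4)
```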
  • FIG. 4 is an explanatory diagram for explaining the contents of the label obtained in the embodiment of the present invention.
  • The label acquisition unit 14 acquires, for each corresponding pair of one or more pixels of the pair image, a label that includes at least one of new construction, loss, remodeling, and no change.
  • the label is manually set in advance for each pixel pair.
  • the label is set in advance in accordance with changes in height and pixel value (color) from the first time point to the second time point. For example, if the height increases and the color changes from soil color to another color, it is set to “new construction”. On the other hand, if the height becomes low and the color changes from another color to earth color, it is set to “lost”. If the color has changed and the height has not changed or the difference between the heights is small, “remodeling” is set.
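  • The qualitative labeling rules above can be pictured with the following sketch; the numeric thresholds and the simple earth-color test are illustrative assumptions, since the patent only states the rules qualitatively and notes that labels are set manually.

```python
NEW_CONSTRUCTION, LOST, REMODELED, NO_CHANGE = 0, 1, 2, 3

def is_earth_color(rgb) -> bool:
    # Crude illustrative test for a brownish "earth" color; not from the patent.
    r, g, b = rgb
    return r > g > b and r - b > 30

def suggest_label(rgb_before, rgb_after, height_before, height_after,
                  height_eps: float = 1.0) -> int:
    """Suggest a label for one pixel pair following the qualitative rules above."""
    height_change = height_after - height_before
    color_changed = rgb_before != rgb_after
    if height_change > height_eps and is_earth_color(rgb_before) and not is_earth_color(rgb_after):
        return NEW_CONSTRUCTION    # height increased, soil color -> another color
    if height_change < -height_eps and not is_earth_color(rgb_before) and is_earth_color(rgb_after):
        return LOST                # height decreased, another color -> soil color
    if color_changed and abs(height_change) <= height_eps:
        return REMODELED           # color changed while height stayed roughly the same
    return NO_CHANGE

print(suggest_label((120, 90, 60), (180, 180, 190), 0.5, 9.0))  # -> 0 (new construction)
```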
  • FIG. 5 is a diagram showing Variation 1 of label setting used in the embodiment of the present invention.
  • FIG. 6 is a diagram showing Variation 2 of setting a label used in the embodiment of the present invention.
  • FIG. 7 is a diagram showing Variation 3 of setting a label used in the embodiment of the present invention.
  • In Variation 1, shown in FIG. 5, a label is set as text data for each area, for example, for each building.
  • In Variation 2, shown in FIG. 6, a label is set for each portion within an area, using an area larger than the area unit shown in FIG. 5 as the unit.
  • In Variation 3, shown in FIG. 7, a label is set for each portion within the image, using the entire image as the unit.
  • The portion corresponding to a building can be specified, for example, by using the building outline data included in the basic items of the Fundamental Geospatial Data provided by the Geospatial Information Authority of Japan.
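  • As one hedged illustration of specifying the pixels that belong to a building from outline data, the following sketch rasterizes a building outline polygon into a pixel mask; the polygon coordinates are made up, and matplotlib's Path class is used here merely as a convenient point-in-polygon test.

```python
import numpy as np
from matplotlib.path import Path

def building_mask(polygon_xy, height: int, width: int) -> np.ndarray:
    """Return a boolean (height, width) mask that is True inside the building outline.

    polygon_xy: list of (x, y) vertices of one building outline in pixel coordinates.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    pixel_centers = np.column_stack([xs.ravel() + 0.5, ys.ravel() + 0.5])
    inside = Path(polygon_xy).contains_points(pixel_centers)
    return inside.reshape(height, width)

# Hypothetical outline of one building, in pixel coordinates.
outline = [(10, 10), (40, 10), (40, 30), (10, 30)]
mask = building_mask(outline, height=64, width=64)
print(mask.sum(), "pixels fall inside the building outline")
```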
  • the learning model generation unit 15 executes machine learning using the four-channel data created by the difference data creation unit 13 and the labels acquired by the label acquisition unit 14 as learning data. As a method of machine learning in this case, for example, deep learning can be mentioned.
  • Through this machine learning, the learning model generation unit 15 updates the values of the parameters in a function that expresses the relationship between the difference data and the change of the building, thereby constructing a model. The constructed model, specifically a set of parameters, is used by the feature change determination device described later.
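  • The patent specifies only that the model is built by deep learning on the four-channel data and labels and that the resulting model is, concretely, a set of parameters; the network below is an illustrative per-pixel classifier written with PyTorch and is not the architecture used in the patent.

```python
import torch
import torch.nn as nn

class ChangeNet(nn.Module):
    """Small fully convolutional network: 4-channel difference data -> per-pixel change class."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 4, H, W) four-channel difference data
        return self.classifier(self.features(x))  # (batch, num_classes, H, W)

model = ChangeNet()
dummy = torch.randn(1, 4, 64, 64)          # stand-in for one tile of difference data
logits = model(dummy)
print(logits.shape)                         # torch.Size([1, 4, 64, 64])
print(sum(p.numel() for p in model.parameters()), "trainable parameters")
```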
  • FIG. 8 is a flowchart showing the operation of the learning model generation device according to the embodiment of the present invention.
  • FIGS. 1 to 7 are appropriately referred to.
  • the learning model generation method is performed by operating the learning model generation device 10. Therefore, the description of the learning model generation method in the present embodiment is replaced with the following description of the operation of the learning model generation device 10.
  • First, in order to create the learning data (difference data) used in the machine learning described later, the image acquisition unit 11 acquires a pair image obtained by photographing a specific area from the sky at different times (step A1).
  • the three-dimensional data acquisition unit 12 acquires three-dimensional data for each pixel of the paired images acquired in step A1, which holds information on the height of a feature as a subject (step A2).
  • Next, the difference data creation unit 13 obtains, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creates four-channel data including the obtained differences as learning data (step A3).
  • the label acquiring unit 14 acquires, for each corresponding one or more pairs of pixels of the paired image, a label indicating a change of a feature set for each pair (step A4).
  • the label is set in advance.
  • Next, the learning model generation unit 15 executes machine learning, such as deep learning, using the difference data created in step A3 and the labels acquired in step A4 as learning data, and generates a learning model that specifies the relationship between the difference data and the change of the feature (step A5).
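  • Steps A3 to A5 can be pictured with the following self-contained training sketch; the network, optimizer, loss function, and toy tensors are illustrative assumptions, and the saved parameter file name is hypothetical.

```python
import torch
import torch.nn as nn

# Minimal stand-ins: the difference data (step A3) is a (N, 4, H, W) tensor and the labels
# (step A4) are per-pixel class indices, e.g. {0: new construction, 1: lost, 2: remodeled, 3: no change}.
num_classes = 4
diff_data = torch.randn(8, 4, 32, 32)                        # toy four-channel data
labels = torch.randint(0, num_classes, (8, 32, 32))          # toy per-pixel labels

model = nn.Sequential(                                       # illustrative, not the patented model
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, num_classes, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Step A5: machine learning of the difference data and labels updates the model parameters,
# i.e. generates the learning model that relates difference data to feature changes.
for epoch in range(3):
    optimizer.zero_grad()
    logits = model(diff_data)                                # (N, num_classes, H, W)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")

torch.save(model.state_dict(), "learning_model.pt")          # the resulting "set of parameters"
```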
  • the first program in the present embodiment may be any program that causes a computer to execute steps A1 to A5 shown in FIG.
  • By installing this program in a computer and executing it, the learning model generation device 10 and the learning model generation method according to the present embodiment can be realized. In this case, the processor of the computer functions as the image acquisition unit 11, the three-dimensional data acquisition unit 12, the difference data creation unit 13, the label acquisition unit 14, and the learning model generation unit 15, and performs the processing.
  • the first program according to the present embodiment may be executed by a computer system configured by a plurality of computers.
  • each computer may function as one of the image acquisition unit 11, the three-dimensional data acquisition unit 12, the difference data creation unit 13, the label acquisition unit 14, and the learning model generation unit 15, respectively.
  • FIG. 9 is a block diagram illustrating a configuration of the feature change determination device according to the embodiment of the present invention.
  • the feature change determining device 20 is a device for determining a feature change in a specific area.
  • the feature change determination device 20 includes an image acquisition unit 21, a three-dimensional data acquisition unit 22, a difference data creation unit 23, and a determination unit 24.
  • the image acquisition unit 21 acquires a pair image obtained by photographing a specific area from the sky at different times.
  • The three-dimensional data acquisition unit 22 acquires, for each of the paired images, three-dimensional data that holds, for each pixel, information on the height of a feature that is a subject.
  • the difference data creation unit 23 finds a difference in pixel value and a difference in height held in three-dimensional data for each pair of corresponding pixels in the paired image, and creates difference data including the obtained difference.
  • the determination unit 24 applies the difference data to a learning model obtained by machine learning the relationship between the difference data of the paired images and the change of the feature to determine the change of the feature in the specific area.
  • FIG. 10 is a block diagram specifically illustrating a configuration of the feature change determination device according to the embodiment of the present invention.
  • In the present embodiment, the feature change determination device 20 includes a model storage unit 25 in addition to the above-described image acquisition unit 21, three-dimensional data acquisition unit 22, difference data creation unit 23, and determination unit 24.
  • the model storage unit 25 stores a learning model used by the determination unit 24. Further, the learning model is generated by the learning model generation device 10 shown in FIG.
  • The image acquisition unit 21 functions in the same manner as the image acquisition unit 11 shown in FIG. 1.
  • Each of the pair images acquired by the image acquiring unit 21 is also an ortho image obtained by performing digital processing on an image obtained by photographing from the sky.
  • The three-dimensional data acquisition unit 22 also functions in the same manner as the three-dimensional data acquisition unit 12 shown in FIG. 1.
  • the three-dimensional data acquired by the three-dimensional data acquisition unit 22 also holds a height for each pixel of each image (see FIG. 2).
  • The difference data creation unit 23 also functions in the same manner as the difference data creation unit 13 shown in FIG. 1. That is, the difference data creation unit 23 obtains, for each corresponding pixel pair of the paired images, the difference of each of the R value, the G value, and the B value as the difference between pixel values, and subtracts the height data held for each pixel of one image of the pair from the height data held for each pixel of the other image to obtain the height difference.
  • the R value difference, the G value difference, the B value difference, and the height difference obtained in this manner become difference data (four channel data).
  • Upon acquiring the four-channel data created by the difference data creation unit 23, the determination unit 24 applies the four-channel data to the learning model stored in the model storage unit 25. The determination unit 24 then outputs the output of the learning model as the determination result of the feature change in the specific area.
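  • A hedged sketch of how the determination unit 24 might apply four-channel data to a stored learning model is shown below; the network definition, class ordering, and the majority-vote summary are assumptions carried over from the training sketch, not details from the patent.

```python
import torch
import torch.nn as nn

CLASS_NAMES = ["new construction", "lost", "remodeled", "no change"]   # assumed label order

# The same illustrative network as in the training sketch; in practice the determination
# unit would load the stored parameter set (the learning model) from the model storage unit,
# e.g. model.load_state_dict(torch.load("learning_model.pt")).
model = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, len(CLASS_NAMES), 1),
)
model.eval()

four_channel = torch.randn(1, 4, 64, 64)        # difference data created in step B3 (toy values)
with torch.no_grad():
    logits = model(four_channel)                # (1, num_classes, H, W)
    prediction = logits.argmax(dim=1)[0]        # per-pixel change class, (H, W)

# A coarse overall determination for the tile, e.g. the most frequent predicted class.
most_common = prediction.flatten().bincount(minlength=len(CLASS_NAMES)).argmax().item()
print("determination result:", CLASS_NAMES[most_common])
```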
  • FIG. 11 is a diagram illustrating output example 1 of the feature change determination device according to the embodiment of the present invention.
  • FIG. 12 is a diagram illustrating output example 2 of the feature change determination device according to the embodiment of the present invention.
  • FIG. 13 is a diagram illustrating output example 3 of the feature change determination device according to the embodiment of the present invention.
  • FIG. 11 shows a case where a pair image of one building is input as an input image.
  • a determination result of the same content as the label is output as a text for the paired image.
  • FIG. 12 shows a case where a pair image of an area larger than one building is input.
  • an image to which a determination result is added for each portion in the region is output.
  • FIG. 13 illustrates a case where a pair image of a specific area is input.
  • an image to which a determination result is added is output for each portion in the specific area.
  • FIG. 14 is a flowchart showing the operation of the feature change determination device according to the embodiment of the present invention.
  • In the following description, FIGS. 9 to 13 are referred to as appropriate.
  • In the present embodiment, the feature change determination method is performed by operating the feature change determination device 20. Therefore, the description of the feature change determination method in the present embodiment is replaced with the following description of the operation of the feature change determination device 20.
  • First, the image acquisition unit 21 acquires a pair image of the area targeted for feature change determination (step B1).
  • Next, the three-dimensional data acquisition unit 22 acquires, for each of the paired images acquired in step B1, three-dimensional data that holds, for each pixel, information on the height of the features in the target area (step B2).
  • Next, the difference data creation unit 23 obtains, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creates four-channel data including the obtained differences (step B3).
  • Next, the determination unit 24 applies the four-channel data to the learning model stored in the model storage unit 25 to determine the change of the feature (step B4). Thereafter, the determination unit 24 outputs the determination result to an external terminal device, display device, or the like (step B5), and the determination result is displayed on the screen of the terminal device or the display device (see FIGS. 11 to 13).
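  • As one hedged illustration of turning per-pixel determinations into the per-building results shown in the output examples of FIGS. 11 to 13, the following sketch aggregates predicted classes inside each building mask by majority vote; the masks, predictions, and aggregation rule are illustrative assumptions.

```python
import numpy as np

CLASS_NAMES = ["new construction", "lost", "remodeled", "no change"]   # assumed label order

def per_building_result(prediction: np.ndarray, masks: dict) -> dict:
    """Aggregate a per-pixel class map into one label per building by majority vote.

    prediction: (H, W) integer class map produced by the determination unit.
    masks:      {building_id: (H, W) boolean mask of that building's pixels}.
    """
    results = {}
    for building_id, mask in masks.items():
        classes, counts = np.unique(prediction[mask], return_counts=True)
        results[building_id] = CLASS_NAMES[int(classes[np.argmax(counts)])]
    return results

# Toy data: a 20x20 prediction map and two rectangular building masks.
pred = np.full((20, 20), 3)          # mostly "no change"
pred[2:8, 2:8] = 0                   # pixels of building A predicted "new construction"
masks = {
    "building A": np.zeros((20, 20), dtype=bool),
    "building B": np.zeros((20, 20), dtype=bool),
}
masks["building A"][2:8, 2:8] = True
masks["building B"][12:18, 12:18] = True
print(per_building_result(pred, masks))   # {'building A': 'new construction', 'building B': 'no change'}
```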
  • As described above, in the present embodiment, a learning model obtained by machine learning the relationship between the difference data obtained from the pair image and the change of the feature is used to determine the feature change at each location in the target area. For this reason, the influence of external factors such as photographing conditions and noise can be suppressed when determining a change of a feature over time, and the determination accuracy can therefore be improved.
  • the second program in the present embodiment may be any program that causes a computer to execute steps B1 to B5 shown in FIG.
  • the processor of the computer functions as the image acquisition unit 21, the three-dimensional data acquisition unit 22, the difference data creation unit 23, and the determination unit 24, and performs processing.
  • The second program according to the present embodiment may also be executed by a computer system configured by a plurality of computers.
  • each computer may function as any one of the image acquisition unit 21, the three-dimensional data acquisition unit 22, the difference data creation unit 23, and the determination unit 24.
  • FIG. 15 is a block diagram illustrating a configuration of a modification of the feature change determination device according to the embodiment of the present invention.
  • the feature change determination device 20 includes an image acquisition unit 21, a three-dimensional data acquisition unit 22, a difference data creation unit 23, a determination unit 24, and a model storage unit 25.
  • In this modification, the feature change determination device 20 further includes the label acquisition unit 14 and the learning model generation unit 15 shown in FIG. 1. Therefore, in this modification, the feature change determination device 20 also functions as the learning model generation device 10 shown in FIG. 1, and the feature change determination device 20 itself can generate a learning model.
  • FIG. 16 is a block diagram illustrating an example of a computer that implements the learning model generation device and the feature change determination device according to the embodiment of the present invention.
  • the computer 110 includes a CPU 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader / writer 116, and a communication interface 117. These units are connected via a bus 121 so as to be able to perform data communication with each other.
  • the computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to or instead of the CPU 111.
  • the CPU 111 performs various operations by expanding the program (code) according to the present embodiment stored in the storage device 113 into the main memory 112 and executing them in a predetermined order.
  • the main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
  • the program according to the present embodiment is provided in a state stored in computer-readable recording medium 120. Note that the program in the present embodiment may be distributed on the Internet connected via the communication interface 117.
  • the storage device 113 includes a semiconductor storage device such as a flash memory in addition to a hard disk drive.
  • the input interface 114 mediates data transmission between the CPU 111 and input devices 118 such as a keyboard and a mouse.
  • the display controller 115 is connected to the display device 119 and controls display on the display device 119.
  • the data reader / writer 116 mediates data transmission between the CPU 111 and the recording medium 120, reads out a program from the recording medium 120, and writes a processing result of the computer 110 to the recording medium 120.
  • the communication interface 117 mediates data transmission between the CPU 111 and another computer.
  • Specific examples of the recording medium 120 include a general-purpose semiconductor storage device such as a CF (Compact Flash (registered trademark)) card or an SD (Secure Digital) card, a magnetic recording medium such as a flexible disk, and an optical recording medium such as a CD-ROM (Compact Disk Read Only Memory).
  • the learning model generation device 10 and the feature change determination device 20 according to the present embodiment can also be realized by using hardware corresponding to each unit instead of a computer in which a program is installed. Further, each of the learning model generation device 10 and the feature change determination device 20 may be partially implemented by a program, and the remaining portion may be implemented by hardware.
  • A learning model generation device comprising: an image acquisition unit that acquires a pair image obtained by photographing a specific area from the sky at different times; a three-dimensional data acquisition unit that acquires, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; a difference data creation unit that obtains, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creates difference data including the obtained differences; a label acquisition unit that acquires, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and a learning model generation unit that generates a learning model that specifies the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
  • In the learning model generation device described above, the difference data creation unit may obtain, for each pair of corresponding pixels of the paired images, the difference of each of the R value, the G value, and the B value as the difference between the pixel values.
  • In the learning model generation device described above, the label acquisition unit may acquire, for each corresponding pair of one or more pixels of the paired images, a label including at least one of new construction, loss, remodeling, and no change.
  • The feature change determination device includes a determination unit that applies the difference data to a learning model, obtained by machine learning the relationship between the difference data of pair images and the change of the feature, to determine the change of the feature in the specific area.
  • In the feature change determination device, the difference data creation unit may obtain, for each pair of corresponding pixels of the paired images, the difference of each of the R value, the G value, and the B value as the difference between the pixel values.
  • (Appendix 6) In the feature change determination device according to Appendix 4 or 5, when the feature is a building, the learning model is obtained by machine learning the relationship between the difference data of the pair images and a change of the feature including at least one of new construction, loss, remodeling, and no change.
  • (Appendix 7) The feature change determination device according to any one of Appendices 4 to 6, further comprising: a label acquisition unit that acquires, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and a learning model generation unit that generates a learning model specifying the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
  • (Appendix 8) A learning model generation method comprising the steps of: (a) acquiring a pair image obtained by photographing a specific area from the sky at different times; (b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; (c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences; (d) acquiring, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and (e) generating a learning model that specifies the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
  • In the learning model generation method, a label including at least one of new construction, loss, remodeling, and no change may be acquired as the label for each corresponding pair of one or more pixels of the paired images.
  • (Appendix 11) A feature change determination method for determining a change in a feature in a specific area, comprising the steps of: (a) acquiring a pair image obtained by photographing the specific area from the sky at different times; (b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; (c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences; and (d) determining the change of the feature in the specific area by applying the difference data to a learning model obtained by machine learning the relationship between the difference data of pair images and the change of the feature.
  • The feature change determination method according to Appendix 11 or 12, wherein, when the feature is a building, the learning model is obtained by machine learning the relationship between the difference data of the pair images and a change of the feature including at least one of new construction, loss, remodeling, and no change.
  • In the computer-readable recording medium, a label including at least one of new construction, loss, remodeling, and no change may be acquired as the label for each corresponding pair of one or more pixels of the paired images.
  • A computer-readable recording medium recording a program for causing a computer to determine a change in a feature in a specific area, the program including instructions that cause the computer to execute the steps of: (a) acquiring a pair image obtained by photographing the specific area from the sky at different times; (b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject; (c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences; and (d) determining the change of the feature in the specific area by applying the difference data to a learning model obtained by machine learning the relationship between the difference data of pair images and the change of the feature.
  • (Appendix 20) The computer-readable recording medium according to Appendix 18 or 19, wherein, when the feature is a building, the learning model is obtained by machine learning the relationship between the difference data of the pair images and a change of the feature including at least one of new construction, loss, remodeling, and no change.
  • (Appendix 21) The computer-readable recording medium according to any one of Appendices 18 to 20, wherein the program further includes instructions that cause the computer to execute the steps of: (e) acquiring, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and (f) generating a learning model that specifies the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
  • As described above, according to the present invention, when determining a change of a feature over time, it is possible to suppress the influence of external factors and improve determination accuracy.
  • INDUSTRIAL APPLICABILITY: The present invention is useful in fields where temporal changes of features need to be determined, for example, investigation of land use conditions, investigation of topographical changes, deterioration assessment of infrastructure structures, and management of forest changes.
  • Reference signs: 10 learning model generation device; 11 image acquisition unit; 12 three-dimensional data acquisition unit; 13 difference data creation unit; 14 label acquisition unit; 15 learning model generation unit; 20 feature change determination device; 21 image acquisition unit; 22 three-dimensional data acquisition unit; 23 difference data creation unit; 24 determination unit; 25 model storage unit; 110 computer; 111 CPU; 112 main memory; 113 storage device; 114 input interface; 115 display controller; 116 data reader/writer; 117 communication interface; 118 input device; 119 display device; 120 recording medium; 121 bus

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A ground object change determination device 20 is provided with: an image acquisition unit 21 that acquires pair images obtained by photographing a specific area from the sky at different times; a three-dimensional data acquisition unit 22 that acquires, for each of the pair images, three-dimensional data storing, in each pixel, information about the height of a ground object which is a photographing subject; a difference data creation unit 23 that, for each pair of corresponding pixels in the pair images, obtains a difference of pixel values and a difference of heights stored in the three-dimensional data, and creates difference data including the obtained differences; and a determination unit 24 that applies the difference data to a learning model obtained by performing machine learning of a relation between the difference data of the pair images and the change of the ground object, thus determining the change of the ground object in the specific area.

Description

Learning model generation device, feature change determination device, learning model generation method, feature change determination method, and computer-readable recording medium
The present invention relates to a feature change determination device and a feature change determination method for determining a change in a feature after a certain period of time has elapsed, and further relates to a computer-readable recording medium that stores a program for realizing these. The present invention also relates to a learning model generation device and a learning model generation method for generating a learning model used by these.
In recent years, determining changes in features after a certain period has elapsed has become common. The results of such determinations are used, for example, for investigating land use conditions and changes in topography. Examples of features include all things on the ground, such as buildings, the ground surface, forests, roads, infrastructure structures, and vehicles.
Patent Documents 1 and 2 disclose devices for determining such a change in a feature. Specifically, the device disclosed in Patent Document 1 first obtains two ortho images obtained by photographing a specific area from the sky. One of the ortho images is photographed at a first time point and the other at a second time point. Further, each ortho image holds three-dimensional data for each pixel.
Subsequently, the device disclosed in Patent Document 1 compares the two ortho images and obtains, for each corresponding pixel, the difference in color or tone between the two images and the difference in height included in the three-dimensional data. The device then determines, from the obtained differences, a change of a house, which is the feature. This determination is performed by applying the color or tone difference and the height difference to rules created in advance; for example, when the height difference is large, it is determined that the house has been lost, and when the color difference is large, it is determined that the house has been renovated.
The device disclosed in Patent Document 2 specifies, when the change determination target is a house, an area other than houses (a building-independent area) using an infrared photograph and map data, and can execute the determination of the feature change while excluding the specified building-independent area. According to the device disclosed in Patent Document 2, it is possible to avoid erroneously detecting objects other than buildings, and determination accuracy can be improved.
Patent Document 1: Japanese Patent No. 4339289
Patent Document 2: Japanese Patent No. 5366190
According to the device disclosed in Patent Document 1 described above, it is possible to determine a change in a feature, but this device has the following problems. The first is that, because the first-time image and the second-time image on which the two ortho images are based were photographed under different conditions such as the photographing date and time and the weather, a difference in color tone occurs even in portions where the feature has not changed, and as a result it is determined that there is a change even though there is none. The second is that, due to the above-described difference in photographing conditions, differences arise between the two ortho images in terms of noise and the apparent area of features, and these differences also lead to a determination that there is a change even though there is none. The third is that changes in height and color outside the house are extracted as changes of the house, resulting in erroneous determinations.
The first and second problems described above may be mitigated to some extent by setting a stricter threshold for the comparison, but in that case portions that have actually changed may no longer be determined as changed.
On the other hand, the device disclosed in Patent Document 2 is considered able to solve the third problem described above, but to do so it is necessary to input, in addition to ortho images having three-dimensional data for each pixel, information such as near-infrared photographs and map data of the photographed area. For this reason, operation of the device disclosed in Patent Document 2 is costly. Moreover, even the device disclosed in Patent Document 2 does not solve the first and second problems. Furthermore, in the device disclosed in Patent Document 2, if the infrared photographs, map data, and other information are not up to date, the possibility of erroneous detection instead increases.
An example of an object of the present invention is to solve the above problems and to provide a learning model generation device, a feature change determination device, a learning model generation method, a feature change determination method, and a computer-readable recording medium that can suppress the influence of external factors and improve determination accuracy when determining a change of a feature over time.
In order to achieve the above object, a learning model generation device according to one aspect of the present invention includes:
an image acquisition unit that acquires a pair image obtained by photographing a specific area from the sky at different times;
a three-dimensional data acquisition unit that acquires, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject;
a difference data creation unit that obtains, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creates difference data including the obtained differences;
a label acquisition unit that acquires, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and
a learning model generation unit that generates a learning model that specifies the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
In order to achieve the above object, a feature change determination device according to one aspect of the present invention is a device for determining a change in a feature in a specific area, and includes:
an image acquisition unit that acquires a pair image obtained by photographing the specific area from the sky at different times;
a three-dimensional data acquisition unit that acquires, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject;
a difference data creation unit that obtains, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creates difference data including the obtained differences; and
a determination unit that applies the difference data to a learning model, obtained by machine learning the relationship between the difference data of pair images and the change of the feature, to determine the change of the feature in the specific area.
In order to achieve the above object, a learning model generation method according to one aspect of the present invention includes the steps of:
(a) acquiring a pair image obtained by photographing a specific area from the sky at different times;
(b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject;
(c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences;
(d) acquiring, for each corresponding pair of one or more pixels of the paired images, a label indicating a change of the feature set for the pair; and
(e) generating a learning model that specifies the relationship between the difference data and the change of the feature by machine learning the difference data and the labels.
In order to achieve the above object, a feature change determination method according to one aspect of the present invention is a method for determining a change in a feature in a specific area, and includes the steps of:
(a) acquiring a pair image obtained by photographing the specific area from the sky at different times;
(b) acquiring, for each of the paired images, three-dimensional data holding, for each pixel, information on the height of a feature that is a subject;
(c) obtaining, for each pair of corresponding pixels of the paired images, a difference in pixel value and a difference in height held in the three-dimensional data, and creating difference data including the obtained differences; and
(d) determining the change of the feature in the specific area by applying the difference data to a learning model obtained by machine learning the relationship between the difference data of pair images and the change of the feature.
Further, in order to achieve the above object, a first computer-readable recording medium according to one aspect of the present invention records a program that includes instructions causing a computer to execute:
(a) acquiring a pair of images obtained by photographing a specific area from the sky at different times;
(b) acquiring, for each image of the pair, three-dimensional data holding, for each pixel, information on the height of the features captured as subjects;
(c) obtaining, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences;
(d) acquiring, for each set of one or more pairs of corresponding pixels of the pair of images, a label set for that set and indicating a change in the feature; and
(e) generating a learning model that specifies the relationship between the difference data and the change in the feature by machine learning the difference data and the labels.
Further, in order to achieve the above object, a second computer-readable recording medium according to one aspect of the present invention is a computer-readable recording medium recording a program for causing a computer to determine changes in features in a specific area, the program including instructions causing the computer to execute:
(a) acquiring a pair of images obtained by photographing the specific area from the sky at different times;
(b) acquiring, for each image of the pair, three-dimensional data holding, for each pixel, information on the height of the features captured as subjects;
(c) obtaining, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences; and
(d) applying the difference data to a learning model, obtained by machine learning the relationship between difference data of a pair of images and changes in features, and thereby determining the changes in the features in the specific area.
As described above, according to the present invention, when determining changes in features over time, the influence of external factors can be suppressed and the determination accuracy can be improved.
FIG. 1 is a block diagram showing the configuration of the learning model generation device according to the embodiment of the present invention.
FIG. 2 is a diagram showing an example of the pair of images and the three-dimensional data used in the present embodiment.
FIG. 3 is a diagram showing an example of the difference data created in the present embodiment.
FIG. 4 is an explanatory diagram explaining the contents of the labels acquired in the embodiment of the present invention.
FIG. 5 is a diagram showing variation 1 of the label setting used in the embodiment of the present invention.
FIG. 6 is a diagram showing variation 2 of the label setting used in the embodiment of the present invention.
FIG. 7 is a diagram showing variation 3 of the label setting used in the embodiment of the present invention.
FIG. 8 is a flowchart showing the operation of the learning model generation device according to the embodiment of the present invention.
FIG. 9 is a block diagram showing the configuration of the feature change determination device according to the embodiment of the present invention.
FIG. 10 is a block diagram specifically showing the configuration of the feature change determination device according to the embodiment of the present invention.
FIG. 11 is a diagram showing output example 1 of the feature change determination device according to the embodiment of the present invention.
FIG. 12 is a diagram showing output example 2 of the feature change determination device according to the embodiment of the present invention.
FIG. 13 is a diagram showing output example 3 of the feature change determination device according to the embodiment of the present invention.
FIG. 14 is a flowchart showing the operation of the feature change determination device according to the embodiment of the present invention.
FIG. 15 is a block diagram showing the configuration of a modification of the feature change determination device according to the embodiment of the present invention.
FIG. 16 is a block diagram showing an example of a computer that realizes the learning model generation device and the feature change determination device according to the embodiment of the present invention.
(Embodiment: Generation of a Learning Model)
First, a learning model generation device, a learning model generation method, and a program according to an embodiment of the present invention will be described.
[Device Configuration]
First, the configuration of the learning model generation device according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing the configuration of the learning model generation device according to the embodiment of the present invention.
As shown in FIG. 1, the learning model generation device 10 according to the present embodiment is a device that generates a learning model used for determining changes in features. The learning model generation device 10 includes an image acquisition unit 11, a three-dimensional data acquisition unit 12, a difference data creation unit 13, a label acquisition unit 14, and a learning model generation unit 15.
The image acquisition unit 11 acquires a pair of images obtained by photographing a specific area from the sky at different times. The three-dimensional data acquisition unit 12 acquires, for each image of the pair, three-dimensional data that holds, for each pixel, information on the height of the features captured as subjects. The difference data creation unit 13 obtains, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creates difference data including the obtained differences.
The label acquisition unit 14 acquires, for each set of one or more pairs of corresponding pixels of the pair of images, a label set for that set and indicating a change in the feature. The learning model generation unit 15 generates a learning model that specifies the relationship between the difference data and the change in the feature by machine learning the difference data and the labels.
In this way, in the present embodiment, a learning model is generated by machine learning the relationship between the difference data obtained from the pair of images and the change in the feature. Therefore, as described later, using this learning model makes it possible to suppress the influence of external factors and improve the accuracy of determining changes in features over time.
Next, the functions of the learning model generation device 10 according to the present embodiment will be described more specifically with reference to FIGS. 2 to 7. In the following description the feature is a building, but the type of feature is not particularly limited in the present embodiment; features include anything on the ground, such as the ground surface, forests, roads, infrastructure structures, and vehicles.
FIG. 2 shows an example of the pair of images and the three-dimensional data used in the present embodiment. In the present embodiment, the image acquisition unit 11 acquires, for example, the pair of images shown in FIG. 2, and the three-dimensional data acquisition unit 12 acquires, for example, the three-dimensional data shown in FIG. 2.
As shown in FIG. 2, images A and B are used as the pair of images in the present embodiment. Both images A and B are ortho images obtained by digitally processing images taken from the sky. Image A was taken at a first time point, and image B was taken at a second time point later than the first.
As shown in FIG. 2, the three-dimensional data holds a height for each pixel of images A and B. Specifically, a DSM (Digital Surface Model) is used as the three-dimensional data. In FIG. 2, the vertical and horizontal axes of each matrix represent pixel positions, and the values in the matrix indicate heights in meters. Pixels located at the same coordinates correspond to each other.
FIG. 3 shows an example of the difference data created in the present embodiment. As shown in FIG. 3, the difference data creation unit 13 obtains, for each pair of corresponding pixels of images A and B, the differences in the R, G, and B values as the difference in pixel value. In the example of FIG. 3 the pixel-value difference consists of the R, G, and B differences, but the present embodiment is not limited to this example; the pixel-value difference may be defined according to the color space used.
The difference data creation unit 13 also subtracts the height data held for each pixel of image B from the height data held for each pixel of image A, and obtains the difference for each pair of corresponding pixels. In this specification, these per-pixel differences are collectively referred to as the "height difference".
The R-value difference, G-value difference, B-value difference, and height difference obtained in this way constitute the difference data. In this specification, the difference data is also referred to as "four-channel data".
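As a concrete illustration, the following is a minimal sketch of assembling such four-channel data, assuming NumPy arrays; the function and variable names are not from this disclosure, and the ortho images and DSMs are assumed to be co-registered on the same pixel grid.

```python
import numpy as np

def make_difference_data(image_a, image_b, dsm_a, dsm_b):
    """Return an H x W x 4 array of per-pixel R, G, B and height differences.

    image_a, image_b: H x W x 3 ortho images at the first and second time
    points; dsm_a, dsm_b: H x W height grids (DSM) on the same pixel grid.
    Following the text, the values of image/DSM B are subtracted from A.
    """
    rgb_diff = image_a.astype(np.float32) - image_b.astype(np.float32)
    height_diff = dsm_a.astype(np.float32) - dsm_b.astype(np.float32)
    return np.concatenate([rgb_diff, height_diff[..., np.newaxis]], axis=-1)
```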
FIG. 4 explains the contents of the labels acquired in the embodiment of the present invention. In the present embodiment, when the feature is a building, the label acquisition unit 14 acquires, for each set of one or more pairs of corresponding pixels of the pair of images, a label that includes at least one of new construction, loss, remodeling, and no change.
In the present embodiment, the labels are set manually in advance for each set of pixel pairs. As shown in FIG. 4, each label is set according to the change in height and pixel value (color) from the first time point to the second time point. For example, when the height increases and the color changes from an earth color to another color, the label is set to "new construction". Conversely, when the height decreases and the color changes from another color to an earth color, the label is set to "loss". When the color changes but the height does not change, or the height difference is small, the label is set to "remodeling".
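For illustration only: in this embodiment the labels are assigned manually in advance, but the rules of FIG. 4 can be sketched as a function. The threshold value and the notion of an "earth-colored" region below are assumptions, not values given in this disclosure.

```python
def suggest_label(height_change_m, was_earth_colored, is_earth_colored,
                  color_changed, height_threshold=2.0):
    """Suggest a change label for one building region.

    height_change_m: mean height at the second time point minus the mean
    height at the first time point (the opposite sign of the stored height
    difference, which is first minus second). Thresholds are illustrative.
    """
    grew = height_change_m > height_threshold
    shrank = height_change_m < -height_threshold
    if grew and was_earth_colored and not is_earth_colored:
        return "new construction"
    if shrank and not was_earth_colored and is_earth_colored:
        return "loss"
    if color_changed and not grew and not shrank:
        return "remodeling"
    return "no change"
```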
Further, in the present embodiment, labels can be set in the following three ways, as shown in FIGS. 5 to 7. FIG. 5 shows variation 1 of the label setting used in the embodiment of the present invention, FIG. 6 shows variation 2, and FIG. 7 shows variation 3.
In the example of FIG. 5, a label is set as text data for each region, for example for each building. In the example of FIG. 6, a region larger than the region unit of FIG. 5 is used as the unit, and a label is set for each part within that region. In the example of FIG. 7, the entire image is used as the unit, and a label is set for each part within the image.
In the examples of FIGS. 6 and 7, the parts of the image corresponding to buildings must be identified in order to set the labels. The parts corresponding to buildings can be identified, for example, by using the building outline data included in the fundamental items of the Fundamental Geospatial Data provided by the Geospatial Information Authority of Japan; a sketch of such a mapping follows.
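The following is a sketch, under assumptions, of how building outline polygons (for example, converted from such outline data into pixel coordinates of the ortho image) could be rasterized into a per-pixel region mask so that a label can be attached to the pixels of each building. Pillow is used here only for convenience; it is not part of this disclosure.

```python
import numpy as np
from PIL import Image, ImageDraw

def rasterize_buildings(outlines, height, width):
    """Return an H x W integer mask: 0 = background, i + 1 = pixels of outlines[i].

    outlines: list of building outline polygons, each a list of (x, y)
    vertices already expressed in pixel coordinates of the ortho image.
    """
    mask = Image.new("I", (width, height), 0)  # 32-bit integer raster
    draw = ImageDraw.Draw(mask)
    for i, polygon in enumerate(outlines):
        draw.polygon(polygon, fill=i + 1)      # fill the building interior
    return np.array(mask)
```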
The learning model generation unit 15 executes machine learning using, as training data, the four-channel data created by the difference data creation unit 13 and the labels acquired by the label acquisition unit 14. Deep learning is one example of a machine learning technique that can be used here. Through this machine learning, the learning model generation unit 15 updates the values of the parameters of a function expressing the relationship between the difference data and the change of the building, thereby building a model. The built model, specifically the set of parameters, is used by the feature change determination device described later.
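The disclosure specifies only that the model is obtained by machine learning such as deep learning, so the following PyTorch sketch shows one possible shape of such a model rather than the patented implementation; the architecture, the patch-based input, the class count, and the training step are assumptions.

```python
import torch
import torch.nn as nn

class ChangeClassifier(nn.Module):
    """Small CNN mapping a 4-channel difference patch (R, G, B, height)
    to four change classes: new construction, loss, remodeling, no change."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (N, 4, H, W) batch of difference patches
        return self.classifier(self.features(x).flatten(1))

def train_step(model, patches, labels, optimizer):
    """One supervised update on a batch of (difference data, label) pairs."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(patches), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

An optimizer such as torch.optim.Adam(model.parameters()) would be passed to train_step; the resulting parameter set corresponds to the "set of parameters" that is handed to the feature change determination device.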
[Device Operation]
Next, the operation of the learning model generation device 10 according to the embodiment of the present invention will be described with reference to FIG. 8. FIG. 8 is a flowchart showing the operation of the learning model generation device according to the embodiment of the present invention. In the following description, FIGS. 1 to 7 are referred to as appropriate. In the present embodiment, the learning model generation method is carried out by operating the learning model generation device 10; accordingly, the description of the learning model generation method in the present embodiment is replaced by the following description of the operation of the learning model generation device 10.
As shown in FIG. 8, the image acquisition unit 11 first acquires a pair of images obtained by photographing a specific area from the sky at different times, in order to create the training data (difference data) used in the machine learning described later (step A1).
Next, the three-dimensional data acquisition unit 12 acquires, for each of the pair of images acquired in step A1, three-dimensional data that holds, for each pixel, information on the height of the features captured as subjects (step A2).
Next, the difference data creation unit 13 obtains, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creates four-channel data including the obtained differences as training data (step A3).
Next, the label acquisition unit 14 acquires, for each set of one or more pairs of corresponding pixels of the pair of images, the label set for that set and indicating a change in the feature (step A4). As described above, the labels are set in advance.
Next, the learning model generation unit 15 executes machine learning such as deep learning using the difference data created in step A3 and the labels acquired in step A4 as training data, and generates a learning model that specifies the relationship between the difference data and the change in the feature (step A5).
[Program]
The first program in the present embodiment may be any program that causes a computer to execute steps A1 to A5 shown in FIG. 8. By installing this program on a computer and executing it, the learning model generation device 10 and the learning model generation method in the present embodiment can be realized. In this case, the processor of the computer functions as the image acquisition unit 11, the three-dimensional data acquisition unit 12, the difference data creation unit 13, the label acquisition unit 14, and the learning model generation unit 15, and performs the processing.
The first program in the present embodiment may also be executed by a computer system built from a plurality of computers. In this case, for example, each computer may function as any one of the image acquisition unit 11, the three-dimensional data acquisition unit 12, the difference data creation unit 13, the label acquisition unit 14, and the learning model generation unit 15.
(Embodiment: Determination of Feature Changes)
Next, a feature change determination device, a feature change determination method, and a program according to an embodiment of the present invention will be described.
[Device Configuration]
First, the configuration of the feature change determination device according to the present embodiment will be described with reference to FIG. 9. FIG. 9 is a block diagram showing the configuration of the feature change determination device according to the embodiment of the present invention.
The feature change determination device 20 according to the present embodiment, shown in FIG. 9, is a device for determining changes in features in a specific area. As shown in FIG. 9, the feature change determination device 20 includes an image acquisition unit 21, a three-dimensional data acquisition unit 22, a difference data creation unit 23, and a determination unit 24.
The image acquisition unit 21 acquires a pair of images obtained by photographing the specific area from the sky at different times. The three-dimensional data acquisition unit 22 acquires, for each image of the pair, three-dimensional data that holds, for each pixel, information on the height of the features captured as subjects.
The difference data creation unit 23 obtains, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creates difference data including the obtained differences. The determination unit 24 applies the difference data to a learning model, obtained by machine learning the relationship between difference data of a pair of images and changes in features, and thereby determines the changes in the features in the specific area.
With this configuration, the present embodiment can suppress the influence of external factors such as shooting conditions and noise when determining changes in features over time, and can therefore improve the determination accuracy.
Next, the configuration and functions of the feature change determination device according to the present embodiment will be described more specifically with reference to FIG. 10. FIG. 10 is a block diagram specifically showing the configuration of the feature change determination device according to the embodiment of the present invention.
As shown in FIG. 10, the feature change determination device 20 according to the present embodiment includes a model storage unit 25 in addition to the image acquisition unit 21, the three-dimensional data acquisition unit 22, the difference data creation unit 23, and the determination unit 24 described above. The model storage unit 25 stores the learning model used by the determination unit 24. The learning model has been generated by the learning model generation device 10 shown in FIG. 1.
The image acquisition unit 21 functions in the same manner as the image acquisition unit 11 shown in FIG. 1. Each of the pair of images acquired by the image acquisition unit 21 is likewise an ortho image obtained by digitally processing an image taken from the sky.
The three-dimensional data acquisition unit 22 also functions in the same manner as the three-dimensional data acquisition unit 12 shown in FIG. 1. The three-dimensional data acquired by the three-dimensional data acquisition unit 22 likewise holds a height for each pixel of each image (see FIG. 2).
The difference data creation unit 23 also functions in the same manner as the difference data creation unit 13 shown in FIG. 1. For example, the difference data creation unit 23 obtains, for each pair of corresponding pixels of the two images, the differences in the R, G, and B values as the difference in pixel value. It also subtracts the height data held for each pixel of the other image from the height data held for each pixel of one image of the pair, and obtains the difference for each pair of corresponding pixels. The R-value difference, G-value difference, B-value difference, and height difference obtained in this way constitute the difference data (four-channel data).
When the determination unit 24 acquires the four-channel data created by the difference data creation unit 23, it applies the four-channel data to the learning model stored in the model storage unit 25, and outputs the output of the learning model as the result of determining the changes in the features in the specific area.
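Continuing the earlier sketch, and under the same assumptions (including the hypothetical ChangeClassifier and file name, which are not from this disclosure), the determination unit's application of the stored model could look as follows.

```python
import torch

CLASSES = ["new construction", "loss", "remodeling", "no change"]

def judge_change(model, difference_patch):
    """difference_patch: (4, H, W) tensor of R, G, B and height differences
    for one region; returns the class name with the highest score."""
    model.eval()
    with torch.no_grad():
        logits = model(difference_patch.unsqueeze(0))  # add a batch dimension
        return CLASSES[int(logits.argmax(dim=1))]

# Typical use: restore the parameters produced by the learning model
# generation device from the model storage unit, then judge each region.
# model = ChangeClassifier()
# model.load_state_dict(torch.load("model_store/change_model.pt"))
# print(judge_change(model, patch))
```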
Here, examples of the determination results of the determination unit 24 will be described with reference to FIGS. 11 to 13. FIG. 11 shows output example 1 of the feature change determination device according to the embodiment of the present invention, FIG. 12 shows output example 2, and FIG. 13 shows output example 3.
The example of FIG. 11 shows a case where a pair of images of a single building is input; a determination result with the same content as a label is output as text for the pair of images. The example of FIG. 12 shows a case where a pair of images of an area larger than a single building is input; an image in which a determination result is attached to each part of the area is output. The example of FIG. 13 shows a case where a pair of images of a specific area is input; an image in which a determination result is attached to each part of the specific area is output.
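As an illustration of how such per-region results could be rendered in the style of FIGS. 12 and 13, the following sketch tints each building region of the later image by its judged class; the colors, the region-mask format, and the blending are assumptions carried over from the earlier sketches, not part of this disclosure.

```python
import numpy as np

COLOR_BY_CLASS = {  # illustrative colors only
    "new construction": (255, 0, 0),
    "loss": (0, 0, 255),
    "remodeling": (0, 255, 0),
    "no change": (128, 128, 128),
}

def render_results(image_b, region_mask, judged):
    """Tint each region of the second-time-point image by its judged class.

    image_b: H x W x 3 uint8 image, region_mask: H x W integer mask as in the
    rasterization sketch, judged: dict mapping region id -> class name.
    """
    overlay = image_b.copy()
    for region_id, class_name in judged.items():
        overlay[region_mask == region_id] = COLOR_BY_CLASS[class_name]
    return (0.5 * image_b + 0.5 * overlay).astype(np.uint8)  # blend for display
```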
[Device Operation]
Next, the operation of the feature change determination device 20 according to the embodiment of the present invention will be described with reference to FIG. 14. FIG. 14 is a flowchart showing the operation of the feature change determination device according to the embodiment of the present invention. In the following description, FIGS. 9 to 13 are referred to as appropriate. In the present embodiment, the feature change determination method is carried out by operating the feature change determination device 20; accordingly, the description of the feature change determination method in the present embodiment is replaced by the following description of the operation of the feature change determination device 20.
As shown in FIG. 14, the image acquisition unit 21 first acquires a pair of images of the area subject to feature change determination (step B1).
Next, the three-dimensional data acquisition unit 22 acquires, for each of the pair of images acquired in step B1, three-dimensional data that holds, for each pixel, information on the height of the features in the area subject to feature change determination (step B2).
Next, the difference data creation unit 23 obtains, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creates four-channel data including the obtained differences (step B3).
Next, when the determination unit 24 acquires the four-channel data created in step B3, it applies the data to the learning model stored in the model storage unit 25 and determines the changes in the features (step B4). The determination unit 24 then outputs the determination result to an external terminal device, display device, or the like (step B5), and the result is displayed on the screen of that terminal device or display device (see FIGS. 11 to 13).
As described above, in the present embodiment, a learning model obtained by machine learning the relationship between the difference data obtained from a pair of images and changes in features is used to determine the changes in features in the target area. Therefore, in the present embodiment, the influence of external factors such as shooting conditions and noise can be suppressed when determining changes in features over time, and the determination accuracy can be improved.
[Program]
The second program in the present embodiment may be any program that causes a computer to execute steps B1 to B5 shown in FIG. 14. By installing this program on a computer and executing it, the feature change determination device 20 and the feature change determination method in the present embodiment can be realized. In this case, the processor of the computer functions as the image acquisition unit 21, the three-dimensional data acquisition unit 22, the difference data creation unit 23, and the determination unit 24, and performs the processing.
The second program in the present embodiment may also be executed by a computer system built from a plurality of computers. In this case, for example, each computer may function as any one of the image acquisition unit 21, the three-dimensional data acquisition unit 22, the difference data creation unit 23, and the determination unit 24.
[Modification]
Next, a modification of the feature change determination device 20 according to the present embodiment will be described with reference to FIG. 15. FIG. 15 is a block diagram showing the configuration of a modification of the feature change determination device according to the embodiment of the present invention.
As shown in FIG. 15, in this modification the feature change determination device 20 includes the label acquisition unit 14 and the learning model generation unit 15 shown in FIG. 1, in addition to the image acquisition unit 21, the three-dimensional data acquisition unit 22, the difference data creation unit 23, the determination unit 24, and the model storage unit 25. In this modification, the feature change determination device 20 therefore also has the functions of the learning model generation device 10 shown in FIG. 1, and can itself generate a learning model.
(Physical Configuration)
Here, a computer that realizes the learning model generation device 10 by executing the first program of the present embodiment, and a computer that realizes the feature change determination device 20 by executing the second program of the present embodiment, will be described with reference to FIG. 16. FIG. 16 is a block diagram showing an example of a computer that realizes the learning model generation device and the feature change determination device according to the embodiment of the present invention.
As shown in FIG. 16, a computer 110 includes a CPU 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected via a bus 121 so that they can exchange data with one another. The computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to, or instead of, the CPU 111.
The CPU 111 performs various operations by loading the programs (code) of the present embodiment stored in the storage device 113 into the main memory 112 and executing them in a predetermined order. The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory). The programs of the present embodiment are provided stored on a computer-readable recording medium 120; they may also be distributed over the Internet via the communication interface 117.
Specific examples of the storage device 113 include semiconductor storage devices such as flash memory, in addition to hard disk drives. The input interface 114 mediates data transmission between the CPU 111 and input devices 118 such as a keyboard and a mouse. The display controller 115 is connected to a display device 119 and controls the display on the display device 119.
The data reader/writer 116 mediates data transmission between the CPU 111 and the recording medium 120; it reads programs from the recording medium 120 and writes the processing results of the computer 110 to the recording medium 120. The communication interface 117 mediates data transmission between the CPU 111 and other computers.
Specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (Compact Flash (registered trademark)) and SD (Secure Digital), magnetic recording media such as flexible disks, and optical recording media such as CD-ROMs (Compact Disk Read Only Memory).
The learning model generation device 10 and the feature change determination device 20 of the present embodiment can each also be realized by using hardware corresponding to the respective units, instead of a computer on which a program is installed. Further, each of the learning model generation device 10 and the feature change determination device 20 may be realized partly by a program and partly by hardware.
Part or all of the embodiment described above can be expressed as the following (Appendix 1) to (Appendix 21), but is not limited to the following description.
(Appendix 1)
A learning model generation device comprising:
an image acquisition unit that acquires a pair of images obtained by photographing a specific area from the sky at different times;
a three-dimensional data acquisition unit that acquires, for each image of the pair, three-dimensional data holding, for each pixel, information on the height of the features captured as subjects;
a difference data creation unit that, for each pair of corresponding pixels of the pair of images, obtains the difference in pixel value and the difference in the heights held in the three-dimensional data, and creates difference data including the obtained differences;
a label acquisition unit that acquires, for each set of one or more pairs of corresponding pixels of the pair of images, a label set for that set and indicating a change in the feature; and
a learning model generation unit that generates a learning model specifying the relationship between the difference data and the change in the feature by machine learning the difference data and the labels.
(Appendix 2)
The learning model generation device according to Appendix 1, wherein the difference data creation unit obtains, for each pair of corresponding pixels of the pair of images, the differences in R value, G value, and B value as the difference in pixel value.
(Appendix 3)
The learning model generation device according to Appendix 1 or 2, wherein, when the feature is a building, the label acquisition unit acquires, for each set of one or more pairs of corresponding pixels of the pair of images, a label that includes at least one of new construction, loss, remodeling, and no change.
(Appendix 4)
A device for determining changes in features in a specific area, comprising:
an image acquisition unit that acquires a pair of images obtained by photographing the specific area from the sky at different times;
a three-dimensional data acquisition unit that acquires, for each image of the pair, three-dimensional data holding, for each pixel, information on the height of the features captured as subjects;
a difference data creation unit that, for each pair of corresponding pixels of the pair of images, obtains the difference in pixel value and the difference in the heights held in the three-dimensional data, and creates difference data including the obtained differences; and
a determination unit that applies the difference data to a learning model, obtained by machine learning the relationship between difference data of a pair of images and changes in features, and thereby determines the changes in the features in the specific area.
(Appendix 5)
The feature change determination device according to Appendix 4, wherein the difference data creation unit obtains, for each pair of corresponding pixels of the pair of images, the differences in R value, G value, and B value as the difference in pixel value.
(Appendix 6)
The feature change determination device according to Appendix 4 or 5, wherein, when the feature is a building, the learning model is obtained by machine learning the relationship between the difference data of a pair of images and changes in the feature that include at least one of new construction, loss, remodeling, and no change.
(Appendix 7)
The feature change determination device according to any one of Appendices 4 to 6, further comprising:
a label acquisition unit that acquires, for each set of one or more pairs of corresponding pixels of the pair of images, a label set for that set and indicating a change in the feature; and
a learning model generation unit that generates a learning model specifying the relationship between the difference data and the change in the feature by machine learning the difference data and the labels.
(Appendix 8)
A learning model generation method comprising:
(a) acquiring a pair of images obtained by photographing a specific area from the sky at different times;
(b) acquiring, for each image of the pair, three-dimensional data holding, for each pixel, information on the height of the features captured as subjects;
(c) obtaining, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences;
(d) acquiring, for each set of one or more pairs of corresponding pixels of the pair of images, a label set for that set and indicating a change in the feature; and
(e) generating a learning model that specifies the relationship between the difference data and the change in the feature by machine learning the difference data and the labels.
(Appendix 9)
The learning model generation method according to Appendix 8, wherein in step (c) the differences in R value, G value, and B value are obtained as the difference in pixel value for each pair of corresponding pixels of the pair of images.
(Appendix 10)
The learning model generation method according to Appendix 8 or 9, wherein, when the feature is a building, a label that includes at least one of new construction, loss, remodeling, and no change is acquired as the label in step (d), for each set of one or more pairs of corresponding pixels of the pair of images.
(Appendix 11)
A method for determining changes in features in a specific area, comprising:
(a) acquiring a pair of images obtained by photographing the specific area from the sky at different times;
(b) acquiring, for each image of the pair, three-dimensional data holding, for each pixel, information on the height of the features captured as subjects;
(c) obtaining, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences; and
(d) applying the difference data to a learning model, obtained by machine learning the relationship between difference data of a pair of images and changes in features, and thereby determining the changes in the features in the specific area.
(Appendix 12)
The feature change determination method according to Appendix 11, wherein in step (c) the differences in R value, G value, and B value are obtained as the difference in pixel value for each pair of corresponding pixels of the pair of images.
(Appendix 13)
The feature change determination method according to Appendix 11 or 12, wherein, when the feature is a building, the learning model is obtained by machine learning the relationship between the difference data of a pair of images and changes in the feature that include at least one of new construction, loss, remodeling, and no change.
(Appendix 14)
The feature change determination method according to any one of Appendices 11 to 13, further comprising:
(e) acquiring, for each set of one or more pairs of corresponding pixels of the pair of images, a label set for that set and indicating a change in the feature; and
(f) generating a learning model that specifies the relationship between the difference data and the change in the feature by machine learning the difference data and the labels.
(Appendix 15)
A computer-readable recording medium recording a program that includes instructions causing a computer to execute:
(a) acquiring a pair of images obtained by photographing a specific area from the sky at different times;
(b) acquiring, for each image of the pair, three-dimensional data holding, for each pixel, information on the height of the features captured as subjects;
(c) obtaining, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences;
(d) acquiring, for each set of one or more pairs of corresponding pixels of the pair of images, a label set for that set and indicating a change in the feature; and
(e) generating a learning model that specifies the relationship between the difference data and the change in the feature by machine learning the difference data and the labels.
(Appendix 16)
The computer-readable recording medium according to Appendix 15, wherein in step (c) the differences in R value, G value, and B value are obtained as the difference in pixel value for each pair of corresponding pixels of the pair of images.
(Appendix 17)
The computer-readable recording medium according to Appendix 15 or 16, wherein, when the feature is a building, a label that includes at least one of new construction, loss, remodeling, and no change is acquired as the label in step (d), for each set of one or more pairs of corresponding pixels of the pair of images.
(Appendix 18)
A computer-readable recording medium recording a program for causing a computer to determine changes in features in a specific area, the program including instructions causing the computer to execute:
(a) acquiring a pair of images obtained by photographing the specific area from the sky at different times;
(b) acquiring, for each image of the pair, three-dimensional data holding, for each pixel, information on the height of the features captured as subjects;
(c) obtaining, for each pair of corresponding pixels of the pair of images, the difference in pixel value and the difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences; and
(d) applying the difference data to a learning model, obtained by machine learning the relationship between difference data of a pair of images and changes in features, and thereby determining the changes in the features in the specific area.
(Appendix 19)
The computer-readable recording medium according to Appendix 18, wherein in step (c) the differences in R value, G value, and B value are obtained as the difference in pixel value for each pair of corresponding pixels of the pair of images.
(Appendix 20)
The computer-readable recording medium according to Appendix 18 or 19, wherein, when the feature is a building, the learning model is obtained by machine learning the relationship between the difference data of a pair of images and changes in the feature that include at least one of new construction, loss, remodeling, and no change.
(Appendix 21)
The computer-readable recording medium according to any one of Appendices 18 to 20, wherein the program further includes instructions causing the computer to execute:
(e) acquiring, for each set of one or more pairs of corresponding pixels of the pair of images, a label set for that set and indicating a change in the feature; and
(f) generating a learning model that specifies the relationship between the difference data and the change in the feature by machine learning the difference data and the labels.
Although the present invention has been described above with reference to the embodiment, the present invention is not limited to the above embodiment. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
This application claims priority based on Japanese Patent Application No. 2018-185480 filed on September 28, 2018, the entire disclosure of which is incorporated herein.
As described above, according to the present invention, when determining changes in features over time, the influence of external factors can be suppressed and the determination accuracy can be improved. The present invention is useful in fields that require determining temporal changes in features, such as surveying land use, surveying changes in terrain, judging the deterioration of infrastructure structures, and managing changes in forests.
Reference Signs List
10 Learning model generation device
11 Image acquisition unit
12 Three-dimensional data acquisition unit
13 Difference data creation unit
14 Label acquisition unit
15 Learning model generation unit
20 Feature change determination device
21 Image acquisition unit
22 Three-dimensional data acquisition unit
23 Difference data creation unit
24 Determination unit
25 Model storage unit
110 Computer
111 CPU
112 Main memory
113 Storage device
114 Input interface
115 Display controller
116 Data reader/writer
117 Communication interface
118 Input device
119 Display device
120 Recording medium
121 Bus

Claims (21)

  1.  A learning model generation device comprising:
     image acquisition means for acquiring a pair image obtained by photographing a specific area from the sky at different times;
     three-dimensional data acquisition means for acquiring, for each image of the pair image, three-dimensional data that holds, for each pixel, information on a height of a feature serving as a subject;
     difference data creation means for obtaining, for each pair of corresponding pixels of the pair image, a difference in pixel value and a difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences;
     label acquisition means for acquiring, for each pair of one or more corresponding pixels of the pair image, a label set for the pair and indicating a change of the feature; and
     learning model generation means for generating, by machine learning the difference data and the label, a learning model that specifies a relationship between the difference data and the change of the feature.
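Claim 1 leaves the learning algorithm unspecified; purely as an illustration, the sketch below trains a random-forest classifier on the per-pixel difference data and labels. The names `make_difference_data` and `train_change_model`, the label encoding, and the choice of scikit-learn are assumptions, not part of the claim.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_change_model(diff_data, labels):
    """diff_data: H x W x 4 array from make_difference_data().
    labels: H x W integer array of change classes set per pixel pair.
    """
    X = diff_data.reshape(-1, diff_data.shape[-1])   # one row per pixel pair
    y = labels.reshape(-1)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)                                  # learn difference -> change relationship
    return model
```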
  2.  The learning model generation device according to claim 1, wherein
     the difference data creation means obtains, for each pair of corresponding pixels of the pair image, differences in an R value, a G value, and a B value as the difference in pixel value.
  3.  The learning model generation device according to claim 1 or 2, wherein,
     when the feature is a building,
     the label acquisition means acquires, for each pair of one or more corresponding pixels of the pair image, a label including at least one of new construction, loss, renovation, and no change as the label.
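For a concrete, purely hypothetical picture of the labels named in claim 3, the per-pixel labels could be encoded as integer classes before training; the class names and numeric values below are assumptions, not part of the claim.

```python
# Illustrative only: one possible integer encoding of the change labels in claim 3.
CHANGE_LABELS = {
    "no_change": 0,
    "new_construction": 1,
    "loss": 2,        # building demolished or otherwise lost
    "renovation": 3,  # building remodeled
}

def encode_label(name: str) -> int:
    return CHANGE_LABELS[name]
```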
  4.  A device for determining a change of a feature in a specific area, comprising:
     image acquisition means for acquiring a pair image obtained by photographing the specific area from the sky at different times;
     three-dimensional data acquisition means for acquiring, for each image of the pair image, three-dimensional data that holds, for each pixel, information on a height of a feature serving as a subject;
     difference data creation means for obtaining, for each pair of corresponding pixels of the pair image, a difference in pixel value and a difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences; and
     determination means for determining the change of the feature in the specific area by applying the difference data to a learning model obtained by machine learning a relationship between difference data of a pair image and a change of a feature.
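A hedged sketch of the determination step of claim 4, reusing the hypothetical helpers introduced in the earlier sketches: the difference data of a new pair image is fed to the learned model, yielding a per-pixel change class for the specific area.

```python
def determine_changes(model, img_t1, img_t2, dsm_t1, dsm_t2):
    """Apply a trained model to a new image pair and return an H x W change map."""
    diff = make_difference_data(img_t1, img_t2, dsm_t1, dsm_t2)
    X = diff.reshape(-1, diff.shape[-1])
    pred = model.predict(X)              # one predicted change class per pixel pair
    return pred.reshape(diff.shape[:2])  # H x W map of predicted feature changes
```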
  5.  The feature change determination device according to claim 4, wherein
     the difference data creation means obtains, for each pair of corresponding pixels of the pair image, differences in an R value, a G value, and a B value as the difference in pixel value.
  6.  The feature change determination device according to claim 4 or 5, wherein,
     when the feature is a building,
     the learning model has been obtained by machine learning a relationship between the difference data of the pair image and a change of the feature including at least one of new construction, loss, renovation, and no change.
  7.  The feature change determination device according to any one of claims 4 to 6, further comprising:
     label acquisition means for acquiring, for each pair of one or more corresponding pixels of the pair image, a label set for the pair and indicating the change of the feature; and
     learning model generation means for generating, by machine learning the difference data and the label, a learning model that specifies the relationship between the difference data and the change of the feature.
  8.  A learning model generation method comprising:
     (a) acquiring a pair image obtained by photographing a specific area from the sky at different times;
     (b) acquiring, for each image of the pair image, three-dimensional data that holds, for each pixel, information on a height of a feature serving as a subject;
     (c) obtaining, for each pair of corresponding pixels of the pair image, a difference in pixel value and a difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences;
     (d) acquiring, for each pair of one or more corresponding pixels of the pair image, a label set for the pair and indicating a change of the feature; and
     (e) generating, by machine learning the difference data and the label, a learning model that specifies a relationship between the difference data and the change of the feature.
  9.  The learning model generation method according to claim 8, wherein,
     in (c), differences in an R value, a G value, and a B value are obtained as the difference in pixel value for each pair of corresponding pixels of the pair image.
  10.  The learning model generation method according to claim 8 or 9, wherein,
     when the feature is a building,
     in (d), a label including at least one of new construction, loss, renovation, and no change is acquired as the label for each pair of one or more corresponding pixels of the pair image.
  11.  A method for determining a change of a feature in a specific area, comprising:
     (a) acquiring a pair image obtained by photographing the specific area from the sky at different times;
     (b) acquiring, for each image of the pair image, three-dimensional data that holds, for each pixel, information on a height of a feature serving as a subject;
     (c) obtaining, for each pair of corresponding pixels of the pair image, a difference in pixel value and a difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences; and
     (d) determining the change of the feature in the specific area by applying the difference data to a learning model obtained by machine learning a relationship between difference data of a pair image and a change of a feature.
  12.  The feature change determination method according to claim 11, wherein,
     in (c), differences in an R value, a G value, and a B value are obtained as the difference in pixel value for each pair of corresponding pixels of the pair image.
  13.  The feature change determination method according to claim 11 or 12, wherein,
     when the feature is a building,
     the learning model has been obtained by machine learning a relationship between the difference data of the pair image and a change of the feature including at least one of new construction, loss, renovation, and no change.
  14.  The feature change determination method according to any one of claims 11 to 13, further comprising:
     (e) acquiring, for each pair of one or more corresponding pixels of the pair image, a label set for the pair and indicating the change of the feature; and
     (f) generating, by machine learning the difference data and the label, a learning model that specifies the relationship between the difference data and the change of the feature.
  15.  A computer-readable recording medium storing a program including instructions for causing a computer to execute:
     (a) a step of acquiring a pair image obtained by photographing a specific area from the sky at different times;
     (b) a step of acquiring, for each image of the pair image, three-dimensional data that holds, for each pixel, information on a height of a feature serving as a subject;
     (c) a step of obtaining, for each pair of corresponding pixels of the pair image, a difference in pixel value and a difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences;
     (d) a step of acquiring, for each pair of one or more corresponding pixels of the pair image, a label set for the pair and indicating a change of the feature; and
     (e) a step of generating, by machine learning the difference data and the label, a learning model that specifies a relationship between the difference data and the change of the feature.
  16.  The computer-readable recording medium according to claim 15, wherein,
     in the step (c), differences in an R value, a G value, and a B value are obtained as the difference in pixel value for each pair of corresponding pixels of the pair image.
  17.  The computer-readable recording medium according to claim 15 or 16, wherein,
     when the feature is a building,
     in the step (d), a label including at least one of new construction, loss, renovation, and no change is acquired as the label for each pair of one or more corresponding pixels of the pair image.
  18.  A computer-readable recording medium recording a program for causing a computer to determine a change of a feature in a specific area, the program including instructions for causing the computer to execute:
     (a) a step of acquiring a pair image obtained by photographing the specific area from the sky at different times;
     (b) a step of acquiring, for each image of the pair image, three-dimensional data that holds, for each pixel, information on a height of a feature serving as a subject;
     (c) a step of obtaining, for each pair of corresponding pixels of the pair image, a difference in pixel value and a difference in the heights held in the three-dimensional data, and creating difference data including the obtained differences; and
     (d) a step of determining the change of the feature in the specific area by applying the difference data to a learning model obtained by machine learning a relationship between difference data of a pair image and a change of a feature.
  19.  The computer-readable recording medium according to claim 18, wherein,
     in the step (c), differences in an R value, a G value, and a B value are obtained as the difference in pixel value for each pair of corresponding pixels of the pair image.
  20.  The computer-readable recording medium according to claim 18 or 19, wherein,
     when the feature is a building,
     the learning model has been obtained by machine learning a relationship between the difference data of the pair image and a change of the feature including at least one of new construction, loss, renovation, and no change.
  21.  The computer-readable recording medium according to any one of claims 18 to 20, wherein
     the program further includes instructions for causing the computer to execute:
     (e) a step of acquiring, for each pair of one or more corresponding pixels of the pair image, a label set for the pair and indicating the change of the feature; and
     (f) a step of generating, by machine learning the difference data and the label, a learning model that specifies the relationship between the difference data and the change of the feature.
PCT/JP2019/036411 2018-09-28 2019-09-17 Learning model generation device, ground object change determination device, learning model generation method, ground object change determination method, and computer-readable recording medium WO2020066755A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020548522A JP7294678B2 (en) 2018-09-28 2019-09-17 Learning model generation device, feature change determination device, learning model generation method, feature change determination method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-185480 2018-09-28
JP2018185480 2018-09-28

Publications (1)

Publication Number Publication Date
WO2020066755A1 true WO2020066755A1 (en) 2020-04-02

Family

ID=69950526

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/036411 WO2020066755A1 (en) 2018-09-28 2019-09-17 Learning model generation device, ground object change determination device, learning model generation method, ground object change determination method, and computer-readable recording medium

Country Status (2)

Country Link
JP (1) JP7294678B2 (en)
WO (1) WO2020066755A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7345035B1 (en) 2022-10-20 2023-09-14 スカパーJsat株式会社 Aerial image change extraction device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010158565A (en) * 2010-04-19 2010-07-22 Olympus Corp Capsule endoscope
JP2017033197A (en) * 2015-07-30 2017-02-09 日本電信電話株式会社 Change area detection device, method, and program
JP2017220058A (en) * 2016-06-08 2017-12-14 株式会社日立ソリューションズ Surface information analysis system and surface information analysis method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5366190B2 (en) * 2008-11-25 2013-12-11 Necシステムテクノロジー株式会社 BUILDING CHANGE DETECTION DEVICE, BUILDING CHANGE DETECTION METHOD, AND PROGRAM


Also Published As

Publication number Publication date
JP7294678B2 (en) 2023-06-20
JPWO2020066755A1 (en) 2021-09-02

Similar Documents

Publication Publication Date Title
US10205896B2 (en) Automatic lens flare detection and correction for light-field images
JP2022541977A (en) Image labeling method, device, electronic device and storage medium
US9070345B2 (en) Integrating street view with live video data
JP5091994B2 (en) Motion vector detection device
US20190011269A1 (en) Position estimation device, position estimation method, and recording medium
US20180350028A1 (en) Compound Shader Object and Use Thereof
WO2020066755A1 (en) Learning model generation device, ground object change determination device, learning model generation method, ground object change determination method, and computer-readable recording medium
CN111870953A (en) Height map generation method, device, equipment and storage medium
CN105279268A (en) Multi-data-source map download method
JP6550305B2 (en) Road data generation device, road data generation method, and program
JP4875564B2 (en) Flicker correction apparatus and flicker correction method
JP6249508B2 (en) Change detection support device, change detection support method, and program
JP7151742B2 (en) Image conversion device, image conversion method, and computer program for image conversion
JP2015507736A (en) System and method for estimating target size
JP7395705B2 (en) Estimation device, estimation method and program
CN107862679B (en) Method and device for determining image detection area
JP6601893B2 (en) Image processing apparatus, image processing method, and program
US11922659B2 (en) Coordinate calculation apparatus, coordinate calculation method, and computer-readable recording medium
JP6897448B2 (en) Line width estimation programs, devices, and methods
KR101028767B1 (en) A Method For Updating Geographic Image Information
US10380463B2 (en) Image processing device, setting support method, and non-transitory computer-readable media
WO2017090705A1 (en) Image processing device, image processing method and computer-readable recording medium
JP2013219504A (en) Image processing apparatus and image processing program
RU2818049C2 (en) Method of correcting digital elevation model (embodiments)
KR102653971B1 (en) Method for detection spatial change based on artificial intelligence and apparatus for performing the method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19865472

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020548522

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19865472

Country of ref document: EP

Kind code of ref document: A1