WO2022209583A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2022209583A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
time
date
information processing
difference
Prior art date
Application number
PCT/JP2022/009399
Other languages
French (fr)
Japanese (ja)
Inventor
逸平 難波田
令司 松本
Original Assignee
パイオニア株式会社
パイオニアスマートセンシングイノベーションズ株式会社
Priority date
Filing date
Publication date
Application filed by パイオニア株式会社, パイオニアスマートセンシングイノベーションズ株式会社
Publication of WO2022209583A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 7/00: Image analysis
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 29/00: Maps; Plans; Charts; Diagrams, e.g. route diagram

Definitions

  • the present invention relates to an information processing device, an information processing method, and a program.
  • a shadow or the like may appear in the image, and it is necessary to separate the changed portion to be detected from the change of the shadow or the like.
  • a photographed image of a road surface includes shadows of vehicles running on the road surface and structures on the roadside.
  • Patent Document 1 describes that an area having a luminance lower than the luminance of the valley of a histogram of a luminance image is determined as a shadow area, and an area having a luminance higher than the luminance of the valley is determined as a non-shadow area. In the technique of Patent Document 1, change extraction is further performed after shadow removal processing is performed on the luminance image.
  • Patent Document 2 describes determining whether or not a partial area constituting an image is a shadow area by performing threshold processing on the ratio of luminance values of corresponding pixels in a plurality of images to be compared, and removing the partial area from the image when it is determined to be a shadow area.
  • One example of the problem to be solved by the present invention is to provide a technique for detecting changes in images easily and accurately.
  • an acquisition unit that acquires a first image captured at a first date and time, a second image captured at a second date and time, a third image captured at a third date and time, and a fourth image captured at a fourth date and time; and a difference image generating unit that generates a first difference image indicating the difference between the first image and the third image and a second difference image indicating the difference between the second image and the fourth image;
  • an extraction unit that extracts a common component between the first difference image and the second difference image as a first common component
  • both the third date and time and the fourth date and time are after the first date and time and after the second date and time.
  • both the third date and time and the fourth date and time are after the first date and time and after the second date and time.
  • The invention according to claim 16 is a program that causes a computer to execute each step of the information processing method according to claim 15.
  • FIG. 1 is a block diagram illustrating the functional configuration of an information processing apparatus according to a first embodiment
  • FIG. 2 is a diagram for explaining processing performed by the information processing apparatus according to the first embodiment
  • FIG. 3 is a flowchart illustrating the flow of an information processing method according to the first embodiment
  • FIG. 4 is a flowchart illustrating in detail processing performed by the information processing apparatus according to the first embodiment
  • FIG. 5 is a flow chart showing a modification of the processing performed by the information processing apparatus according to the first embodiment
  • FIG. 6 is a diagram illustrating a computer for realizing the information processing apparatus
  • FIG. 7 is a diagram for explaining processing performed by an information processing apparatus according to the second embodiment
  • FIG. 8 is a diagram for explaining processing performed by an information processing apparatus according to the second embodiment
  • FIG. 9 is a block diagram illustrating the functional configuration of an information processing apparatus according to a third embodiment
  • FIG. 10 is a flowchart illustrating the flow of an information processing method performed by an information processing apparatus according to the third embodiment
  • FIG. 11 is a diagram for explaining processing performed by an information processing apparatus according to a fourth embodiment
  • each component of the information processing apparatus 10 indicates a functional unit block, not a hardware unit configuration, unless otherwise specified.
  • Each component of the information processing apparatus 10 is realized by an arbitrary combination of hardware and software, centered on a CPU of an arbitrary computer, a memory, a program loaded into the memory, a storage medium such as a hard disk that stores the program, and an interface for network connection. There are various modifications of the implementation method and the device.
  • FIG. 1 is a block diagram illustrating the functional configuration of an information processing device 10 according to the first embodiment.
  • the information processing apparatus 10 includes an acquisition unit 120 , a difference image generation unit 140 and an extraction unit 160 .
  • The acquisition unit 120 acquires a first image taken on a first date and time, a second image taken on a second date and time, a third image taken on a third date and time, and a fourth image taken on a fourth date and time.
  • the difference image generator 140 generates a first difference image indicating the difference between the first image and the third image, and a second difference image indicating the difference between the second image and the fourth image.
  • the extraction unit 160 extracts a common component between the first difference image and the second difference image as a first common component. Both the third date and time and the fourth date and time are after the first date and time and after the second date and time. A detailed description is given below.
  • FIG. 2 is a diagram for explaining the processing performed by the information processing device 10 according to this embodiment.
  • image A1 corresponds to the first image
  • image A2 corresponds to the second image
  • image B1 corresponds to the third image
  • image B2 corresponds to the fourth image.
  • images A1 to A3 and images B1 to B3 are respectively orthoimages including roads 31, and white lines 30 are drawn on the roads 31.
  • A shadow 32 appears in the images A1 to A3 and the images B1 to B3.
  • the information processing apparatus 10 can detect this change in the white line 30 .
  • images A1 to A3 and images B1 to B3 are different images.
  • images A1 to A3 are images taken at different times on the same day
  • images B1 to B3 are images taken at different times on the same day.
  • the date on which the images B1 to B3 were taken is later than the date on which the images A1 to A3 were taken.
  • a difference image C1 indicates the difference between the image A1 and the image B1.
  • a difference image C2 indicates the difference between the image A2 and the image B2.
  • a difference image C3 indicates the difference between the image A3 and the image B3.
  • the common component image D1 is an image showing common components of the difference image C1, the difference image C2, and the difference image C3. From this common component image D1, only the changing portion of the white line 30 is extracted.
  • the difference image C1 corresponds to the first difference image
  • the difference image C2 corresponds to the second difference image.
  • the common component image D1 corresponds to an image representing the first common component extracted by the extraction unit 160.
  • The change occurring between the time when the first image and the second image were captured and the time when the third image and the fourth image were captured can be extracted as the first common component. On the other hand, a change between the first image and the second image, or a change between the third image and the fourth image, is not extracted as the first common component. Therefore, short-term changes such as shadows and long-term changes such as white lines and buildings can be detected separately.
  • Since the method according to the present embodiment does not rely on characteristics unique to shadows, it is less likely to be affected by uncertain factors related to shadow determination, and it can also handle factors other than shadows that cause short-term changes. The influence of short-term changes can be removed without considering whether the cause is fallen leaves, shadows, or snow. Furthermore, the method according to the present embodiment does not generate an intermediate image with shadows removed. Therefore, the processing is simple and the load is low, and factors that lower the detection accuracy of the changed portion are less likely to be introduced. As a result, the changed portion can be detected with high accuracy.
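As an illustration of the idea above (added here as an editorial sketch, not part of the original disclosure), the following assumes the four images have already been converted to aligned, binary edge images of equal size held as NumPy arrays; the function and variable names are hypothetical.

```python
import numpy as np

def difference_image(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Mark pixels whose values differ between two aligned images (1 = difference)."""
    return (img_a != img_b).astype(np.uint8)

def first_common_component(diff_images: list[np.ndarray]) -> np.ndarray:
    """Keep only pixels flagged as a difference in every difference image."""
    return np.all(np.stack(diff_images) == 1, axis=0).astype(np.uint8)

# a1, a2: edge images from the first period; b1, b2: edge images from the second period
# diff_1 = difference_image(a1, b1)                    # first difference image
# diff_2 = difference_image(a2, b2)                    # second difference image
# change = first_common_component([diff_1, diff_2])    # long-term change only
```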
  • FIG. 3 is a flowchart illustrating the flow of the information processing method according to this embodiment.
  • the information processing method according to this embodiment includes an acquisition step S10, a difference image generation step S20, and an extraction step S30.
  • In the acquisition step S10, a first image taken on a first date and time, a second image taken on a second date and time, a third image taken on a third date and time, and a fourth image taken on a fourth date and time are acquired.
  • In the difference image generating step S20, a first difference image indicating the difference between the first image and the third image and a second difference image indicating the difference between the second image and the fourth image are generated.
  • In the extraction step S30, a common component between the first difference image and the second difference image is extracted as a first common component.
  • Both the third date and time and the fourth date and time are after the first date and time and after the second date and time.
  • the information processing method according to this embodiment can be executed by the information processing apparatus 10 according to this embodiment.
  • FIG. 4 is a flowchart illustrating in detail the processing performed by the information processing apparatus 10 according to this embodiment. Processing performed by the information processing apparatus 10 according to the present embodiment will be described in detail below with reference to FIGS. 1 and 4.
  • A series of processes for detecting changed portions using the images acquired by the acquisition unit 120 is hereinafter referred to as “detection processing”.
  • the acquisition unit 120 acquires images from, for example, the storage unit 200 accessible from the acquisition unit 120 .
  • FIG. 1 shows an example in which the storage unit 200 is provided outside the information processing device 10
  • the storage unit 200 may be provided inside the information processing device 10 .
  • a plurality of images are stored in advance in the storage unit 200 in association with shooting dates and shooting positions.
  • Acquisition unit 120 can acquire an image from storage unit 200 by specifying a position and time.
  • the shooting position of the image can be specified by latitude and longitude, or can be specified by using an identifier that indicates an individual intersection or landmark.
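A minimal sketch of such a storage lookup is shown below; the ImageStore class, its record fields, and the position identifier are hypothetical stand-ins for the storage unit 200.

```python
from datetime import datetime

class ImageStore:
    """Hypothetical in-memory stand-in for the storage unit 200: each record holds
    an image array together with its shooting date and shooting position."""

    def __init__(self, records: list[dict]):
        self._records = records   # each dict: {"image", "shot_at", "position_id"}

    def query(self, position_id: str, start: datetime, end: datetime) -> list[dict]:
        """Return the stored records whose position matches and whose shooting
        date falls inside the requested period."""
        return [r for r in self._records
                if r["position_id"] == position_id and start <= r["shot_at"] <= end]
```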
  • the image acquired by the acquisition unit 120 is not particularly limited, but may be an image of a road, a structure, or the like. Also, the image acquired by the acquisition unit 120 is not particularly limited, but is, for example, an image captured by an imaging device provided on a moving object (vehicle, two-wheeled vehicle, etc.) that moves on a road.
  • the image acquired by the acquisition unit 120 may be an orthoimage, a top view image of a road, a structure, or the like taken from directly above, a side image, an aerial photograph, or the like. Further, when detecting a three-dimensional change, the acquisition unit 120 may acquire an image based on point cloud data acquired by LiDAR. Among them, the image acquired by the acquisition unit 120 for detection processing is preferably an orthoimage.
  • the acquisition unit 120 may orthorectify the image read from the storage unit 200 to obtain an image used for detection processing. It is preferable that the images acquired by the acquisition unit 120 for the detection process are of the same type.
  • the information processing apparatus 10 uses the image acquired by the acquisition unit 120 to perform processing for detecting changed portions.
  • the images acquired by the acquisition unit 120 for detection processing include at least a first image, a second image, a third image, and a fourth image.
  • the image acquired by the acquisition unit 120 for the detection process may further include one or more images.
  • the images acquired by the acquisition unit 120 for detection processing are preferably images including the same object.
  • it is preferable that the images acquired by the acquisition unit 120 for the detection process have overlapping photographing areas.
  • the information processing device 10 can detect, for example, long-term changes in the target.
  • the object is the white line.
  • Objects include, but are not limited to, structures such as buildings, signs, traffic lights, and information boards, division lines such as white lines drawn on roads, crosswalks, road markings, road surfaces, and the like.
  • the paint drawn on the road is peeled off due to deterioration over time, road construction, etc., and is repainted anew, so it is necessary to detect these changes.
  • the image acquired by the acquisition unit 120 for detection processing may further include a short-term change substance.
  • a short-term changeable substance changes temporarily, and the information processing apparatus 10 can output a detection result excluding changes due to a short-term changeable substance.
  • the short-term change object is not particularly limited, but may be, for example, at least one of shadows, snow, puddles, earth and sand, vehicles, fallen objects, fallen leaves, and garbage.
  • the first image is an image taken on the first date and time
  • the second image is an image taken on the second date and time
  • the third image is an image taken on the third date and time
  • the fourth image is an image taken on the fourth date and time.
  • Both the third date and time and the fourth date and time are after the first date and time and after the second date and time.
  • the interval between the first date and time and the second date and time is preferably narrower than the interval between the first date and time and the third date and time.
  • the interval between the first date and time and the second date and time is preferably narrower than the interval between the second date and time and the fourth date and time.
  • the interval between the third date and time and the fourth date and time is preferably narrower than the interval between the first date and time and the third date and time. Also, the interval between the third date and time and the fourth date and time is preferably narrower than the interval between the second date and time and the fourth date and time.
  • the first date and time are different from the second date and time
  • the third date and time are different from the fourth date and time.
  • the first date and time and the second date and time are included in the first period
  • the third date and time and the fourth date and time are included in the second period.
  • The second period is later than the first period. According to the information processing device 10, it is possible to detect a change in the target between the first period and the second period.
  • the lengths of the first period and the second period are not particularly limited, they are, for example, one day.
  • the information processing apparatus 10 accepts designation of the detection target position and time.
  • the position and time of the detection target can be input to the information processing device 10 by the user, for example.
  • the position of the detection target may be specified, for example, by latitude and longitude, or may be specified by an intersection or landmark identifier as described above.
  • the position of the detection target may be designated by a distance or a relative position from a predetermined reference point. Landmarks include traffic lights, signs, bus stops, pedestrian crossings, and the like.
  • The designation of the detection target period is performed by designating the first period and the second period. Each period can be designated by specifying its start and end, or by specifying its start and length.
  • the acquisition unit 120 selects an image corresponding to the detection target position and time from the storage unit 200 in S102 and acquires it for the detection process. Specifically, the acquisition unit 120 acquires an image including the position designated as the detection target position. Alternatively, the acquisition unit 120 acquires an image captured from a position designated as a detection target position. When the detection target position is specified by an intersection or landmark identifier, the acquisition unit 120 acquires an image including the intersection or landmark indicated by the identifier. In the present embodiment, the acquiring unit 120 acquires two or more images captured during the first period, and acquires two or more images captured during the second period.
  • Acquisition unit 120 acquires the first image and the second image such that the first time period includes the first date and time and the second date and time, for example. Acquisition unit 120 also acquires the third image and the fourth image such that the second time period includes the third date and time and the fourth date and time.
  • the difference image generation unit 140 performs edge detection in S201. Specifically, the differential image generation unit 140 performs edge detection processing on each image acquired by the acquisition unit 120 .
  • the edge detection process can be based, for example, on the luminance gradients of the image. However, when trying to detect a change in color, edge detection processing may be performed further based on the hue of the image.
  • the difference image generation unit 140 may perform correction processing or the like on the image acquired by the acquisition unit 120 as necessary. An edge image is obtained by edge detection processing.
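A minimal sketch of such an edge detection step follows; it assumes OpenCV (cv2) and NumPy, uses Sobel luminance gradients, and the hue-based variant and the threshold value are illustrative assumptions rather than values from the disclosure.

```python
import cv2
import numpy as np

def edge_image(bgr: np.ndarray, use_hue: bool = False, thresh: float = 50.0) -> np.ndarray:
    """Binary edge image from luminance gradients; optionally from the hue channel
    when a change in color is to be detected."""
    if use_hue:
        channel = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    else:
        channel = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(channel, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(channel, cv2.CV_32F, 0, 1, ksize=3)
    return (cv2.magnitude(gx, gy) > thresh).astype(np.uint8)
```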
  • a difference image is an image showing a difference between any image taken in the first period and any image taken in the second period.
  • In S202, the difference image generation unit 140 associates the positions of the edge images obtained in S201. Then, the values of the pixels at corresponding positions are compared between two edge images whose positions are associated with each other. That is, the values of pixels corresponding to the same position in real space are compared. Pixels with matching pixel values are determined to be common portions, and pixels with non-matching pixel values are determined to be differences.
  • a difference image is an image in which the pixels determined to be the difference are identifiable from the other pixels.
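One possible way to associate positions and compare pixel values is sketched below, under the simplifying assumption that the two edge images differ only by a translation; cv2.phaseCorrelate is used for the shift estimate, and the sign convention of the recovered shift may need adjusting in practice.

```python
import cv2
import numpy as np

def aligned_difference(edge_a: np.ndarray, edge_b: np.ndarray) -> np.ndarray:
    """Associate the positions of two edge images (translation only in this sketch)
    and mark pixels whose values do not match (1 = difference, 0 = common)."""
    (dx, dy), _ = cv2.phaseCorrelate(edge_a.astype(np.float32), edge_b.astype(np.float32))
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    h, w = edge_b.shape[:2]
    edge_b_aligned = cv2.warpAffine(edge_b, shift, (w, h))
    return (edge_a != edge_b_aligned).astype(np.uint8)
```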
  • the combination of two images used to generate the difference image is not particularly limited, but it is preferable to combine images in which the states of short-term objects such as shadows are different from each other.
  • the differential image generation unit 140 preferably derives the direction of the sun based on the shooting date and time associated with each image, and generates the differential image using two images in which the direction of the sun is separated by a predetermined angle or more.
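A sketch of such a sun-direction check is given below. It assumes the third-party pysolar library (any solar-position routine would do), timezone-aware timestamps, and an illustrative 30-degree threshold for the "predetermined angle".

```python
from datetime import datetime, timezone
from pysolar.solar import get_azimuth  # third-party; any solar-position routine would work

def sun_directions_far_apart(t1: datetime, t2: datetime,
                             lat: float, lon: float,
                             min_angle_deg: float = 30.0) -> bool:
    """True when the sun azimuths at the two capture times differ by at least min_angle_deg."""
    diff = abs(get_azimuth(lat, lon, t1) - get_azimuth(lat, lon, t2)) % 360.0
    return min(diff, 360.0 - diff) >= min_angle_deg

# hypothetical location and capture times (pysolar expects timezone-aware datetimes)
# ok = sun_directions_far_apart(datetime(2021, 6, 1, 0, 0, tzinfo=timezone.utc),
#                               datetime(2021, 6, 1, 6, 0, tzinfo=timezone.utc),
#                               35.0, 139.0)
```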
  • the position and orientation of the image are associated with GPS information or the like
  • the difference image can further be associated with position information based on the position information of the original image of the difference image.
  • In S301, the extraction unit 160 extracts common components of a plurality of difference images.
  • the extractor 160 can generate an image showing this common component.
  • the extraction unit 160 associates the positions of the plurality of difference images in the same manner as the positions of the edge images were associated in S202. Then, values of pixels at corresponding positions are compared in a plurality of difference images with associated positions. Then, the pixels having the same pixel values are determined as the common portion.
  • pixels having a ratio of matching pixel values equal to or higher than a predetermined threshold value are determined to be common portions.
  • the image showing the pixels determined as the common portion so as to be identifiable from the other pixels is the image showing the first common component.
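The extraction in S301 can be sketched as follows; this is an editorial illustration rather than the claimed implementation. The ratio threshold corresponds to the note above, and a ratio of 1.0 requires a pixel to be flagged in every difference image.

```python
import numpy as np

def extract_common_component(diff_images: list[np.ndarray],
                             match_ratio: float = 1.0) -> np.ndarray:
    """Mark a pixel as part of the first common component when the fraction of
    difference images flagging it is at least match_ratio."""
    fraction = np.stack(diff_images).astype(np.float32).mean(axis=0)
    return (fraction >= match_ratio).astype(np.uint8)
```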
  • the information processing device 10 further includes an output unit 180 that outputs an image showing the first common component.
  • The output unit 180 outputs data for displaying the image representing the first common component generated by the extraction unit 160 on a display device. Note that if the extraction result shows that there is no common component in the difference images, the output unit 180 may output information indicating that there is no changed portion. The output unit 180 may also output information indicating that there is no changed portion when the ratio of the number of pixels indicating the common component in the image indicating the first common component is equal to or less than a predetermined ratio.
  • detection processing may be performed again using a different image for the same detection target position and time. Then, when the degree of matching of the results of a plurality of detection processes is equal to or higher than a predetermined standard, the output section 180 may output the results. By doing so, more reliable results can be output.
  • FIG. 5 is a flowchart showing a modified example of processing performed by the information processing apparatus 10 according to this embodiment. According to this modified example, by increasing the number of images used in the detection process, it is possible to improve the detection accuracy of the changed portion. In this modified example, a predetermined number of differential images are generated. This modification is the same as the example of FIG. 4 except for the points described below.
  • S101 in this example is the same as S101 in FIG. 4.
  • the acquisition unit 120 selects an image corresponding to the detection target position and time from the storage unit 200 in S103 and acquires it for the detection process.
  • the acquisition unit 120 acquires one image captured during the first period and one image captured during the second period.
  • the combination of two images acquired by the acquisition unit 120 is not particularly limited, but a combination of images with different shadow states is preferable.
  • the acquisition unit 120 preferably derives the direction of the sun based on the date and time of photography associated with each image, and acquires two images in which the direction of the sun is separated by a predetermined angle or more.
  • the difference image generation unit 140 performs edge detection in S201.
  • S201 in this example is the same as S201 in FIG. 4.
  • a difference image is an image showing a difference between an image captured in the first period and an image captured in the second period.
  • The method for generating the difference image is the same as the method described for S202 in FIG. 4.
  • In S204, the difference image generation unit 140 determines whether the number of difference images generated so far has reached a predetermined number. If the number of difference images has not reached the predetermined number (N in S204), the process returns to S103, and the acquisition unit 120 acquires images again. When the number of difference images reaches the predetermined number (Y in S204), the difference image generation unit 140 outputs all the generated difference images to the extraction unit 160.
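The loop of S103, S202, and S204 in this modified example can be sketched as follows; pick_image_pair and make_difference_image are hypothetical stand-ins for the storage lookup and the difference-image step.

```python
import numpy as np
from typing import Callable

def collect_difference_images(pick_image_pair: Callable[[], tuple[np.ndarray, np.ndarray]],
                              make_difference_image: Callable[[np.ndarray, np.ndarray], np.ndarray],
                              required: int = 5) -> list[np.ndarray]:
    """Repeat S103 and S202 until the predetermined number of difference images is reached (S204)."""
    diffs: list[np.ndarray] = []
    while len(diffs) < required:
        img_first_period, img_second_period = pick_image_pair()                   # S103
        diffs.append(make_difference_image(img_first_period, img_second_period))  # S202
    return diffs
```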
  • the extraction unit 160 extracts common components of the plurality of difference images acquired from the difference image generation unit 140 .
  • The method of extracting the common part is the same as in S301 in FIG. 4.
  • the output unit 180 also outputs data for displaying the image representing the first common component generated by the extraction unit 160 on the display device.
  • When the extraction unit 160 extracts common components using three or more difference images, the same image may be used to generate two or more of the difference images.
  • However, the same image cannot be used to generate all of the difference images unless that image contains no short-term change objects.
  • Each functional configuration unit of the information processing apparatus 10 may be implemented by hardware (eg, hardwired electronic circuit) that implements each functional configuration unit, or may be implemented by a combination of hardware and software (eg, combination of an electronic circuit and a program for controlling it, etc.).
  • a case in which each functional configuration unit of the information processing apparatus 10 is implemented by a combination of hardware and software will be further described below.
  • FIG. 6 is a diagram illustrating a computer 1000 for realizing the information processing apparatus 10.
  • The computer 1000 is any computer.
  • the computer 1000 is an SoC (System On Chip), a personal computer (PC), a server machine, a tablet terminal, a smart phone, or the like.
  • the computer 1000 may be a dedicated computer designed to implement the information processing apparatus 10, or may be a general-purpose computer.
  • the computer 1000 has a bus 1020 , a processor 1040 , a memory 1060 , a storage device 1080 , an input/output interface 1100 and a network interface 1120 .
  • the bus 1020 is a data transmission path through which the processor 1040, memory 1060, storage device 1080, input/output interface 1100, and network interface 1120 mutually transmit and receive data.
  • the processor 1040 is various processors such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or an FPGA (Field-Programmable Gate Array).
  • the memory 1060 is a main memory implemented using a RAM (Random Access Memory) or the like.
  • the storage device 1080 is an auxiliary storage device implemented using a hard disk, SSD (Solid State Drive), memory card, ROM (Read Only Memory), or the like.
  • the input/output interface 1100 is an interface for connecting the computer 1000 and input/output devices.
  • the input/output interface 1100 is connected to an input device such as a keyboard and an output device such as a display device.
  • the network interface 1120 is an interface for connecting the computer 1000 to the network.
  • This communication network is, for example, a LAN (Local Area Network) or a WAN (Wide Area Network).
  • a method for connecting the network interface 1120 to the network may be a wireless connection or a wired connection.
  • the storage device 1080 stores program modules that implement each functional component of the information processing apparatus 10 .
  • the processor 1040 reads each program module into the memory 1060 and executes it, thereby realizing the function corresponding to each program module.
  • In the present embodiment, the difference image generation unit 140 generates the first difference image indicating the difference between the first image and the third image, and the second difference image indicating the difference between the second image and the fourth image.
  • the extraction unit 160 extracts a common component between the first difference image and the second difference image as a first common component. Therefore, image change detection can be performed easily and accurately.
  • (Second embodiment) FIGS. 7 and 8 are diagrams for explaining the processing performed by the information processing apparatus 10 according to the second embodiment.
  • the information processing apparatus 10 according to this embodiment is the same as the information processing apparatus 10 according to the first embodiment except for the points described below.
  • the first image, the second image, the third image, and the fourth image are all images containing roads.
  • In the present embodiment, the first image and the second image are the same image in which no short-term change object is present on the road, or the third image and the fourth image are the same image in which no short-term change object is present on the road.
  • image A4 corresponds to the first and second images
  • image B4 corresponds to the third image
  • image B5 corresponds to the fourth image. That is, the first image and the second image are the same image. The first date and time and the second date and time are the same.
  • Image A4 includes road 31 and white line 30, but shadow 32 is not included.
  • the image B4 and the image B5 include the road 31, the white line 30, and the shadow 32.
  • A difference image C4 indicates the difference between the image A4 and the image B4.
  • a difference image C5 indicates the difference between the image A4 and the image B5.
  • the common component image D2 is an image showing common components between the difference image C4 and the difference image C5.
  • image A6 corresponds to the first image
  • image A7 corresponds to the second image
  • image B6 corresponds to the third and fourth images. That is, the third image and the fourth image are the same image.
  • the third date and time and the fourth date and time are the same.
  • Image B6 includes road 31 and white line 30, but shadow 32 is not included.
  • the image A6 and the image A7 include a road 31, a white line 30, and a shadow 32.
  • a difference image C6 indicates the difference between the image A6 and the image B6.
  • a difference image C7 indicates the difference between the image A7 and the image B6.
  • the common component image D3 is an image showing common components of the difference image C6 and the difference image C7.
  • each image held in the storage unit 200 is associated with information indicating the presence or absence of a short-term changeable substance. Then, the acquiring unit 120 can select and acquire an image without a short-term changeable substance. When the acquisition unit 120 acquires an image without short-term change substances, the difference image generation unit 140 can use the image to generate a plurality of difference images. Further, the differential image generation unit 140 may use the image to generate all the differential images.
  • Examples of images with no short-term change objects include images captured when the sun is not out, or when the weather is cloudy.
  • the images stored in the storage unit 200 can be attached with information indicating the presence/absence of short-term changeable substances by confirming each image in advance. Further, information indicating the presence or absence of a short-term changeable substance may be attached to each image based on the date and time when the image was taken.
  • the same actions and effects as those of the first embodiment can be obtained.
  • Further, in the present embodiment, the first image and the second image are the same image in which no short-term change object is present on the road, or the third image and the fourth image are the same image in which no short-term change object is present on the road. Therefore, the changed portion can be detected using a smaller number of images.
  • FIG. 9 is a block diagram illustrating the functional configuration of the information processing device 10 according to the third embodiment.
  • the information processing apparatus 10 according to the present embodiment is the same as the information processing apparatus 10 according to at least one of the first and second embodiments except for the points described below.
  • the information processing apparatus 10 further includes a determination unit 110 that determines whether or not to start detection processing. Then, when the determination unit 110 determines to start the detection process, the detection process is started.
  • The detection process is a series of processes for detecting changed portions using the images acquired by the acquisition unit 120. Specifically, it is a process in which the acquisition unit 120 acquires the first image, the second image, the third image, and the fourth image, the difference image generation unit 140 generates the first difference image and the second difference image, and the extraction unit 160 extracts the first common component.
  • the detection process includes an acquisition step S10, a difference image generation step S20, and an extraction step S30. A detailed description is given below.
  • the determination unit 110 determines the timing of performing the detection process. In particular, the determination unit 110 determines whether or not to start the detection process so that the detection process is performed at a timing when there is a relatively high possibility that the target has changed. First to fourth examples of the determination method performed by the determination unit 110 will be described below. Note that the determination method performed by the determination unit 110 is not limited to the following. Further, the information processing apparatus 10 may combine and execute a plurality of determination methods.
  • the determination unit 110 determines to start the detection process each time a predetermined period elapses.
  • the determination unit 110 stores the date and time when the most recent detection process was performed. Then, determination unit 110 calculates the elapsed time since the most recent detection process was performed at predetermined time intervals (for example, every day). Then, when the calculated elapsed time is longer than a predetermined period, the determination unit 110 determines to start the detection process. On the other hand, if the calculated elapsed time is shorter than the predetermined period, it is determined not to start the detection process.
  • the determination unit 110 can perform determination for each position. If it is determined to start the detection process, that position is designated as the position to be detected.
  • It is preferable that a predetermined period including the timing of the most recent detection process is set as the first period, and a predetermined period ending at the timing when the calculated elapsed time becomes longer than the predetermined period is set as the second period. According to this example, detection processing can be performed periodically.
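A minimal sketch of this first determination example follows; the 30-day interval stands in for the predetermined period and is an arbitrary assumption.

```python
from datetime import datetime, timedelta

def should_start_detection(last_detection: datetime, now: datetime,
                           predetermined_period: timedelta = timedelta(days=30)) -> bool:
    """Start the detection process when the elapsed time since the most recent
    detection process is longer than the predetermined period."""
    return (now - last_detection) > predetermined_period
```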
  • the first image, the second image, the third image, and the fourth image are all images containing roads.
  • the determination unit 110 acquires information about roads. Then, the determination unit 110 determines to start the detection process when determining that a predetermined event has occurred based on the road information.
  • Information about roads is, for example, road construction information provided by road management companies.
  • Acquisition unit 120 can acquire information about roads, for example, from server 50 of a service that provides road information.
  • a predetermined event is an event that has a high possibility of causing a change in an object, such as road construction. Since there is a high possibility that the white lines of the road will change during road construction, it is preferable that the detection process be performed to check whether there is any change before and after the road construction.
  • Information about roads includes the date and time or period when an event occurs and the location where the event occurs.
  • the information processing device 10 can determine the position and time of the detection target based on the information on the road. For example, acquisition unit 120 acquires a first image and a second image captured before the occurrence of the event, and a third image and a fourth image captured after the occurrence of the event. That is, a first period of time is specified before the occurrence of the event, and a second period of time is specified after the occurrence of the event. Also, the position at which the event occurred is specified as the detection target position.
  • the determination unit 110 acquires information about roads at predetermined intervals (for example, every day). Then, it is checked whether or not a predetermined event has occurred after the most recent detection process. The determination unit 110 determines to start the detection process when a predetermined event occurs. On the other hand, if the predetermined event has not occurred, it is determined not to start the detection process. According to this example, it is possible to perform the detection process by detecting the occurrence of an event that is highly likely to cause a change in the object.
  • the first image, the second image, the third image, and the fourth image are all images containing roads.
  • the determination unit 110 acquires information about the traffic volume of the road, and determines the timing for starting the detection process based on the information about the traffic volume.
  • the determination unit 110 acquires information on traffic volume from the server 50 of a service that provides traffic information, for example.
  • Information about traffic volume is, for example, information indicating traffic volume at each position on the road at each time.
  • the determination unit 110 calculates the integrated value of the traffic volume at each position on the road using the information about the traffic volume. Note that the integrated value may be the integrated value after the most recent detection process.
  • determination unit 110 determines whether or not the calculated integrated value exceeds a predetermined value.
  • the determination unit 110 determines to start the detection process when the integrated value exceeds a predetermined value. On the other hand, if the integrated value does not exceed the predetermined value, it is determined not to start the detection process.
  • the determination unit 110 can perform determination for each position.
  • a predetermined period including the timing of the most recent detection process is set as the first period, and a predetermined period ending at the timing when the integrated value exceeds a predetermined value is set as the second period.
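This third determination example can be sketched as below; the list of volume records and the threshold value are hypothetical.

```python
def should_start_by_traffic_volume(volumes_since_last_detection: list[int],
                                   predetermined_value: int = 1_000_000) -> bool:
    """Integrate the traffic volume reported since the most recent detection process
    and start the detection process when the integrated value exceeds the predetermined value."""
    return sum(volumes_since_last_detection) > predetermined_value
```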
  • the first image, the second image, the third image, and the fourth image are all images containing roads.
  • the determination unit 110 acquires information about the traffic flow on the road and determines the timing for starting the detection process based on the information about the traffic flow.
  • the determination unit 110 acquires information on traffic flow from, for example, the server 50 of a service that provides traffic information.
  • Information about traffic flow is, for example, information indicating the state of traffic flow at each branch point (three-forked road, intersection, five-way intersection, etc.) on a road at each hour.
  • The information on traffic flow may be information indicating, for each traveling direction (straight ahead, right turn, etc.), the number or ratio of vehicles that have passed through a branch point within a predetermined time. For example, if an intersection that used to have right-turning vehicles has had only vehicles traveling straight from a certain point in time, there is a possibility that the traffic regulations have changed, that is, that the markings on the road surface have changed.
  • The information about traffic flow may also be information indicating the state of the flow of vehicles on the road. For example, if the number of vehicle columns changed before and after a certain point in time, it is possible that the number of lanes changed, that is, the markings on the road surface changed.
  • The determination unit 110 monitors information on traffic flow at predetermined intervals and detects changes in traffic flow. Specifically, the determination unit 110 compares the information about the traffic flow at the time the most recent detection process was performed with the latest information about the traffic flow. The determination unit 110 then determines to start the detection process when the difference between the two is greater than a predetermined reference value. On the other hand, when the difference is not greater than the predetermined reference value, it determines not to start the detection process. Alternatively, the determination unit 110 may calculate the similarity between the information on the traffic flow at the time the most recent detection process was performed and the latest information on the traffic flow, and determine to start the detection process when the calculated similarity is smaller than a predetermined reference value. On the other hand, when the similarity is not smaller than the predetermined reference value, it determines not to start the detection process.
  • The determination unit 110 may compare the proportion of vehicles traveling straight at each branch point based on the information on traffic flow. In this case, the determination unit 110 determines to start the detection process when the difference between the proportion of vehicles traveling straight at the time the most recent detection process was performed and the latest proportion of vehicles traveling straight is greater than a predetermined reference value. On the other hand, if this difference is not greater than the predetermined reference value, it determines not to start the detection process.
  • the determination unit 110 may also detect and compare the number of lanes at each position on the road using information on traffic flow. In this case, the determination unit 110 determines to start the detection process when the number of lanes when the most recent detection process is performed is different from the latest number of lanes. On the other hand, if these numbers are the same, it is determined not to start the detection process.
  • The determination unit 110 can perform the determination for each position or branch point. Then, if it is determined to start the detection process, that position or branch point is designated as the position to be detected. It is preferable that a predetermined period including the timing of the most recent detection process is defined as the first period, and a predetermined period ending at the timing when the latest traffic flow information used for the determination was obtained is defined as the second period.
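A sketch of this fourth determination example, assuming traffic flow at a branch point is represented as per-direction ratios; the dictionary keys and the reference value are illustrative assumptions.

```python
def traffic_flow_changed(previous_ratios: dict[str, float],
                         latest_ratios: dict[str, float],
                         reference_value: float = 0.2) -> bool:
    """Compare per-direction traffic-flow ratios (e.g. 'straight', 'right_turn') recorded
    at the most recent detection process with the latest ones; a large difference suggests
    that the road markings may have changed."""
    directions = set(previous_ratios) | set(latest_ratios)
    return any(abs(previous_ratios.get(d, 0.0) - latest_ratios.get(d, 0.0)) > reference_value
               for d in directions)
```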
  • the hardware configuration of the computer that implements the information processing apparatus 10 according to this embodiment is represented, for example, by FIG. 6, as in the first embodiment.
  • the storage device 1080 of the computer 1000 that implements the information processing apparatus 10 of this embodiment further stores a program module that implements the function of the determination unit 110 of this embodiment.
  • FIG. 10 is a flowchart illustrating the flow of the information processing method performed by the information processing apparatus 10 according to this embodiment.
  • the information processing method according to this embodiment further includes a determination step S40 of determining whether or not to start the detection process. Then, when it is determined to start the detection process in determination step S40 (Y in S40), the detection process is started. On the other hand, if it is determined not to start the detection process in determination step S40 (N of S40), the detection process is not started.
  • the detection process is started when the determination unit 110 determines to start the detection process. Therefore, detection processing can be performed at appropriate timing.
  • FIG. 11 is a diagram for explaining processing performed by the information processing apparatus 10 according to the fourth embodiment.
  • the information processing apparatus 10 according to this embodiment is the same as the information processing apparatus 10 according to the third embodiment except for the points described below.
  • The determination unit 110 acquires three or more images captured at different dates and times, and extracts a common component of each pair of temporally consecutive images among the three or more images as a second common component. The determination unit 110 detects the presence or absence of a change in the target in the acquired images by comparing the extracted plurality of second common components in time series. Then, the determination unit 110 determines to start the detection process when a change is detected in the target in the images it acquired. A detailed description is given below.
  • the determination unit 110 determines whether or not a change has occurred in the target based on a plurality of images. Then, when a change occurs, the information processing apparatus 10 can perform detection processing and extract a specific changed portion.
  • the determination unit 110 acquires images E1 to E6 with different shooting dates and times.
  • Each of the images E1-E6 includes a road 31, a white line 30 and a shadow 32.
  • In this example, the object is the white line 30.
  • Images E1 to E6 are images shot in this order.
  • the images E1 to E6 may be images taken on different days, for example, but the image shooting interval is not particularly limited.
  • the determination unit 110 extracts a common component of each two temporally consecutive images among the images E1 to E6 as a second common component.
  • Images F1 to F5 are images showing the second common component.
  • the determination unit 110 determines the image F1 showing the common component of the image E1 and the image E2, the image F2 showing the common component of the image E2 and the image E3, the image F3 showing the common component of the image E3 and the image E4, the image E4 and the image An image F4 showing common components of E5 and an image F5 showing common components of images E5 and E6 are generated.
  • the determination unit 110 detects whether there is a change in the target in the images E1 to E6 by comparing the images F1 to F5 in time series. In this example, there is no difference between images F1 and F2, and there is a difference between images F2 and F3 and between images F3 and F4. Also, there is no difference between the image F4 and the image F5. In this case, it can be determined that there is a change between the imaging timing of the image E3 on which the image F3 is based and the imaging timing of the image E4. The determination by the determining unit 110 is also not affected by short-term changes such as shadows.
  • The determination unit 110 can perform the determination using at least three images. For example, when the same processing is performed using the three images E1 to E3, there is no difference between the images F1 and F2, so it can be determined that no change occurred between the imaging timing of the image E1 and the imaging timing of the image E3. Further, when the same processing is performed using the three images E2 to E4, there is a difference between the image F2 and the image F3, so it can be determined that a change occurred between the imaging timing of the image E2 and the imaging timing of the image E4.
  • However, rather than determining the timing of the change from only three images, it is preferable that the determination unit 110 acquires four or more images captured at different dates and times and detects both the presence or absence of a change and the timing of the change.
  • the determination unit 110 can further specify the change timing. Therefore, it is preferable that the acquisition unit 120 acquires the first and second images captured before the change timing and the third and fourth images captured after the change timing. That is, it is preferable to specify the first period before the change timing and the second period after the change timing. Further, the information processing apparatus 10 preferably designates the shooting position of the image acquired by the determination unit 110 and used for determination as the position to be detected.
  • the determination unit 110 can acquire, for example, three or more images from the storage unit 200 via the acquisition unit 120.
  • the image acquired by the determination unit 110 is the same as the image acquired by the acquisition unit 120 described in the first embodiment.
  • the determination unit 110 preferably acquires an image similar to the image acquired by the acquisition unit 120 for the detection process, and uses it to determine whether or not there is a change. That is, it is preferable that all the images acquired by the determination unit 110 are orthorectified images. Also, the images acquired by the determination unit 110 are preferably of the same type. It is preferable that the imaging regions of the images acquired by the determination unit 110 overlap each other.
  • the determination unit 110 can detect the presence or absence of long-term changes in the target. In addition, it is not necessary to predetermine the target.
  • the image acquired by the determination unit 110 may further include a short-term change substance.
  • the three or more images acquired by the determination unit 110 are images different from each other, and are images taken at different dates and times.
  • After acquiring the images, the determination unit 110 performs edge detection processing on each acquired image to obtain an edge image.
  • the edge detection processing is as described above in the description of S201. Note that the determination unit 110 may perform correction processing or the like on the acquired image as necessary.
  • the determination unit 110 associates the positions of the two edge images obtained from the two temporally consecutive images in the same manner as described above in the description of S202. Then, the determining unit 110 compares the values of the pixels at the corresponding positions in the two edge images whose positions are associated with each other, and determines the pixels having the same pixel values as the common portion. Thus, the image showing the pixels determined as the common portion so as to be identifiable from the other pixels is the image showing the second common component.
  • After generating a plurality of images showing the second common component from the acquired images, the determination unit 110 compares them in time series to detect the presence or absence of a change. Specifically, the determination unit 110 performs a process of extracting a difference between two temporally consecutive images showing the second common component, and when a difference is extracted, determines that there is a change between the two images. On the other hand, if no difference is extracted, it determines that there is no change between the two images. The process of extracting the difference between the images showing the second common component can be performed in the same manner as the difference image generation unit 140 generates the difference image from the edge images. Note that the determination unit 110 may determine that no difference has been extracted when the ratio of the number of pixels indicating the difference in the difference image is equal to or less than a predetermined ratio.
  • When a change is detected, the determination unit 110 can determine that a change timing exists at least within the shooting period of the three or more images it acquired. When determining that there is a change timing within the shooting period of the three or more acquired images, the determination unit 110 determines to start the detection process. Then, the information processing apparatus 10 performs the detection process.
  • the determination unit 110 may further specify the change timing in detail.
  • Specifically, when an image showing the second common component differs from both the preceding and the following images showing the second common component, the change timing can be identified as lying between the capture timings of the two images on which that image is based.
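This fourth-embodiment determination can be sketched as follows. It is an illustrative sketch only: the second common component is approximated here as pixels that are edges in both consecutive edge images, and the noise threshold is an assumption.

```python
import numpy as np

def locate_change_timing(edge_images: list[np.ndarray], diff_ratio: float = 0.01):
    """Build second common components of each pair of temporally consecutive edge
    images, compare them in time series, and return the index idx such that the
    change lies between edge_images[idx] and edge_images[idx + 1]; None if no change."""
    commons = [np.logical_and(a, b).astype(np.uint8)
               for a, b in zip(edge_images, edge_images[1:])]          # F1 .. F(n-1)
    for i in range(len(commons) - 1):
        if float(np.mean(commons[i] != commons[i + 1])) > diff_ratio:  # F(i+1) differs from F(i)
            return i + 1   # change between the two images on which commons[i + 1] is based
    return None
```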
  • the determination unit 110 detects whether or not there is a change in the object in the acquired image. Then, the determination unit 110 determines to start the detection process when a change is detected in the object in the image acquired by the determination unit 110 . Therefore, detection processing can be performed at appropriate timing.

Abstract

An information processing device (10) is provided with an acquisition unit (120), a difference image generation unit (140), and an extraction unit (160). The acquisition unit (120) acquires a first image captured at a first date and time, a second image captured at a second date and time, a third image captured at a third date and time, and a fourth image captured at a fourth date and time. The difference image generation unit (140) generates a first difference image indicating a difference between the first image and the third image, and a second difference image indicating a difference between the second image and the fourth image. The extraction unit (160) extracts, as a first common component, a common component of the first difference image and the second difference image. The third date and time and the fourth date and time are both later than the first date and time, and later than the second date and time.

Description

情報処理装置、情報処理方法、およびプログラムInformation processing device, information processing method, and program
 本発明は、情報処理装置、情報処理方法、およびプログラムに関する。 The present invention relates to an information processing device, an information processing method, and a program.
 高精度地図のメンテナンス等のために、道路や建物を撮影した画像の変化を検出する必要がある。ここで、画像には影等が映り込むことがあり、検出したい変化部分と、影等の変化を切り分ける必要がある。たとえば、路面の撮影画像には路面上の走行車両や路側にある構造物の影などが写っている。そのような画像を使用して路面上のペイントなどの変化を検出しようとする場合、影のエッジをペイントのエッジと認識してしまう恐れがある。そして、影は時刻や季節、天候により変化するため、それが変化部分として検出されてしまう恐れがある。  It is necessary to detect changes in images of roads and buildings for maintenance of high-precision maps. Here, a shadow or the like may appear in the image, and it is necessary to separate the changed portion to be detected from the change of the shadow or the like. For example, a photographed image of a road surface includes shadows of vehicles running on the road surface and structures on the roadside. When trying to detect changes in the paint on the road surface using such an image, there is a risk that the edge of the shadow will be recognized as the edge of the paint. And since the shadow changes depending on the time of day, season, and weather, there is a risk that it will be detected as a changed part.
Patent Document 1 describes determining an area having a luminance lower than the luminance of the valley of the histogram of a luminance image as a shadow area, and an area having a luminance higher than the luminance of the valley as a non-shadow area. In the technique of Patent Document 1, change extraction is performed after shadow removal processing is further applied to the luminance image.
Patent Document 2 describes determining whether or not a partial area constituting an image is a shadow area by applying threshold processing to the ratio of the luminance values of corresponding pixels in a plurality of images to be compared, and removing a partial area from the image when it is determined to be a shadow area.
Patent Document 1: JP-A-2004-252733
Patent Document 2: JP-A-2000-251053
 しかしながら、特許文献1および2の技術では、天候等の撮影条件や他の被写体の特性などにより、必ずしも影を正確に判定できないという問題があると考えられる。また、特許文献1および2の技術では、変化検出に先立って影を除去した中間画像を生成しており、中間画像の精度が変化検出の精度に大きく影響すると考えられる。 However, with the techniques of Patent Documents 1 and 2, it is considered that there is a problem that shadows cannot always be determined accurately due to shooting conditions such as the weather and characteristics of other subjects. Further, in the techniques of Patent Documents 1 and 2, an intermediate image is generated from which shadows are removed prior to change detection, and it is considered that the accuracy of the intermediate image greatly affects the accuracy of change detection.
 本発明が解決しようとする課題としては、容易に精度良く画像の変化検出を行う技術を提供することが一例として挙げられる。 One example of the problem to be solved by the present invention is to provide a technique for detecting changes in images easily and accurately.
The invention according to claim 1 is an information processing device comprising:
an acquisition unit that acquires a first image captured at a first date and time, a second image captured at a second date and time, a third image captured at a third date and time, and a fourth image captured at a fourth date and time;
a difference image generation unit that generates a first difference image indicating a difference between the first image and the third image, and a second difference image indicating a difference between the second image and the fourth image; and
an extraction unit that extracts, as a first common component, a common component of the first difference image and the second difference image,
wherein the third date and time and the fourth date and time are both later than the first date and time and later than the second date and time.
The invention according to claim 15 is an information processing method comprising:
an acquisition step of acquiring a first image captured at a first date and time, a second image captured at a second date and time, a third image captured at a third date and time, and a fourth image captured at a fourth date and time;
a difference image generation step of generating a first difference image indicating a difference between the first image and the third image, and a second difference image indicating a difference between the second image and the fourth image; and
an extraction step of extracting, as a first common component, a common component of the first difference image and the second difference image,
wherein the third date and time and the fourth date and time are both later than the first date and time and later than the second date and time.
The invention according to claim 16 is a program that causes a computer to execute each step of the information processing method according to claim 15.
FIG. 1 is a block diagram illustrating the functional configuration of the information processing apparatus according to the first embodiment.
FIG. 2 is a diagram for explaining processing performed by the information processing apparatus according to the first embodiment.
FIG. 3 is a flowchart illustrating the flow of the information processing method according to the first embodiment.
FIG. 4 is a flowchart illustrating in detail the processing performed by the information processing apparatus according to the first embodiment.
FIG. 5 is a flowchart showing a modification of the processing performed by the information processing apparatus according to the first embodiment.
FIG. 6 is a diagram illustrating a computer for realizing the information processing apparatus.
FIG. 7 is a diagram for explaining processing performed by the information processing apparatus according to the second embodiment.
FIG. 8 is a diagram for explaining processing performed by the information processing apparatus according to the second embodiment.
FIG. 9 is a block diagram illustrating the functional configuration of the information processing apparatus according to the third embodiment.
FIG. 10 is a flowchart illustrating the flow of the information processing method performed by the information processing apparatus according to the third embodiment.
FIG. 11 is a diagram for explaining processing performed by the information processing apparatus according to the fourth embodiment.
 以下、本発明の実施の形態について、図面を用いて説明する。尚、すべての図面において、同様な構成要素には同様の符号を付し、適宜説明を省略する。 Embodiments of the present invention will be described below with reference to the drawings. In addition, in all the drawings, the same constituent elements are denoted by the same reference numerals, and the description thereof will be omitted as appropriate.
 以下に示す説明において、特に説明する場合を除き、情報処理装置10の各構成要素は、ハードウエア単位の構成ではなく、機能単位のブロックを示している。情報処理装置10の各構成要素は、任意のコンピュータのCPU、メモリ、メモリにロードされたプログラム、そのプログラムを格納するハードディスクなどの記憶メディア、ネットワーク接続用インタフェースを中心にハードウエアとソフトウエアの任意の組合せによって実現される。そして、その実現方法、装置には様々な変形例がある。 In the following description, each component of the information processing apparatus 10 indicates a functional unit block, not a hardware unit configuration, unless otherwise specified. Each component of the information processing apparatus 10 includes a CPU of an arbitrary computer, a memory, a program loaded in the memory, a storage medium such as a hard disk for storing the program, an interface for network connection, and an arbitrary combination of hardware and software. is realized by a combination of There are various modifications of the implementation method and device.
(First embodiment)
FIG. 1 is a block diagram illustrating the functional configuration of an information processing device 10 according to the first embodiment. The information processing apparatus 10 according to this embodiment includes an acquisition unit 120, a difference image generation unit 140, and an extraction unit 160. The acquisition unit 120 acquires a first image captured at a first date and time, a second image captured at a second date and time, a third image captured at a third date and time, and a fourth image captured at a fourth date and time. The difference image generation unit 140 generates a first difference image indicating the difference between the first image and the third image, and a second difference image indicating the difference between the second image and the fourth image. The extraction unit 160 extracts a common component between the first difference image and the second difference image as a first common component. Both the third date and time and the fourth date and time are after the first date and time and after the second date and time. A detailed description is given below.
FIG. 2 is a diagram for explaining the processing performed by the information processing device 10 according to this embodiment. For example, image A1 corresponds to the first image, image A2 corresponds to the second image, image B1 corresponds to the third image, and image B2 corresponds to the fourth image. In the example of this figure, images A1 to A3 and images B1 to B3 are each orthoimages including a road 31, and a white line 30 is drawn on the road 31. A shadow 32 also appears in the images A1 to A3 and B1 to B3. The information processing apparatus 10 can detect a change in this white line 30.
 本図の例において画像A1~A3、画像B1~B3は互いに異なる画像である。たとえば画像A1~A3は互いに同じ日の異なる時刻に撮影された画像であり、画像B1~B3は互いに同じ日の異なる時刻に撮影された画像である。画像B1~B3が撮影された日は画像A1~A3が撮影された日よりも後である。 In the example of this figure, images A1 to A3 and images B1 to B3 are different images. For example, images A1 to A3 are images taken at different times on the same day, and images B1 to B3 are images taken at different times on the same day. The date on which the images B1 to B3 were taken is later than the date on which the images A1 to A3 were taken.
The difference image C1 indicates the difference between the image A1 and the image B1. The difference image C2 indicates the difference between the image A2 and the image B2. The difference image C3 indicates the difference between the image A3 and the image B3. The common component image D1 is an image showing the components common to the difference images C1, C2, and C3. In this common component image D1, only the changed portion of the white line 30 is extracted. For example, the difference image C1 corresponds to the first difference image, and the difference image C2 corresponds to the second difference image. The common component image D1 corresponds to an image representing the first common component extracted by the extraction unit 160.
In other words, according to the information processing apparatus 10 of the present embodiment, a change that occurred between the time when the first and second images were captured and the time when the third and fourth images were captured can be extracted as the first common component. On the other hand, a change that occurred between the first image and the second image, or between the third image and the fourth image, is not extracted as the first common component. Therefore, short-term changes such as shadows can be separated from long-term changes in white lines, buildings, and the like.
Since the method according to the present embodiment does not rely on characteristics peculiar to shadows, it is less affected by the uncertainties of shadow determination and can also handle short-term changes caused by factors other than shadows. The influence of a short-term change can be removed without identifying whether its cause is fallen leaves, a shadow, or snow. Furthermore, the method according to the present embodiment does not generate an intermediate image such as a shadow-removed image. The processing is therefore simple and lightweight, and factors that would lower the detection accuracy of the changed portion are unlikely to be introduced. As a result, the changed portion can be detected with high accuracy.
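As a concrete illustration of the pipeline described above, the following is a minimal sketch using tiny one-dimensional binary edge maps as stand-ins for the edge images of the first through fourth images; the array contents, shadow positions, and the use of NumPy are invented purely for illustration and are not part of the claimed configuration.

```python
import numpy as np

# Toy 1x6 "edge images": 1 = edge pixel, 0 = background.
# Column 2 holds a white-line edge that disappears in the newer period;
# shadow edges sit at different columns depending on time of day and season.
a1 = np.array([0, 0, 1, 0, 1, 0])  # first image  (old period, morning: line@2, shadow@4)
a2 = np.array([0, 0, 1, 0, 0, 1])  # second image (old period, evening: line@2, shadow@5)
b1 = np.array([0, 0, 0, 1, 0, 0])  # third image  (new period, morning: line gone, shadow@3)
b2 = np.array([0, 0, 0, 0, 0, 0])  # fourth image (new period, evening: line gone, overcast)

diff1 = (a1 != b1).astype(int)   # first difference image  (A1 vs B1): [0 0 1 1 1 0]
diff2 = (a2 != b2).astype(int)   # second difference image (A2 vs B2): [0 0 1 0 0 1]
common = diff1 & diff2           # first common component:             [0 0 1 0 0 0]

print(diff1, diff2, common)      # only the persistent white-line change at column 2 survives
```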
 図3は、本実施形態に係る情報処理方法の流れを例示するフローチャートである。本実施形態に係る情報処理方法は、取得ステップS10、差分画像生成ステップS20、および抽出ステップS30を含む。取得ステップS10では、第1の日時に撮影された第1画像と、第2の日時に撮影された第2画像と、第3の日時に撮影された第3画像と、第4の日時に撮影された第4画像とが取得される。差分画像生成ステップS20では、第1画像と第3画像との差分を示す第1差分画像、および、第2画像と第4画像との差分を示す第2差分画像が生成される。抽出ステップS30では、第1差分画像と第2差分画像との共通成分が第1共通成分として抽出される。第3の日時および第4の日時はいずれも、第1の日時よりも後、かつ第2の日時よりも後である。本実施形態に係る情報処理方法は、本実施形態に係る情報処理装置10により実行され得る。 FIG. 3 is a flowchart illustrating the flow of the information processing method according to this embodiment. The information processing method according to this embodiment includes an acquisition step S10, a difference image generation step S20, and an extraction step S30. In the acquisition step S10, a first image taken on a first date and time, a second image taken on a second date and time, a third image taken on a third date and time, and a fourth image taken on a fourth date and time. A fourth image is acquired. In the difference image generating step S20, a first difference image indicating the difference between the first image and the third image and a second difference image indicating the difference between the second image and the fourth image are generated. In the extraction step S30, a common component between the first difference image and the second difference image is extracted as a first common component. Both the third date and time and the fourth date and time are after the first date and time and after the second date and time. The information processing method according to this embodiment can be executed by the information processing apparatus 10 according to this embodiment.
 図4は、本実施形態に係る情報処理装置10が行う処理を詳しく例示するフローチャートである。図1および図4を参照して本実施形態に係る情報処理装置10が行う処理について以下に詳しく説明する。取得部120が取得した画像を用いて変化部分を検出する一連の処理を、以下では「検出処理」と呼ぶ。 FIG. 4 is a flowchart illustrating in detail the processing performed by the information processing apparatus 10 according to this embodiment. Processing performed by the information processing apparatus 10 according to the present embodiment will be described in detail below with reference to FIGS. 1 and 4. FIG. A series of processes for detecting changed portions using the image acquired by the acquisition unit 120 is hereinafter referred to as “detection processing”.
 取得部120はたとえば取得部120からアクセス可能な記憶部200から、画像を取得する。図1では記憶部200が情報処理装置10の外部に設けられている例を示しているが、記憶部200は情報処理装置10の内部に設けられていても良い。記憶部200には複数の画像が、撮影日時や撮影位置と関連付けられた状態で予め保持されている。取得部120は、位置および時期を指定して記憶部200から画像を取得することができる。画像の撮影位置は、緯度および経度で特定できる他、個別の交差点やランドマークを示す識別子を用いて特定されてもよい。 The acquisition unit 120 acquires images from, for example, the storage unit 200 accessible from the acquisition unit 120 . Although FIG. 1 shows an example in which the storage unit 200 is provided outside the information processing device 10 , the storage unit 200 may be provided inside the information processing device 10 . A plurality of images are stored in advance in the storage unit 200 in association with shooting dates and shooting positions. Acquisition unit 120 can acquire an image from storage unit 200 by specifying a position and time. The shooting position of the image can be specified by latitude and longitude, or can be specified by using an identifier that indicates an individual intersection or landmark.
 取得部120が取得する画像は特に限定されないが、道路、構造物等を移した画像であり得る。また、取得部120が取得する画像は特に限定されないが、たとえば道路上を移動する移動体(車両、二輪車等)に設けられた撮像装置で撮影された画像である。取得部120が取得する画像は、オルソ画像、道路や構造物等を真上から写した上面画像(top view image)、側面画像、航空写真等であってもよい。また、立体的な変化の検出を行う場合には、取得部120はLiDARで取得した点群データに基づく画像を取得しても良い。中でも取得部120が検出処理のために取得する画像はオルソ画像であることが好ましい。すなわち、第1画像、第2画像、第3画像、および第4画像はいずれもオルソ画像であることが好ましい。そうすれば、変化部分を高精度に検出しやすい。記憶部200に保持された画像がオルソ画像でない場合、取得部120は記憶部200から読みだした画像をオルソ化することにより検出処理に用いる画像を得ても良い。なお、取得部120が検出処理のために取得する画像は互いに同じ種類の画像であることが好ましい。 The image acquired by the acquisition unit 120 is not particularly limited, but may be an image of a road, a structure, or the like. Also, the image acquired by the acquisition unit 120 is not particularly limited, but is, for example, an image captured by an imaging device provided on a moving object (vehicle, two-wheeled vehicle, etc.) that moves on a road. The image acquired by the acquisition unit 120 may be an orthoimage, a top view image of a road, a structure, or the like taken from directly above, a side image, an aerial photograph, or the like. Further, when detecting a three-dimensional change, the acquisition unit 120 may acquire an image based on point cloud data acquired by LiDAR. Among them, the image acquired by the acquisition unit 120 for detection processing is preferably an orthoimage. That is, it is preferable that all of the first image, the second image, the third image, and the fourth image are orthorectified images. By doing so, it is easy to detect the changed portion with high accuracy. If the image held in the storage unit 200 is not an orthorectified image, the acquisition unit 120 may orthorectify the image read from the storage unit 200 to obtain an image used for detection processing. It is preferable that the images acquired by the acquisition unit 120 for the detection process are of the same type.
 情報処理装置10は取得部120が取得した画像を用いて変化部分の検出処理を行う。取得部120が検出処理のために取得する画像には、少なくとも第1画像、第2画像、第3画像、および第4画像が含まれる。ただし、取得部120が検出処理のために取得する画像には、さらに一以上の画像が含まれても良い。取得部120が検出処理のために取得する画像は、同じ対象を含む画像であることが好ましい。また、取得部120が検出処理のために取得する画像は、互いに撮影領域が重なっていることが好ましい。なお、対象は予め定めておく必要はない。 The information processing apparatus 10 uses the image acquired by the acquisition unit 120 to perform processing for detecting changed portions. The images acquired by the acquisition unit 120 for detection processing include at least a first image, a second image, a third image, and a fourth image. However, the image acquired by the acquisition unit 120 for the detection process may further include one or more images. The images acquired by the acquisition unit 120 for detection processing are preferably images including the same object. In addition, it is preferable that the images acquired by the acquisition unit 120 for the detection process have overlapping photographing areas. In addition, it is not necessary to predetermine the target.
 情報処理装置10はたとえば対象の長期的変化を検出することができる。図2の例において、対象は白線である。対象としては特に限定されないが、建物、標識、信号機、案内板等の構造物や、道路に描かれた白線等の区画線、横断歩道、道路標示、路面等が挙げられる。道路に描かれたペイントは、経時劣化や道路工事等により剥がれたり、新たに描き直されたりするため、これらの変化を検出することが必要となる。また、路面のヒビ、破損、凹み等も検出できることが好ましい。 The information processing device 10 can detect, for example, long-term changes in the target. In the example of FIG. 2, the object is the white line. Objects include, but are not limited to, structures such as buildings, signs, traffic lights, and information boards, division lines such as white lines drawn on roads, crosswalks, road markings, road surfaces, and the like. The paint drawn on the road is peeled off due to deterioration over time, road construction, etc., and is repainted anew, so it is necessary to detect these changes. In addition, it is preferable to be able to detect cracks, damage, dents, and the like on the road surface.
 一方、取得部120が検出処理のために取得する画像には、短期変化物がさらに含まれていても良い。短期変化物は一時的に変化するものであり、情報処理装置10は短期変化物による変化を除いた検出結果を出力できる。短期変化物は特に限定されないが、たとえば、影、雪、水たまり、土砂、車両、落下物、落ち葉、およびゴミのうち少なくともいずれかであり得る。 On the other hand, the image acquired by the acquisition unit 120 for detection processing may further include a short-term change substance. A short-term changeable substance changes temporarily, and the information processing apparatus 10 can output a detection result excluding changes due to a short-term changeable substance. The short-term change object is not particularly limited, but may be, for example, at least one of shadows, snow, puddles, earth and sand, vehicles, fallen objects, fallen leaves, and garbage.
 上述したとおり、第1画像は第1の日時に撮影された画像であり、第2画像は第2の日時に撮影された画像であり、第3画像は第3の日時に撮影された画像であり、第4画像は第4の日時に撮影された画像である。第3の日時および第4の日時はいずれも、第1の日時よりも後、かつ第2の日時よりも後である。ここで、第1の日時と第2の日時との間隔は、第1の日時と第3の日時の間隔よりも狭いことが好ましい。第1の日時と第2の日時との間隔は、第2の日時と第4の日時の間隔よりも狭いことが好ましい。第3の日時と第4の日時の間隔は、第1の日時と第3の日時の間隔よりも狭いことが好ましい。また、第3の日時と第4の日時との間隔は、第2の日時と第4の日時の間隔よりも狭いことが好ましい。本実施形態において、第1の日時と第2の日時は互いに異なり、かつ、第3の日時と第4の日時は互いに異なる。第1の日時および第2の日時は第1の時期に含まれ、第3の日時と第4の日時は第2の時期に含まれる。第2の時期は第1の時期よりも後である。情報処理装置10によれば、第1の時期と第2の時期の間の対象の変化を検出できる。第1の時期および第2の時期の長さはそれぞれ特に限定されないが、たとえば1日である。 As described above, the first image is an image taken on the first date and time, the second image is an image taken on the second date and time, and the third image is an image taken on the third date and time. and the fourth image is an image taken on the fourth date and time. Both the third date and time and the fourth date and time are after the first date and time and after the second date and time. Here, the interval between the first date and time and the second date and time is preferably narrower than the interval between the first date and time and the third date and time. The interval between the first date and time and the second date and time is preferably narrower than the interval between the second date and time and the fourth date and time. The interval between the third date and time and the fourth date and time is preferably narrower than the interval between the first date and time and the third date and time. Also, the interval between the third date and time and the fourth date and time is preferably narrower than the interval between the second date and time and the fourth date and time. In this embodiment, the first date and time are different from the second date and time, and the third date and time are different from the fourth date and time. The first date and time and the second date and time are included in the first period, and the third date and time and the fourth date and time are included in the second period. The second epoch is later than the first epoch. According to the information processing device 10, it is possible to detect a change in the target between the first period and the second period. Although the lengths of the first period and the second period are not particularly limited, they are, for example, one day.
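The preferred timing relationships described above can be summarized as a simple check. The function below is a sketch under the assumption that each shooting date and time is available as a Python datetime; the function name is hypothetical.

```python
from datetime import datetime

def timing_is_preferred(t1, t2, t3, t4):
    """Check the timing relationships between the four shooting dates and times.

    t1, t2: shooting times of the first and second images (first period)
    t3, t4: shooting times of the third and fourth images (second period)
    """
    # Mandatory: the third and fourth dates are both later than the first and the second.
    later = t3 > t1 and t3 > t2 and t4 > t1 and t4 > t2
    # Preferred: the gap inside each period is narrower than the gap between the periods.
    within_first = abs(t2 - t1)
    within_second = abs(t4 - t3)
    across = min(abs(t3 - t1), abs(t4 - t2))
    return later and within_first < across and within_second < across

print(timing_is_preferred(datetime(2021, 4, 1, 9), datetime(2021, 4, 1, 15),
                          datetime(2021, 10, 1, 9), datetime(2021, 10, 1, 15)))  # True
```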
 まずS101において、情報処理装置10は、検出対象の位置および時期の指定を受け付ける。検出対象の位置および時期は、たとえばユーザが情報処理装置10に対して入力することができる。検出対象の位置の指定はたとえば緯度および経度でされても良いし、上述したように交差点やランドマークの識別子でされても良い。また、検出対象の位置は予め定められた基準地点からの距離や相対位置で指定されても良い。ランドマークとしては、信号機、標識、バス停、横断歩道等が挙げられる。検出対象の時期の指定は、第1の時期と第2の時期を指定することにより行われる。各時期は、その時期の始点と終点を指定すること、またはその時期の始点と長さを指定することで行える。 First, in S101, the information processing apparatus 10 accepts designation of the detection target position and time. The position and time of the detection target can be input to the information processing device 10 by the user, for example. The position of the detection target may be specified, for example, by latitude and longitude, or may be specified by an intersection or landmark identifier as described above. Also, the position of the detection target may be designated by a distance or a relative position from a predetermined reference point. Landmarks include traffic lights, signs, bus stops, pedestrian crossings, and the like. The designation of the detection target period is performed by designating the first period and the second period. Each epoch can be done by specifying the start and end of the epoch, or by specifying the start and length of the epoch.
 情報処理装置10が検出対象の位置および時期の指定を受け付けると、取得部120はS102において記憶部200から検出対象の位置および時期に該当する画像を選択し、検出処理のために取得する。具体的には取得部120は、検出対象の位置として指定された位置を含む画像を取得する。または、取得部120は検出対象の位置として指定された位置から撮影された画像を取得する。検出対象の位置が交差点やランドマークの識別子で指定される場合、取得部120は、その識別子で表された交差点やランドマークが含まれる画像を取得する。本実施形態において取得部120は第1の時期に撮影された画像を二以上取得し、第2の時期に撮影された画像を二以上取得する。取得部120は、たとえば第1の時期に第1の日時および第2の日時が含まれるように、第1画像および第2画像を取得する。また取得部120は、第2の時期に第3の日時および第4の日時が含まれるように、第3画像および第4画像を取得する。 When the information processing apparatus 10 receives the designation of the detection target position and time, the acquisition unit 120 selects an image corresponding to the detection target position and time from the storage unit 200 in S102 and acquires it for the detection process. Specifically, the acquisition unit 120 acquires an image including the position designated as the detection target position. Alternatively, the acquisition unit 120 acquires an image captured from a position designated as a detection target position. When the detection target position is specified by an intersection or landmark identifier, the acquisition unit 120 acquires an image including the intersection or landmark indicated by the identifier. In the present embodiment, the acquiring unit 120 acquires two or more images captured during the first period, and acquires two or more images captured during the second period. Acquisition unit 120 acquires the first image and the second image such that the first time period includes the first date and time and the second date and time, for example. Acquisition unit 120 also acquires the third image and the fourth image such that the second time period includes the third date and time and the fourth date and time.
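A minimal sketch of the selection in S102 under simplifying assumptions: each stored image is represented as a record carrying its shooting time and a bounding box of the captured area, and a record is selected when the designated position falls inside that box and the shooting time falls inside the designated period. The record layout and field names are invented for illustration.

```python
from datetime import datetime

def select_images(records, position, period_start, period_end):
    """Return stored image records that contain `position` and were shot within the period."""
    lat, lon = position
    selected = []
    for rec in records:
        lat_min, lat_max, lon_min, lon_max = rec["bbox"]
        in_area = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
        in_period = period_start <= rec["shot_at"] <= period_end
        if in_area and in_period:
            selected.append(rec)
    return selected

records = [
    {"id": "A1", "shot_at": datetime(2021, 4, 1, 9, 0), "bbox": (35.0, 35.1, 139.0, 139.1)},
    {"id": "B1", "shot_at": datetime(2021, 10, 1, 9, 0), "bbox": (35.0, 35.1, 139.0, 139.1)},
]
first_period = select_images(records, (35.05, 139.05),
                             datetime(2021, 4, 1), datetime(2021, 4, 2))
print([r["id"] for r in first_period])  # ['A1']
```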
 取得部120が画像を取得すると、差分画像生成部140はS201においてエッジ検出を行う。具体的には差分画像生成部140は、取得部120が取得した各画像に対し、エッジ検出処理を行う。エッジ検出処理はたとえば画像の輝度勾配に基づいて行える。ただし色の変化を検出しようとする場合には、画像の色相にさらに基づいてエッジ検出処理を行っても良い。なお、差分画像生成部140は、取得部120が取得した画像に対し、必要に応じて補正処理等を行っても良い。エッジ検出処理により、エッジ画像が得られる。 When the acquisition unit 120 acquires the image, the difference image generation unit 140 performs edge detection in S201. Specifically, the differential image generation unit 140 performs edge detection processing on each image acquired by the acquisition unit 120 . The edge detection process can be based, for example, on the luminance gradients of the image. However, when trying to detect a change in color, edge detection processing may be performed further based on the hue of the image. Note that the difference image generation unit 140 may perform correction processing or the like on the image acquired by the acquisition unit 120 as necessary. An edge image is obtained by edge detection processing.
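The edge detection in S201 is described only as being based on the luminance gradient; one straightforward way to realize it, shown here as a sketch rather than the embodiment's exact method, is to threshold the gradient magnitude of the grayscale image. The threshold value is an example.

```python
import numpy as np

def edge_image(gray, threshold=30.0):
    """Binary edge image: 1 where the luminance gradient magnitude exceeds a threshold."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)          # luminance gradient along y and x
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

# Tiny synthetic example: a bright vertical stripe on a dark background.
img = np.zeros((5, 8))
img[:, 3:5] = 200.0
print(edge_image(img))  # 1s appear around the stripe boundaries
```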
 次いで差分画像生成部140はS202において、差分画像を複数生成する。差分画像は、第1の時期に撮影されたいずれかの画像と、第2の時期に撮影されたいずれかの画像との差分を示す画像である。具体的には、差分画像生成部140は、S201で得られたエッジ画像の位置を対応付ける。そして、位置が対応付けられた二つのエッジ画像で、対応する位置の画素の値を比較する。すなわち、実空間で同じ位置に対応する画素の値を比較する。そして、画素の値が一致する画素を共通部分と判定し、画素の値が一致しない画素を差分と判定する。こうして、差分と判定された画素をそれ以外の画素と識別可能に示された画像が差分画像である。 Next, in S202, the difference image generation unit 140 generates a plurality of difference images. A difference image is an image showing a difference between any image taken in the first period and any image taken in the second period. Specifically, the differential image generator 140 associates the positions of the edge images obtained in S201. Then, the values of the pixels at the corresponding positions are compared between the two edge images whose positions are associated with each other. That is, the values of pixels corresponding to the same position in the real space are compared. Pixels with matching pixel values are determined as common portions, and pixels with non-matching pixel values are determined as differences. A difference image is an image in which the pixels determined to be the difference are identifiable from the other pixels.
 差分画像を生成するために用いる二画像の組み合わせは特に限定されないが、影などの短期変化物の状態が互いに異なる画像の組み合わせであることが好ましい。たとえば、差分画像生成部140は各画像に関連付けられた撮影日時に基づき太陽の向きを導出し、太陽の向きが所定の角度以上離れている二画像を用いて差分画像を生成することが好ましい。 The combination of two images used to generate the difference image is not particularly limited, but it is preferable to combine images in which the states of short-term objects such as shadows are different from each other. For example, the differential image generation unit 140 preferably derives the direction of the sun based on the shooting date and time associated with each image, and generates the differential image using two images in which the direction of the sun is separated by a predetermined angle or more.
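The criterion of pairing images whose sun directions are separated by a predetermined angle or more could be approximated as follows. The solar angle calculation here is deliberately crude (hour angle from local solar noon, roughly 15 degrees per hour); a real implementation would use an astronomical calculation or library, and the 45-degree threshold is an invented example value.

```python
from datetime import datetime

def rough_sun_angle(shot_at):
    """Very rough stand-in for the sun direction: hour angle from local solar noon (degrees)."""
    hours_from_noon = (shot_at.hour + shot_at.minute / 60.0) - 12.0
    return 15.0 * hours_from_noon  # the Earth rotates about 15 degrees per hour

def sun_directions_far_apart(t_a, t_b, min_separation_deg=45.0):
    """True when the two shooting times give sufficiently different shadow directions."""
    return abs(rough_sun_angle(t_a) - rough_sun_angle(t_b)) >= min_separation_deg

print(sun_directions_far_apart(datetime(2021, 4, 1, 9, 0), datetime(2021, 4, 1, 15, 0)))  # True
print(sun_directions_far_apart(datetime(2021, 4, 1, 9, 0), datetime(2021, 4, 1, 10, 0)))  # False
```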
 エッジ画像の位置を対応付ける方法としては、たとえばGPS情報等により画像に撮影された位置と方位が関連付けられている場合、その位置と方位を用いる方法が挙げられる。また、画像に関連付けられた情報に加えて、両画像内の基準物を利用してマッチングさせることも可能である。さらに、両画像内に適当な基準物が存在しない場合には、連続して撮影された画像を含めて広範囲で基準物を探索し。発見した基準物からの相対的な位置関係を求めて対象の画像の位置を推定し、位置合わせの精度を上げることが考えられる。差分画像にはさらに、その差分画像の元となる画像の位置情報に基づいた位置情報を関連付けることができる。 As a method of associating the position of the edge image, for example, if the position and orientation of the image are associated with GPS information or the like, there is a method of using that position and orientation. It is also possible to match using fiducials in both images in addition to the information associated with the images. Furthermore, if there is no suitable reference object in both images, the reference object is searched over a wide range including the successively captured images. It is conceivable to estimate the position of the target image by obtaining the relative positional relationship from the found reference object and improve the accuracy of registration. The difference image can further be associated with position information based on the position information of the original image of the difference image.
 次いで、抽出部160はS301において、複数の差分画像の共通成分を抽出する。抽出部160はこの共通成分を示す画像を生成することができる。具体的には、抽出部160はS202でエッジ画像の位置を対応付けたのと同様にして複数の差分画像の位置を対応付ける。そして、位置が対応付けられた複数の差分画像で、対応する位置の画素の値を比較する。そして、画素の値がすべて一致する画素を共通部分と判定する。または、全ての差分画像のうち、画素の値が一致する割合が所定の閾値以上である画素を共通部分と判定する。こうして、共通部分と判定された画素をそれ以外の画素と識別可能に示された画像が第1共通成分を示す画像である。 Next, in S301, the extraction unit 160 extracts common components of a plurality of difference images. The extractor 160 can generate an image showing this common component. Specifically, the extraction unit 160 associates the positions of the plurality of difference images in the same manner as the positions of the edge images were associated in S202. Then, values of pixels at corresponding positions are compared in a plurality of difference images with associated positions. Then, the pixels having the same pixel values are determined as the common portion. Alternatively, among all the difference images, pixels having a ratio of matching pixel values equal to or higher than a predetermined threshold value are determined to be common portions. Thus, the image showing the pixels determined as the common portion so as to be identifiable from the other pixels is the image showing the first common component.
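Combining S202 and S301, the following sketch treats each difference image as a binary mask over aligned pixels and keeps a pixel as part of the first common component when it is marked as a difference in at least a given fraction of the difference images (a fraction of 1.0 corresponds to requiring agreement in all of them). The data and threshold values are illustrative.

```python
import numpy as np

def difference_image(edge_a, edge_b):
    """Pixels whose values differ between two aligned edge images."""
    return (edge_a != edge_b).astype(np.uint8)

def first_common_component(diff_images, agreement_ratio=1.0):
    """Pixels marked as a difference in at least `agreement_ratio` of the difference images."""
    stack = np.stack(diff_images).astype(float)
    return (stack.mean(axis=0) >= agreement_ratio).astype(np.uint8)

d1 = np.array([[0, 1, 1], [0, 0, 0]])
d2 = np.array([[0, 1, 0], [0, 1, 0]])
d3 = np.array([[0, 1, 1], [0, 1, 0]])
print(first_common_component([d1, d2, d3]))         # only the pixel common to all three
print(first_common_component([d1, d2, d3], 2 / 3))  # pixels present in at least two of three
```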
 図1の例において、情報処理装置10は、第1共通成分を示す画像を出力する出力部180をさらに備える。出力部180はS401において、抽出部160が生成した第1共通成分を示す画像を表示装置に表示させるためのデータを出力する。なお、抽出の結果、差分画像の共通成分がない場合には、出力部180は、変化部分が無いことを示す情報を出力しても良い。なお、出力部180は、第1共通成分を示す画像中の共通成分を示す画素の数の割合が、所定の割合以下である場合に、変化部分が無いことを示す情報を出力しても良い。 In the example of FIG. 1, the information processing device 10 further includes an output unit 180 that outputs an image showing the first common component. In S401, the output unit 180 outputs data for displaying the image representing the first common component generated by the extraction unit 160 on the display device. Note that if the extraction result shows that there is no common component in the difference image, the output unit 180 may output information indicating that there is no changed portion. Note that the output unit 180 may output information indicating that there is no changed portion when the ratio of the number of pixels indicating the common component in the image indicating the first common component is equal to or less than a predetermined ratio. .
 なお、抽出部160がS301を行った後、同じ検出対象の位置および時期について、異なる画像を用いて再度検出処理が行われても良い。そして、複数の検出処理の結果の一致度が所定の基準以上である場合に、出力部180がその結果を出力してもよい。そうすることにより、より確かな結果を出力できる。 Note that after the extraction unit 160 performs S301, detection processing may be performed again using a different image for the same detection target position and time. Then, when the degree of matching of the results of a plurality of detection processes is equal to or higher than a predetermined standard, the output section 180 may output the results. By doing so, more reliable results can be output.
 図5は、本実施形態に係る情報処理装置10が行う処理の変形例を示すフローチャートである。本変形例によれば、検出処理に用いる画像を増やすことにより、変化部分の検出精度を高めることができる。本変形例では、所定の数の差分画像を生成する。本変形例は、以下に説明する点を除いて図4の例と同じである。 FIG. 5 is a flowchart showing a modified example of processing performed by the information processing apparatus 10 according to this embodiment. According to this modified example, by increasing the number of images used in the detection process, it is possible to improve the detection accuracy of the changed portion. In this modified example, a predetermined number of differential images are generated. This modification is the same as the example of FIG. 4 except for the points described below.
 本例のS101は図4のS101と同じである。情報処理装置10が検出対象の位置および時期の指定を受け付けると、取得部120はS103において記憶部200から検出対象の位置および時期に該当する画像を選択し、検出処理のために取得する。本例において取得部120は第1の時期に撮影された画像と、第2の時期に撮影された画像を一つずつ取得する。取得部120が取得する二画像の組み合わせは特に限定されないが、影の状態が互いに異なる画像の組み合わせであることが好ましい。たとえば、取得部120は各画像に関連付けられた撮影日時に基づき太陽の向きを導出し、太陽の向きが所定の角度以上離れている二画像を取得することが好ましい。 S101 in this example is the same as S101 in FIG. When the information processing apparatus 10 receives the designation of the detection target position and time, the acquisition unit 120 selects an image corresponding to the detection target position and time from the storage unit 200 in S103 and acquires it for the detection process. In this example, the acquisition unit 120 acquires one image captured during the first period and one image captured during the second period. The combination of two images acquired by the acquisition unit 120 is not particularly limited, but a combination of images with different shadow states is preferable. For example, the acquisition unit 120 preferably derives the direction of the sun based on the date and time of photography associated with each image, and acquires two images in which the direction of the sun is separated by a predetermined angle or more.
 取得部120が画像を取得すると、差分画像生成部140はS201においてエッジ検出を行う。本例のS201は図4のS201と同じである。 When the acquisition unit 120 acquires the image, the difference image generation unit 140 performs edge detection in S201. S201 in this example is the same as S201 in FIG.
 次いで差分画像生成部140はS203において、差分画像を生成する。差分画像は、第1の時期に撮影された画像と、第2の時期に撮影された画像との差分を示す画像である。差分画像を生成する方法は、図4のS202について説明した方法と同じである。 Next, the difference image generation unit 140 generates a difference image in S203. A difference image is an image showing a difference between an image captured in the first period and an image captured in the second period. The method for generating the differential image is the same as the method described for S202 in FIG.
 次いで、差分画像生成部140はS204において、それまでに生成した差分画像の数が、所定の数に達したか否かを判定する。差分画像の数が、所定の数に達していない場合(S204のN)、S103の処理に戻り、取得部120は再度画像を取得する。差分画像の数が、所定の数に達した場合(S204のY)、差分画像生成部140は、生成した差分画像を全て抽出部160へ出力する。 Next, in S204, the differential image generation unit 140 determines whether the number of differential images generated so far has reached a predetermined number. If the number of difference images has not reached the predetermined number (N of S204), the process returns to S103, and the acquisition unit 120 acquires images again. When the number of difference images reaches a predetermined number (Y of S204), the difference image generation unit 140 outputs all the generated difference images to the extraction unit 160. FIG.
 次いで、抽出部160はS301において、差分画像生成部140から取得した複数の差分画像の共通成分を抽出する。共通部分を抽出する方法は図4のS301と同じである。また、出力部180はS401において、抽出部160が生成した第1共通成分を示す画像を表示装置に表示させるためのデータを出力する。 Next, in S<b>301 , the extraction unit 160 extracts common components of the plurality of difference images acquired from the difference image generation unit 140 . The method of extracting the common part is the same as S301 in FIG. In S401, the output unit 180 also outputs data for displaying the image representing the first common component generated by the extraction unit 160 on the display device.
 なお、抽出部160が三以上の差分画像を用いて共通成分を抽出する場合、同一の画像が二以上の差分画像の生成に用いられても良い。ただし、第2の実施形態で説明するように、短期変化物が含まれない画像でない限り、全ての差分画像の生成に同一の画像を用いることはできない。 Note that when the extraction unit 160 extracts common components using three or more differential images, the same image may be used to generate two or more differential images. However, as described in the second embodiment, the same image cannot be used to generate all the difference images unless the image does not contain short-term change substances.
 情報処理装置10のハードウエア構成について以下に説明する。情報処理装置10の各機能構成部は、各機能構成部を実現するハードウエア(例:ハードワイヤードされた電子回路など)で実現されてもよいし、ハードウエアとソフトウエアとの組み合わせ(例:電子回路とそれを制御するプログラムの組み合わせなど)で実現されてもよい。以下、情報処理装置10の各機能構成部がハードウエアとソフトウエアとの組み合わせで実現される場合について、さらに説明する。 The hardware configuration of the information processing device 10 will be described below. Each functional configuration unit of the information processing apparatus 10 may be implemented by hardware (eg, hardwired electronic circuit) that implements each functional configuration unit, or may be implemented by a combination of hardware and software (eg, combination of an electronic circuit and a program for controlling it, etc.). A case in which each functional configuration unit of the information processing apparatus 10 is implemented by a combination of hardware and software will be further described below.
 図6は、情報処理装置10を実現するための計算機1000を例示する図である。計算機1000は任意の計算機である。例えば計算機1000は、SoC(System On Chip)、Personal Computer(PC)、サーバマシン、タブレット端末、又はスマートフォンなどである。計算機1000は、情報処理装置10を実現するために設計された専用の計算機であってもよいし、汎用の計算機であってもよい。 FIG. 6 is a diagram illustrating a computer 1000 for realizing the information processing apparatus 10. FIG. Computer 1000 is any computer. For example, the computer 1000 is an SoC (System On Chip), a personal computer (PC), a server machine, a tablet terminal, a smart phone, or the like. The computer 1000 may be a dedicated computer designed to implement the information processing apparatus 10, or may be a general-purpose computer.
 計算機1000は、バス1020、プロセッサ1040、メモリ1060、ストレージデバイス1080、入出力インタフェース1100、及びネットワークインタフェース1120を有する。バス1020は、プロセッサ1040、メモリ1060、ストレージデバイス1080、入出力インタフェース1100、及びネットワークインタフェース1120が、相互にデータを送受信するためのデータ伝送路である。ただし、プロセッサ1040などを互いに接続する方法は、バス接続に限定されない。プロセッサ1040は、CPU(Central Processing Unit)、GPU(Graphics Processing Unit)、又は FPGA(Field-Programmable Gate Array)などの種々のプロセッサである。メモリ1060は、RAM(Random Access Memory)などを用いて実現される主記憶装置である。ストレージデバイス1080は、ハードディスク、SSD(Solid State Drive)、メモリカード、又は ROM(Read Only Memory)などを用いて実現される補助記憶装置である。 The computer 1000 has a bus 1020 , a processor 1040 , a memory 1060 , a storage device 1080 , an input/output interface 1100 and a network interface 1120 . The bus 1020 is a data transmission path through which the processor 1040, memory 1060, storage device 1080, input/output interface 1100, and network interface 1120 mutually transmit and receive data. However, the method of connecting processors 1040 and the like to each other is not limited to bus connection. The processor 1040 is various processors such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or an FPGA (Field-Programmable Gate Array). The memory 1060 is a main memory implemented using a RAM (Random Access Memory) or the like. The storage device 1080 is an auxiliary storage device implemented using a hard disk, SSD (Solid State Drive), memory card, ROM (Read Only Memory), or the like.
 入出力インタフェース1100は、計算機1000と入出力デバイスとを接続するためのインタフェースである。例えば入出力インタフェース1100には、キーボードなどの入力装置や、ディスプレイ装置などの出力装置が接続される。 The input/output interface 1100 is an interface for connecting the computer 1000 and input/output devices. For example, the input/output interface 1100 is connected to an input device such as a keyboard and an output device such as a display device.
 ネットワークインタフェース1120は、計算機1000をネットワークに接続するためのインタフェースである。この通信網は、例えば LAN(Local Area Network)や WAN(Wide Area Network)である。ネットワークインタフェース1120がネットワークに接続する方法は、無線接続であってもよいし、有線接続であってもよい。 The network interface 1120 is an interface for connecting the computer 1000 to the network. This communication network is, for example, a LAN (Local Area Network) or a WAN (Wide Area Network). A method for connecting the network interface 1120 to the network may be a wireless connection or a wired connection.
 ストレージデバイス1080は、情報処理装置10の各機能構成部を実現するプログラムモジュールを記憶している。プロセッサ1040は、これら各プログラムモジュールをメモリ1060に読み出して実行することで、各プログラムモジュールに対応する機能を実現する。 The storage device 1080 stores program modules that implement each functional component of the information processing apparatus 10 . The processor 1040 reads each program module into the memory 1060 and executes it, thereby realizing the function corresponding to each program module.
As described above, according to the present embodiment, the difference image generation unit 140 generates the first difference image indicating the difference between the first image and the third image, and the second difference image indicating the difference between the second image and the fourth image. The extraction unit 160 extracts a common component between the first difference image and the second difference image as the first common component. Therefore, changes in images can be detected easily and accurately.
(Second embodiment)
FIGS. 7 and 8 are diagrams for explaining the processing performed by the information processing apparatus 10 according to the second embodiment. The information processing apparatus 10 according to this embodiment is the same as the information processing apparatus 10 according to the first embodiment except for the points described below.
In this embodiment, the first image, the second image, the third image, and the fourth image are all images including a road. The first image and the second image are the same image in which no short-term changeable object appears on the road, or the third image and the fourth image are the same image in which no short-term changeable object appears on the road.
 影等の短期変化物が含まれない画像を用いる場合、検出処理に用いる画像の数を減らすことができる。図7の例において、画像A4が第1画像および第2画像に相当し、画像B4が第3画像に相当し、画像B5が第4画像に相当する。すなわち、第1画像と第2画像とは同一の画像である。第1の日時と第2の日時は同じである。画像A4には道路31と白線30が含まれているが、影32は含まれていない。画像B4と画像B5には、道路31、白線30に加え、影32が含まれている。差分画像C4は画像A4と画像B4との差分を示している。差分画像C5は画像A4と画像B5との差分を示している。そして、共通成分画像D2は、差分画像C4と差分画像C5との共通成分を示す画像である。このような第1画像、第2画像、第3画像、および第4画像を用いても、第1の実施形態と同様に変化部分の検出を行える。 When using images that do not contain short-term changes such as shadows, the number of images used for detection processing can be reduced. In the example of FIG. 7, image A4 corresponds to the first and second images, image B4 corresponds to the third image, and image B5 corresponds to the fourth image. That is, the first image and the second image are the same image. The first date and time and the second date and time are the same. Image A4 includes road 31 and white line 30, but shadow 32 is not included. The image B4 and the image B5 include the road 31, the white line 30, and the shadow 32. FIG. A difference image C4 indicates the difference between the image A4 and the image B4. A difference image C5 indicates the difference between the image A4 and the image B5. The common component image D2 is an image showing common components between the difference image C4 and the difference image C5. Using the first image, the second image, the third image, and the fourth image, it is possible to detect a changed portion in the same manner as in the first embodiment.
 また図8の例において、画像A6が第1画像に相当し、画像A7が第2画像に相当し、画像B6が第3画像および第4画像に相当する。すなわち、第3画像と第4画像とは同一の画像である。第3の日時と第4の日時は同じである。画像B6には道路31と白線30が含まれているが、影32は含まれていない。画像A6と画像A7には、道路31、白線30に加え、影32が含まれている。差分画像C6は画像A6と画像B6との差分を示している。差分画像C7は画像A7と画像B6との差分を示している。そして、共通成分画像D3は、差分画像C6と差分画像C7との共通成分を示す画像である。このような第1画像、第2画像、第3画像、および第4画像を用いても、第1の実施形態と同様に変化部分の検出を行える。 Also, in the example of FIG. 8, image A6 corresponds to the first image, image A7 corresponds to the second image, and image B6 corresponds to the third and fourth images. That is, the third image and the fourth image are the same image. The third date and time and the fourth date and time are the same. Image B6 includes road 31 and white line 30, but shadow 32 is not included. The image A6 and the image A7 include a road 31, a white line 30, and a shadow 32. FIG. A difference image C6 indicates the difference between the image A6 and the image B6. A difference image C7 indicates the difference between the image A7 and the image B6. The common component image D3 is an image showing common components of the difference image C6 and the difference image C7. Using the first image, the second image, the third image, and the fourth image, it is possible to detect a changed portion in the same manner as in the first embodiment.
 本実施形態において、記憶部200に保持された各画像には、短期変化物の有無を示す情報が関連付けられている。そして、取得部120は短期変化物が無い画像を選択して取得することができる。取得部120は短期変化物が無い画像を取得した場合、差分画像生成部140はその画像を複数の差分画像の生成に用いることができる。また、差分画像生成部140はその画像を全ての差分画像の生成に用いても良い。 In this embodiment, each image held in the storage unit 200 is associated with information indicating the presence or absence of a short-term changeable substance. Then, the acquiring unit 120 can select and acquire an image without a short-term changeable substance. When the acquisition unit 120 acquires an image without short-term change substances, the difference image generation unit 140 can use the image to generate a plurality of difference images. Further, the differential image generation unit 140 may use the image to generate all the differential images.
Examples of images without short-term changeable objects include images taken at times when the sun is not out, or images taken under a cloudy sky. The images held in the storage unit 200 can be checked in advance and given information indicating the presence or absence of short-term changeable objects. Alternatively, such information may be attached to each image based on the date and time when the image was taken.
As described above, this embodiment provides the same operations and effects as the first embodiment. In addition, the first image and the second image are the same image in which no short-term changeable object appears on the road, or the third image and the fourth image are the same image in which no short-term changeable object appears on the road. Therefore, the changed portion can be detected using fewer images.
(Third embodiment)
FIG. 9 is a block diagram illustrating the functional configuration of the information processing device 10 according to the third embodiment. The information processing apparatus 10 according to the present embodiment is the same as the information processing apparatus 10 according to at least one of the first and second embodiments except for the points described below.
The information processing apparatus 10 according to this embodiment further includes a determination unit 110 that determines whether or not to start the detection process. When the determination unit 110 determines to start the detection process, the detection process is started. As described above, the detection process is a series of processes for detecting a changed portion using the images acquired by the acquisition unit 120. Specifically, it is a process in which the acquisition unit 120 acquires the first image, the second image, the third image, and the fourth image, the difference image generation unit 140 generates the first difference image and the second difference image, and the extraction unit 160 extracts the first common component. The detection process includes the acquisition step S10, the difference image generation step S20, and the extraction step S30. A detailed description is given below.
 判定部110は、検出処理を行うタイミングを決定する。特に、判定部110は対象に変化が生じている可能性が比較的高いタイミングで検出処理が行われるよう、検出処理を開始させるか否かの判定を行う。以下に、判定部110が行う判定方法の第1例から第4例について説明する。なお、判定部110が行う判定方法は以下に限定されない。また、情報処理装置10は複数の判定方法を組み合わせて実行しても良い。 The determination unit 110 determines the timing of performing the detection process. In particular, the determination unit 110 determines whether or not to start the detection process so that the detection process is performed at a timing when there is a relatively high possibility that the target has changed. First to fourth examples of the determination method performed by the determination unit 110 will be described below. Note that the determination method performed by the determination unit 110 is not limited to the following. Further, the information processing apparatus 10 may combine and execute a plurality of determination methods.
<First example>
In the first example, the determination unit 110 determines to start the detection process each time a predetermined period elapses. The determination unit 110 stores the date and time when the most recent detection process was performed. Then, determination unit 110 calculates the elapsed time since the most recent detection process was performed at predetermined time intervals (for example, every day). Then, when the calculated elapsed time is longer than a predetermined period, the determination unit 110 determines to start the detection process. On the other hand, if the calculated elapsed time is shorter than the predetermined period, it is determined not to start the detection process. The determination unit 110 can perform determination for each position. If it is determined to start the detection process, that position is designated as the position to be detected. Also, a predetermined period including the timing of the most recent detection process is set as the first period, and a predetermined period ending at the timing when the calculated elapsed time becomes longer than the predetermined period is set as the second period. is preferred. According to this example, detection processing can be performed periodically.
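A sketch of the periodic check in this first example; the function name and the 180-day interval are illustrative assumptions.

```python
from datetime import datetime, timedelta

def should_start_detection(last_detection_at, now, interval=timedelta(days=180)):
    """First example: start the detection process once the predetermined period has elapsed."""
    return (now - last_detection_at) > interval

print(should_start_detection(datetime(2021, 1, 1), datetime(2021, 8, 1)))  # True
print(should_start_detection(datetime(2021, 6, 1), datetime(2021, 8, 1)))  # False
```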
<Second example>
In the second example, the first image, the second image, the third image, and the fourth image are all images containing roads. The determination unit 110 acquires information about roads. Then, the determination unit 110 determines to start the detection process when determining that a predetermined event has occurred based on the road information.
 道路に関する情報はたとえば、道路の管理会社が提供する道路工事情報である。取得部120は、道路に関する情報をたとえば道路情報を提供するサービスのサーバ50から取得することができる。予め定められた事象は、対象に変化を生じさせる可能性が高い事象であり、たとえば道路工事である。道路工事では道路の白線等に変化が生じる可能性が高いことから、道路工事の前後で変化が生じていないかどうか確かめるよう、検出処理が行われるのが好ましい。 Information about roads is, for example, road construction information provided by road management companies. Acquisition unit 120 can acquire information about roads, for example, from server 50 of a service that provides road information. A predetermined event is an event that has a high possibility of causing a change in an object, such as road construction. Since there is a high possibility that the white lines of the road will change during road construction, it is preferable that the detection process be performed to check whether there is any change before and after the road construction.
 道路に関する情報には、事象が生じる日時または期間と、事象が生じる位置が含まれる。情報処理装置10は、検出対象の位置および時期を道路に関する情報に基づいて定めることができる。たとえば取得部120は、事象の発生前に撮影された第1画像および第2画像と、事象の発生後に撮影された第3画像および第4画像とを取得する。すなわち、事象の発生前に第1の時期を指定し、事象の発生後に第2の時期を指定する。また、検出対象の位置として、事象が発生した位置を指定する。 Information about roads includes the date and time or period when an event occurs and the location where the event occurs. The information processing device 10 can determine the position and time of the detection target based on the information on the road. For example, acquisition unit 120 acquires a first image and a second image captured before the occurrence of the event, and a third image and a fourth image captured after the occurrence of the event. That is, a first period of time is specified before the occurrence of the event, and a second period of time is specified after the occurrence of the event. Also, the position at which the event occurred is specified as the detection target position.
 本例において判定部110は、所定の時間ごと(たとえば一日ごと)に、道路に関する情報を取得する。そして、直近の検出処理の後に予め定められた事象が発生したか否かを確認する。判定部110は、予め定められた事象が発生した場合、検出処理を開始させると判定する。一方、予め定められた事象が発生していない場合、検出処理を開始させないと判定する。本例によれば、対象に変化を生じさせる可能性が高い事象が発生したことを検知して検出処理を行うことができる。 In this example, the determination unit 110 acquires information about roads at predetermined intervals (for example, every day). Then, it is checked whether or not a predetermined event has occurred after the most recent detection process. The determination unit 110 determines to start the detection process when a predetermined event occurs. On the other hand, if the predetermined event has not occurred, it is determined not to start the detection process. According to this example, it is possible to perform the detection process by detecting the occurrence of an event that is highly likely to cause a change in the object.
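For the second example, the decision and the choice of the two comparison periods around the event might look like the following sketch; the record format of the road information and the one-week window length are assumptions, not part of the described embodiment.

```python
from datetime import datetime, timedelta

def detection_request_for_event(event, window=timedelta(days=7)):
    """Second example: build a detection request from a road-work record (hypothetical format)."""
    if event["type"] != "road_work":
        return None  # not a predetermined event; do not start the detection process
    return {
        "position": event["position"],                              # where the work took place
        "first_period": (event["start"] - window, event["start"]),  # before the event
        "second_period": (event["end"], event["end"] + window),     # after the event
    }

event = {"type": "road_work", "position": (35.05, 139.05),
         "start": datetime(2021, 6, 1), "end": datetime(2021, 6, 10)}
print(detection_request_for_event(event))
```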
<Third example>
In the third example, the first image, the second image, the third image, and the fourth image are all images containing roads. The determination unit 110 acquires information about the traffic volume of the road, and determines the timing for starting the detection process based on the information about the traffic volume.
 本例において判定部110は、交通量に関する情報をたとえば交通情報を提供するサービスのサーバ50から取得する。交通量に関する情報はたとえば、道路上の各位置における時刻ごとの交通量を示す情報である。判定部110は交通量に関する情報を用いて、道路上の各位置における交通量の積算値を算出する。なお、積算値は直近の検出処理の後の積算値でよい。そして判定部110は、算出した積算値が予め定められた値を超えたか否かを判定する。判定部110は、積算値が予め定められた値を超えた場合、検出処理を開始させると判定する。一方、積算値が予め定められた値を超えなかった場合、検出処理を開始させないと判定する。判定部110は判定を位置ごとに行える。そして、検出処理を開始させると判定された場合、その位置を検出対象の位置として指定する。また、直近の検出処理のタイミングを含む所定の期間を第1の時期とし、積算値が予め定められた値を超えたタイミングを終点とする所定の期間を第2の時期とすることが好ましい。 In this example, the determination unit 110 acquires information on traffic volume from the server 50 of a service that provides traffic information, for example. Information about traffic volume is, for example, information indicating traffic volume at each position on the road at each time. The determination unit 110 calculates the integrated value of the traffic volume at each position on the road using the information about the traffic volume. Note that the integrated value may be the integrated value after the most recent detection process. Then, determination unit 110 determines whether or not the calculated integrated value exceeds a predetermined value. The determination unit 110 determines to start the detection process when the integrated value exceeds a predetermined value. On the other hand, if the integrated value does not exceed the predetermined value, it is determined not to start the detection process. The determination unit 110 can perform determination for each position. Then, when it is determined to start the detection process, that position is specified as the position to be detected. Further, it is preferable that a predetermined period including the timing of the most recent detection process is set as the first period, and a predetermined period ending at the timing when the integrated value exceeds a predetermined value is set as the second period.
 The heavier the traffic, the more likely it is that paint on the road surface will peel off. Therefore, determining the timing of the detection process based on the traffic volume makes it possible to detect changes accurately.
<Fourth example>
 In the fourth example, the first image, the second image, the third image, and the fourth image are all images that include a road. The determination unit 110 acquires information about the traffic flow on the road and determines, based on the information about the traffic flow, the timing at which the detection process is started.
 In this example, the determination unit 110 acquires the information about the traffic flow from, for example, the server 50 of a service that provides traffic information. The information about the traffic flow is, for example, information indicating the state of the traffic flow at each junction on the road (three-way junction, crossroads, five-way junction, etc.) for each time of day. The information about the traffic flow may be information indicating the number or ratio of vehicles that passed through a junction within a predetermined time, broken down by traveling direction (straight ahead, right turn, etc.). For example, if an intersection that used to have right-turning vehicles has had only straight-ahead vehicles since a certain point in time, the traffic regulation may have been changed, that is, the markings on the road surface may have changed. The information about the traffic flow may also be information indicating the state of the streams of vehicles on the road. For example, if the number of vehicle streams changed before and after a certain point in time, the number of lanes may have been changed, that is, the markings on the road surface may have changed.
 The determination unit 110 monitors the information about the traffic flow at predetermined intervals and detects a change in the traffic flow. Specifically, the determination unit 110 compares the information about the traffic flow at the time the most recent detection process was performed with the latest information about the traffic flow. When the difference between the two is larger than a predetermined reference value, the determination unit 110 determines that the detection process should be started; when the difference is not larger than the reference value, it determines that the detection process should not be started. Alternatively, the determination unit 110 may calculate the similarity between the information about the traffic flow at the time the most recent detection process was performed and the latest information about the traffic flow, determine that the detection process should be started when the calculated similarity is smaller than a predetermined reference value, and determine that it should not be started when the similarity is not smaller than the reference value.
 More specifically, the determination unit 110 may compare the proportions of straight-ahead vehicles at each junction based on the information about the traffic flow. In this case, the determination unit 110 determines that the detection process should be started when the difference between the proportion of straight-ahead vehicles at the time the most recent detection process was performed and the latest proportion of straight-ahead vehicles is larger than a predetermined reference value, and that it should not be started when this difference is not larger than the reference value. The determination unit 110 may also detect and compare the number of lanes at each position on the road using the information about the traffic flow. In this case, the determination unit 110 determines that the detection process should be started when the number of lanes at the time the most recent detection process was performed differs from the latest number of lanes, and that it should not be started when these numbers are the same.
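 A minimal sketch of these two criteria, with an assumed data layout and an assumed reference value, might look as follows.

STRAIGHT_RATIO_REFERENCE = 0.2  # assumed reference value for the change in the straight-ahead proportion

def traffic_flow_changed(previous_flow, latest_flow) -> bool:
    # Each argument maps a junction id to a (straight_ratio, lane_count) pair;
    # previous_flow is the state at the time of the most recent detection process.
    for junction, (prev_ratio, prev_lanes) in previous_flow.items():
        latest_ratio, latest_lanes = latest_flow.get(junction, (prev_ratio, prev_lanes))
        # Criterion 1: the proportion of straight-ahead vehicles changed by more
        # than the predetermined reference value.
        if abs(latest_ratio - prev_ratio) > STRAIGHT_RATIO_REFERENCE:
            return True
        # Criterion 2: the detected number of lanes differs from the previous count.
        if latest_lanes != prev_lanes:
            return True
    return False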
 The determination unit 110 can make this determination for each position or each junction, and when it determines that the detection process should be started, the corresponding position or junction is specified as the position to be detected. It is preferable that a predetermined period including the timing of the most recent detection process is set as the first period, and a predetermined period ending at the timing when the latest information about the traffic flow used in the determination was obtained is set as the second period.
 The hardware configuration of the computer that implements the information processing device 10 according to this embodiment is, as in the first embodiment, represented by FIG. 6, for example. However, the storage device 1080 of the computer 1000 that implements the information processing device 10 of this embodiment further stores a program module that implements the function of the determination unit 110 of this embodiment.
 FIG. 10 is a flowchart illustrating the flow of the information processing method performed by the information processing device 10 according to this embodiment. The information processing method according to this embodiment further includes a determination step S40 of determining whether to start the detection process. When it is determined in determination step S40 that the detection process should be started (Y in S40), the detection process is started. When it is not determined in determination step S40 that the detection process should be started (N in S40), the detection process is not started.
 As described above, this embodiment provides the same operations and effects as the first embodiment. In addition, according to this embodiment, the detection process is started when the determination unit 110 determines that it should be started, so the detection process can be performed at an appropriate timing.
(Fourth embodiment)
 FIG. 11 is a diagram for explaining the processing performed by the information processing device 10 according to the fourth embodiment. The information processing device 10 according to this embodiment is the same as the information processing device 10 according to the third embodiment except for the points described below.
 In the information processing device 10 according to this embodiment, the determination unit 110 acquires three or more images captured at mutually different dates and times, and extracts, as a second common component, the common component of each pair of temporally consecutive images among the three or more images. By comparing the plurality of extracted second common components in time series, the determination unit 110 detects whether there is a change in the target in the acquired images. When a change in the target is detected in the acquired images, the determination unit 110 determines that the detection process should be started. This is described in detail below.
 According to this embodiment, the determination unit 110 determines whether a change has occurred in the target based on a plurality of images. When a change has occurred, the information processing device 10 performs the detection process and can extract the specific changed portion.
 In the example of FIG. 11, the determination unit 110 acquires images E1 to E6 captured at mutually different dates and times. Each of the images E1 to E6 includes the road 31, the white line 30, and the shadow 32. In this example, the target is the white line 30. The images E1 to E6 were captured in this order. The images E1 to E6 may, for example, have been captured on different days, but the interval between captures is not particularly limited. The determination unit 110 extracts, as a second common component, the common component of each pair of temporally consecutive images among the images E1 to E6. The images F1 to F5 each show a second common component. That is, the determination unit 110 generates the image F1 showing the common component of the images E1 and E2, the image F2 showing the common component of the images E2 and E3, the image F3 showing the common component of the images E3 and E4, the image F4 showing the common component of the images E4 and E5, and the image F5 showing the common component of the images E5 and E6.
 The determination unit 110 then detects whether there is a change in the target in the images E1 to E6 by comparing the images F1 to F5 in time series. In this example, there is no difference between the images F1 and F2, there are differences between the images F2 and F3 and between the images F3 and F4, and there is no difference between the images F4 and F5. In this case, it can be determined that a change occurred between the capture timing of the image E3, on which the image F3 is based, and the capture timing of the image E4. This determination by the determination unit 110 is likewise unaffected by short-term changeable objects such as shadows.
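 The time-series comparison described above can be summarized, purely as an illustrative sketch, by the following Python function; common_component and images_differ stand in for the extraction and comparison steps detailed later and are assumed helpers, not part of the disclosure.

def change_detected(images, common_component, images_differ) -> bool:
    # images: E1, E2, ... ordered by capture date and time.
    # common[i] corresponds to the common component of images[i] and images[i + 1]
    # (F1, F2, ... in FIG. 11).
    common = [common_component(a, b) for a, b in zip(images, images[1:])]
    # If any two consecutive common-component images differ, a change occurred
    # somewhere within the capture period of the acquired images.
    return any(images_differ(f_prev, f_next)
               for f_prev, f_next in zip(common, common[1:]))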
 Note that the determination unit 110 can make the determination with as few as three images. For example, when the same processing is performed using the three images E1 to E3, the fact that there is no difference between the images F1 and F2 allows it to be determined that no change occurred between the capture timing of the image E1 and the capture timing of the image E3. When the same processing is performed using the three images E2 to E4, the fact that there is a difference between the images F2 and F3 allows it to be determined that a change occurred between the capture timing of the image E2 and the capture timing of the image E4.
 When the same processing is performed using the four images E1 to E4, the fact that there is no difference between the images F1 and F2 but there is a difference between the images F2 and F3 allows it to be determined that a change occurred between the capture timing of the image E3 and the capture timing of the image E4. In this way, using four or more images makes it possible to pinpoint the change timing more precisely. Therefore, rather than determining the change timing from only three images, the determination unit 110 preferably acquires four or more images captured at mutually different dates and times and uses them to detect the presence or absence of a change and to determine the change timing.
 In this way, when a change is detected in the target in the images acquired by the determination unit 110, the determination unit 110 can further specify the change timing. Therefore, the acquisition unit 120 preferably acquires the first image and the second image captured before the change timing, and the third image and the fourth image captured after the change timing. That is, it is preferable to specify the first period before the change timing and the second period after the change timing. The information processing device 10 also preferably specifies the capture position of the images acquired and used for the determination by the determination unit 110 as the position to be detected.
 The determination unit 110 can acquire the three or more images from, for example, the storage unit 200 via the acquisition unit 120. The images acquired by the determination unit 110 are similar to the images acquired by the acquisition unit 120 described in the first embodiment. The determination unit 110 preferably acquires images of the same kind as those the acquisition unit 120 acquires for the detection process and uses them to determine the presence or absence of a change. That is, all of the images acquired by the determination unit 110 are preferably orthorectified images, and they are preferably of the same type as one another. The capture areas of the images acquired by the determination unit 110 preferably overlap one another. The determination unit 110 can detect the presence or absence of a long-term change in the target, and the target does not need to be specified in advance. The images acquired by the determination unit 110 may further include short-term changeable objects. The three or more images acquired by the determination unit 110 are different from one another and were captured at different dates and times.
 After acquiring the images, the determination unit 110 performs edge detection processing on each acquired image to obtain edge images. The edge detection processing is as described above in the description of S201. Note that the determination unit 110 may perform correction processing or the like on the acquired images as necessary.
 Next, the determination unit 110 associates the positions of the two edge images obtained from two temporally consecutive images in the same manner as described above in the description of S202. The determination unit 110 then compares the values of the pixels at corresponding positions in the two position-aligned edge images and determines the pixels whose values match to be the common portion. The image in which the pixels determined to be the common portion are shown so as to be distinguishable from the other pixels is an image showing the second common component.
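 A minimal NumPy sketch of this pixel-wise comparison is shown below; it assumes, for illustration, that the two edge images are binary arrays of the same shape that have already been aligned.

import numpy as np

def second_common_component(edge_a: np.ndarray, edge_b: np.ndarray) -> np.ndarray:
    # Pixels whose values match in the two aligned edge images are treated as the
    # common portion (value 1); all other pixels are set to 0 so that the common
    # portion is distinguishable from the rest.
    return (edge_a == edge_b).astype(np.uint8)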
 After generating a plurality of images showing the second common components from the acquired images, the determination unit 110 compares them in time series to detect the presence or absence of a change. Specifically, the determination unit 110 performs a process of extracting the difference between two temporally consecutive images showing second common components; when a difference is extracted, it determines that there is a change between the two images, and when no difference is extracted, it determines that there is no change between them. The process of extracting the difference between images showing second common components can be performed in the same manner as the difference image generation unit 140 generates a difference image from edge images. Note that the determination unit 110 may determine that no difference has been extracted when the proportion of pixels indicating a difference in the difference image is equal to or less than a predetermined proportion.
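 The change test between two consecutive second-common-component images might be sketched as follows; the 1% figure used for the predetermined proportion is an assumed example value.

import numpy as np

def images_differ(common_prev: np.ndarray, common_next: np.ndarray,
                  max_ratio: float = 0.01) -> bool:
    # Pixels whose values disagree between the two common-component images form
    # the difference.
    diff = common_prev != common_next
    # The pair is treated as unchanged when the proportion of differing pixels
    # does not exceed the predetermined proportion.
    return np.count_nonzero(diff) / diff.size > max_ratio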
 When it is determined that there is a change between any two images showing second common components, the determination unit 110 can determine at least that the change timing lies within the capture period of the three or more images it acquired. When the determination unit 110 determines that the change timing lies within the capture period of the three or more acquired images, it determines that the detection process should be started, and the information processing device 10 then performs the detection process.
 When the determination unit 110 has acquired four or more images, that is, when three or more images showing second common components have been generated, the determination unit 110 may specify the change timing in more detail. When the plurality of images showing second common components are compared and one image differs from both the image before it and the image after it, the change timing can be identified as lying between the two images on which that image is based. Likewise, when the temporally first or last image differs from the other images, the change timing can be identified as lying between the two images on which that first or last image is based.
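 The refinement described in this paragraph can be sketched as follows, reusing the hypothetical common_component and images_differ helpers from the earlier sketches; the returned indices identify the two acquired images inferred to bracket the change.

def change_interval(images, common_component, images_differ):
    # common[i] is built from images[i] and images[i + 1].
    common = [common_component(a, b) for a, b in zip(images, images[1:])]
    n = len(common)
    if n < 3:
        return None  # pinpointing the timing needs at least three common-component images
    differs = [images_differ(common[i], common[i + 1]) for i in range(n - 1)]
    # An image that differs from both of its neighbours was generated from the two
    # images between which the change occurred.
    for i in range(1, n - 1):
        if differs[i - 1] and differs[i]:
            return i, i + 1
    # Otherwise, when only the first or only the last image is the odd one out,
    # the change lies between its two base images.
    if differs[0] and not any(differs[1:]):
        return 0, 1
    if differs[-1] and not any(differs[:-1]):
        return n - 1, n
    return None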
 As described above, this embodiment provides the same operations and effects as the first embodiment. In addition, according to this embodiment, the determination unit 110 detects whether there is a change in the target in the images it acquires and determines that the detection process should be started when a change in the target is detected. Therefore, the detection process can be performed at an appropriate timing.
 Although embodiments and examples have been described above with reference to the drawings, they are illustrative of the present invention, and various configurations other than those described above can also be adopted.
 This application claims priority based on Japanese Patent Application No. 2021-059608 filed on March 31, 2021, the entire disclosure of which is incorporated herein.
10 Information processing device
30 White line
31 Road
32 Shadow
50 Server
110 Determination unit
120 Acquisition unit
140 Difference image generation unit
160 Extraction unit
180 Output unit
200 Storage unit
1000 Computer
1020 Bus
1040 Processor
1060 Memory
1080 Storage device
1100 Input/output interface
1120 Network interface

Claims (16)

  1.  An information processing device comprising:
     an acquisition unit that acquires a first image captured at a first date and time, a second image captured at a second date and time, a third image captured at a third date and time, and a fourth image captured at a fourth date and time;
     a difference image generation unit that generates a first difference image indicating a difference between the first image and the third image, and a second difference image indicating a difference between the second image and the fourth image; and
     an extraction unit that extracts a common component of the first difference image and the second difference image as a first common component,
     wherein the third date and time and the fourth date and time are both later than the first date and time and later than the second date and time.
  2.  The information processing device according to claim 1,
     wherein an interval between the first date and time and the second date and time is shorter than an interval between the first date and time and the third date and time.
  3.  The information processing device according to claim 1 or 2,
     wherein an interval between the third date and time and the fourth date and time is shorter than an interval between the first date and time and the third date and time.
  4.  The information processing device according to any one of claims 1 to 3,
     wherein the first image, the second image, the third image, and the fourth image are all orthorectified images.
  5.  The information processing device according to any one of claims 1 to 4,
     wherein the first image, the second image, the third image, and the fourth image are all images including a road, and
     the first image and the second image are the same image in which no short-term changeable object appears on the road, or the third image and the fourth image are the same image in which no short-term changeable object appears on the road.
  6.  The information processing device according to claim 5,
     wherein the short-term changeable object is at least one of a shadow, snow, a puddle, earth and sand, a vehicle, a fallen object, fallen leaves, and garbage.
  7.  The information processing device according to any one of claims 1 to 4,
     wherein the first date and time and the second date and time are different from each other, and the third date and time and the fourth date and time are different from each other.
  8.  The information processing device according to any one of claims 1 to 7, further comprising an output unit that outputs an image showing the first common component.
  9.  The information processing device according to any one of claims 1 to 8, further comprising a determination unit that determines whether to start a detection process in which the acquisition unit acquires the first image, the second image, the third image, and the fourth image, the difference image generation unit generates the first difference image and the second difference image, and the extraction unit extracts the first common component,
     wherein the detection process is started when the determination unit determines that the detection process should be started.
  10.  The information processing device according to claim 9,
     wherein the determination unit determines that the detection process should be started each time a predetermined period elapses.
  11.  The information processing device according to claim 9 or 10,
     wherein the first image, the second image, the third image, and the fourth image are all images including a road,
     the determination unit acquires information about the road and determines that the detection process should be started when it determines, based on the information about the road, that a predetermined event has occurred, and
     the acquisition unit acquires the first image and the second image captured before the occurrence of the event, and the third image and the fourth image captured after the occurrence of the event.
  12.  The information processing device according to any one of claims 9 to 11,
     wherein the first image, the second image, the third image, and the fourth image are all images including a road, and
     the determination unit acquires information about traffic flow on the road and determines, based on the information about the traffic flow, a timing at which the detection process is started.
  13.  The information processing device according to any one of claims 9 to 12,
     wherein the determination unit
      acquires three or more images captured at mutually different dates and times,
      extracts, as a second common component, a common component of each pair of temporally consecutive images among the three or more images,
      detects whether there is a change in a target in the images acquired by the determination unit by comparing the plurality of extracted second common components in time series, and
      determines that the detection process should be started when a change in the target is detected in the images acquired by the determination unit.
  14.  The information processing device according to claim 13,
     wherein the determination unit further specifies a change timing when a change in the target is detected in the images acquired by the determination unit, and
     the acquisition unit acquires the first image and the second image captured before the change timing, and the third image and the fourth image captured after the change timing.
  15.  An information processing method comprising:
     an acquisition step of acquiring a first image captured at a first date and time, a second image captured at a second date and time, a third image captured at a third date and time, and a fourth image captured at a fourth date and time;
     a difference image generation step of generating a first difference image indicating a difference between the first image and the third image, and a second difference image indicating a difference between the second image and the fourth image; and
     an extraction step of extracting a common component of the first difference image and the second difference image as a first common component,
     wherein the third date and time and the fourth date and time are both later than the first date and time and later than the second date and time.
  16.  A program that causes a computer to execute each step of the information processing method according to claim 15.

