WO2018154654A1 - Moving object detection device and moving object detection program - Google Patents

Moving object detection device and moving object detection program

Info

Publication number
WO2018154654A1
Authority
WO
WIPO (PCT)
Prior art keywords
moving object
image
region
target
edge
Prior art date
Application number
PCT/JP2017/006579
Other languages
French (fr)
Japanese (ja)
Inventor
仁己 小田
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to PCT/JP2017/006579 priority Critical patent/WO2018154654A1/en
Priority to JP2019500908A priority patent/JP6532627B2/en
Publication of WO2018154654A1 publication Critical patent/WO2018154654A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion

Definitions

  • the present invention relates to a technique for detecting a moving object shown in a video frame.
  • the interframe difference is a difference in luminance information between images.
  • because luminance information changes due to changes in illuminance in the shooting environment, noise other than the movement of moving objects is included in the result of the inter-frame difference.
  • Patent Document 1 describes a technique for robustly recognizing differences in moving direction between moving objects and changes in moving objects through inter-frame differences and learning.
  • Patent Document 1 does not describe a technique for removing a change in luminance information caused by a change in illuminance when external light occurs. Therefore, with the technique of Patent Document 1, such a change in luminance information is detected as motion of a moving object.
  • An object of the present invention is to prevent a decrease in detection accuracy caused by factors other than the movement of a moving object.
  • the moving object detection device of the present invention includes: a difference image generation unit that uses a target frame that is a video frame, a previous frame that is a video frame before the target frame, and a rear frame that is a video frame after the target frame to generate a first difference image representing the difference in luminance between the target frame and the previous frame and a second difference image representing the difference in luminance between the target frame and the rear frame;
  • an edge image generation unit that generates a first edge image representing edges in the first difference image and a second edge image representing edges in the second difference image; and
  • a moving object image generation unit that generates a moving object image indicating the edges common to the first edge image and the second edge image.
  • according to the present invention, an image showing the edges common to the first edge image and the second edge image is generated as the moving object image. Factors other than the movement of the moving object are thereby removed, and a decrease in detection accuracy can be suppressed.
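  • The claimed pipeline maps onto a few standard image operations. The following is a minimal sketch in Python with OpenCV, not the patented implementation itself: grayscale input, `cv2.absdiff` for the difference images, Canny edge extraction, and a per-pixel logical product are all illustrative choices, and the thresholds are assumptions.

```python
import cv2

def detect_moving_object(prev_frame, target_frame, next_frame):
    """Return a binary image of the edges common to both difference images.

    All three arguments are grayscale uint8 frames of identical size,
    taken from three consecutive video frames.
    """
    # Difference image generation unit: per-pixel luminance differences.
    first_diff = cv2.absdiff(target_frame, prev_frame)    # target vs. previous frame
    second_diff = cv2.absdiff(target_frame, next_frame)   # target vs. rear frame

    # Edge image generation unit: extract luminance boundaries.
    # The Canny thresholds (50, 150) are illustrative, not from the patent.
    first_edge = cv2.Canny(first_diff, 50, 150)
    second_edge = cv2.Canny(second_diff, 50, 150)

    # Moving object image generation unit: keep only the edges present
    # in both edge images (per-pixel logical product).
    return cv2.bitwise_and(first_edge, second_edge)
```

  • Edges caused by an illuminance change between the previous frame and the target frame appear only in the first edge image, so the logical product removes them; only the moving object's edges survive in both.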
  • FIG. 1 is a configuration diagram of the moving object detection device 100 in Embodiment 1.
  • FIG. 2 is a relationship diagram of the previous frame 201, the target frame 202, and the first difference image 211 in Embodiment 1.
  • FIG. 3 is a relationship diagram of the target frame 202, the rear frame 203, and the second difference image 212 in Embodiment 1.
  • FIG. 4 is a relationship diagram of the first difference image 211 and the first edge image 221 in Embodiment 1.
  • FIG. 5 is a relationship diagram of the second difference image 212 and the second edge image 222 in Embodiment 1.
  • FIG. 6 is a relationship diagram of the first edge image 221, the second edge image 222, and the moving object image 230 in Embodiment 1.
  • FIG. 7 is a flowchart of the moving object detection method in Embodiment 1.
  • FIG. 8 shows the difference image 293 obtained when luminance information changes due to external light.
  • FIG. 9 is a configuration diagram of the moving object detection device 100 in Embodiment 2.
  • FIG. 10 is a relationship diagram of the moving object image 230, the moving object blocks 241, and the moving object regions 242 in Embodiment 2.
  • FIG. 11 shows tracking of the moving object region 242 in Embodiment 2.
  • FIG. 12 shows tracking of the moving object region 242 in Embodiment 2.
  • FIG. 13 shows tracking of the stationary region 250 in Embodiment 2.
  • FIG. 14 shows tracking of the stationary region 250 in Embodiment 2.
  • FIG. 15 is a flowchart of the moving object region detection process in Embodiment 2.
  • FIG. 16 is a flowchart of the moving object region tracking process in Embodiment 2.
  • FIG. 17 is a flowchart of the moving object region tracking process in Embodiment 2.
  • FIG. 18 is a flowchart of the moving object region tracking process in Embodiment 2.
  • FIG. 19 is a hardware configuration diagram of the moving object detection device 100 in the embodiments.
  • Embodiment 1. A mode for detecting a moving object shown in a video frame will be described with reference to FIGS. 1 to 7.
  • the moving object detection device 100 is a computer including hardware such as a processor 901, a memory 902, an auxiliary storage device 903, and an input / output interface 904. These hardwares are connected to each other via signal lines.
  • the processor 901 is an IC (Integrated Circuit) that performs arithmetic processing, and controls other hardware.
  • the processor 901 is a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or a GPU (Graphics Processing Unit).
  • the memory 902 is a volatile storage device.
  • the memory 902 is also called a main storage device or main memory.
  • the memory 902 is a RAM (Random Access Memory).
  • Data stored in the memory 902 is stored in the auxiliary storage device 903 as necessary.
  • the auxiliary storage device 903 is a nonvolatile storage device.
  • the auxiliary storage device 903 is a ROM (Read Only Memory), a HDD (Hard Disk Drive), or a flash memory. Data stored in the auxiliary storage device 903 is loaded into the memory 902 as necessary.
  • the input / output interface 904 is a port to which an input device and an output device are connected.
  • for example, the input / output interface 904 is a USB terminal, the input device is a receiver, a keyboard, and a mouse, and the output device is a transmitter and a display.
  • USB is an abbreviation for Universal Serial Bus.
  • the moving object detection apparatus 100 includes software elements such as a difference image generation unit 110, an edge image generation unit 120, and a moving object image generation unit 130.
  • a software element is an element realized by software.
  • the auxiliary storage device 903 stores a moving object detection program for causing a computer to function as the difference image generation unit 110, the edge image generation unit 120, and the moving object image generation unit 130.
  • the moving object detection program is loaded into the memory 902 and executed by the processor 901.
  • the auxiliary storage device 903 stores an OS (Operating System). At least a part of the OS is loaded into the memory 902 and executed by the processor 901. That is, the processor 901 executes the moving object detection program while executing the OS.
  • Data obtained by executing the moving object detection program is stored in a storage device such as the memory 902, the auxiliary storage device 903, a register in the processor 901, or a cache memory in the processor 901.
  • the memory 902 functions as a storage unit 191 that stores data. However, another storage device may function as the storage unit 191 instead of the memory 902 or together with the memory 902.
  • the input / output interface 904 functions as a reception unit 192 that receives input.
  • the input / output interface 904 functions as an output unit 193 that outputs data.
  • the moving object detection apparatus 100 may include a plurality of processors that replace the processor 901.
  • the plurality of processors share the role of the processor 901.
  • the moving object detection program can be stored in a computer-readable manner on a nonvolatile storage medium such as a magnetic disk, an optical disk, or a flash memory.
  • a non-volatile storage medium is a tangible medium that is not temporary.
  • the difference image generation unit 110 generates a first difference image 211 using the target frame 202 and the previous frame 201.
  • the target frame 202 is a video frame.
  • a video frame is image data.
  • the target frame 202 in FIGS. 2 and 3 is data of an image obtained by photographing a room with a moving person from above.
  • the previous frame 201 is a video frame before the target frame 202.
  • the previous frame 201 is a video frame immediately before the target frame 202.
  • the room in the target frame 202 is brighter than the room in the previous frame 201.
  • the first difference image 211 is an image representing a difference in luminance between the target frame 202 and the previous frame 201.
  • the first difference image 211 shows the illuminance difference of the room in addition to the moving person.
  • the difference image generation unit 110 generates a second difference image 212 using the target frame 202 and the rear frame 203.
  • the rear frame 203 is a video frame after the target frame 202. Specifically, the rear frame 203 is a video frame next to the target frame 202.
  • the brightness of the room in the target frame 202 is not different from the brightness of the room in the rear frame 203.
  • the second difference image 212 is an image representing a difference in luminance between the target frame 202 and the rear frame 203.
  • the brightness of the room in the target frame 202 is not different from the brightness of the room in the rear frame 203, and thus the moving person appears mainly in the second difference image 212.
  • the edge image generation unit 120 generates a first edge image 221 using the first difference image 211.
  • the first edge image 221 is an image representing an edge in the first difference image 211.
  • An edge is a location that becomes a boundary of luminance. That is, the luminance changes greatly at the edge.
  • the edge image generation unit 120 generates a second edge image 222 using the second difference image 212.
  • the second edge image 222 is an image representing an edge in the second difference image 212.
  • based on FIG. 6, the function of the moving object image generation unit 130 will be described.
  • the moving object image generation unit 130 generates an image indicating an edge common to the first edge image 221 and the second edge image 222.
  • the generated image is referred to as a moving object image 230.
  • the moving object image 230 corresponds to an image showing the edge of the moving object shown in the target frame.
  • the moving object image 230 in FIG. 6 shows the edges of the person shown in the target frame 202 (see FIGS. 2 and 3).
  • the operation of the moving object detection apparatus 100 corresponds to a moving object detection method.
  • the procedure of the moving object detection method corresponds to the procedure of the moving object detection program.
  • in step S101, the reception unit 192 receives a video frame.
  • the storage unit 191 stores the video frame in association with the time when the video frame is received.
  • in step S111, the difference image generation unit 110 determines whether the target frame is stored in the storage unit 191.
  • the target frame is a video frame associated with the time before the video frame accepted in step S101. If the target frame is stored in the storage unit 191, the process proceeds to step S112. If the target frame is not stored in the storage unit 191, the process proceeds to step S101.
  • in step S112, the difference image generation unit 110 generates a difference image using the target frame and the subsequent frame.
  • the subsequent frame is the video frame accepted in step S101.
  • the storage unit 191 stores the difference image in association with the time when the difference image is generated.
  • the difference image generation unit 110 generates a difference image as follows.
  • the difference image generation unit 110 performs the following processing for each pixel of the target frame.
  • the difference image generation unit 110 selects the pixel of the subsequent frame corresponding to the pixel of the target frame, calculates the luminance difference between the pixel of the target frame and the pixel of the subsequent frame, selects the pixel of the difference image corresponding to the pixel of the target frame, and sets the luminance difference to that pixel of the difference image.
  • the pixel of the Y frame corresponding to the pixel of the X frame is a pixel identified by the same coordinate value as the coordinate value for identifying the pixel of the X frame among the pixels included in the Y frame. That is, the Y frame pixel corresponding to the pixel located at (u, v) in the X frame is the pixel located at (u, v) in the Y frame.
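  • Written out pixel by pixel, this is a simple nested loop. The sketch below mirrors the description literally, assuming grayscale frames; in practice a vectorized operation such as `cv2.absdiff` computes the same result at once.

```python
import numpy as np

def difference_image(target, following):
    """Per-pixel luminance difference, using the (u, v) correspondence."""
    height, width = target.shape
    diff = np.zeros((height, width), dtype=np.uint8)
    for v in range(height):
        for u in range(width):
            # The pixel at (u, v) of the subsequent frame corresponds to
            # the pixel at (u, v) of the target frame; the cast to int
            # avoids uint8 wrap-around when subtracting.
            diff[v, u] = abs(int(target[v, u]) - int(following[v, u]))
    return diff
```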
  • in step S121, the edge image generation unit 120 generates an edge image using the difference image generated in step S112.
  • the storage unit 191 stores the edge image in association with the time when the edge image is generated.
  • the edge image generation unit 120 generates an edge image by performing edge extraction on the difference image.
  • Edge extraction is a process for extracting a portion that becomes a boundary of luminance as an edge.
  • Edge extraction is a conventional technique and is also called edge detection.
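  • Any conventional edge detector can serve here. As one illustration (an assumption, since the patent does not name a specific method), a Sobel gradient-magnitude threshold makes the definition of an edge, a location where luminance changes greatly, explicit:

```python
import cv2
import numpy as np

def edge_image(diff_image, threshold=64.0):
    """Mark pixels where the luminance gradient is large as edge pixels."""
    gx = cv2.Sobel(diff_image, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(diff_image, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
    magnitude = np.hypot(gx, gy)
    # Pixels whose gradient magnitude exceeds the (illustrative) threshold
    # are part of an edge (1); all others are not (0).
    return (magnitude > threshold).astype(np.uint8)
```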
  • in step S131, the moving object image generation unit 130 determines whether the first edge image is stored in the storage unit 191.
  • the first edge image is an edge image associated with the time before the edge image generated in step S121. If the first edge image is stored in the storage unit 191, the process proceeds to step S132. If the first edge image is not stored in the storage unit 191, the process proceeds to step S101.
  • in step S132, the moving object image generation unit 130 generates a moving object image using the first edge image and the second edge image.
  • the second edge image is the edge image generated in step S121.
  • the storage unit 191 stores the moving body image in association with the time when the moving body image is generated.
  • the moving object image generation unit 130 generates a moving object image as follows.
  • the moving object image generation unit 130 performs the following processing for each pixel of the first edge image.
  • the moving object image generation unit 130 selects the pixel of the second edge image corresponding to the pixel of the first edge image, calculates the logical product of the pixel of the first edge image and the pixel of the second edge image, selects the pixel of the moving object image corresponding to the pixel of the first edge image, and sets the logical product to that pixel of the moving object image.
  • the pixel of the Y image corresponding to the pixel of the X image is a pixel identified by the same coordinate value as the coordinate value for identifying the pixel of the X image among the pixels included in the Y image.
  • the pixel of the Y image corresponding to the pixel located at (u, v) of the X image is the pixel located at (u, v) of the Y image.
  • the value of a pixel that is part of an edge is “1”, and the value of a pixel that is not part of an edge is “0”.
  • when both the pixel of the first edge image and the pixel of the second edge image have the value “1”, the value of the corresponding pixel of the moving object image is “1”.
  • when at least one of the pixel of the first edge image and the pixel of the second edge image has the value “0”, the value of the corresponding pixel of the moving object image is “0”.
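  • With that 1/0 encoding, the moving object image is the element-wise logical product of the two edge images; a direct transcription of the rule above:

```python
import numpy as np

def moving_object_image(first_edge, second_edge):
    """A pixel is 1 only where both edge images have the value 1."""
    return np.logical_and(first_edge == 1, second_edge == 1).astype(np.uint8)
```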
  • in step S141, the output unit 193 outputs the moving object image generated in step S132. After step S141, the process proceeds to step S101.
  • Embodiment 1 solves the problem of detecting only moving objects from luminance information that changes due to external light. Since the edges of an object do not change even when external light occurs, a moving object and its position are detected using edge information that is robust to external light. According to Embodiment 1, the luminance change in the video frame caused by the illuminance change is removed, so the motion of the moving object can be detected robustly.
  • FIG. 8 shows an example of a video frame whose luminance information has changed due to external light.
  • the front frame 291 is a video frame obtained before the rear frame 292, and the rear frame 292 is a video frame obtained after the front frame 291.
  • the difference image 293 is an image representing the difference in luminance between the front frame 291 and the rear frame 292. Since the external light increased between the time the front frame 291 was obtained and the time the rear frame 292 was obtained, the front frame 291 is darker than the rear frame 292. For this reason, the difference image 293 shows not only the person who moved (the person on the right) but also the person who did not move (the person on the left) and the unmoving background. However, since the moving object detection device 100 uses the first difference image and the second difference image, it can extract the moving object in the video frame as edge information even if the luminance information of the video frame changes greatly due to external light.
  • Embodiment 2. Regarding a mode for tracking a moving object, differences from Embodiment 1 will be mainly described with reference to FIGS. 9 to 18.
  • the moving object detection apparatus 100 includes a moving object region detecting unit 140 and a moving object region tracking unit 150 as software elements in addition to the difference image generating unit 110, the edge image generating unit 120, and the moving object image generating unit 130.
  • the moving object detection program is a program for causing a computer to function as the difference image generating unit 110, the edge image generating unit 120, the moving object image generating unit 130, the moving object region detecting unit 140, and the moving object region tracking unit 150.
  • the moving object image generation unit 130 generates a moving object image for each set of three video frames that are consecutive in time series.
  • the moving object region detection unit 140 detects one or more moving object regions 242 corresponding to the one or more moving objects shown in the target frame from the moving object image 230 based on the edges indicated in the moving object image 230.
  • the moving object area 242 is an area representing a moving object.
  • the moving object region detection unit 140 detects the moving object regions 242 by the following procedure (a sketch in code follows below).
(1) The moving object region detection unit 140 divides the moving object image 230 into a plurality of blocks.
(2) The moving object region detection unit 140 calculates the number of edge pixels for each block. The number of edge pixels is the number of pixels indicating a part of an edge.
(3) The moving object region detection unit 140 identifies one or more moving object blocks 241 based on the number of edge pixels of each block. A moving object block 241 is a block whose number of edge pixels exceeds a pixel number threshold.
(4) The moving object region detection unit 140 generates a rectangular region containing adjacent moving object blocks 241. The generated region is a moving object region 242.
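  • The following is a sketch of steps (1) through (4), assuming edge pixels have the value 1; the block size W × H and the pixel number threshold are left open by the description, so the values below are illustrative. Grouping the identified blocks into clusters of adjacent blocks (for example by connected-component labeling) is omitted for brevity.

```python
import numpy as np

def detect_moving_object_blocks(moving_image, w=16, h=16, pixel_threshold=20):
    """(1)-(3): divide into W x H blocks and keep blocks rich in edge pixels."""
    rows = moving_image.shape[0] // h
    cols = moving_image.shape[1] // w
    moving_blocks = []
    for r in range(rows):
        for c in range(cols):
            block = moving_image[r * h:(r + 1) * h, c * w:(c + 1) * w]
            # (2) count the edge pixels; (3) keep blocks over the threshold.
            if np.count_nonzero(block) > pixel_threshold:
                moving_blocks.append((r, c))
    return moving_blocks

def bounding_region(adjacent_blocks, w=16, h=16):
    """(4) rectangular region enclosing a group of adjacent moving object blocks."""
    rs = [r for r, _ in adjacent_blocks]
    cs = [c for _, c in adjacent_blocks]
    # (left, top, right, bottom) in pixel coordinates.
    return (min(cs) * w, min(rs) * h, (max(cs) + 1) * w, (max(rs) + 1) * h)
```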
  • the moving object region tracking unit 150 detects a moving object region 242T corresponding to the previous moving object region 242B from the target moving object image for each previous moving object region 242B included in the previous moving object image.
  • the moving body region tracking unit 150 detects a previous moving body region 242B that does not correspond to any moving body region 242T included in the target moving body image from the previous moving body image. Then, the moving body region tracking unit 150 detects a region corresponding to the detected previous moving body region 242B as a still region 250 from the target moving body image.
  • the stationary region 250 is a region corresponding to a moving object that is stationary.
  • the moving object region tracking unit 150 detects a moving object region corresponding to the still region for each still region included in the previous moving object image from the target moving object image.
  • the moving body region tracking unit 150 detects a still region that does not correspond to any moving body region included in the target moving body image from the previous moving body image.
  • the detected still area is called a target still area.
  • the moving object region tracking unit 150 detects a region corresponding to the target stationary region from the target moving object image.
  • the detected region is referred to as a still region in the target moving object image.
  • when the tracking time has elapsed, the moving object region tracking unit 150 discards the still region included in the target moving object image.
  • the tracking time is a predetermined time during which the stationary region is tracked on the assumption that an actively moving body is standing still in that region.
  • the moving object region tracking unit 150 operates as follows. First, the moving object region tracking unit 150 sets the tracking time as the remaining time when a stationary region is detected for the first time for each stationary region. Next, the moving object region tracking unit 150 performs, for each stationary region, moving object images from when the stationary region is detected for the first time until a moving object region corresponding to the stationary region is detected (or until the stationary region is discarded). Each time it is generated, the remaining time in the still area is reduced. Then, the moving object region tracking unit 150 discards the still region in which the remaining time becomes zero.
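  • The remaining-time bookkeeping can be sketched as a small helper. The tracking time and the unit elapsed time are configuration values the description leaves open, so the defaults here are assumptions.

```python
class StillRegionTracker:
    """Counts down a remaining time for each stationary region."""

    def __init__(self, tracking_time=30.0, unit_elapsed_time=1.0):
        self.tracking_time = tracking_time          # assumed value, in seconds
        self.unit_elapsed_time = unit_elapsed_time  # time per moving object image
        self.remaining = {}                         # region identifier -> remaining time

    def on_still_region_first_detected(self, region_id):
        # A stationary region starts with the full tracking time.
        self.remaining[region_id] = self.tracking_time

    def on_moving_object_image_generated(self):
        # Each new moving object image reduces every remaining time;
        # a region whose remaining time reaches zero is discarded.
        for region_id in list(self.remaining):
            self.remaining[region_id] -= self.unit_elapsed_time
            if self.remaining[region_id] <= 0:
                del self.remaining[region_id]

    def on_moving_region_matched(self, region_id):
        # A moving object region corresponding to the still region was
        # found, so the still region no longer needs to be tracked.
        self.remaining.pop(region_id, None)
```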
  • the previous moving object image includes a moving object region 242B.
  • the moving object region 242N is a moving object region corresponding to the stationary region 250. It is assumed that the moving object region 242N corresponding to the stationary region 250 is detected before the tracking time elapses. In that case, the moving object region 242B, the stationary region 250, and the moving object region 242N are considered to be active moving object regions.
  • the active moving object area is an area representing an active moving object.
  • An active moving object is a moving object that is not a moved object. That is, the active moving body is a moving body that moves by itself. For example, the active moving object is a person.
  • the previous moving object image includes a moving object region 242B. There is no moving object region corresponding to the moving object region 242B in the target moving object image. Therefore, the still region 250 is detected in the target moving object image. There is no moving object region corresponding to the still region 250 in the next moving object image. It is assumed that no moving object region corresponding to the stationary region 250 is detected before the tracking time elapses. In that case, the moving object region 242B and the stationary region 250 are considered to be passive moving object regions.
  • the passive moving object area is an area representing a passive moving object. Passive moving objects are moved. In other words, passive moving objects are things that do not move by themselves. For example, the passive moving body is a chair. When the tracking time has elapsed, the static area 250 is discarded.
  • the moving object region detection process is performed by the moving object region detection unit 140 every time a new moving object image is generated.
  • in step S201, the moving object region detection unit 140 divides the moving object image into a plurality of blocks. Specifically, the moving object region detection unit 140 divides the moving object image into regions of W × H pixels each. A region of W × H pixels is a block. W and H are arbitrary integers.
  • in step S202, the moving object region detection unit 140 calculates the number of edge pixels for each block.
  • the moving object region detection unit 140 calculates the number of edge pixels for each block as follows. When the value of the pixel indicating the edge is X, the moving object region detection unit 140 counts the number of pixels for which X is set. The number of pixels for which X is set is the number of edge pixels. X is a specific value.
  • in step S203, the moving object region detection unit 140 identifies the moving object blocks among the plurality of blocks.
  • the moving object region detection unit 140 determines whether the target block is a moving object block as follows.
  • the moving object region detection unit 140 compares the number of edge pixels of the target block with a pixel number threshold value.
  • the pixel number threshold is a predetermined value. When the number of edge pixels of the target block is greater than or equal to the pixel number threshold, the moving object region detection unit 140 determines that the target block is a moving object block. When the number of edge pixels of the target block is less than the pixel number threshold, the moving object region detection unit 140 determines that the target block is not a moving object block.
  • as a result of step S203, one or more moving object blocks are identified.
  • in step S204, the moving object region detection unit 140 generates one or more moving object regions based on the one or more moving object blocks.
  • the moving object region detection unit 140 generates a moving object region for each moving object block as follows.
  • the moving object region detection unit 140 determines whether there is a moving object block adjacent to the target moving object block.
  • when there is an adjacent moving object block, the moving object region detection unit 140 generates a rectangle that surrounds the target moving object block and the adjacent moving object block.
  • the region surrounded by the generated rectangle is a moving object region.
  • when there is no adjacent moving object block, the target moving object block by itself is a moving object region.
  • the moving object region detection unit 140 assigns an area identifier to the moving object region for each moving object region.
  • the area identifier is an identifier for identifying a moving object area.
  • the area identifier is a serial number.
  • the moving object region detection unit 140 stores the position information and the region identifier in the storage unit 191 in association with each other for each moving object region.
  • the position information is information for specifying the position of the moving object region in the moving object image.
  • the position information is each coordinate value of four vertices in the moving object region.
  • the moving object region tracking process will be described with reference to FIGS.
  • the moving object region tracking process is executed by the moving object region tracking unit 150 after the moving object region detection process every time a new moving object image is generated.
  • a new moving object image is referred to as a target moving object image
  • a moving object image generated immediately before the new moving object image is referred to as a previous moving object image.
  • in step S211, the moving object region tracking unit 150 selects one unselected moving object region from the moving object regions included in the target moving object image.
  • the moving object region selected in step S211 is referred to as the target moving object region.
  • in step S212, the moving object region tracking unit 150 calculates the distance between the target moving object region and the tracking region for each tracking region included in the previous moving object image.
  • the tracking area is a moving area included in the previous moving body image or a still area included in the previous moving body image.
  • the moving object region tracking unit 150 calculates the distance between the target moving object region and the tracking region using the position information of the target moving object region and the position information of the tracking region. For example, the moving object region tracking unit 150 calculates the distance from the upper left vertex of the target moving object region to the upper left vertex of the tracking region.
  • in step S213, the moving object region tracking unit 150 determines whether there is a tracking region corresponding to the target moving object region.
  • the tracking area corresponding to the target moving object area is referred to as a corresponding tracking area.
  • the moving object region tracking unit 150 determines whether there is a tracking region whose distance from the target moving object region is equal to or less than the distance threshold.
  • a tracking region whose distance from the target moving object region is equal to or smaller than the distance threshold is a corresponding tracking region. If there is a corresponding tracking area, the process proceeds to step S214. If there is no corresponding tracking area, the process proceeds to step S215.
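  • Steps S212 and S213 amount to a nearest-vertex match under a distance threshold. A sketch, assuming each region records its upper-left vertex as (x, y) in a dict (a hypothetical representation) and using an illustrative threshold:

```python
import math

def find_corresponding_tracking_region(target_region, tracking_regions,
                                       distance_threshold=32.0):
    """Return the first tracking region whose upper-left vertex lies within
    the distance threshold of the target region's upper-left vertex, or None.
    """
    for tracked in tracking_regions:
        distance = math.hypot(target_region["x"] - tracked["x"],
                              target_region["y"] - tracked["y"])
        if distance <= distance_threshold:  # corresponding tracking region
            return tracked
    return None
```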
  • in step S214, the region identifier of the target moving object region is updated to the region identifier of the corresponding tracking region.
  • in step S215, the moving object region tracking unit 150 determines whether there is an unselected moving object region among the moving object regions included in the target moving object image. If there is an unselected moving object region, the process proceeds to step S211. If there is no unselected moving object region, the process proceeds to step S221 (see FIG. 17).
  • in step S221, the moving object region tracking unit 150 selects one unselected tracking region from the tracking regions included in the previous moving object image.
  • the tracking region selected in step S221 is referred to as the target tracking region.
  • in step S222, the moving object region tracking unit 150 determines whether the moving object regions included in the target moving object image include a moving object region corresponding to the target tracking region.
  • the moving object area corresponding to the target tracking area is referred to as a corresponding moving object area.
  • the moving object region tracking unit 150 determines whether a moving object region having the same region identifier as that of the target tracking region is included in the target moving object image.
  • a moving object area having the same area identifier as the target tracking area is a corresponding moving object area. If there is a corresponding moving body region, the process proceeds to step S223. If there is no corresponding moving object region, the process proceeds to step S231 (see FIG. 18).
  • in step S223, the moving object region tracking unit 150 determines whether the target tracking region is a moving object region or a stationary region. Specifically, the moving object region tracking unit 150 determines whether a remaining time is stored in association with the region identifier of the target tracking region. When a remaining time is recorded in association with the region identifier of the target tracking region, the target tracking region is a stationary region. When no remaining time is recorded, the target tracking region is a moving object region. If the target tracking region is a moving object region, the process proceeds to step S235 (see FIG. 18). If the target tracking region is a stationary region, the process proceeds to step S224.
  • in step S224, the moving object region tracking unit 150 initializes the remaining time of the target tracking region. Specifically, the moving object region tracking unit 150 updates the remaining time associated with the region identifier of the target tracking region to the tracking time. After step S224, the process proceeds to step S235 (see FIG. 18).
  • in step S231, the moving object region tracking unit 150 generates a still region corresponding to the target tracking region in the target moving object image.
  • the stationary area corresponding to the target tracking area is called a corresponding stationary area.
  • the moving object region tracking unit 150 stores the position information of the corresponding still region, the region identifier of the corresponding still region, and the remaining time of the corresponding still region in the storage unit 191 in association with the target moving object image.
  • the position information of the corresponding still area is the same as the position information of the target tracking area.
  • the area identifier of the corresponding still area is the same as the area identifier of the target tracking area.
  • the remaining time of the corresponding still area is the same as the tracking time.
  • in step S232, the moving object region tracking unit 150 determines whether the target tracking region is a moving object region or a stationary region. Specifically, the moving object region tracking unit 150 determines whether a remaining time is stored in association with the region identifier of the target tracking region. When a remaining time is recorded in association with the region identifier of the target tracking region, the target tracking region is a stationary region. When no remaining time is recorded, the target tracking region is a moving object region. If the target tracking region is a moving object region, the process proceeds to step S235. If the target tracking region is a stationary region, the process proceeds to step S233.
  • in step S233, the moving object region tracking unit 150 takes over the remaining time from the target tracking region to the corresponding still region. Specifically, the moving object region tracking unit 150 updates the remaining time associated with the region identifier of the corresponding still region to the same time as the remaining time associated with the region identifier of the target tracking region.
  • in step S234, the moving object region tracking unit 150 reduces the remaining time of the corresponding still region. Specifically, the moving object region tracking unit 150 subtracts the unit elapsed time from the remaining time associated with the region identifier of the corresponding still region.
  • the unit elapsed time is a time determined in advance as a time interval at which a video frame is input or a time interval at which a moving object image is generated.
  • in step S235, the moving object region tracking unit 150 determines whether there is an unselected tracking region among the tracking regions included in the previous moving object image. If there is an unselected tracking region, the process returns to step S221 (see FIG. 17). If there is no unselected tracking region, the process proceeds to step S241.
  • in step S241, the moving object region tracking unit 150 discards any still region whose remaining time has reached zero among the still regions included in the target moving object image. Specifically, for each still region included in the target moving object image, the moving object region tracking unit 150 refers to the remaining time associated with the region identifier of the still region. If the remaining time is less than or equal to zero, the moving object region tracking unit 150 deletes the information on the still region (the region identifier, the position information, and the remaining time) from the storage unit 191. After step S241, the moving object region tracking process ends.
  • the moving object detection apparatus 100 can extract the moving object region in the video frame as edge information even if the luminance information of the video frame changes greatly due to disturbance. Furthermore, the moving object detection apparatus 100 can detect the position of a stationary moving object using the edge information. The moving object detection apparatus 100 can determine whether the stationary object is an active moving object or a passive moving object.
  • the function of the moving object detection apparatus 100 may be realized by hardware.
  • FIG. 19 shows a configuration when the function of the moving object detection apparatus 100 is realized by hardware.
  • the moving object detection apparatus 100 includes a processing circuit 990.
  • the processing circuit 990 is also called processing circuitry.
  • the processing circuit 990 is a dedicated electronic circuit that implements the difference image generation unit 110, the edge image generation unit 120, the moving object image generation unit 130, the moving object region detection unit 140, the moving object region tracking unit 150, and the storage unit 191.
  • the processing circuit 990 is a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, a logic IC, GA, ASIC, FPGA, or a combination thereof.
  • GA is an abbreviation for Gate Array
  • ASIC is an abbreviation for Application Specific Integrated Circuit
  • FPGA is an abbreviation for Field Programmable Gate Array.
  • the moving object detection apparatus 100 may include a plurality of processing circuits that replace the processing circuit 990.
  • the plurality of processing circuits share the role of the processing circuit 990.
  • the embodiment is an example of a preferred embodiment and is not intended to limit the technical scope of the present invention.
  • the embodiment may be implemented partially or in combination with other embodiments.
  • the procedure described using the flowchart and the like may be changed as appropriate.
  • 100 moving object detection device, 110 difference image generation unit, 120 edge image generation unit, 130 moving object image generation unit, 140 moving object region detection unit, 150 moving object region tracking unit, 191 storage unit, 192 reception unit, 193 output unit, 201 previous frame, 202 target frame, 203 rear frame, 211 first difference image, 212 second difference image, 221 first edge image, 222 second edge image, 230 moving object image, 241 moving object block, 242 moving object region, 250 stationary region, 901 processor, 902 memory, 903 auxiliary storage device, 904 input / output interface, 990 processing circuit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A difference image generation unit (110) generates a first difference image representing the difference in luminance between a target frame and a preceding frame, and a second difference image representing the difference in luminance between the target frame and a succeeding frame. An edge image generation unit (120) generates a first edge image representing edges in the first difference image, and a second edge image representing edges in the second difference image. A moving object image generation unit (130) generates a moving object image indicating edges common to the first edge image and the second edge image.

Description

Moving object detection device and moving object detection program
 The present invention relates to a technique for detecting a moving object shown in a video frame.
 In order to detect a moving object such as a person by image processing, a technique using the difference between frames is widely used. The interframe difference is the difference in luminance information between images.
 However, because luminance information changes due to changes in illuminance in the shooting environment, noise other than the movement of moving objects is included in the result of the inter-frame difference.
 Patent Document 1 describes a technique for robustly recognizing differences in the moving directions of moving objects and changes in moving objects through inter-frame differences and learning.
 However, Patent Document 1 does not describe a technique for removing a change in luminance information caused by a change in illuminance when external light occurs.
 Therefore, with the technique of Patent Document 1, a change in luminance information caused by a change in illuminance when external light occurs is detected as motion of a moving object.
JP 2011-90708 A
 When a moving object is detected by image processing, factors other than the movement of the moving object affect the luminance information of the images, so the detection accuracy decreases. A possible factor other than the movement of the moving object is, for example, a change in illuminance caused by turning the lighting on or off.
 An object of the present invention is to suppress a decrease in detection accuracy caused by factors other than the movement of a moving object.
 The moving object detection device of the present invention includes:
 a difference image generation unit that uses a target frame that is a video frame, a previous frame that is a video frame before the target frame, and a rear frame that is a video frame after the target frame to generate a first difference image representing the difference in luminance between the target frame and the previous frame and a second difference image representing the difference in luminance between the target frame and the rear frame;
 an edge image generation unit that generates a first edge image representing edges in the first difference image and a second edge image representing edges in the second difference image; and
 a moving object image generation unit that generates a moving object image indicating the edges common to the first edge image and the second edge image.
 According to the present invention, an image showing the edges common to the first edge image and the second edge image is generated as the moving object image. Factors other than the movement of the moving object are thereby removed, and a decrease in detection accuracy can be suppressed.
 FIG. 1 is a configuration diagram of the moving object detection device 100 in Embodiment 1.
 FIG. 2 is a relationship diagram of the previous frame 201, the target frame 202, and the first difference image 211 in Embodiment 1.
 FIG. 3 is a relationship diagram of the target frame 202, the rear frame 203, and the second difference image 212 in Embodiment 1.
 FIG. 4 is a relationship diagram of the first difference image 211 and the first edge image 221 in Embodiment 1.
 FIG. 5 is a relationship diagram of the second difference image 212 and the second edge image 222 in Embodiment 1.
 FIG. 6 is a relationship diagram of the first edge image 221, the second edge image 222, and the moving object image 230 in Embodiment 1.
 FIG. 7 is a flowchart of the moving object detection method in Embodiment 1.
 FIG. 8 shows the difference image 293 obtained when luminance information changes due to external light.
 FIG. 9 is a configuration diagram of the moving object detection device 100 in Embodiment 2.
 FIG. 10 is a relationship diagram of the moving object image 230, the moving object blocks 241, and the moving object regions 242 in Embodiment 2.
 FIG. 11 shows tracking of the moving object region 242 in Embodiment 2.
 FIG. 12 shows tracking of the moving object region 242 in Embodiment 2.
 FIG. 13 shows tracking of the stationary region 250 in Embodiment 2.
 FIG. 14 shows tracking of the stationary region 250 in Embodiment 2.
 FIG. 15 is a flowchart of the moving object region detection process in Embodiment 2.
 FIG. 16 is a flowchart of the moving object region tracking process in Embodiment 2.
 FIG. 17 is a flowchart of the moving object region tracking process in Embodiment 2.
 FIG. 18 is a flowchart of the moving object region tracking process in Embodiment 2.
 FIG. 19 is a hardware configuration diagram of the moving object detection device 100 in the embodiments.
 In the embodiments and the drawings, the same reference numerals are given to the same or corresponding elements. Description of elements having the same reference numerals is omitted or simplified as appropriate. The arrows in the figures mainly indicate the flow of data or the flow of processing.
 Embodiment 1.
 A mode for detecting a moving object shown in a video frame will be described with reference to FIGS. 1 to 7.
 *** Description of Configuration ***
 The configuration of the moving object detection device 100 will be described with reference to FIG. 1.
 The moving object detection device 100 is a computer including hardware such as a processor 901, a memory 902, an auxiliary storage device 903, and an input / output interface 904. These pieces of hardware are connected to one another via signal lines.
 The processor 901 is an IC (Integrated Circuit) that performs arithmetic processing and controls the other hardware. For example, the processor 901 is a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or a GPU (Graphics Processing Unit).
 The memory 902 is a volatile storage device. The memory 902 is also called a main storage device or main memory. For example, the memory 902 is a RAM (Random Access Memory). Data stored in the memory 902 is saved to the auxiliary storage device 903 as necessary.
 The auxiliary storage device 903 is a nonvolatile storage device. For example, the auxiliary storage device 903 is a ROM (Read Only Memory), an HDD (Hard Disk Drive), or a flash memory. Data stored in the auxiliary storage device 903 is loaded into the memory 902 as necessary.
 The input / output interface 904 is a port to which an input device and an output device are connected. For example, the input / output interface 904 is a USB terminal, the input device is a receiver, a keyboard, and a mouse, and the output device is a transmitter and a display. USB is an abbreviation for Universal Serial Bus.
 The moving object detection device 100 includes software elements such as a difference image generation unit 110, an edge image generation unit 120, and a moving object image generation unit 130. A software element is an element realized by software.
 The auxiliary storage device 903 stores a moving object detection program for causing a computer to function as the difference image generation unit 110, the edge image generation unit 120, and the moving object image generation unit 130. The moving object detection program is loaded into the memory 902 and executed by the processor 901.
 Furthermore, the auxiliary storage device 903 stores an OS (Operating System). At least part of the OS is loaded into the memory 902 and executed by the processor 901.
 That is, the processor 901 executes the moving object detection program while executing the OS.
 Data obtained by executing the moving object detection program is stored in a storage device such as the memory 902, the auxiliary storage device 903, a register in the processor 901, or a cache memory in the processor 901.
 The memory 902 functions as a storage unit 191 that stores data. However, another storage device may function as the storage unit 191 instead of the memory 902 or together with the memory 902.
 The input / output interface 904 functions as a reception unit 192 that receives input. The input / output interface 904 also functions as an output unit 193 that outputs data.
 The moving object detection device 100 may include a plurality of processors in place of the processor 901. The plurality of processors share the role of the processor 901.
 The moving object detection program can be stored in a computer-readable manner on a nonvolatile storage medium such as a magnetic disk, an optical disk, or a flash memory. A nonvolatile storage medium is a tangible medium that is not transitory.
 *** Description of Functions ***
 The functions of the difference image generation unit 110, the edge image generation unit 120, and the moving object image generation unit 130 will be described.
 The function of the difference image generation unit 110 will be described with reference to FIGS. 2 and 3.
 In FIG. 2, the difference image generation unit 110 generates a first difference image 211 using the target frame 202 and the previous frame 201.
 The target frame 202 is a video frame. A video frame is image data. The target frame 202 in FIGS. 2 and 3 is data of an image obtained by photographing, from above, a room in which a person is moving.
 The previous frame 201 is a video frame before the target frame 202. Specifically, the previous frame 201 is the video frame immediately before the target frame 202. In FIG. 2, the room in the target frame 202 is brighter than the room in the previous frame 201.
 The first difference image 211 is an image representing the difference in luminance between the target frame 202 and the previous frame 201. In FIG. 2, since the room in the target frame 202 is brighter than the room in the previous frame 201, the first difference image 211 shows the illuminance difference of the room in addition to the moving person.
 In FIG. 3, the difference image generation unit 110 generates a second difference image 212 using the target frame 202 and the rear frame 203.
 The rear frame 203 is a video frame after the target frame 202. Specifically, the rear frame 203 is the video frame immediately after the target frame 202. In FIG. 3, the brightness of the room in the target frame 202 does not differ from the brightness of the room in the rear frame 203.
 The second difference image 212 is an image representing the difference in luminance between the target frame 202 and the rear frame 203. In FIG. 3, since the brightness of the room in the target frame 202 does not differ from the brightness of the room in the rear frame 203, mainly the moving person appears in the second difference image 212.
 The function of the edge image generation unit 120 will be described with reference to FIGS. 4 and 5.
 In FIG. 4, the edge image generation unit 120 generates a first edge image 221 using the first difference image 211.
 The first edge image 221 is an image representing the edges in the first difference image 211.
 An edge is a location that forms a boundary of luminance. That is, the luminance changes greatly across an edge.
 In FIG. 5, the edge image generation unit 120 generates a second edge image 222 using the second difference image 212.
 The second edge image 222 is an image representing the edges in the second difference image 212.
 The function of the moving object image generation unit 130 will be described with reference to FIG. 6.
 The moving object image generation unit 130 generates an image indicating the edges common to the first edge image 221 and the second edge image 222. The generated image is referred to as a moving object image 230.
 The moving object image 230 corresponds to an image showing the edges of the moving object shown in the target frame. The moving object image 230 in FIG. 6 shows the edges of the person shown in the target frame 202 (see FIGS. 2 and 3).
 *** Description of Operation ***
 The operation of the moving object detection device 100 corresponds to a moving object detection method. The procedure of the moving object detection method corresponds to the procedure of the moving object detection program.
The moving object detection method will be described with reference to FIG. 7.
In step S101, the reception unit 192 receives a video frame.
The storage unit 191 stores the video frame in association with the time at which the video frame was received.
In step S111, the difference image generation unit 110 determines whether a target frame is stored in the storage unit 191.
The target frame is the video frame associated with the time preceding that of the video frame received in step S101.
If the target frame is stored in the storage unit 191, the process proceeds to step S112.
If the target frame is not stored in the storage unit 191, the process returns to step S101.
In step S112, the difference image generation unit 110 generates a difference image using the target frame and the rear frame. The rear frame is the video frame received in step S101.
The storage unit 191 stores the difference image in association with the time at which the difference image was generated.
Specifically, the difference image generation unit 110 generates the difference image as follows.
The difference image generation unit 110 performs the following processing for each pixel of the target frame.
The difference image generation unit 110 selects the pixel of the rear frame corresponding to the pixel of the target frame, calculates the luminance difference between the pixel of the target frame and the pixel of the rear frame, selects the pixel of the difference image corresponding to the pixel of the target frame, and sets the luminance difference as the value of that pixel of the difference image.
The pixel of frame Y corresponding to a pixel of frame X is the pixel of frame Y identified by the same coordinate values as the pixel of frame X. That is, the pixel of frame Y corresponding to the pixel located at (u, v) in frame X is the pixel located at (u, v) in frame Y.
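For reference, the pixel-wise computation above can be sketched in a few lines of Python with NumPy. The function name diff_image is illustrative, and taking the absolute value of the luminance difference is an assumption the patent text does not fix:

```python
import numpy as np

def diff_image(target: np.ndarray, other: np.ndarray) -> np.ndarray:
    """Per-pixel luminance difference between two grayscale frames.

    The pixel at (u, v) in one frame is compared with the pixel at
    (u, v) in the other, as described for step S112. Using the
    absolute difference is an assumption; the patent only says the
    luminance difference is set in the difference image.
    """
    assert target.shape == other.shape, "frames must have the same size"
    return np.abs(target.astype(np.int16) - other.astype(np.int16)).astype(np.uint8)
```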
In step S121, the edge image generation unit 120 generates an edge image using the difference image generated in step S112.
The storage unit 191 stores the edge image in association with the time at which the edge image was generated.
Specifically, the edge image generation unit 120 generates the edge image by performing edge extraction on the difference image.
Edge extraction is a process for extracting, as edges, the locations that form boundaries in luminance. Edge extraction is a conventional technique and is also called edge detection.
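Because the patent leaves the edge-extraction method open, any conventional detector can serve as an illustration. The following minimal sketch assumes OpenCV and the Canny detector; the threshold values are arbitrary choices, not values from the patent:

```python
import cv2
import numpy as np

def edge_image(diff: np.ndarray, low: int = 50, high: int = 150) -> np.ndarray:
    """Binary edge image of a difference image (step S121).

    The Canny detector and its thresholds are illustrative choices;
    edge pixels are set to 1 and non-edge pixels to 0.
    """
    edges = cv2.Canny(diff, low, high)
    return (edges > 0).astype(np.uint8)
```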
In step S131, the moving object image generation unit 130 determines whether a first edge image is stored in the storage unit 191.
The first edge image is the edge image associated with the time preceding that of the edge image generated in step S121.
If the first edge image is stored in the storage unit 191, the process proceeds to step S132.
If the first edge image is not stored in the storage unit 191, the process returns to step S101.
In step S132, the moving object image generation unit 130 generates a moving object image using the first edge image and the second edge image. The second edge image is the edge image generated in step S121.
The storage unit 191 stores the moving object image in association with the time at which the moving object image was generated.
Specifically, the moving object image generation unit 130 generates the moving object image as follows.
The moving object image generation unit 130 performs the following processing for each pixel of the first edge image.
The moving object image generation unit 130 selects the pixel of the second edge image corresponding to the pixel of the first edge image, calculates the logical product of the pixel of the first edge image and the pixel of the second edge image, selects the pixel of the moving object image corresponding to the pixel of the first edge image, and sets the logical product as the value of that pixel of the moving object image.
The pixel of image Y corresponding to a pixel of image X is the pixel of image Y identified by the same coordinate values as the pixel of image X. That is, the pixel of image Y corresponding to the pixel located at (u, v) in image X is the pixel located at (u, v) in image Y.
For example, in the first edge image and the second edge image, let the value of a pixel that is part of an edge be "1" and the value of a pixel that is not part of an edge be "0". When the values of both the pixel of the first edge image and the pixel of the second edge image are "1", the value of the pixel of the moving object image is "1". When the value of at least one of the pixel of the first edge image and the pixel of the second edge image is "0", the value of the pixel of the moving object image is "0".
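The logical product described above is a straightforward pixel-wise AND. A minimal sketch, assuming binary edge images with edge pixels set to 1 (the function name is illustrative):

```python
import numpy as np

def moving_object_image(edge1: np.ndarray, edge2: np.ndarray) -> np.ndarray:
    """Pixel-wise logical product of two edge images (step S132).

    A pixel of the moving object image is 1 only when the pixels at
    the same (u, v) in both edge images are 1, so edges caused by an
    illuminance change in only one frame pair are removed.
    """
    return np.logical_and(edge1 > 0, edge2 > 0).astype(np.uint8)
```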
In step S141, the output unit 193 outputs the moving object image generated in step S132.
After step S141, the process returns to step S101.
*** Effects of Embodiment 1 ***
Embodiment 1 solves the problem of detecting only moving objects from luminance information that changes with external light. Because the edges of an object do not change even when external light occurs, using edge information, which is robust to external light, realizes the detection of a moving object and its position.
According to Embodiment 1, the luminance change in the video frame caused by a change in illuminance is removed, so the motion of a moving object can be detected robustly.
FIG. 8 shows an example of video frames whose luminance information has changed due to external light.
The front frame 291 is a video frame obtained before the rear frame 292, and the rear frame 292 is a video frame obtained after the front frame 291. The difference image 293 is an image representing the difference in luminance between the front frame 291 and the rear frame 292.
Because the external light became stronger between the time the front frame 291 was obtained and the time the rear frame 292 was obtained, the front frame 291 is darker than the rear frame 292 and the rear frame 292 is brighter than the front frame 291.
As a result, the difference image 293 shows not only the person who moved (the person on the right) but the entire scene, including the person who did not move (the person on the left) and the static background.
However, because the moving object detection apparatus 100 uses both the first difference image and the second difference image, it can extract the moving object in a video frame as edge information even when the luminance information of the video frame changes greatly due to external light.
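Putting Embodiment 1 together, the whole pipeline for one target frame can be sketched as follows. The sketch assumes grayscale uint8 frames and OpenCV; the function name detect_moving_edges and the Canny thresholds are assumptions, not elements of the patent:

```python
import cv2
import numpy as np

def detect_moving_edges(prev_f: np.ndarray, target_f: np.ndarray,
                        next_f: np.ndarray) -> np.ndarray:
    """End-to-end sketch of Embodiment 1 for one target frame."""
    d1 = cv2.absdiff(target_f, prev_f)    # first difference image (211)
    d2 = cv2.absdiff(target_f, next_f)    # second difference image (212)
    e1 = cv2.Canny(d1, 50, 150)           # first edge image (221)
    e2 = cv2.Canny(d2, 50, 150)           # second edge image (222)
    return cv2.bitwise_and(e1, e2)        # moving object image (230)
```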
Embodiment 2.
Regarding a form for tracking a moving object, the differences from Embodiment 1 will mainly be described with reference to FIGS. 9 to 18.
*** Explanation of configuration ***
The configuration of the moving object detection apparatus 100 will be described based on FIG. 9.
In addition to the difference image generation unit 110, the edge image generation unit 120, and the moving object image generation unit 130, the moving object detection apparatus 100 includes a moving object region detection unit 140 and a moving object region tracking unit 150 as software elements.
The moving object detection program is a program for causing a computer to function as the difference image generation unit 110, the edge image generation unit 120, the moving object image generation unit 130, the moving object region detection unit 140, and the moving object region tracking unit 150.
*** Description of functions ***
As described in Embodiment 1, the moving object image generation unit 130 generates a moving object image for each set of three video frames that are consecutive in time series.
The function of the moving object region detection unit 140 will be described based on FIG. 10.
Based on the edges indicated in the moving object image 230, the moving object region detection unit 140 detects, from the moving object image 230, one or more moving object regions 242 corresponding to the one or more moving objects shown in the target frame.
A moving object region 242 is a region representing a moving object.
Specifically, the moving object region detection unit 140 detects the moving object regions 242 by the following procedure; a sketch of this procedure appears after the list.
(1) The moving object region detection unit 140 divides the moving object image 230 into a plurality of blocks.
(2) The moving object region detection unit 140 calculates the number of edge pixels for each block. The number of edge pixels is the number of pixels indicating part of an edge.
(3) The moving object region detection unit 140 identifies one or more moving object blocks 241 based on the number of edge pixels in each block. A moving object block 241 is a block whose number of edge pixels exceeds a pixel count threshold.
(4) The moving object region detection unit 140 generates a rectangular region enclosing adjacent moving object blocks 241. The generated region is a moving object region 242.
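As a reference, a minimal sketch of this block-based detection in Python, assuming a binary moving object image (edge pixels set to 1) and using scipy to group adjacent blocks; the block size and the pixel count threshold are illustrative values, since the patent only states that they are predetermined:

```python
import numpy as np
from scipy import ndimage  # used to group adjacent moving blocks

def detect_moving_regions(moving_img: np.ndarray, W: int = 16, H: int = 16,
                          pixel_threshold: int = 8):
    """Block-based moving object region detection (steps (1)-(4)).

    Returns bounding rectangles (top, left, bottom, right) in pixels,
    one per group of adjacent moving object blocks.
    """
    h, w = moving_img.shape
    rows, cols = h // H, w // W
    # (1)-(2): count edge pixels in each W x H block
    counts = moving_img[:rows * H, :cols * W].reshape(rows, H, cols, W).sum(axis=(1, 3))
    # (3): blocks with more edge pixels than the threshold are moving blocks
    moving_blocks = counts > pixel_threshold
    # (4): merge adjacent moving blocks into one rectangular region each
    labels, n = ndimage.label(moving_blocks)
    regions = []
    for i in range(1, n + 1):
        ys, xs = np.where(labels == i)
        regions.append((ys.min() * H, xs.min() * W,
                        (ys.max() + 1) * H, (xs.max() + 1) * W))
    return regions
```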
The function of the moving object region tracking unit 150 will be described based on FIGS. 11 and 12.
In FIG. 11, for each previous moving object region 242B included in the previous moving object image, the moving object region tracking unit 150 detects, from the target moving object image, the moving object region 242T corresponding to the previous moving object region 242B.
In FIG. 12, the moving object region tracking unit 150 detects, from the previous moving object image, a previous moving object region 242B that does not correspond to any moving object region 242T included in the target moving object image.
Then, the moving object region tracking unit 150 detects the region corresponding to the detected previous moving object region 242B from the target moving object image as a still region 250.
The still region 250 is a region corresponding to a moving object that is stationary.
Furthermore, for each still region included in the previous moving object image, the moving object region tracking unit 150 detects, from the target moving object image, the moving object region corresponding to that still region.
Next, the moving object region tracking unit 150 detects, from the previous moving object image, a still region that does not correspond to any moving object region included in the target moving object image. The detected still region is referred to as the target still region.
Then, the moving object region tracking unit 150 detects the region corresponding to the target still region from the target moving object image. The detected region is referred to as a still region in the target moving object image.
However, when the target still region corresponds to a still region detected from a moving object image associated with a time earlier than the time of the target moving object image by the tracking time or more, the moving object region tracking unit 150 discards the still region in the target moving object image.
The tracking time is the time during which a still region is tracked on the assumption that an actively moving object is standing still in that region, and is a predetermined time.
Specifically, the moving object region tracking unit 150 operates as follows; a sketch follows this description.
First, for each still region, the moving object region tracking unit 150 sets the tracking time as the remaining time when the still region is detected for the first time.
Next, for each still region, from when the still region is first detected until the moving object region corresponding to the still region is detected (or until the still region is discarded), the moving object region tracking unit 150 reduces the remaining time of the still region every time a moving object image is generated.
Then, the moving object region tracking unit 150 discards any still region whose remaining time has reached zero.
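A compressed sketch of this remaining-time bookkeeping (cf. steps S224, S233-S234, and S241 described later). The StillRegion type, the function name, and the concrete time values are assumptions; the patent only says the tracking time and the unit elapsed time are predetermined:

```python
from dataclasses import dataclass

@dataclass
class StillRegion:
    """A still region tracked with a remaining-time budget."""
    region_id: int
    bbox: tuple          # (top, left, bottom, right)
    remaining: float

TRACKING_TIME = 10.0     # seconds a stationary object is tracked (assumed)
UNIT_ELAPSED_TIME = 0.1  # interval between moving object images (assumed)

def update_still_regions(still_regions, matched_ids):
    """Advance the remaining time of every unmatched still region and
    discard regions whose budget has run out."""
    kept = []
    for r in still_regions:
        if r.region_id in matched_ids:
            r.remaining = TRACKING_TIME   # a moving region matched: reset
            kept.append(r)
        else:
            r.remaining -= UNIT_ELAPSED_TIME
            if r.remaining > 0:
                kept.append(r)            # still within the tracking time
    return kept
```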
In FIG. 13, the previous moving object image includes a moving object region 242B. There is no moving object region corresponding to the moving object region 242B in the target moving object image. Therefore, a still region 250 is detected from the target moving object image. The next moving object image includes a moving object region 242N. The moving object region 242N is the moving object region corresponding to the still region 250.
Assume that the moving object region 242N corresponding to the still region 250 is detected before the tracking time elapses. In that case, the moving object region 242B, the still region 250, and the moving object region 242N are considered to be active moving object regions. An active moving object region is a region representing an active moving object. An active moving object is a moving object that is not a moved object. That is, an active moving object moves by itself. For example, an active moving object is a person.
In FIG. 14, the previous moving object image includes a moving object region 242B. There is no moving object region corresponding to the moving object region 242B in the target moving object image. Therefore, a still region 250 is detected from the target moving object image. There is no moving object region corresponding to the still region 250 in the next moving object image.
Assume that no moving object region corresponding to the still region 250 is detected before the tracking time elapses. In that case, the moving object region 242B and the still region 250 are considered to be passive moving object regions. A passive moving object region is a region representing a passive moving object. A passive moving object is an object that was moved. That is, a passive moving object does not move by itself. For example, a passive moving object is a chair.
When the tracking time has elapsed, the still region 250 is discarded.
*** Explanation of operation ***
The moving object region detection process will be described based on FIG. 15.
The moving object region detection process is performed by the moving object region detection unit 140 every time a new moving object image is generated.
In step S201, the moving object region detection unit 140 divides the moving object image into a plurality of blocks.
Specifically, the moving object region detection unit 140 partitions the moving object image into regions of W × H pixels. Each W × H pixel region is a block. W and H are arbitrary integers.
In step S202, the moving object region detection unit 140 calculates the number of edge pixels for each block.
Specifically, the moving object region detection unit 140 calculates the number of edge pixels for each block as follows.
When the value of a pixel indicating an edge is X, the moving object region detection unit 140 counts the number of pixels set to X. The number of pixels set to X is the number of edge pixels. X is a specific value.
In step S203, the moving object region detection unit 140 identifies the moving object blocks among the plurality of blocks.
Specifically, the moving object region detection unit 140 determines whether a target block is a moving object block as follows.
The moving object region detection unit 140 compares the number of edge pixels of the target block with the pixel count threshold. The pixel count threshold is a predetermined value.
If the number of edge pixels of the target block is greater than or equal to the pixel count threshold, the moving object region detection unit 140 determines that the target block is a moving object block.
If the number of edge pixels of the target block is less than the pixel count threshold, the moving object region detection unit 140 determines that the target block is not a moving object block.
Through step S203, one or more moving object blocks are identified.
In step S204, the moving object region detection unit 140 generates one or more moving object regions based on the one or more moving object blocks.
Specifically, the moving object region detection unit 140 generates a moving object region for each moving object block as follows.
The moving object region detection unit 140 determines whether there is a moving object block adjacent to the target moving object block.
If there is a moving object block adjacent to the target moving object block, the moving object region detection unit 140 generates a rectangle enclosing the target moving object block and the adjacent moving object block. The region enclosed by the generated rectangle is a moving object region.
If there is no moving object block adjacent to the target moving object block, the target moving object block itself is a moving object region.
In step S205, the moving object region detection unit 140 assigns a region identifier to each moving object region. The region identifier is an identifier that identifies a moving object region. For example, the region identifier is a serial number.
Specifically, for each moving object region, the moving object region detection unit 140 stores the position information and the region identifier in the storage unit 191 in association with each other. The position information is information that specifies the position of the moving object region in the moving object image. For example, the position information is the coordinate values of the four vertices of the moving object region.
The moving object region tracking process will be described based on FIGS. 16, 17, and 18.
The moving object region tracking process is executed by the moving object region tracking unit 150 after the moving object region detection process every time a new moving object image is generated.
In the description of the moving object region tracking process, the new moving object image is referred to as the target moving object image, and the moving object image generated immediately before the new moving object image is referred to as the previous moving object image.
In step S211 (see FIG. 16), the moving object region tracking unit 150 selects one unselected moving object region from the moving object regions included in the target moving object image.
In the description from step S211 onward, the moving object region selected in step S211 is referred to as the target moving object region.
In step S212, the moving object region tracking unit 150 calculates, for each tracking region included in the previous moving object image, the distance between the target moving object region and the tracking region.
A tracking region is a moving object region included in the previous moving object image or a still region included in the previous moving object image.
Specifically, the moving object region tracking unit 150 calculates the distance between the target moving object region and the tracking region using the position information of the target moving object region and the position information of the tracking region.
For example, the moving object region tracking unit 150 calculates the distance from the upper-left vertex of the target moving object region to the upper-left vertex of the tracking region; a sketch of this matching appears below.
In step S213, the moving object region tracking unit 150 determines whether there is a tracking region corresponding to the target moving object region. The tracking region corresponding to the target moving object region is referred to as the corresponding tracking region.
Specifically, the moving object region tracking unit 150 determines whether there is a tracking region whose distance from the target moving object region is less than or equal to a distance threshold. A tracking region whose distance from the target moving object region is less than or equal to the distance threshold is the corresponding tracking region.
If there is a corresponding tracking region, the process proceeds to step S214.
If there is no corresponding tracking region, the process proceeds to step S215.
In step S214, the moving object region tracking unit 150 updates the region identifier of the target moving object region to the region identifier of the corresponding tracking region.
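A minimal sketch of steps S212 to S214. The distance threshold value is an assumption (the patent only says it is predetermined), and picking the nearest of several candidates is also an assumption, since the patent does not fix a tie-break rule:

```python
import math

DISTANCE_THRESHOLD = 32.0  # pixels; illustrative value

def top_left_distance(region_a, region_b):
    """Distance between the upper-left vertices of two regions,
    each given as (top, left, bottom, right) (cf. step S212)."""
    return math.hypot(region_a[0] - region_b[0], region_a[1] - region_b[1])

def match_tracking_region(target_region, tracking_regions):
    """Return the identifier of the nearest tracking region within the
    distance threshold, or None (steps S213-S214)."""
    best_id, best_d = None, DISTANCE_THRESHOLD
    for region_id, bbox in tracking_regions.items():
        d = top_left_distance(target_region, bbox)
        if d <= best_d:
            best_id, best_d = region_id, d
    return best_id
```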
In step S215, the moving object region tracking unit 150 determines whether there is an unselected moving object region among the moving object regions included in the target moving object image.
If there is an unselected moving object region, the process returns to step S211.
If there is no unselected moving object region, the process proceeds to step S221 (see FIG. 17).
In step S221 (see FIG. 17), the moving object region tracking unit 150 selects one unselected tracking region from the tracking regions included in the previous moving object image.
In the description from step S221 onward, the tracking region selected in step S221 is referred to as the target tracking region.
In step S222, the moving object region tracking unit 150 determines whether the moving object regions included in the target moving object image include a moving object region corresponding to the target tracking region. The moving object region corresponding to the target tracking region is referred to as the corresponding moving object region.
Specifically, the moving object region tracking unit 150 determines whether the target moving object image includes a moving object region whose region identifier is the same as that of the target tracking region. A moving object region with the same region identifier as the target tracking region is the corresponding moving object region.
If there is a corresponding moving object region, the process proceeds to step S223.
If there is no corresponding moving object region, the process proceeds to step S231 (see FIG. 18).
In step S223, the moving object region tracking unit 150 determines whether the target tracking region is a moving object region or a still region.
Specifically, the moving object region tracking unit 150 determines whether a remaining time is stored in association with the region identifier of the target tracking region. If a remaining time is recorded in association with the region identifier of the target tracking region, the target tracking region is a still region. If no remaining time is recorded in association with the region identifier of the target tracking region, the target tracking region is a moving object region.
If the target tracking region is a moving object region, the process proceeds to step S235 (see FIG. 18).
If the target tracking region is a still region, the process proceeds to step S224.
In step S224, the moving object region tracking unit 150 initializes the remaining time of the target tracking region.
Specifically, the moving object region tracking unit 150 resets the remaining time associated with the region identifier of the target tracking region to the tracking time.
After step S224, the process proceeds to step S235 (see FIG. 18).
In step S231 (see FIG. 18), the moving object region tracking unit 150 generates, in the target moving object image, a still region corresponding to the target tracking region. The still region corresponding to the target tracking region is referred to as the corresponding still region.
Specifically, the moving object region tracking unit 150 stores the position information of the corresponding still region, the region identifier of the corresponding still region, and the remaining time of the corresponding still region in the storage unit 191 in association with the target moving object image. The position information of the corresponding still region is the same as the position information of the target tracking region. The region identifier of the corresponding still region is the same as the region identifier of the target tracking region. The remaining time of the corresponding still region is initially the same as the tracking time.
In step S232, the moving object region tracking unit 150 determines whether the target tracking region is a moving object region or a still region.
Specifically, the moving object region tracking unit 150 determines whether a remaining time is stored in association with the region identifier of the target tracking region. If a remaining time is recorded in association with the region identifier of the target tracking region, the target tracking region is a still region. If no remaining time is recorded in association with the region identifier of the target tracking region, the target tracking region is a moving object region.
If the target tracking region is a moving object region, the process proceeds to step S235.
If the target tracking region is a still region, the process proceeds to step S233.
In step S233, the moving object region tracking unit 150 carries over the remaining time from the target tracking region to the corresponding still region.
Specifically, the moving object region tracking unit 150 updates the remaining time associated with the region identifier of the corresponding still region to the same time as the remaining time associated with the region identifier of the target tracking region.
In step S234, the moving object region tracking unit 150 reduces the remaining time of the corresponding still region.
Specifically, the moving object region tracking unit 150 subtracts the unit elapsed time from the remaining time associated with the region identifier of the corresponding still region. The unit elapsed time is a predetermined time corresponding to the interval at which video frames are input or the interval at which moving object images are generated.
In step S235, the moving object region tracking unit 150 determines whether there is an unselected tracking region among the tracking regions included in the previous moving object image.
If there is an unselected tracking region, the process returns to step S221 (see FIG. 17).
If there is no unselected tracking region, the process proceeds to step S241.
In step S241, the moving object region tracking unit 150 discards any still region included in the target moving object image whose remaining time is zero.
Specifically, the moving object region tracking unit 150 refers, for each still region included in the target moving object image, to the remaining time associated with the region identifier of the still region. If the remaining time is zero or less, the moving object region tracking unit 150 deletes the information on that still region (region identifier, position information, and remaining time) from the storage unit 191.
After step S241, the moving object region tracking process ends.
*** Effects of Embodiment 2 ***
The moving object detection apparatus 100 can extract the moving object regions in a video frame as edge information even when the luminance information of the video frame changes greatly due to a disturbance.
Furthermore, the moving object detection apparatus 100 can detect the position of a stationary moving object using the edge information. The moving object detection apparatus 100 can also determine whether a stationary object is an active moving object or a passive moving object.
*** Supplement to the embodiments ***
In the embodiments, the functions of the moving object detection apparatus 100 may be realized by hardware.
FIG. 19 shows the configuration when the functions of the moving object detection apparatus 100 are realized by hardware.
The moving object detection apparatus 100 includes a processing circuit 990. The processing circuit 990 is also referred to as processing circuitry.
The processing circuit 990 is a dedicated electronic circuit that implements the difference image generation unit 110, the edge image generation unit 120, the moving object image generation unit 130, the moving object region detection unit 140, the moving object region tracking unit 150, and the storage unit 191.
For example, the processing circuit 990 is a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, a logic IC, a GA, an ASIC, an FPGA, or a combination of these. GA is an abbreviation for Gate Array, ASIC is an abbreviation for Application Specific Integrated Circuit, and FPGA is an abbreviation for Field Programmable Gate Array.
The moving object detection apparatus 100 may include a plurality of processing circuits in place of the processing circuit 990. The plurality of processing circuits share the role of the processing circuit 990.
The embodiments are examples of preferred forms and are not intended to limit the technical scope of the present invention. The embodiments may be implemented partially or in combination with other embodiments. The procedures described using the flowcharts and the like may be changed as appropriate.
100 moving object detection device, 110 difference image generation unit, 120 edge image generation unit, 130 moving object image generation unit, 140 moving object region detection unit, 150 moving object region tracking unit, 191 storage unit, 192 reception unit, 193 output unit, 201 previous frame, 202 target frame, 203 rear frame, 211 first difference image, 212 second difference image, 221 first edge image, 222 second edge image, 230 moving object image, 241 moving object block, 242 moving object region, 250 still region, 901 processor, 902 memory, 903 auxiliary storage device, 904 input/output interface, 990 processing circuit.

Claims (7)

1.  A moving object detection device comprising:
    a difference image generation unit to generate, using a target frame that is a video frame, a previous frame that is a video frame before the target frame, and a rear frame that is a video frame after the target frame, a first difference image representing a difference in luminance between the target frame and the previous frame and a second difference image representing a difference in luminance between the target frame and the rear frame;
    an edge image generation unit to generate a first edge image representing edges in the first difference image and a second edge image representing edges in the second difference image; and
    a moving object image generation unit to generate a moving object image indicating the edges common to the first edge image and the second edge image.
2.  The moving object detection device according to claim 1, further comprising a moving object region detection unit to detect, from the moving object image, one or more moving object regions corresponding to one or more moving objects shown in the target frame, based on the edges indicated in the moving object image.
3.  The moving object detection device according to claim 2, wherein the moving object region detection unit divides the moving object image into a plurality of blocks, calculates, for each block, a number of edge pixels that is a number of pixels indicating part of an edge, identifies moving object blocks that are blocks whose number of edge pixels exceeds a pixel count threshold, and generates a rectangular region enclosing adjacent moving object blocks as a moving object region.
4.  The moving object detection device according to claim 2 or claim 3, wherein the moving object image generation unit generates a moving object image for each set of three consecutive video frames, and
    wherein the moving object detection device comprises a moving object region tracking unit to detect, for each previous moving object region included in a previous moving object image, a moving object region corresponding to the previous moving object region from a target moving object image, to detect, from the previous moving object image, a previous moving object region that does not correspond to any moving object region included in the target moving object image, and to detect a region corresponding to the detected previous moving object region from the target moving object image as a still region.
5.  The moving object detection device according to claim 4, wherein the moving object region tracking unit detects, for each still region included in the previous moving object image, a moving object region corresponding to the still region from the target moving object image, detects, from the previous moving object image, a still region that does not correspond to any moving object region included in the target moving object image, and detects a region corresponding to the detected still region from the target moving object image as a still region in the target moving object image.
6.  The moving object detection device according to claim 5, wherein, when a target still region, which is a still region that does not correspond to any moving object region included in the target moving object image, corresponds to a still region detected from a moving object image associated with a time earlier than the time of the target moving object image by a tracking time or more, the moving object region tracking unit discards the still region in the target moving object image.
7.  A moving object detection program for causing a computer to execute:
    a difference image generation process of generating, using a target frame that is a video frame, a previous frame that is a video frame before the target frame, and a rear frame that is a video frame after the target frame, a first difference image representing a difference in luminance between the target frame and the previous frame and a second difference image representing a difference in luminance between the target frame and the rear frame;
    an edge image generation process of generating a first edge image representing edges in the first difference image and a second edge image representing edges in the second difference image; and
    a moving object image generation process of generating a moving object image indicating the edges common to the first edge image and the second edge image.
PCT/JP2017/006579 2017-02-22 2017-02-22 Moving object detection device and moving object detection program WO2018154654A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2017/006579 WO2018154654A1 (en) 2017-02-22 2017-02-22 Moving object detection device and moving object detection program
JP2019500908A JP6532627B2 (en) 2017-02-22 2017-02-22 Moving body detection device and moving body detection program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/006579 WO2018154654A1 (en) 2017-02-22 2017-02-22 Moving object detection device and moving object detection program

Publications (1)

Publication Number Publication Date
WO2018154654A1 true WO2018154654A1 (en) 2018-08-30

Family

ID=63253382

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/006579 WO2018154654A1 (en) 2017-02-22 2017-02-22 Moving object detection device and moving object detection program

Country Status (2)

Country Link
JP (1) JP6532627B2 (en)
WO (1) WO2018154654A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004013615A (en) * 2002-06-07 2004-01-15 Matsushita Electric Ind Co Ltd Moving object monitoring device
JP2005004799A (en) * 1998-01-07 2005-01-06 Toshiba Corp Object extraction apparatus
JP2005115932A (en) * 2003-09-16 2005-04-28 Matsushita Electric Works Ltd Human body sensing device using image
JP2006031153A (en) * 2004-07-13 2006-02-02 Matsushita Electric Ind Co Ltd Person counting device and person counting method
JP2013228956A (en) * 2012-04-26 2013-11-07 Toshiba Teli Corp Thronging monitoring device and thronging monitoring program
JP2015070359A (en) * 2013-09-27 2015-04-13 株式会社京三製作所 Person counting device

Also Published As

Publication number Publication date
JPWO2018154654A1 (en) 2019-06-27
JP6532627B2 (en) 2019-06-19


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17897272; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2019500908; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 17897272; Country of ref document: EP; Kind code of ref document: A1)