WO2021261141A1 - Object detection device and object detection method - Google Patents

Object detection device and object detection method

Info

Publication number
WO2021261141A1
WO2021261141A1 (PCT/JP2021/019505)
Authority
WO
WIPO (PCT)
Prior art keywords
region
motion
detection
area
detected
Prior art date
Application number
PCT/JP2021/019505
Other languages
French (fr)
Japanese (ja)
Inventor
真也 阪田
Original Assignee
オムロン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by オムロン株式会社 filed Critical オムロン株式会社
Publication of WO2021261141A1 publication Critical patent/WO2021261141A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation

Definitions

  • the present invention relates to a technique for detecting an area of an object from a captured image.
  • Patent Document 1 proposes a technique related to face detection.
  • In conventional object detection, the detected area may be too large or too small relative to the true object area; that is, the object area cannot be detected with high accuracy. Because the detected area can be either too large or too small, simply resizing it (always enlarging or always reducing) by a fixed magnification is not appropriate.
  • In motion detection as well, a plurality of regions may be detected as moving regions for a single moving object (the region of one moving object may be split into several detected regions). That is, the region of the moving object cannot be detected with high accuracy.
  • the present invention has been made in view of the above circumstances, and an object of the present invention is to provide a technique capable of detecting a region of an object with high accuracy.
  • the present invention adopts the following configuration.
  • The first aspect of the present invention provides an object detection device comprising: an object detection means that detects an object region, which is the region of an object, from a captured image; a detection means that detects a motion region, which is a region with motion, from the object region detected by the object detection means in the image; and a correction means that corrects the object region based on the motion region detected by the detection means.
  • Objects are, for example, vehicles (cars, planes, etc.), natural objects (trees, flowers, mountains, etc.), human bodies, faces, and the like.
  • The motion region is, for example, a region composed only of motion pixels (pixels with motion), or the smallest rectangular region containing such a region.
  • When the detected object area is too large, the motion area detected within it represents the true object area more accurately than the detected object area does.
  • Since the object region is corrected based on the motion region detected from it, a region that more accurately represents the true object region can be obtained as the corrected object region.
  • As a result, other processing based on the object region (AE, AF, exclusion of erroneously detected regions, etc.) can be suitably performed.
  • When multiple motion regions are detected from the detected object region, each of them is likely to be part of the same object. Therefore, when the detection means detects a plurality of motion regions, the correction means may correct the object region based on the smallest region containing the plurality of motion regions. By doing so, even when a plurality of motion regions are detected, a region that more accurately represents the true object region can be obtained as the corrected object region.
  • The detection means may have a motion detection means that detects motion regions from the image, and a selection means that selects, from the detection result of the motion detection means, a motion region located within the object region; the correction means may then correct the object region based on the motion region selected by the selection means.
  • At this time, the selection means may select a motion region whose center position is included in the object region, or a motion region that is entirely included in the object region.
  • The correction means may correct the object region based only on those motion regions, among the ones detected by the detection means, whose size is equal to or larger than a threshold value. By doing so, erroneous correction of the object region can be suppressed.
  • For example, the detection means may refrain from detecting motion regions whose size is less than the threshold value, or the correction means may simply not use them.
  • The size of a motion region may be, for example, the size of the region as a whole (the total number of pixels in the region), the number of motion pixels it contains, or the like.
  • the detecting means may detect the moving region by the background subtraction method.
  • The background subtraction method is, for example, a method of detecting, as motion pixels, pixels in the image whose pixel-value difference (absolute value) from a predetermined background image is equal to or greater than a predetermined threshold.
  • the detection means may detect the movement region by the inter-frame difference method.
  • The inter-frame difference method is, for example, a method of detecting, as motion pixels, pixels of the current captured image (current frame) whose pixel-value difference from a past captured image (past frame) is equal to or larger than a predetermined threshold.
  • the detection means may detect the movement region from a region including the object region and a region around the object region. By doing so, even if the detected object area is too small, it is possible to obtain a region that more accurately represents the true object region as the corrected object region.
  • The peripheral area is, for example, the area obtained by excluding the object area from an area produced by enlarging the detected object area by a predetermined magnification, or the area extending from the edge of the detected object area outward by a predetermined number of pixels.
  • the object may be a human body.
  • The second aspect of the present invention provides an object detection method comprising: an object detection step of detecting an object region, which is the region of an object, from a captured image; a detection step of detecting a motion region, which is a region with motion, from the object region detected in the object detection step in the image; and a correction step of correcting the object region based on the motion region detected in the detection step.
  • the present invention can be regarded as an object detection system having at least a part of the above configuration or function.
  • The present invention can also be regarded as a program for causing a computer to execute an object detection method, or a control method of an object detection system, that includes at least a part of the above processing, or as a non-transitory computer-readable recording medium on which such a program is recorded.
  • the region of an object can be detected with high accuracy.
  • FIG. 1 is a block diagram showing a configuration example of an object detection device to which the present invention is applied.
  • FIG. 2A is a schematic diagram showing a rough configuration example of an object detection system according to an embodiment of the present invention
  • FIG. 2B is a configuration example of a PC (object detection device) according to the embodiment.
  • FIG. 3 is a flowchart showing an example of a processing flow of a PC according to an embodiment of the present invention.
  • FIG. 4 is a diagram showing a specific example of the operation according to the embodiment of the present invention.
  • FIG. 5 is a diagram showing a specific example of the operation according to the embodiment of the present invention.
  • FIG. 6 is a diagram showing a specific example of the operation according to the embodiment of the present invention.
  • However, in conventional object detection, the detected area may be too large or too small relative to the true object area; that is, the object area cannot be detected with high accuracy. Because the detected area can be either too large or too small, simply resizing it (always enlarging or always reducing) by a fixed magnification is not appropriate.
  • In motion detection as well, a plurality of regions may be detected as moving regions for a single moving object (the region of one moving object may be split into several detected regions). That is, the region of the moving object cannot be detected with high accuracy.
  • If the area of an object (including a moving object) is not detected accurately, other processing based on that area may not be performed properly.
  • For example, in AE (automatic exposure), the exposure may be controlled to suit the background rather than the object.
  • Similarly, in AF (autofocus), the focus may be placed on the background so that the object is out of focus.
  • In a process that excludes areas that are too large or too small as erroneously detected areas, the area corresponding to the object may be unintentionally excluded.
  • FIG. 1 is a block diagram showing a configuration example of an object detection device 100 to which the present invention is applied.
  • the object detection device 100 includes an object detection unit 101, a detection unit 102, and a correction unit 103.
  • The object detection unit 101 detects an object region, which is the region of an object, from the captured image.
  • The detection unit 102 detects a motion region, which is a region with motion, from within the object region detected by the object detection unit 101 in the captured image.
  • the correction unit 103 corrects the object area detected by the object detection unit 101 based on the movement area detected by the detection unit 102.
  • the object detection unit 101 is an example of the object detection means of the present invention
  • the detection unit 102 is an example of the detection means of the present invention
  • the correction unit 103 is an example of the correction means of the present invention.
  • the object is, for example, a vehicle (car, airplane, etc.), a natural object (tree, flower, mountain, etc.), a human body, a face, or the like.
  • The motion region is, for example, a region composed only of motion pixels (pixels with motion), or the smallest rectangular region containing such a region.
  • When the detected object area is too large, the motion area detected within it represents the true object area more accurately than the detected object area does.
  • Since the object region is corrected based on the motion region detected from it, a region that more accurately represents the true object region can be obtained as the corrected object region.
  • As a result, other processing based on the object region (AE, AF, exclusion of erroneously detected regions, etc.) can be suitably performed.
  • FIG. 2A is a schematic diagram showing a rough configuration example of the object detection system according to the present embodiment.
  • the object detection system according to the present embodiment includes a camera 10 and a PC200 (personal computer; object detection device).
  • the camera 10 and the PC 200 are connected to each other by wire or wirelessly.
  • the camera 10 captures an image and outputs it to the PC 200.
  • The camera 10 is not particularly limited, and may be, for example, a camera that detects visible (natural) light, a camera that measures distance (for example, a stereo camera), a camera that measures temperature (for example, an IR (Infrared Ray) camera), or the like.
  • the captured image is not particularly limited, and may be, for example, an RGB image, an HSV image, a gray scale image, or the like.
  • the PC 200 detects an object from an image captured by the camera 10.
  • The PC 200 displays the object detection result (presence/absence of an object, the area in which an object was detected, etc.) on a display unit, records it on a storage medium, or outputs it to another terminal (such as the smartphone of an administrator at a remote location).
  • the PC 200 may be built in the camera 10.
  • the display unit and the storage medium described above may or may not be a part of the PC200.
  • the installation location of the PC 200 is not particularly limited.
  • the PC 200 may or may not be installed in the same room as the camera 10.
  • the PC 200 may or may not be a computer on the cloud.
  • the PC 200 may be a terminal such as a smartphone carried by an administrator.
  • FIG. 2B is a block diagram showing a configuration example of the PC200.
  • the PC 200 has an input unit 210, a control unit 220, a storage unit 230, and an output unit 240.
  • The input unit 210 sequentially acquires captured images (frames of a moving image) from the camera 10 and outputs them to the control unit 220.
  • The camera 10 may instead sequentially capture still images; in that case, the input unit 210 sequentially acquires the captured still images from the camera 10 and outputs them to the control unit 220.
  • the control unit 220 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), etc., and controls each component and performs various information processing.
  • the control unit 220 detects an object from the captured image and outputs the detection result of the object (presence / absence of the object, region where the object is detected, etc.) to the output unit 240.
  • the storage unit 230 stores programs executed by the control unit 220, various data used by the control unit 220, and the like.
  • the storage unit 230 is an auxiliary storage device such as a hard disk drive or a solid state drive.
  • The output unit 240 displays the detection result (object detection result) output by the control unit 220 on the display unit, records it on a storage medium, or outputs it to another terminal (such as the smartphone of an administrator at a remote location).
  • the control unit 220 will be described in more detail.
  • the control unit 220 includes an object detection unit 221, a detection unit 222, and a correction unit 223.
  • The object detection unit 221 acquires an image captured by the camera 10 from the input unit 210, and detects an object region, which is the region of an object, from the acquired image. Then, the object detection unit 221 outputs the detection result of the object region to the detection unit 222.
  • the object area detected by the object detection unit 221 may be too large or too small with respect to the true object area.
  • the object detection unit 221 is an example of the object detection means of the present invention.
  • For example, an object region may be detected using a detector (classifier) that combines image features such as HoG or Haar-like with boosting.
  • Alternatively, an object region may be detected using a trained model generated by existing machine learning, for example by deep learning (e.g., R-CNN, Fast R-CNN, YOLO, SSD, etc.).
  • the detection unit 222 acquires the image captured by the camera 10 from the input unit 210, and acquires the detection result of the object region from the object detection unit 221.
  • The detection unit 222 detects a motion region, which is a region with motion, from within the object region detected by the object detection unit 221 in the acquired image. Then, the detection unit 222 outputs the detection result of the motion region to the correction unit 223.
  • the detection unit 222 is an example of the detection means of the present invention.
  • The detection unit 222 may detect the motion region by the background subtraction method, or by the inter-frame difference method.
  • The background subtraction method is, for example, a method of detecting, as motion pixels, pixels in the image whose pixel-value difference (absolute value) from a predetermined background image is equal to or greater than a predetermined threshold.
  • The inter-frame difference method is, for example, a method of detecting, as motion pixels, pixels of the current captured image (current frame) whose pixel-value difference from a past captured image (past frame) is equal to or larger than a predetermined threshold.
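The two differencing methods just described can be sketched in a few lines. This is an illustrative sketch, not the patented implementation; the use of grayscale frames and the threshold value are assumptions made for the example.

```python
import numpy as np

def motion_mask_background_subtraction(frame, background, threshold=25):
    """Mark as motion pixels those whose absolute pixel-value difference
    from a predetermined background image is at or above the threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff >= threshold

def motion_mask_frame_difference(current, past, threshold=25):
    """Inter-frame difference: mark as motion pixels those whose absolute
    difference from a past frame is at or above the threshold."""
    diff = np.abs(current.astype(np.int16) - past.astype(np.int16))
    return diff >= threshold
```

Either function returns a boolean mask over the image; the motion regions are then formed from the marked pixels (for example by the labeling described below).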
  • The past frame is a frame a predetermined number of frames before the current frame, where the predetermined number is 1 or more.
  • The predetermined number (the number of frames between the past frame and the current frame) may be determined according to the frame rate of the processing of the control unit 220 (the processing of detecting and correcting the object region), the frame rate of imaging by the camera 10, and the like.
  • The motion region may be a region consisting only of motion pixels (pixels with motion), or the smallest rectangular region containing such a region.
  • For example, the detection unit 222 may detect the rectangular contour circumscribing a region consisting only of motion pixels and treat the region bounded by that contour as the motion region. With this method, the smallest rectangular region containing the motion pixels is detected as the motion region.
  • The detection unit 222 may also detect motion regions by labeling. In labeling, each motion pixel of the captured image is visited in turn as the pixel of interest.
  • If a motion pixel that has already been given a label (the number of a motion region) exists around the pixel of interest, the pixel of interest is given the same label as that motion pixel. If no labeled motion pixel exists around the pixel of interest, a new label is assigned to it.
  • In this way, each region consisting only of motion pixels is detected as a motion region.
  • The surrounding motion pixels referred to in labeling are not particularly limited. For example, the 8 pixels adjacent to the pixel of interest may be referred to, or the 18 pixels separated by up to 2 pixels from the pixel of interest may be referred to.
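The labeling procedure described above (propagating a region label from already-labeled neighboring motion pixels, or starting a new label) can be sketched as a flood fill over the motion mask. This is a sketch for illustration; 8-neighbor connectivity, one of the options the text mentions, is assumed.

```python
import numpy as np
from collections import deque

def label_motion_regions(mask):
    """Group motion pixels into motion regions by 8-neighbor labeling.
    Returns a label image (0 = no motion) and the number of regions."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    next_label = 0
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                next_label += 1          # no labeled neighbor reached this pixel: new region
                labels[y, x] = next_label
                queue = deque([(y, x)])
                while queue:             # spread the label to connected motion pixels
                    cy, cx = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                queue.append((ny, nx))
    return labels, next_label
```

Each connected group of motion pixels ends up with its own label, i.e. one motion region per label.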
  • the correction unit 223 corrects the object area detected by the object detection unit 221 based on the detection result of the detection unit 222. Then, the correction unit 223 outputs the information of the corrected object area to the output unit 240 as the detection result of the object.
  • the correction unit 223 is an example of the correction means of the present invention.
  • When one motion region is detected, the correction unit 223 corrects the object region detected by the object detection unit 221 based on that motion region. By doing so, a region that more accurately represents the true object region can be obtained as the corrected object region.
  • When a plurality of motion regions are detected, the correction unit 223 corrects the object region detected by the object detection unit 221 based on the smallest region containing the plurality of motion regions. By doing so, even when a plurality of motion regions are detected, a region that more accurately represents the true object region can be obtained as the corrected object region.
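Correcting the object region to the smallest region containing several motion regions reduces to a min/max over the region coordinates. The `(x0, y0, x1, y1)` tuple representation below is an assumption made for illustration:

```python
def smallest_enclosing_box(boxes):
    """Smallest axis-aligned rectangle containing every given motion
    region; each box is an (x0, y0, x1, y1) tuple with x0 <= x1, y0 <= y1."""
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[2] for b in boxes)
    y1 = max(b[3] for b in boxes)
    return (x0, y0, x1, y1)
```

The corrected object region is then this enclosing rectangle (region 440 in the FIG. 4 example).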
  • the shape of the corrected object region may or may not be a predetermined shape (rectangle or the like).
  • the detection unit 222 will be described in more detail.
  • the detection unit 222 has a motion detection unit 222-1 and a selection unit 222-2.
  • the motion detection unit 222-1 acquires an image captured by the camera 10 from the input unit 210, and detects a motion region from the acquired image. Then, the motion detection unit 222-1 outputs the detection result of the motion region to the selection unit 222-2.
  • the motion detection unit 222-1 may detect the motion region from the entire acquired image, or may detect the motion region from a part (predetermined region) of the acquired image. As described above, any algorithm may be used for motion detection by the motion detection unit 222-1.
  • the motion detection unit 222-1 is an example of the motion detection means of the present invention.
  • the selection unit 222-2 acquires the detection result of the motion region from the motion detection unit 222-1, and acquires the detection result of the object region from the object detection unit 221.
  • the selection unit 222-2 selects a movement region located in the object region detected by the object detection unit 221 from one or more movement regions detected by the motion detection unit 222-1.
  • The method of selecting the motion region is not particularly limited; for example, the selection unit 222-2 may select a motion region whose center position is included in the object region, or a motion region that is entirely included in the object region.
  • the selection unit 222-2 outputs the selection result of the movement region to the correction unit 223 as the detection result of the movement region by the detection unit 222.
  • the selection unit 222-2 is an example of the selection means of the present invention.
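The two selection criteria mentioned for the selection unit 222-2 (center position inside the object region, or entire containment) might look as follows; the `(x0, y0, x1, y1)` box representation is again an assumed convention, not one prescribed by the embodiment.

```python
def center_inside(motion_box, object_box):
    """True if the motion region's center lies within the object region."""
    x0, y0, x1, y1 = motion_box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    ox0, oy0, ox1, oy1 = object_box
    return ox0 <= cx <= ox1 and oy0 <= cy <= oy1

def entirely_inside(motion_box, object_box):
    """True if the motion region is entirely contained in the object region."""
    x0, y0, x1, y1 = motion_box
    ox0, oy0, ox1, oy1 = object_box
    return ox0 <= x0 and oy0 <= y0 and x1 <= ox1 and y1 <= oy1

def select_motion_regions(motion_boxes, object_box, criterion=center_inside):
    """Keep only the motion regions located in the object region."""
    return [b for b in motion_boxes if criterion(b, object_box)]
```

In the FIG. 4 example, applying such a criterion to motion regions 431 to 435 keeps 431 to 434, which lie in the object region 420.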
  • FIG. 3 is a flowchart showing an example of the processing flow of the PC200.
  • the PC 200 repeatedly executes the processing flow of FIG.
  • the repetition cycle of the processing flow of FIG. 3 is not particularly limited, but in the present embodiment, it is assumed that the processing flow of FIG. 3 is repeated at the frame rate (for example, 30 fps) of the image taken by the camera 10.
  • the input unit 210 acquires the captured image from the camera 10 (step S301).
  • FIG. 4 shows an example of the image 400 captured by the camera 10.
  • the image 400 shows the human body 410.
  • the object detection unit 221 detects the object region from the image acquired in step S301 (step S302). For example, from the image 400 of FIG. 4, the object region 420 including the human body 410 is detected as the region of the human body 410. The object area 420 is much larger than the human body 410.
  • the motion detection unit 222-1 detects the motion region from the image acquired in step S301 (step S303). For example, the motion regions 431 to 435 are detected from the image 400 of FIG.
  • the selection unit 222-2 selects a movement region located in the object region detected in step S302 from one or more movement regions detected in step S303 (step S304). For example, among the movement areas 431 to 435 in FIG. 4, the movement areas 431 to 434 included in the object area 420 are selected.
  • the correction unit 223 corrects the object region detected in step S302 based on the smallest region including the movement region selected in step S304 (step S305).
  • the object region 420 in FIG. 4 is corrected to the smallest rectangular region 440 including the motion regions 431 to 434.
  • the rectangular region 440 (corrected object region) more accurately represents the true region of the human body 410 than the object region 420. That is, the object region can be detected with high accuracy by the correction in step S305.
  • the object region 420 may be corrected to a region slightly different from the rectangular region 440.
  • the output unit 240 outputs the correction result (corrected object area) of step S305 to the display unit, the storage medium, the smartphone, and the like (step S306).
  • As described above, according to the present embodiment, the object region is corrected based on the motion region detected from the object region, so a region that more accurately represents the true object region can be obtained as the corrected object region.
  • As a result, other processing based on the object region (AE, AF, exclusion of erroneously detected regions, etc.) can be suitably performed.
  • The correction unit 223 may correct the object region based only on those motion regions, among the ones detected by the detection unit 222, whose size is equal to or larger than a threshold value. By doing so, erroneous correction of the object region can be suppressed.
  • For example, the detection unit 222 may refrain from detecting motion regions whose size is less than the threshold value, or the correction unit 223 may simply not use them.
  • The size of a motion region may be, for example, the size of the region as a whole (the total number of pixels in the region), the number of motion pixels it contains, or the like.
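The size-based filtering described above can be sketched as follows, taking a motion region's size as its motion-pixel count (one of the options the text mentions). The label-image representation and the `min_pixels` parameter are assumptions made for illustration.

```python
import numpy as np

def filter_small_regions(labels, num_regions, min_pixels):
    """Keep only the labels of motion regions whose motion-pixel count is
    at or above min_pixels; smaller regions (e.g. noise) are discarded."""
    kept = []
    for label in range(1, num_regions + 1):
        size = int((labels == label).sum())  # motion-pixel count of this region
        if size >= min_pixels:
            kept.append(label)
    return kept
```

In the FIG. 5 example, such a filter would drop the small noise regions 531 and 532 before the enclosing rectangle is computed, so the object region 420 is corrected to the rectangular region 440 rather than the region 540.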
  • In FIG. 5, the same objects and regions as in FIG. 4 are designated by the same reference numerals as in FIG. 4.
  • In the example of FIG. 5, motion regions 531 and 532 caused by noise are also detected.
  • If the object region 420 were corrected to the smallest rectangular region 540 containing the motion regions 431 to 434, 531, and 532, there would be almost no change from the uncorrected region, i.e., an erroneous correction.
  • By excluding the motion regions 531 and 532 and using only the motion regions 431 to 434, the object region 420 can be corrected to the rectangular region 440.
  • the detection unit 222 may detect a motion region from a region including an object region detected by the object detection unit 221 and a region around the object region.
  • the selection unit 222-2 may select a movement region located in a region including an object region detected by the object detection unit 221 and a region around the object region. By doing so, even if the detected object area is too small, it is possible to obtain a region that more accurately represents the true object region as the corrected object region.
  • The peripheral area is, for example, the area obtained by excluding the object area from an area produced by enlarging the detected object area by a predetermined magnification, or the area extending from the edge of the detected object area outward by a predetermined number of pixels.
  • The predetermined magnification or the predetermined number of pixels may differ between the horizontal and vertical directions of the image.
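Both ways of forming the search area that includes the surroundings (enlarging by a magnification, or padding by a number of pixels) can be sketched as below. The per-axis parameters reflect the note that horizontal and vertical values may differ; clamping to the image bounds is an added assumption.

```python
def enlarge_by_magnification(box, mag_x, mag_y, width, height):
    """Scale the object region about its center by per-axis magnifications,
    clamped to the image bounds."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hw = (x1 - x0) * mag_x / 2.0
    hh = (y1 - y0) * mag_y / 2.0
    return (max(0, cx - hw), max(0, cy - hh),
            min(width, cx + hw), min(height, cy + hh))

def enlarge_by_pixels(box, pad_x, pad_y, width, height):
    """Extend the object region outward by a number of pixels per axis,
    clamped to the image bounds."""
    x0, y0, x1, y1 = box
    return (max(0, x0 - pad_x), max(0, y0 - pad_y),
            min(width, x1 + pad_x), min(height, y1 + pad_y))
```

Motion regions are then detected (or selected) within the enlarged box, so a too-small object region, as in the FIG. 6 example, can still be expanded to cover the object.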
  • In the example of FIG. 6, the object region 620, which is much smaller than the human body 610, is detected from the captured image 600 as the region of the human body 610, and only the motion region 631 is detected within the object region 620. If the area 650 around the object region 620 were not taken into consideration, the object region 620 would be reduced to the motion region 631 (an erroneous correction). Since the motion regions 632 to 634 are detected in the peripheral region 650, considering the peripheral region 650 allows the object region 620 to be expanded to the smallest rectangular region 640 containing the motion regions 631 to 634. The region 640 (the corrected object region) represents the true region of the human body 610 more accurately than the object region 620 does.
  • An object detection device (100, 200) comprising: an object detection means (101, 221) that detects an object region, which is the region of an object, from a captured image; a detection means (102, 222) that detects a motion region, which is a region with motion, from the object region detected by the object detection means in the image; and a correction means (103, 223) that corrects the object region based on the motion region detected by the detection means.
  • An object detection method comprising: an object detection step of detecting an object region, which is the region of an object, from a captured image; a detection step of detecting a motion region, which is a region with motion, from the object region detected in the object detection step; and a correction step (S305) of correcting the object region based on the motion region detected in the detection step.
  • 100: Object detection device; 101: Object detection unit; 102: Detection unit; 103: Correction unit; 10: Camera; 200: PC (object detection device); 210: Input unit; 220: Control unit; 230: Storage unit; 240: Output unit; 221: Object detection unit; 222: Detection unit; 223: Correction unit; 222-1: Motion detection unit; 222-2: Selection unit; 400: Image; 410: Human body; 420: Object area; 431 to 435: Motion areas; 440: Rectangular area (corrected object area); 500: Image; 531, 532: Motion areas; 540: Rectangular area (corrected object area); 600: Image; 610: Human body; 620: Object area; 631 to 634: Motion areas; 640: Rectangular area (corrected object area); 650: Area around the detected object area

Abstract

This object detection device comprises: an object detection means for detecting, from a captured image, an object region, which is the region of an object; a detection means for detecting, from the object region detected by the object detection means in the image, a motion region, which is a region with motion; and a correction means for correcting the object region on the basis of the motion region detected by the detection means.

Description

Object detection device and object detection method
 The present invention relates to a technique for detecting an area of an object from a captured image.
 Various techniques have been proposed as conventional techniques for object detection and motion detection. For example, Patent Document 1 proposes a technique related to face detection.
Japanese Unexamined Patent Publication No. 2006-293720
 しかしながら、従来の物体検出では、検出された領域が物体の領域に対して大きすぎたり、小さすぎたりすることがある。つまり、物体の領域を高精度に検出することができない。大きすぎることだけでなく、小さすぎることもあるため、検出された領域のサイズを所定の倍率で変更(常に拡大または常に縮小)するのは適切ではない。動き検出でも、1つの動体に対して、動きのある領域として、複数の領域が検出されることがある(1つの動体の領域が複数に分裂して検出されることがある)。つまり、動体の領域を高精度に検出することができない。 However, in the conventional object detection, the detected area may be too large or too small with respect to the object area. That is, the area of the object cannot be detected with high accuracy. Not only is it too large, but it can also be too small, so it is not appropriate to resize (always enlarge or always reduce) the detected area at a given magnification. Even in motion detection, a plurality of regions may be detected as regions with motion for one moving object (the regions of one moving object may be detected by being divided into a plurality of regions). That is, the region of the moving object cannot be detected with high accuracy.
 そして、物体(動体を含む)の領域が正確に検出されない場合には、当該領域に基づく他の処理を好適に行えないことがある。 And, if the area of the object (including the moving body) is not detected accurately, other processing based on the area may not be suitable.
 本発明は上記実情に鑑みなされたものであって、物体の領域を高精度に検出することのできる技術を提供することを目的とする。 The present invention has been made in view of the above circumstances, and an object of the present invention is to provide a technique capable of detecting a region of an object with high accuracy.
 上記目的を達成するために本発明は、以下の構成を採用する。 In order to achieve the above object, the present invention adopts the following configuration.
 本発明の第一側面は、撮像された画像から、物体の領域である物体領域を検出する物体検出手段と、前記画像のうち、前記物体検出手段により検出された前記物体領域から、動きのある領域である動き領域を検出する検出手段と、前記検出手段により検出された前記動き領域に基づいて、前記物体領域を補正する補正手段とを有することを特徴とする物体検出装置を提供する。物体は、例えば、乗り物(車や飛行機など)、自然物(木や花、山など)、人体、顔などである。動き領域は、例えば、動き画素(動きのある画素)のみからなる領域や、動き画素のみからなる領域を含む最小の矩形領域などである。 The first aspect of the present invention is that there is movement from an object detection means that detects an object region, which is a region of an object, from an captured image, and from the object region that is detected by the object detection means in the image. Provided is an object detection device comprising a detection means for detecting a motion region which is a region and a correction means for correcting the object region based on the motion region detected by the detection means. Objects are, for example, vehicles (cars, planes, etc.), natural objects (trees, flowers, mountains, etc.), human bodies, faces, and the like. The motion region is, for example, an region composed of only motion pixels (pixels with motion), a smallest rectangular region including an region composed of only motion pixels, and the like.
 When the detected object region is too large, the motion region detected inside it represents the true object region more accurately than the detected object region does. With the above configuration, the object region is corrected based on the motion region detected from it, so a region that more accurately represents the true object region can be obtained as the corrected object region. Consequently, other processing based on the object region (AE, AF, exclusion of erroneously detected regions, etc.) can be performed suitably.
 When a plurality of motion regions are detected from the detected object region, each motion region is likely to be part of the same object. Therefore, when the detection means detects a plurality of motion regions, the correction means may correct the object region based on the smallest region containing the plurality of motion regions. In this way, even when a plurality of motion regions are detected, a region that more accurately represents the true object region can be obtained as the corrected object region.
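 By way of a non-limiting illustration, the smallest region containing a plurality of motion regions can be sketched as follows, assuming each region is represented as an axis-aligned rectangle (x0, y0, x1, y1); this representation and the function name are assumptions for illustration, not part of the disclosure:

```python
def smallest_enclosing_region(regions):
    """Return the smallest axis-aligned rectangle (x0, y0, x1, y1)
    that contains every given motion region."""
    if not regions:
        raise ValueError("at least one motion region is required")
    x0 = min(r[0] for r in regions)
    y0 = min(r[1] for r in regions)
    x1 = max(r[2] for r in regions)
    y1 = max(r[3] for r in regions)
    return (x0, y0, x1, y1)

# Example: three motion regions detected inside one object region
motions = [(10, 20, 30, 60), (25, 15, 55, 40), (40, 35, 70, 80)]
corrected = smallest_enclosing_region(motions)  # (10, 15, 70, 80)
```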
 The detection means may have motion detection means for detecting motion regions from the image and selection means for selecting, from the detection result of the motion detection means, a motion region located in the object region, and the correction means may correct the object region based on the motion region selected by the selection means. Here, the selection means may select a motion region whose center position is included in the object region, or may select a motion region that is entirely included in the object region.
 A small motion region is likely to be a region in which noise or the like has been erroneously detected. Therefore, the correction means may correct the object region based on motion regions, detected by the detection means, whose size is equal to or larger than a threshold. This suppresses erroneous correction of the object region. Here, the detection means may be configured not to detect motion regions smaller than the threshold, or the correction means may be configured not to use them. The size of a motion region may be the size of the entire motion region (the total number of pixels in the motion region), the number of motion pixels, or the like.
 The detection means may detect the motion region by a background subtraction method. The background subtraction method is, for example, a method of detecting, as motion pixels, pixels of the captured image whose pixel-value difference (absolute value) from a predetermined background image is equal to or larger than a predetermined threshold.
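 The background subtraction method described above can be sketched as follows for a grayscale image represented as a nested list (the threshold value of 30 and the function name are illustrative assumptions):

```python
def motion_pixels_background(image, background, threshold=30):
    """Mark as a motion pixel every pixel whose absolute pixel-value
    difference from the predetermined background image is at least
    the predetermined threshold."""
    return [[abs(p - b) >= threshold for p, b in zip(img_row, bg_row)]
            for img_row, bg_row in zip(image, background)]

# 1x4 grayscale example: only the last pixel differs strongly from the background
background = [[100, 100, 100, 100]]
frame = [[105, 100, 110, 200]]
mask = motion_pixels_background(frame, background)  # [[False, False, False, True]]
```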
 The detection means may detect the motion region by an inter-frame difference method. The inter-frame difference method is, for example, a method of detecting, as motion pixels, pixels of the captured current image (current frame) whose pixel-value difference from a captured past image (past frame) is equal to or larger than a predetermined threshold.
 The detection means may detect the motion region from an area consisting of the object region and its surrounding region. In this way, even when the detected object region is too small, a region that more accurately represents the true object region can be obtained as the corrected object region. Here, the surrounding region is, for example, the region obtained by enlarging the detected object region by a predetermined factor and excluding the object region itself, or the region extending from the edge of the detected object region outward by a predetermined number of pixels.
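 As an illustrative sketch of forming the area consisting of the object region and its surrounding region, the detected rectangle may be enlarged about its center by a predetermined factor and clipped to the image bounds (the factor of 1.5 and the image size are assumed example values):

```python
def object_region_with_margin(region, factor=1.5, image_w=640, image_h=480):
    """Enlarge a detected object region (x0, y0, x1, y1) about its center
    by a predetermined factor, clipped to the image bounds; motion regions
    are then searched for within this enlarged area."""
    x0, y0, x1, y1 = region
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w = (x1 - x0) * factor / 2.0
    half_h = (y1 - y0) * factor / 2.0
    return (max(0, int(cx - half_w)), max(0, int(cy - half_h)),
            min(image_w, int(cx + half_w)), min(image_h, int(cy + half_h)))

search_area = object_region_with_margin((100, 100, 200, 180))  # (75, 80, 225, 200)
```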
 The object may be a human body.
 A second aspect of the present invention provides an object detection method comprising: an object detection step of detecting, from a captured image, an object region that is a region of an object; a detection step of detecting, from the object region detected in the object detection step in the image, a motion region that is a region with motion; and a correction step of correcting the object region based on the motion region detected in the detection step.
 The present invention can also be regarded as an object detection system having at least part of the above configurations or functions. The present invention can further be regarded as an object detection method or a control method of an object detection system including at least part of the above processing, a program for causing a computer to execute these methods, or a non-transitory computer-readable recording medium on which such a program is recorded. The above configurations and processes can be combined with one another to constitute the present invention as long as no technical contradiction arises.
 According to the present invention, the region of an object can be detected with high accuracy.
FIG. 1 is a block diagram showing a configuration example of an object detection device to which the present invention is applied.
FIG. 2(A) is a schematic diagram showing a rough configuration example of an object detection system according to an embodiment of the present invention, and FIG. 2(B) is a block diagram showing a configuration example of a PC (object detection device) according to the embodiment.
FIG. 3 is a flowchart showing an example of the processing flow of the PC according to the embodiment of the present invention.
FIG. 4 is a diagram showing a specific example of operation according to the embodiment of the present invention.
FIG. 5 is a diagram showing a specific example of operation according to the embodiment of the present invention.
FIG. 6 is a diagram showing a specific example of operation according to the embodiment of the present invention.
 <Application Example>
 An application example of the present invention will be described.
 In conventional object detection, the detected region may be too large or too small relative to the actual region of the object; that is, the region of the object cannot be detected with high accuracy. Because the detected region can be too small as well as too large, simply resizing it by a fixed factor (always enlarging or always reducing) is not appropriate. Motion detection has a similar problem: a plurality of regions may be detected as moving regions for a single moving object (the region of one moving object may be detected as split into several pieces). That is, the region of the moving object also cannot be detected with high accuracy.
 If the region of an object (including a moving object) is not detected accurately, other processing based on that region may not be performed suitably. For example, when the detected region contains much of the object's background, AE (automatic exposure) may be controlled to an exposure suited to the background rather than to the object. Similarly, in AF (autofocus), the background rather than the object may be brought into focus. Further, in processing that excludes regions that are too large or too small as erroneously detected regions, the region corresponding to the object may be unintentionally excluded.
 FIG. 1 is a block diagram showing a configuration example of an object detection device 100 to which the present invention is applied. The object detection device 100 has an object detection unit 101, a detection unit 102, and a correction unit 103. The object detection unit 101 detects, from a captured image, an object region that is a region of an object. The detection unit 102 detects, from the object region detected by the object detection unit 101 in the captured image, a motion region that is a region with motion. The correction unit 103 corrects the object region detected by the object detection unit 101 based on the motion region detected by the detection unit 102. The object detection unit 101 is an example of the object detection means of the present invention, the detection unit 102 is an example of the detection means of the present invention, and the correction unit 103 is an example of the correction means of the present invention. Here, the object is, for example, a vehicle (a car, an airplane, etc.), a natural object (a tree, a flower, a mountain, etc.), a human body, or a face. The motion region is, for example, a region consisting only of motion pixels (pixels with motion), or the smallest rectangular region containing such a region.
 When the detected object region is too large, the motion region detected inside it represents the true object region more accurately than the detected object region does. With the above configuration, the object region is corrected based on the motion region detected from it, so a region that more accurately represents the true object region can be obtained as the corrected object region. Consequently, other processing based on the object region (AE, AF, exclusion of erroneously detected regions, etc.) can be performed suitably.
 <Embodiment>
 An embodiment of the present invention will be described.
 FIG. 2(A) is a schematic diagram showing a rough configuration example of the object detection system according to the present embodiment. The object detection system according to the present embodiment has a camera 10 and a PC 200 (personal computer; object detection device). The camera 10 and the PC 200 are connected to each other by wire or wirelessly. The camera 10 captures an image and outputs it to the PC 200. The camera 10 is not particularly limited and may be, for example, a camera that detects natural light, a camera that measures distance (e.g., a stereo camera), or a camera that measures temperature (e.g., an IR camera that detects infrared (IR: Infrared Ray) light). The captured image is also not particularly limited and may be, for example, an RGB image, an HSV image, or a grayscale image. The PC 200 detects an object from the image captured by the camera 10. The PC 200 displays the object detection result (presence or absence of an object, the region where the object was detected, etc.) on a display unit, records it on a storage medium, or outputs it to another terminal (e.g., the smartphone of an administrator at a remote location).
 Although the PC 200 is a device separate from the camera 10 in the present embodiment, the PC 200 may be built into the camera 10. The display unit and storage medium described above may or may not be part of the PC 200. The installation location of the PC 200 is not particularly limited. For example, the PC 200 may or may not be installed in the same room as the camera 10. The PC 200 may or may not be a computer on the cloud, and may be a terminal such as a smartphone carried by the administrator.
 FIG. 2(B) is a block diagram showing a configuration example of the PC 200. The PC 200 has an input unit 210, a control unit 220, a storage unit 230, and an output unit 240.
 In the present embodiment, the camera 10 captures a moving image. The input unit 210 sequentially acquires captured images (frames of the moving image) from the camera 10 and outputs them to the control unit 220. The camera 10 may instead capture still images sequentially, in which case the input unit 210 sequentially acquires the captured still images from the camera 10 and outputs them to the control unit 220.
 The control unit 220 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like, and controls each component and performs various kinds of information processing. In the present embodiment, the control unit 220 detects an object from the captured image and outputs the detection result (presence or absence of the object, the region where the object was detected, etc.) to the output unit 240.
 The storage unit 230 stores programs executed by the control unit 220, various data used by the control unit 220, and the like. The storage unit 230 is, for example, an auxiliary storage device such as a hard disk drive or a solid-state drive.
 The output unit 240 displays the detection result (object detection result) output by the control unit 220 on a display unit, records it on a storage medium, or outputs it to another terminal (e.g., the smartphone of an administrator at a remote location).
 The control unit 220 will be described in more detail. The control unit 220 has an object detection unit 221, a detection unit 222, and a correction unit 223.
 The object detection unit 221 acquires the image captured by the camera 10 from the input unit 210 and detects, from the acquired image, an object region that is a region of an object. The object detection unit 221 then outputs the detection result of the object region to the detection unit 222. The object region detected by the object detection unit 221 may be too large or too small relative to the true object region. The object detection unit 221 is an example of the object detection means of the present invention.
 Any algorithm may be used for the object detection by the object detection unit 221. For example, the object region may be detected using a detector (classifier) that combines image features such as HoG or Haar-like features with boosting. The object region may also be detected using a trained model generated by existing machine learning, specifically a trained model generated by deep learning (e.g., R-CNN, Fast R-CNN, YOLO, SSD).
 The detection unit 222 acquires the image captured by the camera 10 from the input unit 210 and acquires the detection result of the object region from the object detection unit 221. The detection unit 222 detects, from the object region detected by the object detection unit 221 in the acquired image, a motion region that is a region with motion. The detection unit 222 then outputs the detection result of the motion region to the correction unit 223. The detection unit 222 is an example of the detection means of the present invention.
 Any algorithm may be used for the motion detection by the detection unit 222. For example, the detection unit 222 may detect the motion region by a background subtraction method or by an inter-frame difference method. The background subtraction method is, for example, a method of detecting, as motion pixels, pixels of the captured image whose pixel-value difference (absolute value) from a predetermined background image is equal to or larger than a predetermined threshold. The inter-frame difference method is, for example, a method of detecting, as motion pixels, pixels of the captured current image (current frame) whose pixel-value difference from a captured past image (past frame) is equal to or larger than a predetermined threshold. In the inter-frame difference method, the past frame is, for example, a frame a predetermined number of frames before the current frame, the predetermined number being one or more. The predetermined number (the number of frames from the current frame back to the past frame) may be determined according to the frame rate of the processing of the control unit 220 (the processing of detecting and correcting the object region), the frame rate of imaging by the camera 10, and the like.
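 The inter-frame difference method can be sketched in the same style as the background subtraction example, comparing the current frame with a past frame (the threshold of 30 and the function name are illustrative assumptions):

```python
def motion_pixels_frame_diff(current, past, threshold=30):
    """Mark as a motion pixel every pixel of the current frame whose
    absolute pixel-value difference from the past frame is at least
    the predetermined threshold."""
    return [[abs(c - p) >= threshold for c, p in zip(cur_row, past_row)]
            for cur_row, past_row in zip(current, past)]

# 2x2 grayscale example: only the top-right pixel changed enough between frames
past = [[50, 50], [50, 50]]
current = [[50, 90], [52, 50]]
mask = motion_pixels_frame_diff(current, past)  # [[False, True], [False, False]]
```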
 As described above, the motion region may be a region consisting only of motion pixels, or the smallest rectangular region containing such a region. For example, the detection unit 222 may detect a rectangular contour circumscribing a region consisting only of motion pixels and detect the region having that contour as a motion region; with this method, the smallest rectangular region containing the region of motion pixels is detected as the motion region. The detection unit 222 may also detect motion regions by labeling. In labeling, each motion pixel of the captured image is selected in turn as a pixel of interest. If a motion pixel to which a label (motion region number) has already been assigned exists around the pixel of interest, the same label as that motion pixel is assigned to the pixel of interest; if no labeled motion pixel exists around the pixel of interest, a new label is assigned to it. With this method, a region consisting only of motion pixels is detected as a motion region. The motion pixels referred to in labeling (the motion pixels around the pixel of interest) are not particularly limited. For example, the 8 pixels adjacent to the pixel of interest may be referred to, or the 18 pixels within two pixels of the pixel of interest may be referred to.
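 A minimal sketch of this kind of labeling, grouping motion pixels by 8-adjacency and returning the circumscribing rectangle of each group (breadth-first traversal, list-based mask, and (x0, y0, x1, y1) output are implementation assumptions):

```python
from collections import deque

def label_motion_regions(mask):
    """Group adjacent motion pixels (8-connectivity) into motion regions
    and return the bounding rectangle (x0, y0, x1, y1) of each region."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            # BFS over one connected group of motion pixels
            queue = deque([(sy, sx)])
            seen[sy][sx] = True
            x0 = x1 = sx
            y0 = y1 = sy
            while queue:
                y, x = queue.popleft()
                x0, x1 = min(x0, x), max(x1, x)
                y0, y1 = min(y0, y), max(y1, y)
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
            regions.append((x0, y0, x1, y1))
    return regions

# Two separate groups of motion pixels yield two motion regions
mask = [[1, 1, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]
regions = label_motion_regions(mask)  # [(0, 0, 1, 1), (3, 2, 3, 2)]
```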
 The correction unit 223 corrects the object region detected by the object detection unit 221 based on the detection result of the detection unit 222. The correction unit 223 then outputs information on the corrected object region to the output unit 240 as the object detection result. The correction unit 223 is an example of the correction means of the present invention.
 For example, when the detected object region is too large, the motion region detected inside it represents the true object region more accurately than the detected object region does. Therefore, when the detection unit 222 detects one motion region, the correction unit 223 corrects the object region detected by the object detection unit 221 based on that motion region. In this way, a region that more accurately represents the true object region can be obtained as the corrected object region.
 When a plurality of motion regions are detected from the detected object region, each motion region is likely to be part of the same object. Therefore, when the detection unit 222 detects a plurality of motion regions, the correction unit 223 corrects the object region detected by the object detection unit 221 based on the smallest region containing the plurality of motion regions. In this way, even when a plurality of motion regions are detected, a region that more accurately represents the true object region can be obtained as the corrected object region. The shape of the corrected object region may or may not be a predetermined shape (e.g., a rectangle).
 The detection unit 222 will be described in more detail. The detection unit 222 has a motion detection unit 222-1 and a selection unit 222-2.
 The motion detection unit 222-1 acquires the image captured by the camera 10 from the input unit 210 and detects motion regions from the acquired image. The motion detection unit 222-1 then outputs the detection result of the motion regions to the selection unit 222-2. The motion detection unit 222-1 may detect motion regions from the entire acquired image or from a part (predetermined region) of it. As described above, any algorithm may be used for the motion detection by the motion detection unit 222-1. The motion detection unit 222-1 is an example of the motion detection means of the present invention.
 The selection unit 222-2 acquires the detection result of the motion regions from the motion detection unit 222-1 and the detection result of the object region from the object detection unit 221. The selection unit 222-2 selects, from the one or more motion regions detected by the motion detection unit 222-1, a motion region located in the object region detected by the object detection unit 221. The selection method is not particularly limited; for example, the selection unit 222-2 may select a motion region whose center position is included in the object region, or a motion region that is entirely included in the object region. The selection unit 222-2 then outputs the selection result to the correction unit 223 as the detection result of the motion regions by the detection unit 222. The selection unit 222-2 is an example of the selection means of the present invention.
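 The center-position criterion described above can be sketched as follows (rectangles as (x0, y0, x1, y1) tuples and the function name are illustrative assumptions):

```python
def select_regions_in_object(motion_regions, object_region):
    """Keep motion regions whose center point falls inside the object
    region (one of the selection criteria described above)."""
    ox0, oy0, ox1, oy1 = object_region
    selected = []
    for (x0, y0, x1, y1) in motion_regions:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        if ox0 <= cx <= ox1 and oy0 <= cy <= oy1:
            selected.append((x0, y0, x1, y1))
    return selected

obj = (0, 0, 100, 100)
motions = [(10, 10, 30, 30), (90, 90, 150, 150), (200, 200, 220, 220)]
picked = select_regions_in_object(motions, obj)  # [(10, 10, 30, 30)]
```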
 FIG. 3 is a flowchart showing an example of the processing flow of the PC 200. The PC 200 repeatedly executes the processing flow of FIG. 3. The repetition cycle of the processing flow of FIG. 3 is not particularly limited; in the present embodiment, the processing flow of FIG. 3 is repeated at the frame rate of imaging by the camera 10 (e.g., 30 fps).
 First, the input unit 210 acquires a captured image from the camera 10 (step S301). FIG. 4 shows an example of an image 400 captured by the camera 10. A human body 410 appears in the image 400.
 Next, the object detection unit 221 detects an object region from the image acquired in step S301 (step S302). For example, from the image 400 of FIG. 4, an object region 420 containing the human body 410 is detected as the region of the human body 410. The object region 420 is much larger than the human body 410.
 Next, the motion detection unit 222-1 detects motion regions from the image acquired in step S301 (step S303). For example, motion regions 431 to 435 are detected from the image 400 of FIG. 4.
 Next, the selection unit 222-2 selects, from the one or more motion regions detected in step S303, the motion regions located in the object region detected in step S302 (step S304). For example, of the motion regions 431 to 435 in FIG. 4, the motion regions 431 to 434 included in the object region 420 are selected.
 Next, the correction unit 223 corrects the object region detected in step S302 based on the smallest region containing the motion regions selected in step S304 (step S305). For example, the object region 420 of FIG. 4 is corrected to the smallest rectangular region 440 containing the motion regions 431 to 434. The rectangular region 440 (corrected object region) represents the true region of the human body 410 more accurately than the object region 420 does; that is, the correction in step S305 makes it possible to detect the object region with high accuracy. In the correction of the object region, it suffices to bring the object region closer to the reference region; the object region need not exactly match the reference region. For example, the object region 420 may be corrected to a region slightly different from the rectangular region 440.
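 Steps S304 and S305 taken together can be sketched as one function: select the motion regions whose centers lie in the object region, then shrink the object region to the smallest rectangle containing them. The coordinate values below are invented for illustration and do not correspond to the regions in FIG. 4:

```python
def correct_object_region(object_region, motion_regions):
    """Sketch of steps S304-S305: select the motion regions located in
    the object region, then replace the object region with the smallest
    rectangle containing them (unchanged if none are selected)."""
    ox0, oy0, ox1, oy1 = object_region
    selected = []
    for (x0, y0, x1, y1) in motion_regions:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        if ox0 <= cx <= ox1 and oy0 <= cy <= oy1:
            selected.append((x0, y0, x1, y1))
    if not selected:
        return object_region
    return (min(r[0] for r in selected), min(r[1] for r in selected),
            max(r[2] for r in selected), max(r[3] for r in selected))

# Object region far larger than the person; four motion regions fall inside it
obj = (0, 0, 300, 300)
motions = [(40, 50, 80, 90), (60, 90, 100, 160),
           (50, 160, 90, 220), (70, 40, 110, 70), (400, 10, 420, 30)]
corrected = correct_object_region(obj, motions)  # (40, 40, 110, 220)
```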
 Next, the output unit 240 outputs the correction result of step S305 (the corrected object region) to a display unit, a storage medium, a smartphone, or the like (step S306).
 As described above, according to the present embodiment, the object region is corrected based on the motion region detected from it, so a region that more accurately represents the true object region can be obtained as the corrected object region. Consequently, other processing based on the object region (AE, AF, exclusion of erroneously detected regions, etc.) can be performed suitably.
 Note that a small motion region is likely to be a region in which noise or the like has been erroneously detected. Therefore, the correction unit 223 may correct the object region based on motion regions, detected by the detection unit 222, whose size is equal to or larger than a threshold. This suppresses erroneous correction of the object region. Here, the detection unit 222 may be configured not to detect motion regions smaller than the threshold, or the correction unit 223 may be configured not to use them. The size of a motion region may be the size of the entire motion region (the total number of pixels in the motion region), the number of motion pixels, or the like.
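 The size threshold can be sketched as a simple filter, here measuring size as the total pixel count of the bounding rectangle (the threshold of 100 pixels is an assumed example value; counting only motion pixels is an equally valid choice, as noted above):

```python
def filter_small_regions(motion_regions, min_pixels=100):
    """Discard motion regions whose area (total pixel count of the
    bounding rectangle) is below the threshold, treating them as noise."""
    kept = []
    for (x0, y0, x1, y1) in motion_regions:
        area = (x1 - x0 + 1) * (y1 - y0 + 1)
        if area >= min_pixels:
            kept.append((x0, y0, x1, y1))
    return kept

regions = [(0, 0, 19, 19), (50, 50, 52, 52)]  # areas 400 and 9
kept = filter_small_regions(regions)  # [(0, 0, 19, 19)]
```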
 A specific example is described with reference to FIG. 5, in which the same objects and regions as in FIG. 4 bear the same reference numerals. In the example of FIG. 5, motion regions 531 and 532 caused by noise are detected within the object region 420 of the captured image 500, in addition to the motion regions 431 to 434 of FIG. 4. If the size of each motion region is ignored, the object region 420 is corrected to the smallest rectangular region 540 containing the motion regions 431 to 434, 531, and 532, which leaves it almost unchanged (an erroneous correction). If size is taken into account, the motion regions 531 and 532 can be excluded, and the object region 420 can be corrected to the rectangular region 440 using only the motion regions 431 to 434.
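The size filter described above can be sketched as a simple pre-filtering step. This is a hedged illustration: the `(x, y, w, h)` region format, the area-based size measure, and the threshold value are all assumptions, not values fixed by the patent.

```python
# Exclude small motion regions (likely noise, like regions 531 and 532 in
# FIG. 5) before the correction step. Regions are (x, y, w, h) tuples.

def filter_small_regions(motion_regions, min_area=25):
    """Keep only motion regions whose pixel area is at least min_area."""
    return [(x, y, w, h) for (x, y, w, h) in motion_regions if w * h >= min_area]
```

Applied before the bounding-box computation, a 2x2-pixel noise region is dropped while a 10x10 motion region survives, so the noise regions no longer inflate the corrected object region.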
 The detection unit 222 may also detect motion regions from the region consisting of the object region detected by the object detection unit 221 and its surrounding area. In other words, the selection unit 222-2 may select motion regions located within this combined region. In this way, even when the detected object region is too small, a region that more accurately represents the true object region can be obtained as the corrected object region. The surrounding area is, for example, the region obtained by enlarging the detected object region by a predetermined factor and then excluding the object region itself, or the band extending a predetermined number of pixels outward from the edge of the detected object region. The predetermined factor or the predetermined number of pixels may differ between the horizontal (left-right) and vertical (up-down) directions of the image.
 A specific example is described with reference to FIG. 6. In the example of FIG. 6, an object region 620 that is much smaller than the human body 610 is detected from the captured image 600 as the region of the human body 610, and only the motion region 631 is detected within the object region 620. Consequently, if the area 650 around the object region 620 is not considered, the object region 620 would be shrunk to the motion region 631 (an erroneous correction). Since the motion regions 632 to 634 are detected within the surrounding area 650, considering that area allows the object region 620 to be expanded to the smallest rectangular region 640 containing the motion regions 631 to 634. The region 640 (the corrected object region) represents the true region of the human body 610 more accurately than the object region 620 does.
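The "object region plus surrounding area" search region can be sketched as a rectangle scaled about its center and clamped to the image bounds. This is an illustrative assumption: the patent does not fix the expansion factors, and the per-axis `scale_x`/`scale_y` parameters mirror the statement above that the factor may differ between directions.

```python
# Build the search region for motion detection: the detected object region
# enlarged by a per-axis factor about its center, clamped to the image.
# Regions are (x, y, w, h); factors and names are illustrative assumptions.

def expand_region(region, image_size, scale_x=1.5, scale_y=1.5):
    """Scale (x, y, w, h) about its center, clamped to the image bounds."""
    x, y, w, h = region
    img_w, img_h = image_size
    new_w, new_h = w * scale_x, h * scale_y
    cx, cy = x + w / 2, y + h / 2          # center stays fixed
    nx = max(0.0, cx - new_w / 2)          # clamp left/top at 0
    ny = max(0.0, cy - new_h / 2)
    nw = min(img_w - nx, new_w)            # clamp right/bottom at image edge
    nh = min(img_h - ny, new_h)
    return (nx, ny, nw, nh)
```

Motion regions falling anywhere inside the expanded rectangle (like regions 632 to 634 in FIG. 6) would then feed the same bounding-box correction, letting an undersized detection grow toward the true object region.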
 <Others>
 The above embodiment merely illustrates a configuration example of the present invention. The present invention is not limited to the specific form described above, and various modifications are possible within the scope of its technical idea.
 <Appendix 1>
 An object detection device (100, 200) comprising:
 object detection means (101, 221) for detecting, from a captured image, an object region that is the region of an object;
 detection means (102, 222) for detecting, from the object region of the image detected by the object detection means, a motion region that is a region with motion; and
 correction means (103, 223) for correcting the object region based on the motion region detected by the detection means.
 <Appendix 2>
 An object detection method comprising:
 an object detection step (S302) of detecting, from a captured image, an object region that is the region of an object;
 a detection step (S303, S304) of detecting, from the object region of the image detected in the object detection step, a motion region that is a region with motion; and
 a correction step (S305) of correcting the object region based on the motion region detected in the detection step.
 100: Object detection device 101: Object detection unit 102: Detection unit 103: Correction unit
 10: Camera 200: PC (object detection device)
 210: Input unit 220: Control unit 230: Storage unit 240: Output unit
 221: Object detection unit 222: Detection unit 223: Correction unit
 222-1: Motion detection unit 222-2: Selection unit
 400: Image 410: Human body 420: Object region
 431 to 435: Motion regions 440: Rectangular region (corrected object region)
 500: Image 531, 532: Motion regions 540: Rectangular region (corrected object region)
 600: Image 610: Human body 620: Object region 631 to 634: Motion regions
 640: Rectangular region (corrected object region)
 650: Region (area around the detected object region)

Claims (12)

  1.  An object detection device comprising:
     object detection means for detecting, from a captured image, an object region that is the region of an object;
     detection means for detecting, from the object region of the image detected by the object detection means, a motion region that is a region with motion; and
     correction means for correcting the object region based on the motion region detected by the detection means.
  2.  The object detection device according to claim 1, wherein, when the detection means detects a plurality of motion regions, the correction means corrects the object region based on the smallest region that includes the plurality of motion regions.
  3.  The object detection device according to claim 1 or 2, wherein
     the detection means includes:
       motion detection means for detecting motion regions from the image; and
       selection means for selecting, from the detection result of the motion detection means, a motion region located in the object region, and
     the correction means corrects the object region based on the motion region selected by the selection means.
  4.  The object detection device according to claim 3, wherein the selection means selects a motion region whose center position is included in the object region.
  5.  The object detection device according to claim 3, wherein the selection means selects a motion region that is entirely included in the object region.
  6.  The object detection device according to any one of claims 1 to 5, wherein the correction means corrects the object region based on motion regions, detected by the detection means, whose size is equal to or larger than a threshold.
  7.  The object detection device according to any one of claims 1 to 6, wherein the detection means detects the motion region by a background subtraction method.
  8.  The object detection device according to any one of claims 1 to 6, wherein the detection means detects the motion region by an inter-frame difference method.
  9.  The object detection device according to any one of claims 1 to 8, wherein the detection means detects the motion region from a region consisting of the object region and its surrounding area.
  10.  The object detection device according to any one of claims 1 to 9, wherein the object is a human body.
  11.  An object detection method comprising:
     an object detection step of detecting, from a captured image, an object region that is the region of an object;
     a detection step of detecting, from the object region of the image detected in the object detection step, a motion region that is a region with motion; and
     a correction step of correcting the object region based on the motion region detected in the detection step.
  12.  A program for causing a computer to execute each step of the object detection method according to claim 11.
PCT/JP2021/019505 2020-06-22 2021-05-24 Object detection device and object detection method WO2021261141A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-106754 2020-06-22
JP2020106754A JP7435298B2 (en) 2020-06-22 2020-06-22 Object detection device and object detection method

Publications (1)

Publication Number Publication Date
WO2021261141A1 true WO2021261141A1 (en) 2021-12-30

Family

ID=79244472

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/019505 WO2021261141A1 (en) 2020-06-22 2021-05-24 Object detection device and object detection method

Country Status (2)

Country Link
JP (1) JP7435298B2 (en)
WO (1) WO2021261141A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000020722A (en) * 1998-07-03 2000-01-21 Nec Corp Device and method for extracting object from moving image
JP2002163657A (en) * 2000-11-24 2002-06-07 Matsushita Electric Ind Co Ltd Device and method for recognizing image and recording medium storing program therefor
JP2019036009A (en) * 2017-08-10 2019-03-07 富士通株式会社 Control program, control method, and information processing device
WO2019069581A1 (en) * 2017-10-02 2019-04-11 ソニー株式会社 Image processing device and image processing method


Also Published As

Publication number Publication date
JP2022002019A (en) 2022-01-06
JP7435298B2 (en) 2024-02-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21830197

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21830197

Country of ref document: EP

Kind code of ref document: A1