WO2011108258A1 - Vehicle-surroundings monitoring apparatus and vehicle-surroundings monitoring method - Google Patents

Vehicle-surroundings monitoring apparatus and vehicle-surroundings monitoring method

Info

Publication number
WO2011108258A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection
image feature
vehicle
image
feature amount
Application number
PCT/JP2011/001191
Other languages
French (fr)
Japanese (ja)
Inventor
福田久哉
Original Assignee
Panasonic Corporation (パナソニック株式会社)
Priority claimed from JP2010046184A (published as JP2013093639A)
Application filed by Panasonic Corporation
Publication of WO2011108258A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view
    • B60R1/26: Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view to the rear of the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/98: Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems
    • G08G1/165: Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems
    • G08G1/166: Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Definitions

  • The present invention relates to a vehicle periphery monitoring device and a vehicle periphery monitoring method for detecting a target object, such as a person, from captured images of the vehicle's surroundings.
  • Hereinafter, an object to be monitored is referred to as a “target object”.
  • A typical technique for detecting a target object with an in-vehicle camera determines, by pattern matching against a captured image of the vehicle's surroundings, whether the captured image contains an image region of the target object (see, for example, Patent Document 1).
  • In this technique, image feature amounts of the target object are prepared in advance as a database; the captured image is scanned with a search window, and the image feature amount of each image region cut out by the search window is checked for a match against the database.
  • There is also a technique that uses a radar together with the in-vehicle camera and determines whether an object is a person based on its size (see, for example, Patent Document 2).
  • Objects similar to a person can include structures, road-surface textures, fences, trees, traffic signs, and the like. When such a similar object appears in the captured image, these techniques may erroneously detect it as the target object.
  • To reduce such false detections, the image feature data could be subdivided by object type, orientation, posture, and position, and the matching criteria tightened.
  • However, the more finely the image feature data is subdivided, the larger the image feature database becomes, so apparatus cost and processing load place a limit on how far detection accuracy can be improved. Moreover, the database is finite and the target object may blend into the background, so the detection accuracy of the prior art is limited in any case.
  • An object of the present invention is to provide a vehicle periphery monitoring device and a vehicle periphery monitoring method capable of improving detection accuracy when detecting a target object from a captured image.
  • The vehicle periphery monitoring apparatus of the present invention includes a detection processing unit that detects a target object from captured images of the vehicle's surroundings, and an erroneous detection determination unit that judges a new detection to be a false detection on the condition that an object having an image feature amount approximating that of the object the detection processing unit newly detected as the target object was detected as the target object at the same position in the past.
  • The vehicle periphery monitoring method of the present invention includes a step of detecting a target object from a captured image of the vehicle's surroundings, and a step of judging the new detection to be a false detection on the condition that an object having an image feature amount approximating that of the object newly detected as the target object was detected as the target object at the same position in the past.
  • According to the present invention, a similar object fixed in one place, such as a tree, can be kept from being repeatedly judged to be a moving target object such as a person, so detection accuracy when detecting a target object from a captured image can be improved.
  • Among the drawings: FIG. 1 is a system configuration diagram showing the configuration of a vehicle periphery monitoring system including a vehicle periphery monitoring device according to an embodiment of the present invention; FIG. 5 shows an example of a captured image in the embodiment; FIG. 6 explains an example of a target-object detection technique in the embodiment; FIG. 11 explains the effect of the vehicle periphery monitoring system according to the embodiment; and FIG. 12 shows an example of the false detection confirmation screen in the embodiment.
  • FIG. 1 is a system configuration diagram showing a configuration of a vehicle periphery monitoring system including a vehicle periphery monitoring device according to an embodiment of the present invention.
  • The present embodiment is an example in which the present invention is applied to an ECU (electronic control unit) for an in-vehicle camera, installed inside a vehicle.
  • The vehicle periphery monitoring system 100 includes an imaging unit 200, a position information acquisition unit 300, a detection database (DB) 400, a history database (DB) 500, an output unit 600, and an ECU 700.
  • The imaging unit 200 captures video of the vehicle's surroundings and outputs a captured video signal, frame by frame, to the ECU 700 described later.
  • The imaging unit 200 is a so-called in-vehicle camera attached to the vehicle, for example a digital video camera with a CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) image sensor. More specifically, the imaging unit 200 is, for example, a rear camera installed near the license plate or emblem at the rear of the vehicle, or at its top, to image the area behind the vehicle.
  • Alternatively, the imaging unit 200 is, for example, a side camera installed on a side mirror of the vehicle to image the side of the vehicle.
  • The imaging unit 200 may also be a front camera installed near the emblem at the front of the vehicle or behind the rear-view mirror inside the front window, to image the area ahead of the vehicle.
  • Here, the imaging unit 200 is assumed to be a rear camera.
  • FIG. 2 is a diagram illustrating an example of an installation state of the imaging unit 200 on the vehicle.
  • As shown in FIG. 2, the imaging unit 200 is installed above the license plate at the rear of the vehicle 810, facing down toward the road surface 812 behind the vehicle 810.
  • The height h of the imaging unit 200 above the road surface 812 and the depression angle θ of its optical axis are set so that the range to be monitored can be captured.
  • The position information acquisition unit 300 of FIG. 1 acquires position information indicating the current position of the vehicle or the imaging unit 200, either in response to a request from the ECU 700 described later or periodically, and outputs the acquired position information to the ECU 700.
  • The position information is, for example, position information from the navigation system or from GPS (global positioning system), and includes latitude/longitude information and information on the heading of the vehicle and the imaging unit 200.
  • The detection database 400 stores, as a database, detection information including image feature amounts extracted from images of the object to be monitored.
  • The detection database 400 is composed of a rewritable storage medium such as flash ROM (read-only memory). Here, the target object is assumed to be a person.
  • The image feature amount is information that numerically represents shape features of the target object, such as the shapes of the head, shoulders, and arms, and is used for target-object detection by image recognition, described later.
  • FIG. 3 is a diagram showing an example of the contents of the detection database 400.
  • The detection database 400 stores detection information 820-1, 820-2, ..., 820-n, one entry for each of a plurality of images captured for each combination of subject, shooting direction, subject posture, and subject clothing and belongings.
  • The subjects are adults, elderly people, children, men, women, and the like.
  • The shooting direction is front, sideways, diagonal, and the like.
  • The posture of the subject is upright, walking, stooping, sitting, and the like.
  • The subject's clothing and belongings are Western clothes, Japanese clothes, umbrellas, bags, and the like.
  • Each piece of detection information 820 is given identification information such as “person pattern A” and includes image feature amounts such as a luminance gradient distribution and contour feature amounts. That is, each piece of detection information 820 defines an object registered as a monitoring target.
  • The history database 500 of FIG. 1 accumulates, as history information, pairs output from the ECU 700, each consisting of the detection information of a target object detected by the ECU 700 and the position information of the vehicle (imaging unit 200) at the time of detection.
  • The history database 500 is composed of a rewritable storage medium such as flash ROM, for example.
  • The output unit 600 receives a display video signal, described later, from the ECU 700 and displays video on its screen according to that signal.
  • The output unit 600 also has an input interface for the screen and outputs the content of user operations on the screen to the ECU 700.
  • The output unit 600 is, for example, a navigation system monitor equipped with a touch panel, a monitor in the instrument panel, or a monitor built into the rear-view mirror.
  • The ECU 700 detects a target object from the captured image by comparing the image feature amount extracted from the captured image of the imaging unit 200 with the image feature amount of each piece of detection information stored in the detection database 400. The ECU 700 then uses the output unit 600 to display a result display screen in which the detection result is superimposed on the captured image.
  • The ECU 700 also refers to the history database 500. The ECU 700 judges a new detection to be a false detection on the condition that an object having an image feature amount approximating that of the object newly detected as the target object was detected as the target object at the same position in the past. This is because a person is an object that moves and changes shape, and is therefore unlikely to be photographed with the same appearance at the same position at different times. The ECU 700 then reflects the false-detection judgment on the result display screen and corrects the contents of the detection information in the detection database 400 so that the same false detection is not repeated.
  • Here, an approximating image feature amount means one close enough that the two would generally be recognized as the same object, and the same position means a positional range within which the object would generally be recognized as not having moved.
  • The ECU 700 has a detection processing unit 710, an erroneous detection determination unit 720, and a database correction unit 730.
  • The detection processing unit 710 receives a captured image from the imaging unit 200 and extracts the image feature amount of each region portion of the captured image. The detection processing unit 710 then searches the detection database 400 for the extracted image feature amount, and when detection information with an image feature amount approximating the extracted one exists, it determines that the target object is present in the corresponding region portion. For example, for a plurality of rectangular regions obtained by further subdividing the region portion, the detection processing unit 710 calculates the ratio of rectangular regions in which the similarity between the edge shape in the rectangular region and the edge shape in the detection information exceeds a predetermined value.
  • When this ratio exceeds a certain threshold, the detection processing unit 710 determines that the image feature amount of the region portion approximates that of the detection information. The detection processing unit 710 then outputs the detection result, including the position of the target object's image region, together with the captured video to the erroneous detection determination unit 720. In addition, when the detection processing unit 710 determines that a target object is present (hereinafter, a “target-object detection”), it outputs the extracted image feature amount and the identification information of the corresponding detection information to the erroneous detection determination unit 720, together with the detection result and captured video.
  • The erroneous detection determination unit 720 generates a display video signal for a result display screen in which the detection result of the detection processing unit 710 is superimposed on the captured video received from the detection processing unit 710, and outputs it to the output unit 600. When the detection processing unit 710 makes a target-object detection, the erroneous detection determination unit 720 also acquires the current position information from the position information acquisition unit 300 and outputs the pair of the detected object's image feature amount and the acquired position information to the history database 500.
  • Before doing so, however, the erroneous detection determination unit 720 searches the history database 500 for the pair of image feature amount and position information of the object the detection processing unit 710 detected as the target object. When a pair with an approximating image feature amount and the same position information exists, the erroneous detection determination unit 720 judges the target-object detection to be a false detection. When it judges a false detection, the erroneous detection determination unit 720 does not superimpose the detection result on the captured video.
  • Further, when it judges a false detection, the erroneous detection determination unit 720 outputs the identification information and the image feature amount (that is, the image feature amount of the similar object) received from the detection processing unit 710 to the database correction unit 730 and instructs it to correct the detection database 400.
  • The database correction unit 730 corrects the detection information indicated by the identification information received from the erroneous detection determination unit 720 so that false detections caused by the image feature amount received from the erroneous detection determination unit 720 are not repeated. That is, the database correction unit 730 corrects the detection information that was the basis of the false detection so that false detections due to the image feature amount of the erroneously detected similar object are not repeated.
  • The ECU 700 can be realized by a CPU (central processing unit), a storage medium such as a ROM storing a control program, working memory such as RAM (random access memory), a storage medium such as a hard disk for storing various data, a communication circuit, and the like. In this case, the functions of the units described above are realized by the CPU executing the control program.
  • When the vehicle periphery monitoring system 100 having this configuration monitors a person as the target object, it can judge a detection to be false when an object having an image feature amount approximating that of the newly detected object was detected at the same position in the past.
  • FIG. 4 is a flowchart showing the operation of the ECU 700.
  • First, in step S1100, the detection processing unit 710 reads all the detection information stored in the detection database 400 (hereinafter, “dictionary data”) and saves it in an internal memory.
  • In step S1200, the detection processing unit 710 reads the image data of a new captured image from the imaging unit 200 and saves it in an internal memory.
  • FIG. 5 is a diagram showing an example of a captured image. As shown in FIG. 5, the captured image 830 is an image showing the scene behind the vehicle.
  • In step S1300 of FIG. 4, the detection processing unit 710 detects the target object by image recognition on the captured image, using the read image data and dictionary data.
  • FIG. 6 is a diagram for explaining a pattern matching technique as an example of an object detection technique based on image recognition.
  • As shown in FIG. 6, the detection processing unit 710 scans a search window 842 over the captured image 841 to judge whether each window position contains an image region of the target object.
  • For each position of the search window 842, the detection processing unit 710 extracts the image feature amount of the image region inside the window and compares it with the image feature amount of each piece of detection information in the dictionary data read in step S1100. If the correlation between these image feature amounts is above a certain level, the detection processing unit 710 determines that the image region inside the search window 842 at that time (region 843 in the example of FIG. 6) is an image region of the target object.
  • When the captured image contains a similar object, however, the detection processing unit 710 may erroneously detect the similar object as the target object.
  • FIG. 7 is a diagram showing a case where the target object is correctly detected. As illustrated in FIG. 7, when the captured image 851 contains no similar object and contains a person 852, who is the target object, the detection processing unit 710 determines that the image region 853 of the person 852 is an image region of the target object.
  • FIG. 8 is a diagram showing a case where a similar object is mistakenly detected as a target object.
  • In this case, the detection processing unit 710 determines that the image region 863 of the similar object 862 is an image region of the target object.
  • In step S1400 of FIG. 4, the detection processing unit 710 determines whether a target object was detected from the captured image. If no target object was detected (S1400: NO), the detection processing unit 710 outputs the detection result and the captured video to the erroneous detection determination unit 720 and proceeds to step S1500. If a target object was detected (S1400: YES), the detection processing unit 710 outputs the detection result, the captured video, the image feature amount of the detected object, and the identification information of the detection information on which the detection was based to the erroneous detection determination unit 720, and proceeds to step S1600. Note that this target-object detection may be a false detection of a similar object (see FIG. 8).
  • In step S1500, the erroneous detection determination unit 720 causes the output unit 600 to display a result display screen in which the detection result of the detection processing unit 710 is superimposed on the captured video received from the detection processing unit 710.
  • The result display screen is, for example, the captured video with a marker added to the target object's image region.
  • The erroneous detection determination unit 720 also outputs the pair of image feature amount and position information for the target-object detection to the history database 500.
  • The history database 500 stores the received pair of image feature amount and position information as history information.
  • FIG. 9 is a diagram showing an example of the contents of history information.
  • As shown in FIG. 9, the history information 870 includes a detection-region feature amount 871 (the image feature amount of the target object's image region, such as a luminance gradient) and additional information 872 including the camera position, the position of the detection region within the screen (position information), and the shooting date and time.
  • In step S1700 of FIG. 4, the detection processing unit 710 determines whether an instruction to end processing has been given, for example by a user power-off operation. If no end instruction has been given (S1700: NO), the detection processing unit 710 returns to step S1200 and continues monitoring the vehicle's surroundings.
  • Meanwhile, in step S1600, the erroneous detection determination unit 720 receives notification of the target-object detection from the detection processing unit 710 and acquires position information from the position information acquisition unit 300.
  • This position information is the position information at the time the captured image being processed was taken, and indicates the current position and heading of the vehicle (imaging unit 200).
  • In step S1800, the erroneous detection determination unit 720 reads from the history database 500 all history information that includes the same position information as that acquired in step S1600 (hereinafter, “history data”), and saves it to an internal memory. Note that the processing from step S1800 to step S2000, described later, is executed for each detected object.
  • In step S1900, the erroneous detection determination unit 720 determines whether history data could be acquired and whether the acquired history data contains history information with an image feature amount approximating the one received from the detection processing unit 710. That is, the erroneous detection determination unit 720 compares the new detection result with the history data.
  • If not (S1900: NO), the erroneous detection determination unit 720 proceeds to step S1500. The case where history data could not be acquired means that no history information includes the same position information.
  • If history data could be acquired and the acquired history data contains history information with an image feature amount approximating the one received from the detection processing unit 710 (S1900: YES), the erroneous detection determination unit 720 proceeds to step S2000. At this time, the erroneous detection determination unit 720 invalidates the target-object detection and outputs the identification information and image feature amount received from the detection processing unit 710 to the database correction unit 730.
  • In step S2000, the database correction unit 730 registers the image feature amount of the image region where the false detection occurred (hereinafter, the “false detection region”) in the dictionary data of the detection database 400. Specifically, the database correction unit 730 reads from the detection database 400 the detection information indicated by the identification information received from the erroneous detection determination unit 720 and compares it with the image feature amount received from the detection processing unit 710. In other words, the database correction unit 730 compares the detection information that was the basis of the false detection with the image feature amount of the erroneously detected similar object. Based on the comparison result, the database correction unit 730 then corrects that detection information so that the same similar object is not erroneously detected again.
  • FIG. 10 is a diagram showing an example of how detection information is corrected.
  • As shown in FIG. 10, the database correction unit 730 extracts the difference between the image feature amount of the detection information 891 and the image feature amount 892 of the similar object.
  • In the figure, the marked entries (e.g., “××”) correspond to the difference.
  • The database correction unit 730 adds the extracted difference to additional feature amount information 893 prepared for the detection information 891, registering it as a non-detection image feature amount that should not be detected for the target object.
  • The storage location of the additional feature amount information 893 is also added to the detection information 891; in this example, “address 100” is that location.
  • Thereafter, when detection information contains the location of additional feature amount information, the detection processing unit 710 also refers to the corresponding additional feature amount information.
  • If the correlation between the non-detection image feature amount in the additional feature amount information and the image feature amount of the image region being judged is high, the detection processing unit 710 determines that the image region is not an image region of the target object. By correcting the detection information in this way, the criterion for judging a match with the target object's image feature amount becomes stricter, and the false detection is unlikely to be repeated.
  • The detection processing unit 710 then reads the dictionary data from the detection database 400 again, saves it in its internal memory again, and proceeds to step S1500.
  • Alternatively, the detection processing unit 710 may monitor whether the detection database 400 has been corrected and re-read the dictionary data whenever a correction is made, or it may re-read the dictionary data upon receiving notification of a dictionary data correction from the erroneous detection determination unit 720 or the database correction unit 730.
  • In this way, when an object with an approximating image feature amount was detected as the target object at the same position in the past, the ECU 700 judges the new detection to be a false detection. When it judges a false detection, the ECU 700 can correct the detection result and the dictionary data so that erroneous detection results are not presented and false detections of similar objects fixed in place are not repeated. A processing-loop sketch following this list summarizes these steps.
  • How the vehicle periphery monitoring system 100 can maintain target-object detection accuracy while preventing repeated false detections will now be described with reference to FIG. 11.
  • As shown in FIG. 11A, suppose that an image region 881-1 of a switchboard is extracted as a person's image region from a captured image 880-1 taken on a certain day. This is actually a false detection. Then, as shown in FIG. 11B, the switchboard's image region 881-2 is again erroneously extracted as a person's image region from a captured image 880-2 taken at the same place on a later date. This is because the detection information that was the basis of the false detection remains in the detection database 400.
  • By the operation described above, the ECU 700 corrects the detection information that was the basis of the false detection so that the switchboard's image region 881-2 is not erroneously detected as a person's image region again. This avoids repeated false detections.
  • Likewise, suppose a traffic sign's image region 881-3 is extracted as a person's image region from a captured image 880-3 taken at a certain place on a certain day. Since this is actually a false detection, when the false detection recurs on a later date, the detection information that was its basis is corrected in the same way.
  • As described above, the vehicle periphery monitoring system 100 according to the present embodiment judges a detection to be false when an object having an image feature amount approximating that of the object newly detected as the target object was detected as the target object at the same position in the past.
  • The vehicle periphery monitoring system 100 according to the present embodiment can therefore keep a similar object from being repeatedly judged to be the target object at the same place, and can improve the accuracy of detecting the target object from captured images.
  • Further, when the vehicle periphery monitoring system 100 judges a detection to be false, it corrects the detection information used for the detection so that the false detection is not repeated for the same similar object. The vehicle periphery monitoring system 100 according to the present embodiment can thereby prevent the false detection itself from recurring, reduce the number of detections that would have to be judged false, and reduce the processing load.
  • The vehicle periphery monitoring system 100 may also judge a detection to be false only after determining, a predetermined number of times (twice or more) for the same object, that an object with an approximating image feature amount was detected at the same position in the past.
  • In this case, the ECU 700 may add the number of such determinations to the history information and judge the detection to be false on the condition that the count added to the history information exceeds an arbitrary threshold (repetition count).
  • The vehicle periphery monitoring system 100 may also judge whether each object is at the same position by matching the current captured image against past captured images, without using the heading of the vehicle (imaging unit) as position information. In this case, the ECU 700 needs to save captured images as history data. This makes information on the heading of the vehicle (imaging unit) unnecessary and improves versatility.
  • The vehicle periphery monitoring system 100 may also add color information or the like to the image feature amounts used for target-object detection, which can further improve detection accuracy.
  • After correcting detection information, the vehicle periphery monitoring system 100 may delete the corresponding history information from the history database. This reduces the memory required for the history database and allows a smaller, cheaper apparatus.
  • Before judging a detection to be false, the vehicle periphery monitoring system 100 may also ask the user whether it is in fact a false detection.
  • In this case, the erroneous detection determination unit 720 determines whether the following condition is satisfied: history data could be acquired, and the acquired history data contains history information with an image feature amount approximating the one received from the detection processing unit 710 (S1900 in FIG. 4). When it determines that the condition is satisfied (S1900: YES), the erroneous detection determination unit 720 generates a display video signal for a false detection confirmation screen, which asks the user to confirm whether the detection may be judged false, and outputs it to the output unit 600.
  • FIG. 12 is a diagram showing an example of a false detection confirmation screen.
  • The false detection confirmation screen 880 displays a marker 883 indicating the image region 882 of the object detected as the target object, superimposed on the captured image 881. It also displays a correction key 884 for correcting the dictionary data, superimposed on the captured image 881. The user visually checks the image region 882 indicated by the marker 883 and presses the correction key 884 upon recognizing that it shows a similar object.
  • The false detection determination unit 720 judges the target-object detection to be a false detection when the correction key 884 is pressed within a predetermined time after the false detection confirmation screen 880 is first displayed, and judges it not to be a false detection when the correction key 884 is not pressed within that time.
  • Only when it judges the target-object detection to be a false detection does the erroneous detection determination unit 720 invalidate the detection and output the identification information and image feature amount received from the detection processing unit 710 to the database correction unit 730.
  • The vehicle periphery monitoring system 100 may also decide whether to query the user based on, for example, the degree of match between image feature amounts, and display the false detection confirmation screen (correction key) only when it decides to query the user.
  • As described above, the vehicle periphery monitoring device and vehicle periphery monitoring method according to the present invention are useful as a vehicle periphery monitoring device and method that can improve detection accuracy when detecting a target object from captured images.
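To tie the steps of FIG. 4 together, the following is a minimal, hypothetical sketch of the processing loop in Python. The patent specifies only the behaviour; every component name, method, and interface below is an assumption introduced for illustration, not part of the disclosure.

```python
def monitoring_loop(camera, positioner, detection_db, history_db, display,
                    detect, approximates, stop_requested):
    # Hypothetical rendering of the FIG. 4 flow; all arguments stand in for
    # the units the patent describes (imaging unit, position acquisition,
    # detection DB, history DB, output unit, and the ECU's processing).
    dictionary = detection_db.read_all()                    # S1100: read dictionary data
    while not stop_requested():                             # S1700: end-of-processing check
        frame = camera.read_frame()                         # S1200: new captured image
        detections = detect(frame, dictionary)              # S1300/S1400: pattern matching
        if detections:
            position = positioner.current()                 # S1600: current position/heading
            for det in detections:
                history = history_db.at_position(position)  # S1800: history at this position
                if any(approximates(h.feature, det.feature) for h in history):
                    # S1900 YES: judged a false detection, so suppress the result
                    detection_db.add_non_detection(det.info_id, det.feature)  # S2000
                    dictionary = detection_db.read_all()    # re-read the corrected dictionary
                else:
                    history_db.store(det.feature, position) # accumulate history information
                    display.show(frame, det)                # S1500: result display screen
        else:
            display.show(frame, None)                       # S1500: no marker
```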

Abstract

Provided is a vehicle-surroundings monitoring apparatus that can improve detection precision when detecting a subject to be monitored from captured images. An ECU (700) of a vehicle-surroundings monitoring system (100) comprises a detection processing unit (710) that detects the subject to be monitored from captured images of the surroundings of a vehicle, and a false-detection evaluation unit (720) that judges a new detection by the detection processing unit (710) to be a false detection, on the condition that an object having an image feature amount closely resembling that of the newly detected object was detected in the past as a subject to be monitored at the same position.

Description

Vehicle periphery monitoring device and vehicle periphery monitoring method
 The present invention relates to a vehicle periphery monitoring device and a vehicle periphery monitoring method for detecting a target object, such as a person, from captured images of the vehicle's surroundings.
 While a vehicle is traveling, avoiding collisions, particularly with people, is important. For this reason, there are conventional apparatuses that use an in-vehicle camera to detect an object to be monitored (hereinafter a “target object”), such as a person in the vehicle's surroundings, and issue a warning to the driver.
 A typical technique for detecting a target object with an in-vehicle camera determines, by pattern matching against a captured image of the vehicle's surroundings, whether the captured image contains an image region of the target object (see, for example, Patent Document 1). In this technique, image feature amounts of the target object are prepared in advance as a database; the captured image is scanned with a search window, and the image feature amount of each image region cut out by the search window is checked for a match. There is also a technique that uses a radar together with the in-vehicle camera and determines whether an object is a person based on its size (see, for example, Patent Document 2).
 Patent Document 1: JP 2009-070344 A. Patent Document 2: JP 2005-157765 A.
 However, when the captured image contains an object that has the same image feature amount and size as the target object but is not the target object (hereinafter a “similar object”), these conventional techniques may erroneously detect the similar object as the target object. Objects similar to a person can include structures, road-surface textures, fences, trees, traffic signs, and the like.
 In the prior art, to reduce such false detections and improve detection accuracy, the image feature data could be subdivided by object type, orientation, posture, and position, and the matching criteria tightened. However, the more finely the image feature data is subdivided, the larger the image feature database becomes, so apparatus cost and processing load place a limit on how far detection accuracy can be improved. Moreover, the database is finite and the target object may blend into the background, so the detection accuracy of the prior art is limited in any case.
 An object of the present invention is to provide a vehicle periphery monitoring device and a vehicle periphery monitoring method capable of improving detection accuracy when detecting a target object from a captured image.
 The vehicle periphery monitoring apparatus of the present invention includes a detection processing unit that detects a target object from captured images of the vehicle's surroundings, and an erroneous detection determination unit that judges a new detection to be a false detection on the condition that an object having an image feature amount approximating that of the object the detection processing unit newly detected as the target object was detected as the target object at the same position in the past.
 The vehicle periphery monitoring method of the present invention includes a step of detecting a target object from a captured image of the vehicle's surroundings, and a step of judging the new detection to be a false detection on the condition that an object having an image feature amount approximating that of the object newly detected as the target object was detected as the target object at the same position in the past.
 According to the present invention, a similar object fixed in one place, such as a tree, can be kept from being repeatedly judged to be a moving target object such as a person, so detection accuracy when detecting a target object from a captured image can be improved.
 Brief description of the drawings:
 FIG. 1: System configuration diagram showing the configuration of a vehicle periphery monitoring system including a vehicle periphery monitoring device according to an embodiment of the present invention.
 FIG. 2: Diagram showing an example of how the imaging unit is installed on the vehicle in the embodiment.
 FIG. 3: Diagram showing an example of the contents of the detection database in the embodiment.
 FIG. 4: Flowchart showing the operation of the ECU according to the embodiment.
 FIG. 5: Diagram showing an example of a captured image in the embodiment.
 FIG. 6: Diagram for explaining an example of a target-object detection technique in the embodiment.
 FIG. 7: Diagram showing a case where the target object in the embodiment is detected as the target object.
 FIG. 8: Diagram showing a case where a similar object in the embodiment is erroneously detected as the target object.
 FIG. 9: Diagram showing an example of the contents of history information in the embodiment.
 FIG. 10: Diagram showing an example of how detection information is corrected in the embodiment.
 FIG. 11: Diagram explaining the effect of the vehicle periphery monitoring system according to the embodiment.
 FIG. 12: Diagram showing an example of the false detection confirmation screen in the embodiment.
 Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings.
 FIG. 1 is a system configuration diagram showing the configuration of a vehicle periphery monitoring system including a vehicle periphery monitoring device according to an embodiment of the present invention. The present embodiment is an example in which the present invention is applied to an ECU (electronic control unit) for an in-vehicle camera, installed inside a vehicle.
 In FIG. 1, the vehicle periphery monitoring system 100 includes an imaging unit 200, a position information acquisition unit 300, a detection database (DB) 400, a history database (DB) 500, an output unit 600, and an ECU 700.
 The imaging unit 200 captures video of the vehicle's surroundings and outputs a captured video signal, frame by frame, to the ECU 700 described later. The imaging unit 200 is a so-called in-vehicle camera attached to the vehicle, for example a digital video camera with a CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) image sensor. More specifically, the imaging unit 200 is, for example, a rear camera installed near the license plate or emblem at the rear of the vehicle, or at its top, to image the area behind the vehicle. Alternatively, the imaging unit 200 is, for example, a side camera installed on a side mirror of the vehicle to image the side of the vehicle, or a front camera installed near the emblem at the front of the vehicle or behind the rear-view mirror inside the front window to image the area ahead of the vehicle. Here, the imaging unit 200 is assumed to be a rear camera.
 FIG. 2 is a diagram showing an example of how the imaging unit 200 is installed on the vehicle.
 As shown in FIG. 2, the imaging unit 200 is installed above the license plate at the rear of the vehicle 810, facing down toward the road surface 812 behind the vehicle 810. The height h of the imaging unit 200 above the road surface 812 and the depression angle θ of its optical axis are set so that the range to be monitored can be captured.
 The position information acquisition unit 300 of FIG. 1 acquires position information indicating the current position of the vehicle or the imaging unit 200, either in response to a request from the ECU 700 described later or periodically, and outputs the acquired position information to the ECU 700. The position information is, for example, position information from the navigation system or from GPS (global positioning system), and includes latitude/longitude information and information on the heading of the vehicle and the imaging unit 200.
 The detection database 400 stores, as a database, detection information including image feature amounts extracted from images of the object to be monitored. The detection database 400 is composed of a rewritable storage medium such as flash ROM (read-only memory). Here, the target object is assumed to be a person. The image feature amount is information that numerically represents shape features of the target object, such as the shapes of the head, shoulders, and arms, and is used for target-object detection by image recognition, described later.
 FIG. 3 is a diagram showing an example of the contents of the detection database 400.
 As shown in FIG. 3, the detection database 400 stores detection information 820-1, 820-2, ..., 820-n, one entry for each of a plurality of images captured for each combination of subject, shooting direction, subject posture, and subject clothing and belongings. The subjects are adults, elderly people, children, men, women, and the like. The shooting direction is front, sideways, diagonal, and the like. The posture of the subject is upright, walking, stooping, sitting, and the like. The subject's clothing and belongings are Western clothes, Japanese clothes, umbrellas, bags, and the like. Each piece of detection information 820 is given identification information such as “person pattern A” and includes image feature amounts such as a luminance gradient distribution and contour feature amounts. That is, each piece of detection information 820 defines an object registered as a monitoring target.
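As a rough illustration only, one way the detection information entries of FIG. 3 might be represented in code follows; the field names and the Python representation are our assumptions, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionInfo:
    # One dictionary entry of the detection database (cf. FIG. 3).
    info_id: str        # identification information, e.g. "person pattern A"
    gradient_hist: list # luminance gradient distribution
    contour: list       # contour feature amounts
    non_detection: list = field(default_factory=list)
    # non-detection image feature amounts, added later by database correction

detection_db = [
    DetectionInfo("person pattern A",
                  gradient_hist=[0.12, 0.30, 0.25, 0.33],  # dummy values
                  contour=[0.7, 0.1, 0.4]),                # dummy values
]
```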
 The history database 500 of FIG. 1 accumulates, as history information, pairs output from the ECU 700, each consisting of the detection information of a target object detected by the ECU 700 and the position information of the vehicle (imaging unit 200) at the time of detection. The history database 500 is composed of a rewritable storage medium such as flash ROM, for example.
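Similarly, a history entry pairing a detected feature with position information (cf. FIG. 9, described later) might look like the following sketch; again, the structure is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class HistoryRecord:
    # One entry of the history database (cf. FIG. 9).
    feature: list          # image feature amount of the detected image region
    camera_position: tuple # vehicle/imaging-unit position at detection time,
                           # e.g. (x, y) in a common frame; the patent uses
                           # latitude/longitude plus heading
    region_position: tuple # position of the detection region within the screen
    shot_at: datetime      # shooting date and time
```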
 The output unit 600 receives a display video signal, described later, from the ECU 700 and displays video on its screen according to that signal. The output unit 600 also has an input interface for the screen and outputs the content of user operations on the screen to the ECU 700. The output unit 600 is, for example, a navigation system monitor equipped with a touch panel, a monitor in the instrument panel, or a monitor built into the rear-view mirror.
 The ECU 700 detects a target object from the captured image by comparing the image feature amount extracted from the captured image of the imaging unit 200 with the image feature amount of each piece of detection information stored in the detection database 400. The ECU 700 then uses the output unit 600 to display a result display screen in which the detection result is superimposed on the captured image.
 The ECU 700 also refers to the history database 500. The ECU 700 judges a new detection to be a false detection on the condition that an object having an image feature amount approximating that of the object newly detected as the target object was detected as the target object at the same position in the past. This is because a person is an object that moves and changes shape, and is therefore unlikely to be photographed with the same appearance at the same position at different times. The ECU 700 then reflects the false-detection judgment on the result display screen and corrects the contents of the detection information in the detection database 400 so that the same false detection is not repeated.
 Here, an approximating image feature amount means one close enough that the two would generally be recognized as the same object, and the same position means a positional range within which the object would generally be recognized as not having moved.
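The patent leaves the exact notions of "approximating" and "the same position" open. A minimal sketch of how such tolerance checks might be implemented follows; the distance measures and tolerance values are pure assumptions.

```python
import math

FEATURE_TOL = 0.1     # assumed: max normalized distance for "approximating" features
POSITION_TOL_M = 2.0  # assumed: max displacement in metres for "the same position"

def approximates(f1, f2, tol=FEATURE_TOL):
    # Features count as "approximating" if their normalized Euclidean distance is small.
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    norm = math.sqrt(sum(a * a for a in f1)) or 1.0
    return dist / norm <= tol

def same_position(p1, p2, tol=POSITION_TOL_M):
    # Positions (x, y) in a common metric frame count as "the same" within tol.
    return math.dist(p1, p2) <= tol
```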
 The ECU 700 has a detection processing unit 710, an erroneous detection determination unit 720, and a database correction unit 730.
 The detection processing unit 710 receives a captured image from the imaging unit 200 and extracts the image feature amount of each region portion of the captured image. The detection processing unit 710 then searches the detection database 400 for the extracted image feature amount, and when detection information with an image feature amount approximating the extracted one exists, it determines that the target object is present in the corresponding region portion. For example, for a plurality of rectangular regions obtained by further subdividing the region portion, the detection processing unit 710 calculates the ratio of rectangular regions in which the similarity between the edge shape in the rectangular region and the edge shape in the detection information exceeds a predetermined value. When this ratio exceeds a certain threshold, the detection processing unit 710 determines that the image feature amount of the region portion approximates that of the detection information. The detection processing unit 710 then outputs the detection result, including the position of the target object's image region, together with the captured video to the erroneous detection determination unit 720. In addition, when the detection processing unit 710 determines that a target object is present (hereinafter, a “target-object detection”), it outputs the extracted image feature amount and the identification information of the corresponding detection information to the erroneous detection determination unit 720, together with the detection result and captured video.
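The patent does not specify the edge-shape measure or the thresholds. The following sketch shows the ratio test over sub-rectangles, using cosine similarity as a stand-in measure and assumed threshold values.

```python
def region_matches(region_rects, info_rects, sim_threshold=0.8, ratio_threshold=0.6):
    # region_rects / info_rects: one edge-shape feature vector per sub-rectangle,
    # aligned pairwise between the candidate region and the detection information.
    def similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    # Ratio of sub-rectangles whose edge-shape similarity exceeds the predetermined value.
    hits = sum(1 for r, i in zip(region_rects, info_rects)
               if similarity(r, i) > sim_threshold)
    return hits / len(region_rects) >= ratio_threshold
```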
 The erroneous detection determination unit 720 generates a display video signal for a result display screen in which the detection result of the detection processing unit 710 is superimposed on the captured video received from the detection processing unit 710, and outputs it to the output unit 600. When the detection processing unit 710 makes a target-object detection, the erroneous detection determination unit 720 also acquires the current position information from the position information acquisition unit 300 and outputs the pair of the detected object's image feature amount and the acquired position information to the history database 500.
 Before doing so, however, the erroneous detection determination unit 720 searches the history database 500 for the pair of image feature amount and position information of the object the detection processing unit 710 detected as the target object. When a pair with an approximating image feature amount and the same position information exists, the erroneous detection determination unit 720 judges the target-object detection to be a false detection. When it judges a false detection, the erroneous detection determination unit 720 does not superimpose the detection result on the captured video. It also outputs the identification information and image feature amount (that is, the image feature amount of the similar object) received from the detection processing unit 710 to the database correction unit 730 and instructs it to correct the detection database 400.
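The core false-detection condition reduces to a lookup, sketched below against the hypothetical HistoryRecord structure from earlier; the helper functions are the ones sketched above.

```python
def is_false_detection(new_feature, new_position, history, approximates, same_position):
    # True if an approximating feature was already detected as the target object
    # at the same position in the past -- the patent's false-detection condition.
    return any(approximates(rec.feature, new_feature) and
               same_position(rec.camera_position, new_position)
               for rec in history)
```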
 The database correction unit 730 corrects the detection information indicated by the identification information received from the erroneous detection determination unit 720 so that erroneous detections caused by the image feature amount received from the erroneous detection determination unit 720 are not repeated. In other words, the database correction unit 730 corrects the detection information that gave rise to the erroneous detection so that erroneous detections attributable to the image feature amount of the erroneously detected similar object do not recur.
 The ECU 700 can be implemented with a CPU (central processing unit), a storage medium such as a ROM holding a control program, working memory such as RAM (random access memory), a storage medium such as a hard disk for storing various data, a communication circuit, and the like. In this case, the functions of the units described above are realized by the CPU executing the control program.
 When monitoring with a person as the object, the vehicle periphery monitoring system 100 configured in this way can determine that a detection is erroneous when an object having an image feature amount approximating that of the newly detected object was detected at the same position in the past.
 Next, the operation of the ECU 700 will be described.
 FIG. 4 is a flowchart showing the operation of the ECU 700.
 First, in step S1100, the detection processing unit 710 reads all the detection information stored in the detection database 400 (hereinafter, "dictionary data") and saves it in internal memory.
 Then, in step S1200, the detection processing unit 710 reads the image data of a new captured image from the imaging unit 200 and saves it in internal memory.
 FIG. 5 shows an example of a captured image. As shown in FIG. 5, the captured image 830 shows the scene behind the vehicle.
 Then, in step S1300 of FIG. 4, the detection processing unit 710 uses the read image data and dictionary data to detect objects in the captured image by image recognition.
 FIG. 6 illustrates pattern matching as one example of an object detection technique based on image recognition.
 As shown in FIG. 6, the detection processing unit 710 scans a search window 842 over the captured image 841 to judge whether each location is an image region of the object. At each position of the search window 842, the detection processing unit 710 extracts the image feature amount of the image region inside the window and compares it with the image feature amount of each item of detection information in the dictionary data read in step S1100. If the correlation between these image feature amounts is at or above a fixed level, the detection processing unit 710 judges the image region inside the search window 842 at that time (region 843 in the example of FIG. 6) to be an image region of the object.
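 The scan of FIG. 6 can be sketched as the nested loop below. The window size, stride, feature extractor, and correlation threshold are assumptions introduced for this sketch.

```python
import numpy as np

def scan_for_objects(image, dictionary, extract_feature,
                     win=(128, 64), stride=16, corr_threshold=0.9):
    """Slide a search window over a 2-D grayscale image and collect candidate
    object regions as (x, y, width, height) tuples.

    dictionary: list of template feature vectors (the dictionary data);
    extract_feature: maps an image patch to a 1-D feature vector.
    """
    hits = []
    win_h, win_w = win
    for y in range(0, image.shape[0] - win_h + 1, stride):
        for x in range(0, image.shape[1] - win_w + 1, stride):
            feat = extract_feature(image[y:y + win_h, x:x + win_w])
            for template in dictionary:
                # Pearson correlation as the "fixed level of correlation"
                if np.corrcoef(feat, template)[0, 1] >= corr_threshold:
                    hits.append((x, y, win_w, win_h))
                    break
    return hits
```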
 However, when a similar object appears in the captured image, the detection processing unit 710 may erroneously detect the similar object as an object.
 FIG. 7 shows a case where an object is correctly detected as such. As shown in FIG. 7, when the captured image 851 contains no similar object but does contain a person 852, who is an object, the detection processing unit 710 judges the image region 853 of the person 852 to be an image region of the object.
 FIG. 8 shows a case where a similar object is erroneously detected as an object. As shown in FIG. 8, when the captured image 861 contains a similar object 862, such as a traffic sign overlapping a tree, the detection processing unit 710 judges the image region 863 of the similar object 862 to be an image region of the object.
 Then, in step S1400 of FIG. 4, the detection processing unit 710 judges whether an object was detected in the captured image. If no object was detected (S1400: NO), the detection processing unit 710 outputs the detection result and the captured video to the erroneous detection determination unit 720 and proceeds to step S1500. If an object was detected (S1400: YES), the detection processing unit 710 outputs the detection result, the captured video, the image feature amount of the detected object, and the identification information of the detection information on which the detection was based to the erroneous detection determination unit 720, and proceeds to step S1600. Note that this object detection may be an erroneous detection of a similar object (see FIG. 8).
 In step S1500, the erroneous detection determination unit 720 causes the output unit 600 to display a result display screen in which the detection result of the detection processing unit 710 is superimposed on the captured video received from the detection processing unit 710. The result display screen is, for example, the captured video with a marker placed on the image region of the object. The erroneous detection determination unit 720 also outputs the pair of the image feature amount and the position information for the object detection to the history database 500, which accumulates each such pair as history information.
 FIG. 9 shows an example of the contents of history information.
 As shown in FIG. 9, the history information 870 records the feature amount 871 of the detection region (the image feature amount of the object's image region), such as a luminance gradient, together with additional information 872 comprising the position information (the camera position and the position of the detection region within the frame) and the shooting date and time.
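 The record of FIG. 9 maps naturally onto a small structure such as the following sketch; the field names are assumptions introduced for illustration, not terms from the specification.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Sequence, Tuple

@dataclass
class HistoryRecord:
    """One entry of history information 870 (field names are assumptions)."""
    feature: Sequence[float]              # detection-region feature amount 871, e.g. luminance gradients
    camera_position: Tuple[float, float]  # vehicle/camera position at capture time
    region_position: Tuple[int, int]      # detection-region position within the frame
    captured_at: datetime                 # shooting date and time
```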
 Then, in step S1700 of FIG. 4, the detection processing unit 710 judges whether termination of processing has been instructed, for example by a power-off operation by the user. If termination has not been instructed (S1700: NO), the detection processing unit 710 returns to step S1200 and continues monitoring the surroundings of the vehicle.
 Meanwhile, in step S1600, the erroneous detection determination unit 720, upon receiving notification of the object detection from the detection processing unit 710, acquires position information from the position information acquisition unit 300. This position information is the position information at the time the captured image being processed was taken, that is, information indicating the current position and orientation of the vehicle (imaging unit 200).
 Then, in step S1800, the erroneous detection determination unit 720 reads from the history database 500 all history information containing the same position information as that acquired in step S1600 (hereinafter, "history data") and saves it in internal memory. The processing from step S1800 to step S2000, described below, is executed for each detected object.
 Then, in step S1900, the erroneous detection determination unit 720 judges whether history data could be acquired and, if so, whether the acquired history data contains history information with an image feature amount approximating the image feature amount received from the detection processing unit 710. That is, the erroneous detection determination unit 720 compares the new detection result with the history data.
 If history data could not be acquired, or if the acquired history data contains no history information with an approximating image feature amount (S1900: NO), the erroneous detection determination unit 720 proceeds to step S1500. History data cannot be acquired when no history information contains the same position information. If history data could be acquired and it contains history information with an image feature amount approximating the one received from the detection processing unit 710 (S1900: YES), the erroneous detection determination unit 720 proceeds to step S2000. At this point, the erroneous detection determination unit 720 invalidates the object detection and outputs the identification information and the image feature amount received from the detection processing unit 710 to the database correction unit 730.
 Then, in step S2000, the database correction unit 730 registers the image feature amount of the image region where the erroneous detection occurred (hereinafter, "erroneous detection region") in the dictionary data of the detection database 400. Specifically, the database correction unit 730 reads from the detection database 400 the detection information indicated by the identification information received from the erroneous detection determination unit 720 and compares it with the image feature amount received from the detection processing unit 710. In other words, the database correction unit 730 compares the detection information on which the erroneous detection was based with the image feature amount of the erroneously detected similar object. Based on the comparison result, the database correction unit 730 corrects that detection information so that the same similar object is not erroneously detected again.
 FIG. 10 shows an example of how detection information is corrected.
 For example, suppose that the detection information 891 shown in FIG. 10(A) caused the erroneous detection of a similar object in an image region having the image feature amount (pattern) 892 shown in FIG. 10(B). In this case, the database correction unit 730 extracts the difference between the image feature amount of the detection information 891 and the image feature amount 892 of the similar object; in this example, "□□" and "××" correspond to the difference. Then, as shown in FIG. 10(C), the database correction unit 730 adds the extracted difference to the additional feature amount information 893 prepared for the detection information 891, as a non-detection image feature amount that should not be found in the object. Furthermore, as shown in FIG. 10(D), the database correction unit 730 adds the location of the additional feature amount information 893 to the detection information 891; in this example, "address 100" corresponds to the location.
 When detection information contains the location of additional feature amount information, the detection processing unit 710 also refers to that additional feature amount information. If a non-detection image feature amount in the additional feature amount information is contained in the image feature amount of the image region being judged, the detection processing unit 710 judges that the image region is not an image region of the object. That is, if the correlation between a non-detection image feature amount in the additional feature amount information and the image feature amount of the image region being judged is high, the detection processing unit 710 judges that the image region is not an image region of the object. Correcting the detection information in this way tightens the matching criterion for the object's image feature amount, making it harder for the erroneous detection to be repeated.
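 As a sketch, the correction of FIG. 10 and the stricter matching rule it enables might look as follows, with feature amounts simplified to sets of feature elements and the additional feature amount information modeled as an address-keyed store; all names here are illustrative assumptions.

```python
def correct_detection_info(entry, similar_feats, additional_store):
    """Register the differing feature elements as non-detection features.

    entry: one item of detection information, a dict with a 'features' set;
    similar_feats: feature set of the erroneously detected similar object;
    additional_store: dict standing in for the additional feature info area.
    """
    diff = similar_feats - entry['features']  # e.g. {'□□', '××'} in FIG. 10
    address = len(additional_store)           # next free "address" (cf. "address 100")
    additional_store[address] = diff          # FIG. 10(C): store the difference
    entry['non_detection_ref'] = address      # FIG. 10(D): record its location

def region_is_object(entry, region_feats, additional_store):
    """Stricter criterion: a dictionary match is vetoed when the region
    contains any registered non-detection feature."""
    if not entry['features'] <= region_feats:  # ordinary match test (simplified)
        return False
    ref = entry.get('non_detection_ref')
    if ref is not None and additional_store[ref] & region_feats:
        return False                           # non-detection feature present
    return True
```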
 Then, in step S2100 of FIG. 4, the detection processing unit 710 rereads the dictionary data from the detection database 400, saves it again in internal memory, and proceeds to step S1500. The detection processing unit 710 may monitor the detection database 400 for corrections and reread the dictionary data whenever a correction occurs, or it may reread the dictionary data upon notification of a dictionary data correction from the erroneous detection determination unit 720 or the database correction unit 730.
 When instructed to terminate processing (S1700: YES), the detection processing unit 710 ends the series of processing steps.
 Through this operation, when an object having an image feature amount approximating that of an object newly detected as an object was detected as an object at the same position in the past, the ECU 700 can determine that the new detection is an erroneous detection. Furthermore, when it determines that a detection is erroneous, the ECU 700 can correct the detection result and the dictionary data so as to avoid presenting an erroneous detection result and to avoid repeated erroneous detection of a similar object that is fixed in place.
 Here, the point that the vehicle periphery monitoring system 100 according to the present embodiment can maintain detection accuracy for objects while preventing repeated erroneous detections will be explained with reference to FIG. 11.
 As shown in FIG. 11(A), suppose that the image region 881-1 of a switchboard is extracted as a person's image region from a captured image 880-1 taken at a certain place on a certain day. This is in fact an erroneous detection. Then, at a later date, as shown in FIG. 11(B), the image region 881-2 of the switchboard is again erroneously extracted as a person's image region from a captured image 880-2 taken at the same place, because the detection information that gave rise to the erroneous detection remains in the detection database 400.
 However, for an object that moves and deforms like a person, as described above, the probability of being photographed with the same appearance at the same position at different times is very low. Therefore, an object detected multiple times at the same place based on the same detection information can be said not to be a person.
 Accordingly, through the operation described above, the ECU 700 corrects the detection information that gave rise to the erroneous detection so that the image region 881-2 of the switchboard is not again erroneously detected as a person's image region. The repetition of the erroneous detection is thereby avoided.
 Likewise, as shown in FIG. 11(C), suppose that the image region 881-3 of a traffic sign is extracted as a person's image region from a captured image 880-3 taken at a certain place on a certain day. Since this is in fact an erroneous detection, when it is erroneously detected again at a later date, the detection information that gave rise to it is likewise corrected.
 Then, as shown in FIG. 11(D), suppose that in a captured image 880-4 taken at a later date a person is standing at the same place, overlapping the traffic sign. In this case, although the image region 881-4 in which the person overlaps the traffic sign has the same position as the image region 881-3 of the traffic sign alone, its image feature amount differs. The image region 881-4 is therefore correctly detected as a person's image region. Thus, the vehicle periphery monitoring system 100 according to the present embodiment does not lower the detection accuracy for objects even when it corrects the dictionary data.
 As explained above, the vehicle periphery monitoring system 100 according to the present embodiment determines that a detection is erroneous when an object having an image feature amount approximating that of an object newly detected as an object was detected as an object at the same position in the past. The vehicle periphery monitoring system 100 according to the present embodiment can thereby prevent a similar object from being repeatedly judged to be an object at the same place, and can improve the detection accuracy when detecting objects from captured images.
 Furthermore, when the vehicle periphery monitoring system 100 according to the present embodiment determines that a detection is erroneous, it corrects the detection information used for object detection so that the erroneous detection is not repeated for the same similar object. The vehicle periphery monitoring system 100 according to the present embodiment can thereby prevent the erroneous detection itself from recurring, reduce the number of object detections that require erroneous detection determination, and reduce the processing load.
 Note that the vehicle periphery monitoring system 100 may display the erroneous detection confirmation screen only after the judgment that an object having an approximating image feature amount was detected at the same position in the past has been made a predetermined number of times, two or more, for the same object. In this case, the ECU 700 may, for example, add the number of such judgments to the history information and determine an erroneous detection on the condition that the count added to the history information exceeds an arbitrary threshold (repetition count), as sketched below.
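 Attaching a repetition counter to each history record, as suggested above, reduces to a check like the following; the field name and threshold value are assumptions introduced for illustration.

```python
def repeated_enough(record, repeat_threshold=2):
    """Increment the per-record match counter and report whether the
    erroneous-detection judgment may now be committed (illustrative).

    record: a mutable history record, modeled here as a dict.
    """
    record['count'] = record.get('count', 0) + 1
    return record['count'] > repeat_threshold
```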
 Furthermore, the vehicle periphery monitoring system 100 may judge whether each object is at the same position by matching the current captured image against past captured images, without considering the orientation of the vehicle (imaging unit) as position information. In this case, the ECU 700 must also save the captured images as history data. This allows the vehicle periphery monitoring system 100 to dispense with information on the orientation of the vehicle (imaging unit), increasing its versatility.
 The vehicle periphery monitoring system 100 may also add color information or the like to the image feature amounts used for object detection, which can further improve detection accuracy.
 Furthermore, since the same erroneous detection no longer occurs once the detection information has been corrected, the vehicle periphery monitoring system 100 may delete the history information corresponding to the correction from the history database. This reduces the amount of memory the history database requires, allowing the apparatus to be made smaller and cheaper.
 The vehicle periphery monitoring system 100 may also ask the user whether a detection is erroneous. In this case, the erroneous detection determination unit 720, for example, judges whether the condition is satisfied that history data could be acquired and that the acquired history data contains history information with an image feature amount approximating the one received from the detection processing unit 710 (S1900 in FIG. 4). When the erroneous detection determination unit 720 judges that this condition is satisfied (S1900: YES), it generates a display video signal for an erroneous detection confirmation screen asking the user to confirm whether the detection may be judged erroneous, and outputs it to the output unit 600.
 FIG. 12 shows an example of the erroneous detection confirmation screen.
 As shown in FIG. 12, the erroneous detection confirmation screen 880 displays a marker 883 indicating the image region 882 of the object detected as an object, superimposed on the captured image 881. The erroneous detection confirmation screen 880 further displays a correction key 884 for correcting the dictionary data, superimposed on the captured image 881. The user visually checks the image region 882 indicated by the marker 883 and, upon recognizing a similar object, presses the correction key 884. The erroneous detection determination unit 720 decides that the object detection is an erroneous detection when the correction key 884 is pressed within a predetermined time after the display of the erroneous detection confirmation screen 880 begins; when the correction key 884 is not pressed within that time, it decides that the object detection is not an erroneous detection.
 Only when it has decided that the object detection is an erroneous detection does the erroneous detection determination unit 720 invalidate the object detection and output the identification information and the image feature amount received from the detection processing unit 710 to the database correction unit 730.
 This makes it possible to prevent an object from going undetected when, by chance, an object is photographed with the same appearance at the same position. Note that the vehicle periphery monitoring system 100 may decide whether to query the user based on, for example, the degree of matching of the image feature amounts, and display the erroneous detection confirmation screen (correction key) only when it decides to query the user.
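 The timed confirmation described in connection with FIG. 12 reduces to the rule sketched below: the detection is treated as erroneous only if the correction key is pressed within the predetermined time. The polling hook and the timeout value are assumptions introduced for this sketch.

```python
import time

def confirm_erroneous(correction_key_pressed, timeout_s=5.0, poll_s=0.1):
    """Poll an assumed correction_key_pressed() UI hook until it returns True
    or the predetermined time elapses after the confirmation screen is shown."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if correction_key_pressed():
            return True    # user confirmed: decide erroneous detection
        time.sleep(poll_s)
    return False           # no press within the window: keep the detection
```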
 The disclosures of the specifications, drawings, and abstracts contained in Japanese Patent Application No. 2010-46184, filed on March 3, 2010, and Japanese Patent Application No. 2010-78491, filed on March 30, 2010, are incorporated herein by reference in their entirety.
 The vehicle periphery monitoring apparatus and vehicle periphery monitoring method according to the present invention are useful as a vehicle periphery monitoring apparatus and vehicle periphery monitoring method that can improve detection accuracy when detecting objects from captured images.
 100 Vehicle periphery monitoring system
 200 Imaging unit
 300 Position information acquisition unit
 400 Detection database
 500 History database
 600 Output unit
 700 ECU
 710 Detection processing unit
 720 Erroneous detection determination unit
 730 Database correction unit

Claims (6)

  1.  A vehicle periphery monitoring apparatus comprising:
     a detection processing unit that detects an object from a captured image of the surroundings of a vehicle; and
     an erroneous detection determination unit that determines that a new detection is an erroneous detection, on the condition that an object having an image feature amount approximating the image feature amount of an object newly detected as an object by the detection processing unit was detected as an object at the same position in the past.
  2.  The vehicle periphery monitoring apparatus according to claim 1, further comprising a history database that accumulates history data of the image feature amounts and positions of objects detected as objects by the detection processing unit,
     wherein, for the pair of the image feature amount and the position of an object newly detected as an object by the detection processing unit, the erroneous detection determination unit searches the history database for a pair of an image feature amount approximating that image feature amount and a position identical to that position.
  3.  The vehicle periphery monitoring apparatus according to claim 1,
     wherein the erroneous detection determination unit determines that a new detection is an erroneous detection on the condition that an object having an image feature amount approximating the image feature amount of the object newly detected as an object by the detection processing unit was detected as an object at the same position in the past a predetermined number of times or more.
  4.  The vehicle periphery monitoring apparatus according to claim 1, further comprising:
     a detection database that stores detection information used by the detection processing unit to detect the object; and
     a database correction unit that, when the erroneous detection determination unit determines that a detection is erroneous, corrects the detection information so that the erroneous detection is not performed again.
  5.  The vehicle periphery monitoring apparatus according to claim 1,
     wherein the erroneous detection determination unit determines the identity of the position of the object based on the position of the vehicle.
  6.  A vehicle periphery monitoring method comprising:
     detecting an object from a captured image of the surroundings of a vehicle; and
     determining that a new detection is an erroneous detection, on the condition that an object having an image feature amount approximating the image feature amount of an object newly detected as an object was detected as an object at the same position in the past.
PCT/JP2011/001191 2010-03-03 2011-03-01 Vehicle-surroundings monitoring apparatus and vehicle-surroundings monitoring method WO2011108258A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2010-046184 2010-03-03
JP2010046184A JP2013093639A (en) 2010-03-03 2010-03-03 Vehicle periphery monitoring device
JP2010078491 2010-03-30
JP2010-078491 2010-03-30

Publications (1)

Publication Number Publication Date
WO2011108258A1 true WO2011108258A1 (en) 2011-09-09

Family

ID=44541928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/001191 WO2011108258A1 (en) 2010-03-03 2011-03-01 Vehicle-surroundings monitoring apparatus and vehicle-surroundings monitoring method

Country Status (1)

Country Link
WO (1) WO2011108258A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008026985A (en) * 2006-07-18 2008-02-07 Toyota Motor Corp On-vehicle pedestrian detector
JP2008174028A (en) * 2007-01-16 2008-07-31 Sumitomo Electric Ind Ltd Target detection system and method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013042206A1 (en) * 2011-09-20 2013-03-28 トヨタ自動車株式会社 Subject change detection device and subject change detection method
CN103814401A (en) * 2011-09-20 2014-05-21 丰田自动车株式会社 Subject change detection device and subject change detection method
JPWO2013042206A1 (en) * 2011-09-20 2015-03-26 トヨタ自動車株式会社 Object change detection device and object change detection method
US10885355B2 (en) 2016-11-08 2021-01-05 Mitsubishi Electric Cornoration Object detection device and object detection method
JP2018084443A (en) * 2016-11-21 2018-05-31 株式会社リコー Image processing apparatus, image processing system, image processing method, and image processing program
JP2020152521A (en) * 2019-03-20 2020-09-24 東芝エレベータ株式会社 Elevator user detection system

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11750371; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 11750371; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: JP)