WO2023210753A1 - Driving assistance device and driving assistance method - Google Patents

Driving assistance device and driving assistance method

Info

Publication number
WO2023210753A1
Authority
WO
WIPO (PCT)
Prior art keywords
traffic
image
vehicle
driving support
unit
Prior art date
Application number
PCT/JP2023/016675
Other languages
French (fr)
Japanese (ja)
Inventor
修一 青山
Original Assignee
株式会社Jvcケンウッド
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Jvcケンウッド filed Critical 株式会社Jvcケンウッド
Publication of WO2023210753A1 publication Critical patent/WO2023210753A1/en

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems

Definitions

  • the present invention relates to a driving support device and a driving support method.
  • There is a known technology that detects deceleration targets such as traffic lights using images or radar and notifies vehicle occupants (for example, Patent Document 1).
  • The driving support device described in Patent Document 1 detects objects that require deceleration, such as traffic lights, and notifies the occupants of the vehicle.
  • However, there are cases where a traffic light is in a blind spot behind a preceding vehicle or the like and cannot be detected by a camera or radar mounted on the vehicle.
  • Furthermore, if the vehicle occupant can already see the traffic light, unnecessary information is output, which may be a nuisance to the occupant.
  • The present disclosure has been made in view of the above. Its object is to provide a driving support device and a driving support method that can appropriately notify a driver of the presence of a traffic regulation object that restricts vehicle travel, such as a traffic light, stop sign, railroad crossing, or crosswalk, when the driver is in a situation where the object is difficult to see.
  • To achieve the above object, a driving support device according to the present disclosure includes: a current position acquisition unit that acquires the current position of a vehicle; a traffic regulation object information acquisition unit that acquires traffic regulation object information regarding a traffic regulation object present in the traveling direction of the vehicle; an image acquisition unit that acquires an image of the surroundings of the vehicle including at least the traveling direction; an image recognition unit that recognizes a recognition target object captured in the image; a determination unit that determines that the traffic regulation object is present and that the recognition target object corresponding to the traffic regulation object is not recognized; and a notification control unit that controls notification based on the result of the determination.
  • To achieve the above object, a driving support method according to the present disclosure causes a computer to execute: a step of acquiring the current position of a vehicle; a step of acquiring traffic regulation object information regarding a traffic regulation object present in the traveling direction of the vehicle; a step of acquiring an image of the surroundings of the vehicle including at least the traveling direction; a step of recognizing a recognition target object captured in the image; a step of determining that the traffic regulation object is present and that the recognition target object corresponding to the traffic regulation object is not recognized; and a step of controlling notification based on the result of the determination.
  • According to the present disclosure, it is possible to provide a driving support device and a driving support method that can appropriately notify a driver of the existence of a traffic regulation object when the object is difficult to visually recognize.
  • FIG. 1 is a block diagram showing an example of the configuration of a driving support device according to a first embodiment.
  • FIG. 2 is a flowchart illustrating an example of a processing procedure of the driving support device according to the first embodiment.
  • FIG. 3 is a block diagram showing an example of the configuration of the driving support device according to the second embodiment.
  • FIG. 4 is a block diagram showing an example of the configuration of a driving support device according to the third embodiment.
  • FIG. 1 is a block diagram showing an example of the configuration of a driving support system 1 according to the first embodiment.
  • the driving support system 1 includes a driving support device 10, a GNSS (Global Navigation Satellite System) receiver 20, a map database 30, a camera 40, a monitor 50, and a speaker 60.
  • the GNSS receiver 20 receives a GNSS signal from a GNSS satellite (not shown).
  • the GNSS receiver 20 outputs the received radio signal to the driving support device 10.
  • Alternatively, the GNSS receiver 20 measures the current position using the received radio wave signal and outputs information on the measured current position to the driving support device 10.
  • The GNSS receiver 20 may be a device implemented as a function of a mobile device owned by a vehicle occupant, such as the driver or a passenger, that outputs the received radio wave signal or the measured current position information to the driving support device 10.
  • the map database 30 stores various types of map information.
  • the map information includes location information of traffic control objects such as traffic lights, stop signs, railroad crossings, and crosswalks that require vehicles to stop or slow down and that restrict the movement of vehicles.
  • the camera 40 is mounted on the vehicle and photographs a range including the direction of travel of the vehicle.
  • the camera 40 may be a single camera or may be configured as a group of multiple cameras. Further, the camera 40 may be a portable device held by a vehicle occupant such as a driver.
  • the monitor 50 is an image display device mounted on the vehicle.
  • the monitor 50 is a display device specific to the driving support system 1 or a display device shared with other systems.
  • the monitor 50 may be a display unit mounted on a portable device owned by a vehicle occupant such as a driver.
  • the speaker 60 is an audio output device mounted on the vehicle.
  • the speaker 60 is an audio output device specific to the driving support system 1 or an audio output device shared with other systems.
  • the speaker 60 may be a speaker mounted on a portable device owned by a vehicle occupant such as a driver.
  • The driving support device 10 includes a current position acquisition unit 11, a traffic regulation object information acquisition unit 12, an image acquisition unit 13, an image recognition unit 14, a determination unit 15, and a notification control unit 16.
  • the current position acquisition unit 11 uses the radio wave signal received by the GNSS receiver 20 to measure and acquire the current position of the vehicle. Alternatively, the current position acquisition unit 11 acquires current position information of the vehicle that has been positioned by the GNSS receiver 20.
  • the current position acquisition unit 11 may be configured to include a GNSS receiver 20. The current position acquisition unit 11 outputs the acquired current position of the vehicle to the traffic regulation object information acquisition unit 12.
  • Based on the current position acquired by the current position acquisition unit 11, the traffic regulation object information acquisition unit 12 refers to the map database 30 and acquires traffic regulation object information regarding traffic regulation objects present in the traveling direction of the vehicle, such as traffic lights, stop signs, railroad crossings, and crosswalks, which require the vehicle to stop or decelerate and restrict its travel. It is preferable that the traffic regulation object information acquisition unit 12 also acquires traffic regulation object type information indicating which type the acquired object is, such as a traffic light, stop sign, railroad crossing, or crosswalk, and information on the distance from the current position of the vehicle to the traffic regulation object.
  • The traffic regulation object information acquisition unit 12 acquires traffic regulation object information for traffic regulation objects within a predetermined range that is visible to the driver, for example within 100 m in the vehicle's direction of travel.
  • the traffic restriction object information acquisition unit 12 outputs the acquired information regarding the traffic restriction object to the determination unit 15.
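As a rough, non-authoritative sketch of how such a range-limited lookup might work, the Python snippet below filters map entries to those lying within a forward range of the vehicle. The TrafficRegulationObject class, the local planar coordinates, the 100 m default range, and the 90-degree forward cone are illustrative assumptions and are not specified by this publication.

```python
from dataclasses import dataclass
from math import atan2, degrees, hypot


@dataclass
class TrafficRegulationObject:
    kind: str    # assumed labels, e.g. "traffic_light", "stop_sign", "railroad_crossing", "crosswalk"
    x_m: float   # position in an assumed local planar frame, metres
    y_m: float


def objects_ahead(map_objects, vehicle_xy, heading_deg, max_range_m=100.0, fov_deg=90.0):
    """Return (object, distance) pairs within max_range_m ahead of the vehicle.

    An object counts as "ahead" if it lies inside a forward cone of fov_deg
    centred on the vehicle heading; both parameters are illustrative.
    """
    vx, vy = vehicle_xy
    ahead = []
    for obj in map_objects:
        dx, dy = obj.x_m - vx, obj.y_m - vy
        distance = hypot(dx, dy)
        if distance > max_range_m:
            continue
        bearing = degrees(atan2(dy, dx))
        # Smallest signed angle between the object bearing and the heading.
        offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(offset) <= fov_deg / 2.0:
            ahead.append((obj, distance))
    return ahead
```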
  • the image acquisition unit 13 acquires an image of the surroundings of the vehicle including at least the traveling direction of the vehicle from the camera 40.
  • the image acquisition unit 13 may acquire a plurality of images from a plurality of camera groups 40.
  • the image acquisition section 13 may include the camera 40, or may be an input interface section that is connected to the separate camera 40 and acquires an output signal output from the camera 40.
  • the image acquisition unit 13 outputs the acquired image to the image recognition unit 14.
  • the image recognition unit 14 performs image recognition processing of a predetermined recognition target object photographed in the image acquired by the image acquisition unit 13.
  • In this embodiment, the image recognition unit 14 treats as recognition target objects traffic regulation objects such as traffic lights, stop signs, railroad crossings, and crosswalks, which require the vehicle to stop or slow down and restrict its travel, and performs the recognition process on them.
  • the image recognition unit 14 outputs the recognition result to the determination unit 15.
  • the image recognition unit 14 includes a trained model for traffic regulation object recognition as a trained model for image recognition.
  • the image recognition unit 14 recognizes a traffic restriction object that restricts vehicle travel from the image acquired by the image acquisition unit 13 using an image recognition technique that uses a trained model for recognizing a traffic restriction object.
  • a trained model for image recognition is a learning model in which input data, which is a set of images and training data, is input to a neural network and subjected to machine learning.
  • In this embodiment, for example, an image of scenery captured by a camera mounted on a vehicle, or an image of a traffic light, a sign, a railroad crossing, a crosswalk, or the like, is used as the image.
  • As the teacher data, a label is used that is set to 1 when a traffic regulation object appears in the image and to 0 when it does not.
  • The neural network included in the image recognition unit 14, for example, inputs the image acquired by the image acquisition unit 13 into the trained model for traffic regulation object recognition and outputs the likelihood that a traffic regulation object appears in the input image as a value between 0 and 1. The closer the output value is to 1, the more likely it is that a traffic regulation object appears in the input image; the closer it is to 0, the less likely. If the output value is equal to or greater than a predetermined threshold, for example 0.8, the image recognition unit 14 outputs to the determination unit 15 a recognition result indicating that a traffic regulation object appears in the image acquired by the image acquisition unit 13.
  • the image recognition unit 14 may recognize the recognition target using a known technique such as pattern matching.
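As a minimal sketch of the thresholding described above: the score from the trained model is compared against a fixed threshold (0.8 in the example). The model object, its predict() method, and the calling convention are assumptions for illustration only.

```python
RECOGNITION_THRESHOLD = 0.8  # example threshold from the description


def regulation_object_recognized(model, image) -> bool:
    """Return True if the model's score meets the threshold.

    `model` is assumed to expose a predict() method returning a value
    between 0 and 1, where values near 1 indicate that a traffic
    regulation object is likely present in the image.
    """
    score = model.predict(image)
    return score >= RECOGNITION_THRESHOLD
```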
  • The determination unit 15 determines that the traffic regulation object information acquisition unit 12 has acquired information indicating that a traffic regulation object exists in the direction of travel of the vehicle and that the image recognition unit 14 has not recognized the traffic regulation object that is the recognition target object. In other words, when a traffic regulation object exists in the direction of travel of the vehicle but the image recognition result does not include it, indicating a high possibility that the driver cannot recognize the traffic regulation object, the determination unit 15 determines that the driver should be notified. The determination unit 15 outputs the determined result to the notification control unit 16.
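The criterion applied by the determination unit 15 can be summarized as a simple predicate over two facts; the sketch below is only an illustration of that logic, not the actual implementation.

```python
def should_notify(regulation_object_on_map: bool, recognized_in_image: bool) -> bool:
    """Embodiment 1 criterion: notify when the map indicates a traffic
    regulation object ahead but image recognition does not detect it."""
    return regulation_object_on_map and not recognized_in_image
```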
  • When the determination unit 15 determines that the driver should be notified, the notification control unit 16 controls notification to the driver by display or audio. Notifying the driver means, for example, informing the driver via the monitor 50 or the speaker 60, by display or audio presentation, that a traffic regulation object exists in the direction of travel of the vehicle, such as "There is a traffic light ahead" or "There is a railroad crossing 50 meters ahead."
  • the notification control unit 16 controls at least one of outputting predetermined image data and text data to the monitor 50, and outputting predetermined warning sounds and audio data to the speaker 60.
  • the notification control unit 16 is not limited to notifying the driver using the monitor 50 or the speaker 60.
  • the notification control unit 16 may notify the driver through, for example, a buzzer, an LED (light emitting diode), or a vibration device (not shown) in addition to the monitor 50 or the speaker 60.
  • FIG. 2 is a flowchart illustrating an example of a processing procedure of the driving support device according to the first embodiment.
  • First, in step S10, the current position acquisition unit 11 acquires the current position of the vehicle based on the radio wave signal received by the GNSS receiver 20.
  • the current position acquisition unit 11 outputs the acquired current position information to the traffic regulation object information acquisition unit 12. After that, the process advances to step S11.
  • In step S11, the traffic regulation object information acquisition unit 12 refers to the map database 30 based on the acquired current position and acquires information on traffic regulation objects existing within a predetermined range in the direction of travel of the vehicle.
  • the traffic regulation object information acquisition section 12 outputs the acquired traffic regulation object information to the determination section 15 when the traffic regulation object exists within a predetermined range in the traveling direction of the vehicle. After that, the process advances to step S12.
  • In step S12, the image acquisition unit 13 acquires an image captured by the camera 40.
  • the image acquisition unit 13 outputs the acquired image to the image recognition unit 14. After that, the process advances to step S13.
  • In step S13, the image recognition unit 14 performs image recognition processing on the acquired image and recognizes the traffic regulation object that is the recognition target.
  • the image recognition unit 14 outputs the recognition result, in other words, the result of whether or not the traffic restriction object has been recognized, to the determination unit 15. After that, the process advances to step S14.
  • In step S14, the determination unit 15 determines that the driver should be notified when the traffic regulation object information acquisition unit 12 has acquired information that a traffic regulation object exists within the predetermined range in the traveling direction of the vehicle and the image recognition result from the image recognition unit 14 indicates a high possibility that the driver cannot recognize the traffic regulation object.
  • If the determination unit 15 determines that the driver should be notified (Yes in S14), the process proceeds to step S15. If not (No in S14), the process returns to step S10 and is executed again.
  • In step S15, the notification control unit 16 controls notification to the driver by display or audio.
  • The notification control unit 16 outputs the notification to the driver as image data or text data to the monitor 50, or as audio data to the speaker 60. The flow then ends.
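Tying steps S10 to S15 together, one possible processing loop is sketched below. The component interfaces named here (current_position(), regulation_objects_ahead(), capture(), the recognizer callable, and notify()) are assumptions for illustration and are not defined by this publication.

```python
def driving_support_flow(gnss, map_db, camera, recognizer, notifier):
    """Sketch of the processing procedure of FIG. 2 (steps S10 to S15)."""
    while True:
        position = gnss.current_position()                     # S10
        objects = map_db.regulation_objects_ahead(position)    # S11
        image = camera.capture()                               # S12
        recognized = recognizer(image)                         # S13
        if objects and not recognized:                         # S14: Yes
            notifier.notify(f"There is a {objects[0]} ahead")  # S15
            return                                             # end the flow
        # S14: No -- return to S10 and run the process again
```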
  • As described above, according to the first embodiment, when a traffic regulation object that restricts vehicle travel, such as a traffic light, stop sign, railroad crossing, or crosswalk, exists within a predetermined range in the direction of travel of the vehicle and that traffic regulation object is not recognized in the image captured by the camera mounted on the vehicle, the driving support device 10 determines that there is a high possibility that the driver cannot see the traffic regulation object and can notify the driver of information regarding that traffic regulation object.
  • Note that, depending on the positional relationship with the traffic regulation object, such as when a traffic light is located at a high position outside the driver's field of view, the camera's angle of view and the driver's field of view may differ; in a situation where the object is difficult to capture in the image, it is also likely to be difficult for the driver to see.
  • According to the first embodiment, it is therefore possible to notify the driver of information regarding a traffic regulation object in a situation where it has become difficult for the driver to visually recognize the traffic regulation object.
  • FIG. 3 is a block diagram showing an example of the configuration of the driving support system 1 according to the second embodiment.
  • the driving support device 10a according to the second embodiment has a different configuration from the driving support device 10 according to the first embodiment in the following points.
  • the image recognition unit 14a has a function of recognizing obstacles existing in front of the vehicle using images.
  • The determination unit 15a has a function of determining whether the size on the image of the obstacle recognized by the image recognition unit 14a is equal to or larger than a predetermined value, and of determining that the driver should be notified when a traffic regulation object exists within the predetermined range in the direction of travel of the vehicle and the size of the obstacle on the image is equal to or larger than the predetermined value.
  • the current position acquisition unit 11, the traffic regulation object information acquisition unit 12, the image acquisition unit 13, and the notification control unit 16 have the same configuration and function as the driving support device 10 according to the first embodiment. A detailed explanation thereof will be omitted.
  • the image recognition unit 14a performs image recognition processing of a predetermined recognition target object photographed in the image acquired by the image acquisition unit 13. In this embodiment, the image recognition unit 14a recognizes an obstacle as a recognition target object. The image recognition section 14a outputs the recognition result to the determination section 15a.
  • the obstacles may be, for example, a vehicle traveling in front of the vehicle, a vehicle parked on the road, a vehicle traveling in the opposite lane, a signboard, a tree such as a garden tree or a street tree, a telephone pole, or the like.
  • As a trained model for image recognition, the image recognition unit 14a uses a trained model for recognizing obstacles such as vehicles traveling ahead, vehicles parked on the road, vehicles traveling in the opposite lane, signboards, garden trees, street trees, and telephone poles.
  • As the image, an image of scenery captured by a camera mounted on a vehicle or an image of various obstacles is used; as the teacher data, a label is used that is set to 1 when an obstacle appears in the image and to 0 when it does not.
  • The image recognition unit 14a inputs an image into the trained model for obstacle recognition and outputs the likelihood that an obstacle appears in the input image as a value between 0 and 1. The closer the output value is to 1, the more likely it is that an obstacle appears in the input image; the closer it is to 0, the less likely. If this output value is equal to or greater than a predetermined threshold, for example 0.8, the image recognition unit 14a determines that an obstacle appears in the image acquired by the image acquisition unit 13.
  • the image recognition unit 14a may recognize the recognition target using a known technique such as pattern matching.
  • The determination unit 15a measures the size on the image of the obstacle recognized by the image recognition unit 14a. For example, the determination unit 15a measures the size as the number of pixels indicating at least one of the vertical length and the horizontal width of the recognized obstacle on the image, or as the area of the region of the image in which the obstacle is recognized.
  • The determination unit 15a determines that the traffic regulation object information acquisition unit 12 has acquired information that a traffic regulation object exists in the direction of travel of the vehicle and that the recognition area occupied in the image by the obstacle, which is the recognition target object recognized by the image recognition unit 14a, is equal to or larger than a predetermined value.
  • The recognition area occupied by the obstacle in the image being equal to or larger than the predetermined value means, for example, that the number of pixels in the vertical length of the recognition area in which the obstacle is recognized is 1/2 or more of the number of pixels in the vertical length of the entire image.
  • In this case, the determination unit 15a determines that the driver should be notified.
  • the determination unit 15a outputs the determined result to the notification control unit 16.
  • The determination unit 15a may also make the determination based on the number of pixels in the horizontal width of the region in which the obstacle is recognized, or on the number of pixels indicating the area of that region. Further, it is preferable that these predetermined values can be changed according to the user's designation or the driving environment.
  • The determination unit 15a may set a predetermined value for each type of traffic regulation object acquired by the traffic regulation object information acquisition unit 12, because the size on the image differs depending on the type of traffic regulation object. For example, for each type of traffic regulation object, a threshold for the number of pixels in the horizontal width of the recognized obstacle, or for the number of pixels indicating its area, is determined in advance as the predetermined value. For each type of traffic regulation object, the determination unit 15a then determines whether the number of pixels representing the horizontal width, or the number of pixels representing the area, of the obstacle recognized by the image recognition unit 14a is equal to or greater than the corresponding predetermined value.
  • For example, when the traffic regulation object information acquisition unit 12 has acquired information that a stop sign exists in the direction of travel of the vehicle and the number of pixels in the horizontal width of the recognized obstacle, or the number of pixels indicating its area, is equal to or greater than the predetermined value corresponding to a stop sign, the determination unit 15a outputs to the notification control unit 16 a result indicating that there is a high possibility that the driver cannot recognize the traffic regulation object.
  • predetermined values may be determined for traffic lights, railroad crossings, pedestrian crossings, and the like.
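A hedged sketch of this size-based check is shown below: the obstacle's vertical extent in pixels is compared against a per-type threshold expressed as a fraction of the image height. The specific fractions, the dictionary of types, and the bounding-box convention are illustrative assumptions.

```python
# Illustrative per-type thresholds: minimum vertical extent of a recognized
# obstacle, as a fraction of the image height, above which the driver's view
# of that kind of traffic regulation object is assumed to be blocked.
SIZE_THRESHOLDS = {
    "traffic_light": 0.5,
    "stop_sign": 0.4,
    "railroad_crossing": 0.5,
    "crosswalk": 0.6,
}


def obstacle_blocks_view(obstacle_box, image_height_px, regulation_kind) -> bool:
    """obstacle_box is (top, left, bottom, right) in pixels."""
    top, _, bottom, _ = obstacle_box
    vertical_px = bottom - top
    threshold = SIZE_THRESHOLDS.get(regulation_kind, 0.5)
    return vertical_px >= threshold * image_height_px
```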
  • As described above, according to the second embodiment, when a traffic regulation object that restricts vehicle travel, such as a traffic light, stop sign, railroad crossing, or crosswalk, exists within a predetermined range in the direction of travel of the vehicle and an obstacle larger than a predetermined value is recognized in the image captured by the camera mounted on the vehicle, the driving support device 10a can determine that the driver's field of vision is narrowed by the obstacle and that there is a high possibility that the driver cannot see the traffic regulation object, and can notify the driver of information regarding that traffic regulation object.
  • Furthermore, the predetermined value is determined based on the type of traffic regulation object. Thus, when information on the presence of a particular traffic regulation object in the direction of travel of the vehicle is obtained and an obstacle whose size in the captured image is larger than the predetermined value determined for that type of traffic regulation object is recognized, it can be determined that there is a high possibility that the driver cannot see the traffic regulation object. Embodiment 2 thus allows a more appropriate determination.
  • FIG. 4 is a block diagram showing an example of the configuration of the driving support system 1 according to the third embodiment.
  • the driving support device 10b according to the third embodiment has a different configuration from the driving support device 10a according to the second embodiment in the following points.
  • The determination unit 15b has a function of determining whether a traffic regulation object exists within the predetermined range in the traveling direction of the vehicle and whether the position on the image of the recognition target object recognized by the image recognition unit 14a includes a predetermined area, and of determining that the driver should be notified when the traffic regulation object exists within the predetermined range and the position occupied on the image by the recognition target object includes the predetermined area.
  • The current position acquisition unit 11, the traffic regulation object information acquisition unit 12, the image acquisition unit 13, the image recognition unit 14a, and the notification control unit 16 have the same configuration and function as in the driving support device 10a according to the second embodiment, so a detailed explanation thereof will be omitted.
  • The determination unit 15b determines the position on the image at which the obstacle, which is the recognition target recognized by the image recognition unit 14a, is recognized, as information on whether each pixel, or each small region made up of a plurality of pixels constituting the image, is included in the recognition area, in other words, whether an obstacle is recognized at each pixel or small region.
  • the determining unit 15b determines that a traffic regulating object exists and that the position occupied by the recognition target object on the image includes a predetermined area.
  • The position of the recognition target object on the image including a predetermined area defined in advance means that the pixels or small regions of the image area in which a traffic regulation object is highly likely to appear coincide with the area in which the obstacle is recognized.
  • When a traffic regulation object exists and the position occupied on the image by the recognition target object includes the predetermined area, the determination unit 15b determines that the driver should be notified, since the image recognition result indicates a high possibility that the driver cannot recognize the traffic regulation object.
  • the determination unit 15b outputs the determined result to the notification control unit 16.
  • The determination unit 15b may set a predetermined area for each type of traffic regulation object acquired by the traffic regulation object information acquisition unit 12, because the position occupied on the image differs depending on the type of traffic regulation object. For example, a specific area on the upper left side of the image, where a stop sign is likely to be recognized, is predetermined as the area corresponding to a stop sign, and the determination unit 15b determines whether the recognized position of the obstacle recognized by the image recognition unit 14a overlaps this specific area on the upper left side of the image.
  • For example, when the traffic regulation object information acquisition unit 12 has acquired information that a stop sign exists in the direction of travel of the vehicle and an obstacle is recognized in the specific area on the upper left side of the image, the determination unit 15b outputs to the notification control unit 16 a result indicating that there is a high possibility that the driver cannot recognize the traffic regulation object.
  • Similarly, corresponding predetermined areas may be defined for traffic lights, railroad crossings, crosswalks, and the like.
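As an illustration of this region-based check, the sketch below tests whether a recognized obstacle's bounding box overlaps the image region in which the given type of traffic regulation object would normally appear. The region table, its fractional coordinates, and the overlap test are assumptions for illustration.

```python
# Illustrative predetermined regions, one per traffic regulation object type,
# given as (top, left, bottom, right) fractions of the image; e.g. a stop sign
# is expected in the upper-left part of the image.
EXPECTED_REGIONS = {
    "stop_sign": (0.0, 0.0, 0.5, 0.4),
    "traffic_light": (0.0, 0.3, 0.4, 0.7),
}


def obstacle_covers_expected_region(obstacle_box, image_size, regulation_kind) -> bool:
    """True if the obstacle's box overlaps the region where the regulation
    object of this type would normally appear. obstacle_box and image_size
    are in pixels: (top, left, bottom, right) and (height, width)."""
    height, width = image_size
    region = EXPECTED_REGIONS.get(regulation_kind)
    if region is None:
        return False
    r_top, r_left = region[0] * height, region[1] * width
    r_bottom, r_right = region[2] * height, region[3] * width
    o_top, o_left, o_bottom, o_right = obstacle_box
    return not (o_bottom < r_top or o_top > r_bottom or
                o_right < r_left or o_left > r_right)
```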
  • As described above, according to the third embodiment, when a traffic regulation object that restricts vehicle travel, such as a traffic light, stop sign, railroad crossing, or crosswalk, exists within a predetermined range in the direction of travel of the vehicle and an obstacle is recognized at a specific position on the image captured by the camera mounted on the vehicle, the driving support device 10b can determine that the area of the image in which the traffic regulation object is highly likely to appear is blocked by the obstacle, and hence that there is a high possibility that the driver cannot see the traffic regulation object, and can notify the driver of information regarding that traffic regulation object.
  • Furthermore, the predetermined area is determined based on the type of traffic regulation object. Thus, when information on the presence of a particular traffic regulation object in the direction of travel of the vehicle is obtained and an obstacle is recognized in the specific area corresponding to that traffic regulation object, it can be determined that there is a high possibility that the driver cannot see the traffic regulation object because the driver's field of vision is narrowed by the obstacle. Embodiment 3 thus allows a more appropriate determination.
  • The driving support device 10a of the second embodiment may also include the features of the driving support device 10 of the first embodiment. That is, the image recognition unit 14a in the driving support device 10a may recognize both the traffic regulation object and the obstacle, and the determination unit 15a may determine that the traffic regulation object is not recognized and that the recognition area occupied in the image by the obstacle is equal to or larger than the predetermined value, and output to the notification control unit 16 a result indicating that there is a high possibility that the driver cannot recognize the traffic regulation object.
  • Likewise, the driving support device 10b of the third embodiment may include the features of the driving support device 10 of the first embodiment or of the driving support device 10a of the second embodiment. That is, the determination unit 15b in the driving support device 10b may determine that the traffic regulation object is not recognized and that the position of the obstacle on the image includes the predetermined area, or that the recognition area occupied in the image by the obstacle is equal to or larger than the predetermined value and that its position on the image includes the predetermined area, and output to the notification control unit 16 a result indicating that there is a high possibility that the driver cannot recognize the traffic regulation object.
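One possible way to combine the criteria of Embodiments 1 to 3 is sketched below; the exact combination actually used is a design choice, and the predicate here is only an assumption-laden illustration.

```python
def should_notify_combined(regulation_object_on_map: bool,
                           recognized_in_image: bool,
                           blocks_view: bool,
                           covers_expected_region: bool) -> bool:
    """Notify when the map indicates a regulation object ahead, image
    recognition does not detect it, and an obstacle either occupies a large
    part of the image or sits over the region where the regulation object
    would normally appear."""
    return (regulation_object_on_map
            and not recognized_in_image
            and (blocks_view or covers_expected_region))
```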
  • the driving support devices of Embodiments 1 to 3 may include at least one of a recording control section (not shown) and a communication control section (not shown).
  • When the traffic regulation object information acquisition unit 12 has acquired information that a traffic regulation object exists in the direction of travel of the vehicle and the recognition results of the image recognition units 14 and 14a indicate a high possibility that the driver cannot recognize the traffic regulation object, the recording control section controls recording of the image acquired by the image acquisition unit 13 in a recording section (not shown).
  • Similarly, when the traffic regulation object information acquisition unit 12 has acquired information that a traffic regulation object exists in the direction of travel of the vehicle and the recognition results of the image recognition units 14 and 14a indicate a high possibility that the driver cannot recognize the traffic regulation object, the communication control unit controls transmission of the image acquired by the image acquisition unit 13 to an external server (not shown) or to a mobile terminal owned by the driver.
  • a recording control section (not shown) and a communication control section (not shown) may perform recording control and communication control based on the determination results of the determination sections 15, 15a, and 15b, respectively.
  • The recording control section (not shown) or the communication control section (not shown) may also be provided in place of the notification control section of the driving support devices of the first to third embodiments. In that case, the driving support device performs only recording control or communication control without notifying the driver.
  • the present disclosure has been described as a hardware configuration, but the present disclosure is not limited to this.
  • the present disclosure can also implement arbitrary processing by causing a processor to execute a computer program.
  • Non-transitory computer-readable media include various types of tangible recording media, including, for example, magnetic recording media such as hard disk drives, optical recording media, magneto-optical recording media, and semiconductor memory.
  • Semiconductor memory can be various types of rewritable ROM (Read Only Memory) and RAM (Random Access Memory), and also includes those provided as SSD (Solid State Drive).
  • The program may also be delivered to the computer via various types of transitory computer-readable media, such as electrical signals, optical signals, and electromagnetic waves.
  • the driving support device and driving support method of the present disclosure can be used, for example, in a drive recorder.
  • 1 Driving support system
  • 10 Driving support device
  • 11 Current position acquisition unit
  • 12 Traffic regulation object information acquisition unit
  • 13 Image acquisition unit
  • 14, 14a Image recognition unit
  • 15, 15a, 15b Determination unit
  • 16 Notification control unit
  • 20 GNSS receiver
  • 30 Map database
  • 40 Camera
  • 50 Monitor
  • 60 Speaker

Abstract

A driving assistance device 10 comprises: a current position acquiring unit 11 that acquires a current position of a vehicle; a traffic-regulating object information acquiring unit 12 that acquires traffic-regulating object information related to a traffic-regulating object that is present in a travel direction of the vehicle; an image acquiring unit 13 that acquires an image taken of the surroundings of the vehicle, including at least the travel direction of the vehicle; an image recognition unit 14 that recognizes an object to be recognized captured in the image; a determination unit 15 that determines that the traffic-regulating object is present and that the object to be recognized, which is the traffic-regulating object, is not being recognized; and a notification control unit 16 that controls notification based on a result of the determination.

Description

Driving support device and driving support method
The present invention relates to a driving support device and a driving support method.
There is a known technology that detects deceleration targets such as traffic lights using images or radar and notifies vehicle occupants (for example, Patent Document 1).
JP 2021-133777 A
The driving support device described in Patent Document 1 detects objects that require deceleration, such as traffic lights, and notifies the occupants of the vehicle. However, there are cases where a traffic light is in a blind spot behind a preceding vehicle or the like and cannot be detected by a camera or radar mounted on the vehicle. Furthermore, if the vehicle occupant can already see the traffic light, unnecessary information is output, which may be a nuisance to the occupant.
The present disclosure has been made in view of the above. Its object is to provide a driving support device and a driving support method that can appropriately notify a driver of the presence of a traffic regulation object that restricts vehicle travel, such as a traffic light, stop sign, railroad crossing, or crosswalk, when the driver is in a situation where the object is difficult to see.
To achieve the above object, a driving support device according to the present disclosure includes: a current position acquisition unit that acquires the current position of a vehicle; a traffic regulation object information acquisition unit that acquires traffic regulation object information regarding a traffic regulation object present in the traveling direction of the vehicle; an image acquisition unit that acquires an image of the surroundings of the vehicle including at least the traveling direction; an image recognition unit that recognizes a recognition target object captured in the image; a determination unit that determines that the traffic regulation object is present and that the recognition target object corresponding to the traffic regulation object is not recognized; and a notification control unit that controls notification based on the result of the determination.
To achieve the above object, a driving support method according to the present disclosure causes a computer to execute: a step of acquiring the current position of a vehicle; a step of acquiring traffic regulation object information regarding a traffic regulation object present in the traveling direction of the vehicle; a step of acquiring an image of the surroundings of the vehicle including at least the traveling direction; a step of recognizing a recognition target object captured in the image; a step of determining that the traffic regulation object is present and that the recognition target object corresponding to the traffic regulation object is not recognized; and a step of controlling notification based on the result of the determination.
According to the present disclosure, it is possible to provide a driving support device and a driving support method that can appropriately notify a driver of the existence of a traffic regulation object when the object is difficult to visually recognize.
FIG. 1 is a block diagram showing an example of the configuration of a driving support device according to a first embodiment. FIG. 2 is a flowchart illustrating an example of a processing procedure of the driving support device according to the first embodiment. FIG. 3 is a block diagram showing an example of the configuration of a driving support device according to a second embodiment. FIG. 4 is a block diagram showing an example of the configuration of a driving support device according to a third embodiment.
Hereinafter, the present invention will be explained through embodiments, but the claimed invention is not limited to the following embodiments. Furthermore, not all of the configurations described in the embodiments are essential as means for solving the problem. For clarity of explanation, the following description and drawings are omitted and simplified as appropriate.
<Embodiment 1>
Embodiment 1 of the present disclosure will be described using FIG. 1. FIG. 1 is a block diagram showing an example of the configuration of a driving support system 1 according to the first embodiment.
The driving support system 1 includes a driving support device 10, a GNSS (Global Navigation Satellite System) receiver 20, a map database 30, a camera 40, a monitor 50, and a speaker 60.
The GNSS receiver 20 receives GNSS signals from GNSS satellites (not shown). The GNSS receiver 20 outputs the received radio wave signal to the driving support device 10. Alternatively, the GNSS receiver 20 measures the current position using the received radio wave signal and outputs information on the measured current position to the driving support device 10. The GNSS receiver 20 may be a device implemented as a function of a mobile device owned by a vehicle occupant, such as the driver or a passenger, that outputs the received radio wave signal or the measured current position information to the driving support device 10.
The map database 30 stores various types of map information. The map information includes location information of traffic regulation objects, such as traffic lights, stop signs, railroad crossings, and crosswalks, that require vehicles to stop or slow down and that restrict vehicle travel.
The camera 40 is mounted on the vehicle and captures a range including the direction of travel of the vehicle. The camera 40 may be a single camera or may be configured as a group of multiple cameras. The camera 40 may also be a camera of a portable device held by a vehicle occupant such as the driver.
The monitor 50 is an image display device mounted on the vehicle. The monitor 50 is a display device specific to the driving support system 1 or a display device shared with other systems. The monitor 50 may be a display unit of a portable device owned by a vehicle occupant such as the driver.
The speaker 60 is an audio output device mounted on the vehicle. The speaker 60 is an audio output device specific to the driving support system 1 or an audio output device shared with other systems. The speaker 60 may be a speaker of a portable device owned by a vehicle occupant such as the driver.
The driving support device 10 includes a current position acquisition unit 11, a traffic regulation object information acquisition unit 12, an image acquisition unit 13, an image recognition unit 14, a determination unit 15, and a notification control unit 16.
The current position acquisition unit 11 uses the radio wave signal received by the GNSS receiver 20 to measure and acquire the current position of the vehicle. Alternatively, the current position acquisition unit 11 acquires the current position information of the vehicle measured by the GNSS receiver 20. The current position acquisition unit 11 may be configured to include the GNSS receiver 20. The current position acquisition unit 11 outputs the acquired current position of the vehicle to the traffic regulation object information acquisition unit 12.
Based on the current position acquired by the current position acquisition unit 11, the traffic regulation object information acquisition unit 12 refers to the map database 30 and acquires traffic regulation object information regarding traffic regulation objects present in the traveling direction of the vehicle, such as traffic lights, stop signs, railroad crossings, and crosswalks, which require the vehicle to stop or decelerate and restrict its travel. It is preferable that the traffic regulation object information acquisition unit 12 also acquires traffic regulation object type information indicating which type the acquired object is, such as a traffic light, stop sign, railroad crossing, or crosswalk, and information on the distance from the current position of the vehicle to the traffic regulation object.
The traffic regulation object information acquisition unit 12 acquires traffic regulation object information for traffic regulation objects within a predetermined range that is visible to the driver, for example within 100 m in the vehicle's direction of travel.
The traffic regulation object information acquisition unit 12 outputs the acquired information regarding the traffic regulation object to the determination unit 15.
The image acquisition unit 13 acquires from the camera 40 an image of the surroundings of the vehicle including at least the traveling direction of the vehicle. The image acquisition unit 13 may acquire a plurality of images from a group of multiple cameras 40. The image acquisition unit 13 may include the camera 40, or may be an input interface unit that is connected to a separate camera 40 and acquires the output signal from the camera 40. The image acquisition unit 13 outputs the acquired image to the image recognition unit 14.
The image recognition unit 14 performs image recognition processing of a predetermined recognition target object captured in the image acquired by the image acquisition unit 13. In this embodiment, the image recognition unit 14 treats as recognition target objects traffic regulation objects such as traffic lights, stop signs, railroad crossings, and crosswalks, which require the vehicle to stop or slow down and restrict its travel, and performs the recognition process on them. The image recognition unit 14 outputs the recognition result to the determination unit 15.
The image recognition unit 14 includes a trained model for traffic regulation object recognition as a trained model for image recognition. The image recognition unit 14 recognizes a traffic regulation object that restricts vehicle travel from the image acquired by the image acquisition unit 13, using an image recognition technique based on the trained model for traffic regulation object recognition.
A trained model for image recognition is a learning model obtained by inputting input data, consisting of sets of an image and teacher data, into a neural network and performing machine learning. In this embodiment, for example, an image of scenery captured by a camera mounted on a vehicle, or an image of a traffic light, a sign, a railroad crossing, a crosswalk, or the like, is used as the image. As the teacher data, a label is used that is set to 1 when a traffic regulation object appears in the image and to 0 when it does not.
The neural network included in the image recognition unit 14, for example, inputs the image acquired by the image acquisition unit 13 into the trained model for traffic regulation object recognition and outputs the likelihood that a traffic regulation object appears in the input image as a value between 0 and 1. The closer the output value is to 1, the more likely it is that a traffic regulation object appears in the input image; the closer it is to 0, the less likely. If the output value is equal to or greater than a predetermined threshold, for example 0.8, the image recognition unit 14 outputs to the determination unit 15 a recognition result indicating that a traffic regulation object appears in the image acquired by the image acquisition unit 13.
In addition to the above, the image recognition unit 14 may recognize the recognition target object using a known technique such as pattern matching.
The determination unit 15 determines that the traffic regulation object information acquisition unit 12 has acquired information indicating that a traffic regulation object exists in the direction of travel of the vehicle and that the image recognition unit 14 has not recognized the traffic regulation object that is the recognition target object. In other words, when a traffic regulation object exists in the direction of travel of the vehicle but the image recognition result does not include it, indicating a high possibility that the driver cannot recognize the traffic regulation object, the determination unit 15 determines that the driver should be notified. The determination unit 15 outputs the determined result to the notification control unit 16.
When the determination unit 15 determines that the driver should be notified, the notification control unit 16 controls notification to the driver by display or audio. Notifying the driver means, for example, informing the driver via the monitor 50 or the speaker 60, by display or audio presentation, that a traffic regulation object exists in the direction of travel of the vehicle, such as "There is a traffic light ahead" or "There is a railroad crossing 50 meters ahead."
The notification control unit 16 controls at least one of outputting predetermined image data or text data to the monitor 50 and outputting a predetermined warning sound or audio data to the speaker 60.
The notification control unit 16 is not limited to notifying the driver via the monitor 50 or the speaker 60. The notification control unit 16 may notify the driver through, for example, a buzzer, an LED (Light Emitting Diode), or a vibration device (not shown) in addition to the monitor 50 or the speaker 60.
 次に、上述した実施形態1に係る運転支援装置10の処理手順を、図2に示すフローチャートを参照して説明する。図2は、実施形態1に係る運転支援装置の処理手順の一例を示すフローチャートである。 Next, the processing procedure of the driving support device 10 according to the first embodiment described above will be explained with reference to the flowchart shown in FIG. 2. FIG. 2 is a flowchart illustrating an example of a processing procedure of the driving support device according to the first embodiment.
First, in step S10, the current position acquisition unit 11 acquires the current position of the vehicle based on the radio signals received by the GNSS receiver 20. The current position acquisition unit 11 outputs the acquired current position information to the traffic control object information acquisition unit 12. The process then proceeds to step S11.
In step S11, the traffic control object information acquisition unit 12 refers to the map information database 30 based on the acquired current position and acquires information on traffic control objects existing within a predetermined range in the traveling direction of the vehicle. When a traffic control object exists within the predetermined range in the traveling direction of the vehicle, the traffic control object information acquisition unit 12 outputs the acquired traffic control object information to the determination unit 15. The process then proceeds to step S12.
In step S12, the image acquisition unit 13 acquires an image captured by the camera 40. The image acquisition unit 13 outputs the acquired image to the image recognition unit 14. The process then proceeds to step S13.
In step S13, the image recognition unit 14 performs image recognition processing on the acquired image to recognize the traffic control object that is the recognition target. The image recognition unit 14 outputs the recognition result, in other words, whether or not the traffic control object has been recognized, to the determination unit 15. The process then proceeds to step S14.
In step S14, the determination unit 15 determines that the driver should be notified when the traffic control object information acquisition unit 12 has acquired information indicating that a traffic control object exists within the predetermined range in the traveling direction of the vehicle and the image recognition result from the image recognition unit 14 indicates a high possibility that the driver cannot recognize the traffic control object. If the determination unit 15 determines that the driver should be notified (Yes in S14), the process proceeds to step S15. If the determination unit 15 does not determine that the driver should be notified (No in S14), the process returns to step S10 and the processing is executed again.
In step S15, the notification control unit 16 controls notification to the driver by display or audio. The notification control unit 16 outputs the notification to the driver as image data or text data to the monitor 50, or as audio data to the speaker 60. The flow then ends.
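Putting steps S10 to S15 together, one processing cycle of FIG. 2 could be sketched as follows; every component object, method name, and the search range are placeholders for illustration, not interfaces defined in the patent.

```python
# Illustrative sketch of the S10-S15 flow of FIG. 2 (names are placeholders).

def driving_support_cycle(gnss, map_db, camera, recognizer, notifier,
                          search_range_m: int = 100) -> None:
    position = gnss.current_position()                          # S10
    restriction = map_db.find_traffic_control_object(           # S11
        position, search_range_m)
    image = camera.capture()                                    # S12
    recognized = recognizer.recognize_traffic_control_object(   # S13
        image)
    if restriction is not None and not recognized:              # S14
        notifier.notify(f"{restriction.kind} ahead")             # S15
```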
As described above, according to the first embodiment, when a traffic control object that restricts vehicle travel, such as a traffic light, stop sign, railroad crossing, or crosswalk, exists within a predetermined range in the traveling direction of the vehicle and that traffic control object is not recognized in the image captured by the camera mounted on the vehicle, the driving support device 10 determines that there is a high possibility that the driver cannot see the traffic control object, and can notify the driver of information regarding that traffic control object. With this configuration, notification is suppressed when the driver is likely to be able to see the traffic control object, while the presence of a traffic control object in the traveling direction of the vehicle is appropriately notified when the driver is likely unable to see it, thereby improving the safety of vehicle driving.
Examples of factors that make it difficult for the driver to see a traffic control object include the following. An obstacle may block the view. A clear image may be difficult to obtain at night or in bad weather such as fog, so that image recognition accuracy is low and, likewise, visibility to the driver is low. The positional relationship with the traffic control object may place it outside both the camera's angle of view and the driver's field of view, for example when the vehicle is stopped close to a traffic light mounted at a position too high to enter the driver's view. According to the first embodiment, in such situations where it is difficult for the driver to see the traffic control object, information regarding that traffic control object can be notified to the driver.
<Embodiment 2>
Embodiment 2 of the present disclosure will be described with reference to FIG. 3. FIG. 3 is a block diagram showing an example of the configuration of the driving support system 1 according to the second embodiment. The driving support device 10a according to the second embodiment differs from the driving support device 10 according to the first embodiment in the following points.
The image recognition unit 14a has a function of recognizing, from the image, obstacles existing in front of the vehicle.
The determination unit 15a has a function of determining whether the size that the obstacle recognized by the image recognition unit 14a occupies on the image is equal to or larger than a predetermined value, and of determining that the driver should be notified when a traffic control object exists within the predetermined range in the traveling direction of the vehicle and the size that the obstacle occupies on the image is equal to or larger than the predetermined value.
In the driving support device 10a, the current position acquisition unit 11, the traffic control object information acquisition unit 12, the image acquisition unit 13, and the notification control unit 16 have the same configurations and functions as in the driving support device 10 according to the first embodiment, and their detailed description is therefore omitted.
The image recognition unit 14a performs image recognition processing on a predetermined recognition target object captured in the image acquired by the image acquisition unit 13. In this embodiment, the image recognition unit 14a recognizes an obstacle as the recognition target object. The image recognition unit 14a outputs the recognition result to the determination unit 15a.
The obstacle may be, for example, a vehicle traveling ahead, a vehicle parked on the road, a vehicle traveling in the oncoming lane, a signboard, a tree such as a garden tree or street tree, or a utility pole.
In this embodiment, the image recognition unit 14a uses, as the trained model for image recognition, a trained model for recognizing obstacles such as vehicles traveling ahead, vehicles parked on the road, vehicles traveling in the oncoming lane, signboards, trees such as garden trees and street trees, and utility poles, obtained by the following machine learning. That is, images of scenery captured by a vehicle-mounted camera and images of various obstacles are used as the input images, and the teacher data labels an image 1 when an obstacle is captured in it and 0 when no obstacle is captured in it.
For example, the image recognition unit 14a inputs an image into the trained model for obstacle recognition and outputs the likelihood that an obstacle appears in the input image as a value between 0 and 1. The closer the output value is to 1, the more likely it is that an obstacle appears in the input image; the closer the output value is to 0, the less likely. When this output value is equal to or greater than a predetermined threshold, for example 0.8, the image recognition unit 14a determines that an obstacle appears in the image acquired by the image acquisition unit 13.
In addition to the configuration described above, the image recognition unit 14a may recognize the recognition target object using a known technique such as pattern matching.
The determination unit 15a measures the size that the obstacle recognized by the image recognition unit 14a occupies on the image. For example, the determination unit 15a measures the on-image size of the obstacle recognized by the image recognition unit 14a as the number of pixels of at least one of the height and width of the recognized obstacle region, or as the number of pixels representing the area of the recognized obstacle region.
The determination unit 15a determines that the traffic control object information acquisition unit 12 has acquired information indicating that a traffic control object exists in the traveling direction of the vehicle, and that the recognition region that the obstacle, which is the recognition target object recognized by the image recognition unit 14a, occupies in the image is equal to or larger than a predetermined value. The recognition region occupied by the obstacle being equal to or larger than the predetermined value means, for example, that the number of pixels of the vertical length of the region in which the obstacle is recognized is equal to or larger than the predetermined value, for example at least 1/2 of the number of pixels of the vertical length of the entire image. When the image recognition result indicates that an obstacle ahead of the vehicle occupies a region of the image equal to or larger than the predetermined extent, so that there is a high possibility that the driver cannot recognize the traffic control object, the determination unit 15a determines that the driver should be notified. The determination unit 15a outputs the determination result to the notification control unit 16.
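A minimal sketch of this size check is given below, assuming the recognized obstacle is described by a bounding box in pixel coordinates; the box format and function name are assumptions, and only the 1/2-of-image-height criterion follows the example in the text.

```python
# Sketch of the size-based determination described above.
# bbox = (x_min, y_min, x_max, y_max) in pixels; names are illustrative.

def obstacle_blocks_view(bbox, image_height: int, ratio: float = 0.5) -> bool:
    """True if the recognized obstacle's height is at least `ratio` of the image height."""
    _, y_min, _, y_max = bbox
    obstacle_height = y_max - y_min
    return obstacle_height >= ratio * image_height
```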
As the criterion for the on-image size of the obstacle being equal to or larger than the predetermined value, the determination unit 15a may instead base the determination on the number of pixels of the horizontal length of the recognized obstacle region, or on the number of pixels representing the area of the recognized obstacle region. It is also preferable that these predetermined values can be changed by user setting or according to the driving environment.
The determination unit 15a may set the predetermined value for each type of traffic control object information acquired by the traffic control object information acquisition unit 12, because the on-image size differs for each type. For example, for each type, a threshold for the number of pixels of the horizontal length of the recognized obstacle region, or for the number of pixels representing its area, is determined in advance as the predetermined value. The determination unit 15a then determines, for each type, whether the number of pixels of the horizontal length, or the number of pixels representing the area, of the obstacle recognized by the image recognition unit 14a is equal to or larger than the predetermined value. Specifically, when the traffic control object information acquisition unit 12 has acquired information indicating that a stop sign exists in the traveling direction of the vehicle and the number of pixels of the horizontal length, or the number of pixels representing the area, of the recognized obstacle is equal to or larger than the predetermined value corresponding to a stop sign, the determination unit 15a outputs to the notification control unit 16 an output indicating that there is a high possibility that the driver cannot recognize the traffic control object. Corresponding predetermined values may likewise be set for traffic lights, railroad crossings, crosswalks, and the like.
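Such per-type thresholds could be held in a simple lookup table, as in the following sketch; the concrete pixel values and names are invented for illustration only and are not specified in the patent.

```python
# Sketch of type-specific size thresholds (pixel values below are invented examples).

SIZE_THRESHOLDS_PX = {
    "stop_sign": 300,          # horizontal-length threshold for a stop sign
    "traffic_light": 250,
    "railroad_crossing": 400,
    "crosswalk": 350,
}

def exceeds_type_threshold(object_type: str, obstacle_width_px: int) -> bool:
    """True if the obstacle width meets or exceeds the threshold for this object type."""
    return obstacle_width_px >= SIZE_THRESHOLDS_PX.get(object_type, 300)
```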
As described above, according to the second embodiment, when a traffic control object that restricts vehicle travel, such as a traffic light, stop sign, railroad crossing, or crosswalk, exists within a predetermined range in the traveling direction of the vehicle and an obstacle whose size is equal to or larger than the predetermined value is recognized in the image captured by the camera mounted on the vehicle, the driving support device 10a determines that the driver's field of view is narrowed by the obstacle and that there is a high possibility that the driver cannot see the traffic control object, and can notify the driver of information regarding that traffic control object. With this configuration, notification is suppressed when the driver is likely to be able to see the traffic control object, while the presence of a traffic control object in the traveling direction of the vehicle is appropriately notified when the driver is likely unable to see it, thereby improving the safety of vehicle driving.
According to the second embodiment, the predetermined value is determined based on the type of traffic control object. Accordingly, when information indicating that a specific traffic control object exists in the traveling direction of the vehicle has been acquired and an obstacle whose size is equal to or larger than the predetermined value determined for that type is recognized in the image captured by the camera, it can be determined that the driver's field of view is narrowed by the obstacle and that there is a high possibility that the driver cannot see the traffic control object. The second embodiment therefore enables more appropriate determination.
<Embodiment 3>
Embodiment 3 of the present disclosure will be described with reference to FIG. 4. FIG. 4 is a block diagram showing an example of the configuration of the driving support system 1 according to the third embodiment. The driving support device 10b according to the third embodiment differs from the driving support device 10a according to the second embodiment in the following points.
The determination unit 15b has a function of determining whether a traffic control object exists within the predetermined range in the traveling direction of the vehicle and whether the position that the recognition target object recognized by the image recognition unit 14a occupies on the image includes a predetermined region, and of determining that the driver should be notified when the traffic control object exists within the predetermined range in the traveling direction of the vehicle and the position that the recognition target object occupies on the image includes the predetermined region.
In the driving support device 10b, the current position acquisition unit 11, the traffic control object information acquisition unit 12, the image acquisition unit 13, the image recognition unit 14a, and the notification control unit 16 have the same configurations and functions as in the driving support device 10a according to the second embodiment, and their detailed description is therefore omitted.
The determination unit 15b evaluates the position on the image at which the obstacle, which is the recognition target object recognized by the image recognition unit 14a, has been recognized, as information on whether each pixel of the image, or each small region containing a plurality of pixels, is included in the recognition region, in other words, whether each pixel or each small region belongs to the region in which the obstacle has been recognized.
The determination unit 15b determines that a traffic control object exists and that the position that the recognition target object occupies on the image includes a predetermined region. The position of the recognition target object including the predetermined region means that the positions of pixels or small regions of the image predetermined as being likely to contain a traffic control object coincide with the region in which the obstacle has been determined to be recognized. When a traffic control object exists and the position that the recognition target object occupies on the image includes the predetermined region, the determination unit 15b takes the image recognition result as indicating a high possibility that the driver cannot recognize the traffic control object, and determines that the driver should be notified. The determination unit 15b outputs the determination result to the notification control unit 16.
The determination unit 15b may set the predetermined region for each type of traffic control object information acquired by the traffic control object information acquisition unit 12, because the position occupied on the image differs for each type. For example, a specific region in the upper left part of the image is predetermined as the region in which a stop sign is likely to be recognized; this specific region in the upper left part of the image is the predetermined region corresponding to a stop sign. The determination unit 15b determines whether the position at which the image recognition unit 14a has recognized the obstacle coincides with the specific region in the upper left part of the image. When the traffic control object information acquisition unit 12 has acquired information indicating that a stop sign exists in the traveling direction of the vehicle and the obstacle is recognized in the specific region in the upper left part of the image, the determination unit 15b outputs to the notification control unit 16 an output indicating that there is a high possibility that the driver cannot recognize the traffic control object. Corresponding predetermined regions may likewise be set for traffic lights, railroad crossings, crosswalks, and the like.
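The position check could be sketched as a per-type region lookup plus a containment test over bounding boxes, as below. This simplifies the per-pixel description above to rectangles, and the region coordinates, names, and data layout are assumptions for illustration rather than values given in the patent.

```python
# Sketch of the region-based determination in Embodiment 3.
# Regions are (x_min, y_min, x_max, y_max) in normalized image coordinates;
# the concrete numbers are invented examples.

EXPECTED_REGIONS = {
    "stop_sign": (0.0, 0.0, 0.4, 0.5),      # upper-left part of the image
    "traffic_light": (0.3, 0.0, 0.7, 0.4),  # upper-center part of the image
}

def obstacle_covers_expected_region(object_type: str, obstacle_region) -> bool:
    """True if the recognized obstacle region contains the region where the
    traffic control object of this type is expected to appear."""
    expected = EXPECTED_REGIONS.get(object_type)
    if expected is None:
        return False
    ox_min, oy_min, ox_max, oy_max = obstacle_region
    ex_min, ey_min, ex_max, ey_max = expected
    return (ox_min <= ex_min and oy_min <= ey_min and
            ox_max >= ex_max and oy_max >= ey_max)
```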
As described above, according to the third embodiment, when a traffic control object that restricts vehicle travel, such as a traffic light, stop sign, railroad crossing, or crosswalk, exists within a predetermined range in the traveling direction of the vehicle and an obstacle is recognized at a specific position in the image captured by the camera mounted on the vehicle, the driving support device 10b determines that the region of the image in which the traffic control object is likely to exist is blocked by the obstacle and that there is a high possibility that the driver cannot see the traffic control object, and can notify the driver of information regarding that traffic control object. With this configuration, notification is suppressed when the driver is likely to be able to see the traffic control object, while the presence of a traffic control object in the traveling direction of the vehicle is appropriately notified when the driver is likely unable to see it, thereby improving the safety of vehicle driving.
According to the third embodiment, the predetermined region is determined based on the type of traffic control object. Accordingly, when information indicating that a specific traffic control object exists in the traveling direction of the vehicle has been acquired and an obstacle is recognized in the specific region corresponding to that traffic control object, it can be determined that the driver's field of view is narrowed by the obstacle and that there is a high possibility that the driver cannot see the traffic control object. The third embodiment therefore enables more appropriate determination.
<Other embodiments>
The specific examples shown in Embodiments 1 to 3 of the present disclosure are merely illustrations to facilitate understanding of the invention and do not limit the invention unless otherwise specified.
The present disclosure is not limited to Embodiments 1 to 3 and can be modified as appropriate without departing from its spirit.
The driving support device 10a of the second embodiment may incorporate the configuration of the driving support device 10 of the first embodiment. That is, the image recognition unit 14a of the driving support device 10a may recognize the traffic control object and the obstacle simultaneously, and the determination unit 15a of the driving support device 10a may determine that the traffic control object is not recognized and that the recognition region the obstacle occupies in the image is equal to or larger than the predetermined value, and output to the notification control unit 16 an output indicating that there is a high possibility that the driver cannot recognize the traffic control object.
Similarly, the driving support device 10b of the third embodiment may incorporate the configuration of the driving support device 10 of the first embodiment or of the driving support device 10a of the second embodiment. That is, the determination unit 15b of the driving support device 10b of the third embodiment may determine that the traffic control object is not recognized and that the position the obstacle occupies on the image includes the predetermined region, or that the recognition region the obstacle occupies in the image is equal to or larger than the predetermined value and the position it occupies on the image includes the predetermined region, and output to the notification control unit 16 an output indicating that there is a high possibility that the driver cannot recognize the traffic control object.
The driving support devices of Embodiments 1 to 3 may further include at least one of a recording control unit (not shown) and a communication control unit (not shown). In this case, when the traffic control object information acquisition unit 12 has acquired information indicating that a traffic control object exists in the traveling direction of the vehicle and the recognition result of the image recognition unit 14 or 14a indicates a high possibility that the driver cannot recognize the traffic control object, the recording control unit performs control to record the image acquired by the image acquisition unit 13 in a recording unit (not shown). Alternatively, under the same conditions, the communication control unit performs control to transmit the image acquired by the image acquisition unit 13 to, for example, an external server (not shown) or a mobile terminal owned by the driver. The recording control unit and the communication control unit may perform recording control and communication control, respectively, based on the determination results of the determination units 15, 15a, and 15b.
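A minimal sketch of how such recording or transmission might be triggered from the determination result is shown below; the Recorder and Uploader interfaces are hypothetical and not part of the patent.

```python
# Sketch of optional recording/communication control (interfaces are hypothetical).

def handle_determination(should_notify: bool, image, recorder=None, uploader=None) -> None:
    """When the determination indicates the driver likely cannot see the
    traffic control object, record and/or transmit the acquired image."""
    if not should_notify:
        return
    if recorder is not None:
        recorder.save(image)   # record to a local recording unit
    if uploader is not None:
        uploader.send(image)   # send to an external server or mobile terminal
```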
Either the recording control unit (not shown) or the communication control unit (not shown) may also be provided in place of the notification control unit of the driving support devices of Embodiments 1 to 3. In this case, the driving support device performs only recording control or communication control without notifying the driver.
In the embodiments described above, the present disclosure has been explained as a hardware configuration, but the present disclosure is not limited to this. The present disclosure can also realize any of the processing by causing a processor to execute a computer program.
In the above examples, the program can be stored and supplied to a computer using various types of non-transitory computer-readable media. Non-transitory computer-readable media include various types of tangible recording media, for example, magnetic recording media such as hard disk drives, optical recording media, magneto-optical recording media, and semiconductor memories. Semiconductor memories may be various types of rewritable ROM (Read Only Memory) and RAM (Random Access Memory), including those provided as SSDs (Solid State Drives). The program may also be supplied to the computer via various types of transitory computer-readable media over wired communication paths such as electric wires and optical fibers, or over wireless communication paths; such transitory media include, for example, electric signals, optical signals, and electromagnetic waves.
The processes in the devices and methods shown in the claims, specification, and drawings may be executed in any order. Even if the operational flows in the claims, specification, and drawings are described using terms such as "first" and "next" for convenience, this does not mean that they must be performed in that order.
The driving support device and driving support method of the present disclosure can be used, for example, in a drive recorder.
1 Driving support system
10 Driving support device
11 Current position acquisition unit
12 Traffic control object information acquisition unit
13 Image acquisition unit
14, 14a Image recognition unit
15, 15a, 15b Determination unit
16 Notification control unit
20 GNSS receiver
30 Map information database
40 Camera
50 Monitor
60 Speaker

Claims (7)

  1.  A driving support device comprising:
      a current position acquisition unit that acquires a current position of a vehicle;
      a traffic control object information acquisition unit that acquires traffic control object information regarding a traffic control object existing in a traveling direction of the vehicle;
      an image acquisition unit that acquires an image of surroundings of the vehicle including at least the traveling direction of the vehicle;
      an image recognition unit that recognizes a recognition target object captured in the image;
      a determination unit that determines that the traffic control object exists and the recognition target object that is the traffic control object is not recognized; and
      a notification control unit that controls notification based on a result of the determination.
  2.  The driving support device according to claim 1, wherein
      the recognition target object includes an obstacle existing in the traveling direction of the vehicle, and
      the determination unit further determines, when the traffic control object exists and the obstacle is recognized, that a region in which the obstacle is recognized occupies a size in the image equal to or larger than a predetermined value.
  3.  The driving support device according to claim 1, wherein
      the recognition target object includes an obstacle existing in the traveling direction of the vehicle, and
      the determination unit further determines, when the traffic control object exists and the obstacle is recognized, that a position of a region in which the obstacle is recognized on the image includes a predetermined region.
  4.  The driving support device according to claim 2, wherein
      the traffic control object information includes traffic control object type information regarding a type of the traffic control object, and
      the predetermined value is determined based on the type of the traffic control object.
  5.  The driving support device according to claim 3, wherein
      the traffic control object information includes traffic control object type information regarding a type of the traffic control object, and
      the predetermined region is determined based on the type of the traffic control object.
  6.  The driving support device according to any one of claims 1 to 3, further comprising at least one of a recording control unit that performs control to record the image acquired by the image acquisition unit, and a communication control unit that performs control to transmit the image acquired by the image acquisition unit to an external destination.
  7.  A driving support method comprising, by a computer:
      acquiring a current position of a vehicle;
      acquiring traffic control object information regarding a traffic control object existing in a traveling direction of the vehicle;
      acquiring an image of surroundings of the vehicle including at least the traveling direction of the vehicle;
      recognizing a recognition target object captured in the image;
      determining that the traffic control object exists and the recognition target object that is the traffic control object is not recognized; and
      controlling notification based on a result of the determination.
PCT/JP2023/016675 2022-04-28 2023-04-27 Driving assistance device and driving assistance method WO2023210753A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2022-074843 2022-04-28
JP2022074844 2022-04-28
JP2022074843 2022-04-28
JP2022-074844 2022-04-28

Publications (1)

Publication Number Publication Date
WO2023210753A1 true WO2023210753A1 (en) 2023-11-02

Family

ID=88518803

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/016675 WO2023210753A1 (en) 2022-04-28 2023-04-27 Driving assistance device and driving assistance method

Country Status (1)

Country Link
WO (1) WO2023210753A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007183764A (en) * 2006-01-05 2007-07-19 Hitachi Ltd Onboard equipment
JP2008262481A (en) * 2007-04-13 2008-10-30 Denso Corp Vehicle control device
JP2009301267A (en) * 2008-06-12 2009-12-24 Toyota Industries Corp Driving support device
JP2011108175A (en) * 2009-11-20 2011-06-02 Alpine Electronics Inc Driving support system, driving support method and driving support program
WO2021006060A1 (en) * 2019-07-08 2021-01-14 株式会社デンソー Display control device and display control program


Similar Documents

Publication Publication Date Title
US20230311749A1 (en) Communication between autonomous vehicle and external observers
US10780897B2 (en) Method and device for signaling present driving intention of autonomous vehicle to humans by using various V2X-enabled application
JP6840240B2 (en) Dynamic route determination for autonomous vehicles
US10336326B2 (en) Lane detection systems and methods
US9195894B2 (en) Vehicle and mobile device traffic hazard warning techniques
US10497264B2 (en) Methods and systems for providing warnings of obstacle objects
CN109712432B (en) System and method for projecting a trajectory of an autonomous vehicle onto a road surface
US11282388B2 (en) Edge-assisted alert system
US20170206426A1 (en) Pedestrian Detection With Saliency Maps
US9809165B1 (en) System and method for minimizing driver distraction of a head-up display (HUD) in a vehicle
US20170161569A1 (en) System and method of object detection
US20100030474A1 (en) Driving support apparatus for vehicle
US11866037B2 (en) Behavior-based vehicle alerts
CN113808418B (en) Road condition information display system, method, vehicle, computer device and storage medium
US10102747B2 (en) Intersection traffic signal indicator systems and methods for vehicles
US10906556B2 (en) System and method for oncoming vehicle warning
WO2023210753A1 (en) Driving assistance device and driving assistance method
JP7359099B2 (en) Mobile object interference detection device, mobile object interference detection system, and mobile object interference detection program
KR102370876B1 (en) Method and apparatus for providing driving information by using object recognition of cloud
CN114333339B (en) Deep neural network functional module de-duplication method
JP2019164602A (en) Reverse drive warning system, reverse drive warning method, and reverse drive warning program
JP2017126213A (en) Intersection state check system, imaging device, on-vehicle device, intersection state check program and intersection state check method
KR102611784B1 (en) Intelligent Control System and Method Capable of Controlling Traffic at the Intersection
JP3222638U (en) Safe driving support device
JP2024031063A (en) Vehicle control device, vehicle control method, and vehicle control computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23796489

Country of ref document: EP

Kind code of ref document: A1