WO2022196660A1 - Driving assistance device, driving assistance method, drive recorder, and driving assistance control program - Google Patents

Driving assistance device, driving assistance method, drive recorder, and driving assistance control program Download PDF

Info

Publication number
WO2022196660A1
WO2022196660A1 (PCT application PCT/JP2022/011446; JP application JP2022011446W)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
unit
determination
driving
information
Prior art date
Application number
PCT/JP2022/011446
Other languages
French (fr)
Japanese (ja)
Inventor
Shunsuke Kimura (木村 俊介)
Original Assignee
DENSO Corporation (株式会社デンソー)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DENSO Corporation
Publication of WO2022196660A1 publication Critical patent/WO2022196660A1/en


Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles

Definitions

  • the present disclosure relates to a driving assistance device, a driving assistance method, a drive recorder, and a driving assistance control program that provide advice on driving operations to the driver.
  • Japanese Unexamined Patent Application Publication No. 2013-191230 describes a vehicle safe driving promotion system that can improve the driver's awareness of safe driving by giving the driver mental encouragement.
  • the in-vehicle device disclosed in Japanese Unexamined Patent Application Publication No. 2013-191230 judges the driving situation of the driver from the safe-driving threshold value and the economical-driving threshold value stored in the storage means, the driving state information obtained by the driving state obtaining means, and the safe-driving and economical-driving evaluation history information. If the driver's driving situation is within the judgment threshold stored in the storage means, a message praising the driver is notified via the notifying means; if it is outside the threshold, advice is given to the driver via the notifying means.
  • This praise (message) can give the driver mental encouragement, and as a result, it is expected that the driver's voluntary practice of safe driving will improve safe driving awareness and reduce traffic accidents.
  • an object of the present disclosure is to obtain a driving assistance control device, a driving assistance method, a drive recorder, and a driving assistance control program that acquire and analyze necessary and sufficient driving operation information and information obtained by monitoring the surroundings of the vehicle for each of various scenes while the vehicle is driven, and that can provide advice suited to the driving operation of each individual driver.
  • a driving support control device acquires position information of a vehicle and behavior information of the vehicle as essential information sources, and recognizes an essential situation of the vehicle based on the essential information sources.
  • the driving support control device also includes a second situation recognition unit that acquires an arbitrary information source based on image information of at least one of the surroundings of the vehicle and the interior of the vehicle, and recognizes the driving situation of the vehicle based on the essential information source and the arbitrary information source.
  • a driving support method acquires vehicle position information and vehicle behavior information as essential information sources and, according to the recognition items used for risk event determination by a determination unit, provides a first recognition function that recognizes the driving situation based on the essential information source as a first stage of situation recognition and outputs it to the determination unit, and a second recognition function that recognizes the driving situation based on the essential information source and the arbitrary information source as a second stage of situation recognition and outputs it to the determination unit.
  • a plurality of risk determination items are provided. Depending on the risk determination item, there is a pattern in which the judgment formula satisfies its input parameters using only the recognition items of the first situation recognition unit, and a pattern in which the input parameters cannot be satisfied without the recognition items of the second situation recognition unit; in the latter case, the recognition items of the second situation recognition unit are additionally used for the determination. If a risk event is found in the determination result, advice for resolving the risk event is notified to at least the driver who drives the vehicle.
  • a drive recorder is mounted on a vehicle, captures at least the surrounding environment including the front of the vehicle, and records the captured images while rewriting the oldest images within a predetermined capacity range.
  • it has a main control unit that, when an emergency is detected, saves images for a predetermined period before and after the emergency, and the above-described driving support control device.
  • a driving assistance control program is characterized by causing a computer to operate as each part of the driving assistance control device described above.
  • necessary and sufficient driving operation information and information obtained by monitoring the surroundings of the vehicle are acquired and analyzed for each of various scenes during driving, so that advice suited to the driving operation of each individual driver can be provided.
  • FIG. 1 is a schematic diagram of a driving support system that supports driving in automatic driving of a vehicle according to the first embodiment
  • FIG. 2 is a functional block diagram of a driving assistance device in the drive recorder according to the first embodiment, mainly showing driving assistance control classified by function
  • FIG. 3 is a functional block diagram showing the detailed configuration of the situation recognition unit according to the first embodiment
  • FIG. 4 is a table diagram in which items processed by the action-based scoring processing unit according to the first embodiment are classified;
  • FIG. 5 is a table diagram in which items processed by the location-specific scoring processing unit according to the first embodiment are classified;
  • FIG. 6A is a flowchart showing a driving support control analysis routine in the driving support system according to the first embodiment
  • FIG. 6B is a flowchart showing a driving support control notification routine in the driving support system according to the first embodiment
  • FIG. 7 is a time-CPU resource characteristic diagram of the drive recorder according to the first embodiment
  • FIG. 8 is a functional block diagram showing the detailed configuration of the situation recognition unit according to the modification of the first embodiment (addition of CAN information);
  • FIG. 9 is a functional block diagram of a driving assistance device in the drive recorder according to the second embodiment, mainly showing driving assistance control classified by function;
  • FIG. 10 is a flowchart showing a driving support control analysis routine in the driving support system according to the second embodiment
  • FIG. 11 is a functional block diagram of a driving assistance device in a drive recorder according to a third embodiment, in which mainly driving assistance control is categorized by function
  • FIG. 12 is a flowchart showing a driving support control analysis routine in the driving support system according to the third embodiment
  • FIG. 13 is a functional block diagram of a driving assistance device in a drive recorder according to a fourth embodiment, in which mainly driving assistance control is categorized by function
  • FIG. 14 is a flowchart showing a driving support control analysis routine in the driving support system according to the fourth embodiment
  • FIG. 15 is a functional block diagram of a driving assistance device in a drive recorder according to a fifth embodiment, mainly showing driving assistance control classified by function;
  • FIG. 16 is a flowchart showing a driving support control analysis routine in the driving support system according to the fifth embodiment;
  • FIG. 17 is a functional block diagram of a driving assistance device in a drive recorder according to a sixth embodiment, in which mainly driving assistance control is categorized by function;
  • FIG. 18 is a flowchart showing a driving support control analysis routine in the driving support system according to the sixth embodiment;
  • FIG. 19 is a schematic diagram of a driving support system that supports traveling in automatic driving of a vehicle according to the seventh embodiment.
  • FIG. 1 is a plan view of a vehicle 10 to which a driving support system according to the first embodiment is applied.
  • the drive recorder 14 has the original functions of a drive recorder (monitoring by photographing the exterior and interior of the vehicle, and saving images in an emergency), and also serves as the driving assistance device 14A according to the first embodiment (represented by a dotted-line frame in FIG. 1).
  • the driving assistance device 14A includes, for example, a CPU (Central Processing Unit) and a rewritable nonvolatile memory.
  • the nonvolatile memory stores a program indicating driving support control, which will be described later, and the CPU reads and executes the program.
  • the vehicle control device 12 executes control including the drive system (engine control, etc.) and the electrical system when the vehicle 10 is running.
  • a gyro sensor 16 (indicated as "Gyro" in FIG. 1), an acceleration sensor (G sensor) 18 (indicated as "G" in FIG. 1), and a GPS receiver 20 (indicated as "GPS" in FIG. 1) are connected to the drive recorder 14.
  • the gyro sensor 16 detects azimuth information in the running direction of the vehicle 10
  • the acceleration sensor 18 detects acceleration and deceleration of the vehicle 10
  • the GPS receiver 20 detects position information of the vehicle 10 .
  • a radar group 22 including a plurality of millimeter wave radars and LIDAR is also connected to the vehicle control device 12 .
  • the radar group 22 detects obstacles and the like ahead of the vehicle 10 .
  • the vehicle control device 12 uses the received information to execute control including the drive system and the electrical system, or to notify the driver 24 who is driving of the position information, the travel route to the destination, and the like. Note that FIG. 1 shows the driver 24 getting off the vehicle.
  • the drive recorder 14 includes a group of cameras that capture the surroundings of the vehicle 10 (a front monitoring camera 26A and a rear monitoring camera 26B are shown in FIG. 1 as an example).
  • the drive recorder 14 of the first embodiment also includes a vehicle interior monitoring camera 26C that captures the interior of the vehicle 10 (the front monitoring camera 26A, the rear monitoring camera 26B, and the vehicle interior monitoring camera 26C are hereinafter collectively referred to as the camera group 26).
  • the camera group 26 may include a right monitoring camera 26D that captures the right side of the vehicle 10 and a left monitoring camera 26E that captures the left side of the vehicle 10.
  • the drive recorder 14 is equipped with a driving support device 14A according to the first embodiment.
  • in FIG. 1, the driving support device 14A inside the drive recorder 14 is indicated by a dotted line, and the original functions of the drive recorder 14 and the functions of the driving support device 14A are described separately.
  • FIG. 2 is a functional block diagram of the driving assistance device 14A in the drive recorder 14, which mainly shows driving assistance control classified by function. Note that each block does not limit the hardware configuration of the driving support device 14A. If necessary, some or all of the blocks may be operated by a microcomputer as a driving support program.
  • the drive recorder 14 has a drive recorder main control section 30 .
  • a camera group 26 (the front monitoring camera 26A, the rear monitoring camera 26B, and the vehicle interior monitoring camera 26C) is connected to the drive recorder main control unit 30, and the camera group continuously captures images at least during driving.
  • the images captured by the camera group 26 are recorded while the oldest images are sequentially overwritten within a predetermined capacity (for example, about one hour). In other words, the camera group 26 always shoots, and recording continues while the retained period shifts. In an emergency (for example, during sudden deceleration such as sudden braking), a signal is received from the vehicle control device 12, and images of a predetermined period before and after the emergency (for example, 15 minutes before and after) are saved separately from the through image (the image being recorded). Images before and after the emergency period can be used, for example, to analyze the factors that caused the emergency.
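The loop-recording behavior described above can be sketched as follows. This is an illustrative model only (the class and method names are invented, and integers stand in for video frames); a real recorder would also keep appending the "after" window to the frozen clip.

```python
from collections import deque

class LoopRecorder:
    """Loop recording: a fixed-capacity buffer overwrites the oldest
    frames, and an emergency freezes the buffered 'before' window."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest frame is overwritten
        self.saved = None

    def record(self, frame):
        self.buffer.append(frame)

    def freeze(self):
        # On emergency detection, copy the buffered window so it can no
        # longer be overwritten by continued loop recording.
        self.saved = list(self.buffer)

rec = LoopRecorder(capacity=5)
for t in range(8):   # frames 0..7; only the newest 5 survive in the loop
    rec.record(t)
rec.freeze()         # rec.saved now holds the pre-emergency window
```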
  • the monitor unit 32 can also be used to reproduce recorded images.
  • the drive recorder 14 of the first embodiment has a role as a driving support device 14A.
  • the driving support device 14A analyzes the driving situation of the driver 24 based on information from the gyro sensor 16, the acceleration sensor 18, the GPS receiver 20, and the camera group 26, determines dangerous behavior unsuitable for driving (hereinafter referred to as a risk event), and notifies advice for reducing the risk event for the next driving.
  • Information may be acquired from the radar group 22 connected to the vehicle control device 12 .
  • the driving support device 14A includes a sensor information acquisition unit 34, which acquires the Gyro information (azimuth information), the G information (acceleration/deceleration information), the GPS information (position information), and the radar group analysis information (obstacle information).
  • the sensor information acquisition unit 34 is connected to the situation recognition unit 36, and sends the acquired Gyro information, G information, GPS information, and obstacle detection information to the situation recognition unit 36.
  • the situation recognition unit 36 is connected to the drive recorder main control unit 30.
  • the situation recognizing unit 36 acquires shooting information from the camera group 26 that is used by the drive recorder main control unit 30 .
  • the situation recognition unit 36 can collect the travel history, the behavior, and the exterior and interior situations (image information) of the vehicle 10 while the driver 24 is driving, and can analyze and recognize the situation during driving.
  • FIG. 3 is a functional block diagram showing the detailed configuration of the situation recognition unit 36.
  • the situation recognition unit 36 includes a travel point analysis unit 36A, a behavior analysis unit 36B, a surrounding environment recognition unit 36C, and a driver situation recognition unit 36D.
  • the travel point analysis unit 36A acquires GPS information from the sensor information acquisition unit 34 and analyzes the travel point of the vehicle 10, such as an intersection, parking lot, indoors, narrow road, or the like.
  • the behavior analysis unit 36B acquires the G information (acceleration information) and the Gyro information (azimuth information) from the sensor information acquisition unit 34, and analyzes the behavior of the vehicle 10, for example, turning right or left, starting, stopping, vehicle speed, steering angle, and the like.
  • the surrounding environment recognition unit 36C acquires vehicle exterior image information from the drive recorder main control unit 30 and, if necessary, obstacle information from the sensor information acquisition unit 34, and analyzes the surrounding environment of the vehicle 10, such as the positions and movements of preceding and oncoming vehicles, the white and yellow lines on the roadway, and the inter-vehicle distance.
  • the driver situation recognition unit 36D acquires vehicle interior image information from the drive recorder main control unit 30 and analyzes the state of the driver 24 who drives the vehicle 10, such as the line of sight, face direction, degree of eye opening/closing, skeleton (posture), and objects held.
  • the situation recognition unit 36 sends information about the travel point of the vehicle 10 analyzed by the travel point analysis unit 36A and the behavior of the vehicle 10 analyzed by the behavior analysis unit 36B to the risk event processing control unit 38 (essential information sending).
  • in response to a request from the risk event processing control unit 38, the situation recognition unit 36 also sends the surrounding environment of the vehicle 10 analyzed by the surrounding environment recognition unit 36C and the driving posture of the driver 24 analyzed by the driver situation recognition unit 36D to the risk event processing control unit 38 (transmission of arbitrary information).
  • the risk event processing control unit 38 includes a risk event determination unit 40.
  • the risk event determination unit 40 aggregates the essential information sent from the situation recognition unit 36, such as the travel location and the behavior of the vehicle 10, and the optional information, such as the surrounding environment of the vehicle 10 and the driving posture of the driver 24.
  • the risk event determination unit 40 has the role of determining whether or not there is a risk while driving the vehicle 10 .
  • the risk during driving is, for example, the degree of risk of a situation that may hinder safe driving, in a behavior pattern determined by checking against the Road Traffic Act and the like, and in a location pattern determined by the travel point and driving state of the vehicle 10; in the first embodiment, it is quantified (scored).
  • the risk event determination unit 40 determines whether or not there is a risk in two steps.
  • the risk event determination unit 40 determines the presence or absence of risk for conceivable items based on the essential information (the travel point and the behavior of the vehicle 10) and the optional information (the surrounding environment of the vehicle 10 and the driving posture of the driver 24). The conceivable items may be extracted, for example, from pre-registered risks that have occurred in the past, and their number is not limited. Each determination item is judged by a scene search formula and a judgment formula.
  • Scene search formulas include search conditions based on points, such as intersections without traffic lights, intersections with traffic lights, and parking lots; search conditions based on behavior, such as turning right, turning left, going straight, and stopping; search conditions based on the surrounding environment, such as a vehicle in front, a pedestrian on the left shoulder, and a bicycle stopped at an intersection; and search conditions based on the driver's state, such as the driver's line of sight, drowsiness, and smartphone use. For the scenes that match a scene search formula, the judgment formula is used to determine whether or not there is a risk.
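As a rough illustration of how a scene search formula might pair with a judgment formula, the sketch below represents each risk determination item as two predicates over a scene dictionary. The item names, scene fields, and thresholds are invented for the sketch and are not taken from the publication.

```python
# Each risk-determination item pairs a scene search formula (does this
# driving scene match?) with a judgment formula (is there a risk?).
RISK_ITEMS = {
    "right_turn_at_signalless_intersection": (
        lambda s: s["point"] == "intersection_no_signal"
                  and s["behavior"] == "right_turn",
        lambda s: s["speed"] > 10,            # illustrative: turning too fast
    ),
    "sudden_braking": (
        lambda s: s["behavior"] == "braking",
        lambda s: s["deceleration"] > 0.35,   # illustrative threshold in G
    ),
}

def detect_risks(scene):
    """Return the names of all risk items whose search formula matches
    the scene and whose judgment formula then finds a risk."""
    hits = []
    for name, (search, judge) in RISK_ITEMS.items():
        if search(scene) and judge(scene):
            hits.append(name)
    return hits

scene = {"point": "main_road", "behavior": "braking",
         "speed": 40, "deceleration": 0.5}
```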
  • for example, the vehicle speed can be calculated using the integrated value of the G sensor information (corresponding to the first situation recognition), and the distance to the preceding vehicle can be calculated from the size of the preceding vehicle recognized in the image taken by the front monitoring camera 26A and the hardware parameters of the camera group 26 (corresponding to the second situation recognition).
  • in the first stage, the presence or absence of risk for conceivable items is determined based only on the essential information (the travel location and the behavior of the vehicle 10), using search conditions based on points, such as intersections without traffic lights and intersections with traffic lights, and search conditions based on behavior, such as turning right, turning left, going straight, and stopping. For example, sudden braking can be determined from the G information alone, when the deceleration exceeds a predetermined value (for example, 0.35 G).
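The two example calculations above can be sketched as follows. The integration step and the pinhole-camera relation are standard assumptions, and all parameter values are illustrative rather than values from the publication.

```python
def speed_from_g(accels, dt, v0=0.0):
    """First-stage recognition: estimate vehicle speed (m/s) by
    integrating longitudinal acceleration samples (m/s^2) taken at a
    fixed time step dt (s), starting from an initial speed v0."""
    v = v0
    for a in accels:
        v += a * dt
    return v

def distance_from_image(real_width_m, pixel_width, focal_length_px):
    """Second-stage recognition: pinhole-camera estimate of the distance
    to the preceding vehicle from its apparent width in the front-camera
    image; focal_length_px is a hardware parameter of the camera."""
    return real_width_m * focal_length_px / pixel_width
```

For instance, under these assumptions a vehicle 1.8 m wide that appears 180 px wide through a 1000 px focal length would be estimated at about 10 m ahead.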
  • the drive recorder main control unit 30 is requested, via the image request unit 42, to transmit the information from the camera group 26 to the situation recognition unit 36.
  • the surrounding environment of the vehicle 10 is sent from the surrounding environment recognition unit 36C (see FIG. 2) to the risk event determination unit 40 as arbitrary information, and the driving posture of the driver 24 is sent to the risk event determination unit 40 from the driver situation recognition unit 36D (see FIG. 2).
  • in the second stage, the risk event determination unit 40 determines the presence or absence of risk for conceivable items based on the essential information (the travel location and the behavior of the vehicle 10) and the optional information (the surrounding environment of the vehicle 10 and the driving posture of the driver 24).
  • in this way, the presence or absence of risk is first determined based on sensor information, which has a relatively small amount of data and does not take long to analyze, rather than using image information, which has a relatively large amount of data and takes a long time to analyze, from the beginning. Therefore, the function of the driving support device 14A can be realized without impairing the original functions of the drive recorder 14.
  • the risk event determination unit 40 is connected to the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46, and sends the determination result of the presence or absence of risk for each item to them.
  • FIG. 4 is a table diagram in which the items processed by the action-based scoring processing unit 44 are classified. As shown in FIG. 4, the behavioral items are classified into large items, medium items, and small items.
  • Major items include, for example, classification according to the Road Traffic Act, classification according to safe driving obligations, classification according to the driver's condition, etc.
  • Medium items include, for example, classifications related to road signs, classifications related to driver behavior, classifications related to driver's physical condition, etc.
  • Minor items include, for example, classification by individual road sign, classification by individual driver behavior, classification by the driver's physical condition, and the like.
  • IDs are assigned in advance to identify each of the major, medium, and minor items, and each ID is aggregated into the minor-item ID. From this minor-item ID, the major-item type, the medium-item type, and the minor-item type can all be identified.
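One way such an aggregated ID could work is to pack the three classification codes into fixed digit fields, so the broader categories can be read back from the minor-item ID alone. The digit widths below are illustrative assumptions, not the scheme used in the publication.

```python
def make_item_id(major, medium, minor):
    """Pack major/medium/minor classification codes (each 0-99 for the
    two lower fields in this sketch) into a single minor-item ID."""
    return major * 10000 + medium * 100 + minor

def split_item_id(item_id):
    """Recover (major, medium, minor) from an aggregated minor-item ID."""
    return item_id // 10000, (item_id // 100) % 100, item_id % 100
```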
  • the behavior-based scoring processing unit 44 scores each minor item based on a predetermined calculation formula and evaluates the total value. For example, a calculation formula can be constructed that gives 50 points out of 100 to a driver 24 who drives averagely, deducts points if there is a risk, and adds points if the risk-free state continues.
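A minimal sketch of such a calculation formula is shown below. Only the 50/100 starting point comes from the text above; the penalty, bonus, and clamping bounds are arbitrary assumptions for illustration.

```python
def behavior_score(risk_flags, base=50, penalty=5, bonus=1, lo=0, hi=100):
    """Score a sequence of driving intervals: start an average driver at
    base points, deduct `penalty` for each interval with a risk event,
    and add `bonus` for each interval the risk-free state continues.
    The score is clamped to the [lo, hi] range."""
    score = base
    for risky in risk_flags:
        score += -penalty if risky else bonus
        score = max(lo, min(hi, score))
    return score
```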
  • FIG. 5 is a table diagram in which the items processed by the location-specific scoring processing unit 46 are classified. As shown in FIG. 5, items for each location are classified into vehicle location conditions and vehicle running conditions, and a scene is set by combining each.
  • Vehicle point conditions include, for example, the presence or absence of facilities such as traffic lights, the shape of roads, and specific locations.
  • Vehicle driving conditions include, for example, speed, acceleration/deceleration, location, and weather.
  • the location-based scoring processing unit 46 scores each scene based on a predetermined calculation formula and evaluates the total value. For example, a calculation formula can be constructed that gives 50 points out of 100 to a driver 24 who drives averagely in that place, deducts points if there is a risk, and adds points if the risk-free state continues.
  • the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46 are each connected to a priority determination unit 48.
  • the priority determination unit 48 extracts the risks to be notified to the driver 24 based on the scoring results of the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46, and sets a priority for each extracted risk. All risks are premised to be notified to the driver 24, but if notification is restricted, the risks need to be notified in descending order of priority.
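Ordering under a notification limit might look like the following sketch. The representation is an assumption: each extracted risk carries a score delta, and a larger deduction (more negative delta) is treated as a higher-priority risk.

```python
def notify_order(scored_risks, limit=None):
    """Return risk names in descending priority, where priority is taken
    from the size of the score deduction (most negative delta first).
    If `limit` is given, only that many risks are notified."""
    ordered = sorted(scored_risks, key=lambda name: scored_risks[name])
    return ordered if limit is None else ordered[:limit]
```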
  • the advice notification unit 50 executes specific risk consulting based on the received risks and notifies the mobile terminal device 24A or the like possessed by the driver 24. The notification to the mobile terminal device 24A may be made via a web application or a native application. In addition, the driver 24 may be notified not only through the mobile terminal device 24A but also through the voice notification function of the drive recorder 14 or the vehicle 10, or through the cockpit or display of the vehicle 10.
  • FIG. 6A is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the first embodiment, and FIG. 6B is a flowchart showing a driving assistance control notification routine in the driving assistance system according to the first embodiment.
  • in step 100, it is determined whether or not it is time to analyze the information.
  • Information may be analyzed periodically or irregularly.
  • the predetermined condition is set based on the availability of the hardware (CPU resources) installed in the drive recorder 14, as shown in FIG. 7.
  • in FIG. 7, the horizontal axis is time and the vertical axis is the CPU resource (usage rate), and the usage rate at which the functions lock is set to 95%.
  • the CPU resource fluctuates over time, and the drive recorder operates while maintaining a margin of at least 5% below the lock level.
  • therefore, the time period for executing image processing is set to one in which the CPU usage rate is equal to or less than a predetermined threshold value (90% in FIG. 7). That is, the image processing does not have to be executed in real time; it is sufficient that the image processing is completed by the time the driver 24 is notified and appropriate advice is generated.
  • the deadline for notification may be the time of getting off the vehicle after the current drive, or the time of getting off the vehicle after the next drive.
  • getting off may be detected by the drive recorder 14, or may be recognized using the short-range communication function between the mobile terminal device 24A possessed by the driver 24 and the drive recorder 14.
  • the threshold is not limited to 90%, and may be determined as long as it does not interfere with the original function of the drive recorder 14.
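The deferral policy described above can be sketched as follows: image analysis runs only in time slots where the CPU usage rate is at or below the threshold, and unprocessed frames stay queued until a free slot arrives. The 90% threshold follows FIG. 7; the function names and the one-frame-per-slot simplification are assumptions.

```python
def should_run_image_analysis(cpu_usage_pct, threshold_pct=90.0):
    """Image processing need not run in real time: allow it only when
    CPU usage is at or below the threshold, keeping a margin below the
    95% level at which the recorder's own functions would lock."""
    return cpu_usage_pct <= threshold_pct

def drain_pending(frames, cpu_samples, threshold_pct=90.0):
    """Process queued frames only in the time slots where the CPU is
    available; return (processed, still_pending)."""
    processed = []
    pending = list(frames)
    for usage in cpu_samples:
        if pending and should_run_image_analysis(usage, threshold_pct):
            processed.append(pending.pop(0))
    return processed, pending
```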
  • if a negative determination is made in step 100 of FIG. 6A, the process proceeds to step 126 to determine whether or not to continue the analysis; if the determination in step 126 is affirmative, the process returns to step 100.
  • if an affirmative determination is made in step 100, the process moves to step 102 to acquire various sensor information, then moves to step 104 to analyze the travel point from the GPS information, and proceeds to step 106.
  • in step 106, the vehicle behavior is analyzed from the GPS information, the G information, and the Gyro information, and the process proceeds to step 108.
  • in step 108, the risk event is determined in the first stage, that is, using the essential information, and the process proceeds to step 110 to determine whether or not the input information to the risk event judgment formula is sufficient.
  • if a negative determination is made in step 110, that is, if the input information to the risk event judgment formula is insufficient, the process proceeds to step 112 to request image information of the outside and inside of the vehicle.
  • in step 114, the image information is acquired from the drive recorder main control unit 30, and the process proceeds to step 116.
  • in step 116, the second-stage determination, that is, the risk event determination using the optional information in addition to the essential information, is performed, and the process proceeds to step 118. If the determination in step 110 is affirmative, the second-stage risk determination considering the arbitrary information is deemed unnecessary, and the process proceeds to step 118.
  • in step 118, behavior-based scoring processing is executed for each determined risk event (see FIG. 4); then the process proceeds to step 120, where location-based scoring processing is executed for each determined risk event (see FIG. 5), and the process proceeds to step 122.
  • although steps 118 and 120 are processed in this order here, the order may be step 120 followed by step 118, or steps 118 and 120 may be processed simultaneously (parallel processing).
  • in step 122, the priority of risk event notification is determined based on the scoring results, and the process proceeds to step 124.
  • in step 124, result storage processing is executed, and the process proceeds to step 126.
  • in step 126, it is determined whether or not to continue the analysis; if the determination is affirmative, the process returns to step 100 as described above, and if negative, this routine ends.
  • in step 150 of FIG. 6B, it is determined whether or not it is time to notify the risk event; if the determination in step 150 is negative, this routine ends, and if affirmative, the process proceeds to step 152.
  • for example, the notification is made when the ignition key is turned off and, as shown in FIG. 1, when the driver 24 gets off the vehicle 10.
  • the notification timing is not limited to when the driver gets off the vehicle 10, and may be, for example, when the ignition key is next turned on.
  • Alternatively, the notification may be made when the driver 24 temporarily goes out of a predetermined range and then returns to it; other notification timings may also be used.
  • the notification may be sent via a web application or a native application at any timing.
  • step 152 the risk event stored at step 124 in FIG. 6A is notified.
  • step 156 archive processing of the notified risk event is executed, and this routine ends. It should be noted that it is not necessary to keep all the risk events as a history; the events to be kept may be selected.
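The notification routine (steps 150 through 156) can be sketched as below. Here `is_notification_time` abstracts the timing variants listed above (ignition off, next ignition on, leaving and re-entering a predetermined range); all names are illustrative assumptions:

```python
# Hypothetical sketch of the notification routine (steps 150-156).

def notify_routine(stored_events, is_notification_time, send, archive):
    """stored_events: risk events saved by the result storage processing.
    send/archive: callbacks standing in for the notification to the driver
    and the (possibly selective) history archiving described in the text."""
    if not is_notification_time():   # negative at step 150: end routine
        return False
    for event in stored_events:      # step 152: notify the stored events
        send(event)
    archive(stored_events)           # step 156: archive the notified events
    return True
```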
  • a vehicle operation mode recognition unit 36E that recognizes the vehicle operation mode based on the CAN data of the vehicle 10 may be added. Thereby, the behavior of the host vehicle (vehicle 10) can be recognized in more detail.
  • the behavior of the own vehicle (vehicle 10) and the point information are derived; for example, a left turn at an intersection with a traffic signal.
  • Analyze the image information from the camera group 26 (outside the vehicle) and estimate the surrounding situation. For example, five seconds before turning left at a green light, a bicycle was on the left side of the own vehicle (vehicle 10).
  • Analyze the image information from the camera group 26 (inside the vehicle) and estimate the state of the driver 24 . For example, safety confirmation is not performed within 5 seconds before turning left.
  • An example that does not require image analysis is the event of excessive speed while going straight on a main road.
  • the location is a main road
  • the behavior of the own vehicle (vehicle 10) is to go straight
  • the speed of the own vehicle (vehicle 10) is compared with the upper speed limit of the main road to detect whether the vehicle is exceeding the speed limit; in this case, image analysis becomes unnecessary.
  • In Example 2 of the first embodiment, since the timestamp for image analysis is specified, the CPU resources required for image analysis can be reduced.
  • the scene with the lowest score is selected, and the priority of the risk events for which advice should be provided preferentially is calculated.
  • The problematic driving scenes are identified among the many risky driving scenes and notified with priority, so that the problematic driving scenes can be quickly corrected.
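A minimal sketch of this priority calculation, assuming lower scores mark more problematic scenes (the scoring semantics and values are assumptions; the patent states only that the lowest-scored scene is selected for priority advice):

```python
# Illustrative priority ordering: the scene with the lowest score is
# treated as the most problematic and is advised first.

def prioritize(scene_scores):
    """scene_scores: dict mapping a driving scene to its total score.
    Returns the scenes ordered from lowest score (advise first) to highest."""
    return sorted(scene_scores, key=scene_scores.get)

order = prioritize({"left turn at intersection": 40,
                    "straight on main road": 85,
                    "lane change": 60})
# The left turn scene, having the lowest score, is notified first.
```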
  • FIG. 9 A second embodiment of the present disclosure will be described below with reference to FIGS. 9 and 10.
  • FIG. 9 The same reference numerals are given to the same components as those of the first embodiment, and the description of the configuration is omitted.
  • A feature of the second embodiment is that, when the determination result of the risk event determination unit 40 deviates from the presumed determination or does not correspond to any of the preset risk event items, that is, when a new risk is identified, a function is added to edit (add, delete, change, etc.) items such as the determination thresholds and the risk event determination items.
  • FIG. 9 is a functional block diagram of a driving assistance device 14A that mainly classifies driving assistance control by function in the drive recorder 14 according to the second embodiment of the present disclosure.
  • a user interface (UI) 52 is connected to the driving support device 14A.
  • the UI 52 is, for example, a device that is installed in a management department that remotely supports the driving assistance device 14A and that can be operated by an operator; it may also be operated by the driver 24.
  • the risk event determination unit 40 sends information about the determination result to this UI 52.
  • the UI 52 analyzes the determination result and determines whether the threshold value and the determination level need to be adjusted.
  • the threshold adjusting unit 54 is connected to the situation recognizing unit 36 and adjusts the threshold for comparison with the sensor information executed by the situation recognizing unit 36 .
  • When the operator determines that the determination level needs to be adjusted, the operator operates the UI 52 to send determination level adjustment information to the determination level adjustment unit 56.
  • the determination level adjustment unit 56 is connected to the risk event determination unit 40 and adjusts the determination criteria and the like for risk event determination performed by the risk event determination unit 40 .
  • FIG. 10 is a flowchart showing a driving support control analysis routine in the driving support system according to the second embodiment.
  • the same step numbers are given to the same steps as those in the first embodiment (see FIG. 6A), and the description of the processing is omitted.
  • step 128 After it is determined in step 110 that the input information to the risk event determination formula is sufficient, or after the second stage of risk event determination is executed in step 116, in step 128 the determination result is sent to the UI 52.
  • step 130 the UI 52 side determines whether adjustment of the judgment result (threshold adjustment, judgment level adjustment) is necessary. If it is judged at step 130 that adjustment is necessary, the process moves to step 132 to execute the judgment level adjustment process and the threshold value adjustment process, returns to step 108, and repeats the above steps.
  • If it is determined at step 130 that adjustment of the determination result is unnecessary, the process proceeds to step 118.
  • According to the second embodiment, for example, when general traffic manners change due to changes in laws and regulations or social conditions, editing the risk event determination thresholds and adjusting the determination levels make it possible to continuously provide high-level advice without newly developing the system in detail.
  • the output of the risk event determination unit 40 is presented to the operator via the UI 52.
  • the threshold value and determination level are adjusted according to the following cases.
  • When the determination target of the risk event matches (a combination of the location, the behavior of the own vehicle (vehicle 10), etc.), the determination threshold is adjusted (for example, the following-distance and relative-speed thresholds).
  • The above response may be performed manually by the driver 24, processed according to a predetermined program, or judged by AI through machine learning based on a vast amount of past history information (so-called big data).
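As a rough sketch of this adjustment loop (steps 128 through 132), the operator-facing logic might look like the following. The threshold names come from the example in the text (following distance, relative speed), while the adjustment policy, values, and class layout are assumptions:

```python
# Hypothetical sketch of UI-driven threshold adjustment (second embodiment).

class ThresholdAdjuster:
    """Stands in for the threshold adjustment unit 54; the determination
    level adjustment unit 56 could be modeled in the same way."""

    def __init__(self):
        # Example thresholds named in the text (values are illustrative).
        self.thresholds = {"following_distance_m": 10.0,
                           "relative_speed_kmh": 20.0}

    def review(self, determination, expected):
        """Step 130: compare the sent determination result with the
        operator's expectation; step 132: adjust the matching threshold
        when they disagree, after which analysis repeats from step 108."""
        needs_adjustment = determination["result"] != expected["result"]
        if needs_adjustment and determination["item"] in self.thresholds:
            self.thresholds[determination["item"]] = expected["threshold"]
        return needs_adjustment
```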
  • FIG. 11 A third embodiment of the present disclosure will be described below with reference to FIGS. 11 and 12.
  • FIG. 11 the same components as those in the first embodiment and the second embodiment are denoted by the same reference numerals, and the description of the configuration is omitted.
  • A feature of the third embodiment is that, when the determination result and determination accuracy of the risk event determination unit 40 deviate from the determination assumed in advance or do not correspond to any of the preset risk event items, a function is added to edit (add, delete, change, etc.) items such as the determination thresholds and the risk event determination items.
  • FIG. 11 is a functional block diagram of a driving assistance device 14A that mainly classifies driving assistance control by function in the drive recorder 14 according to the third embodiment of the present disclosure.
  • a user interface (UI) 52 is connected to the driving support device 14A.
  • the UI 52 is, for example, a device that is installed in a management department that remotely supports the driving assistance device 14A and that can be operated by an operator; it may also be operated by the driver 24.
  • the risk event determination unit 40 sends information regarding determination results and determination accuracy to this UI 52 .
  • the UI 52 analyzes the determination result and determination accuracy, and determines whether or not threshold adjustment is necessary, and whether or not the determination level needs to be adjusted.
  • the threshold adjusting unit 54 is connected to the situation recognizing unit 36 and adjusts the threshold for comparison with the sensor information executed by the situation recognizing unit 36 .
  • When the operator determines that the determination level needs to be adjusted, the operator operates the UI 52 to send determination level adjustment information to the determination level adjustment unit 56.
  • the determination level adjustment unit 56 is connected to the risk event determination unit 40 and adjusts the determination criteria and the like for risk event determination performed by the risk event determination unit 40 .
  • FIG. 12 is a flowchart showing a driving support control analysis routine in the driving support system according to the third embodiment.
  • the same step numbers are given to the same steps as those in the first embodiment (see FIG. 6A), and the description of the processing is omitted.
  • step 128 After it is determined in step 110 that the input information to the risk event determination formula is sufficient, or after the second stage of risk event determination is executed in step 116, in step 128 the determination result and the determination accuracy are sent to the UI 52.
  • step 130 the UI 52 side determines whether or not adjustment of the judgment result and judgment accuracy (threshold adjustment, judgment level adjustment) is necessary. If it is judged at step 130 that adjustment is necessary, the process moves to step 132 to execute the judgment level adjustment process and the threshold value adjustment process, returns to step 108, and repeats the above steps.
  • If it is determined at step 130 that adjustment of the determination result and determination accuracy is unnecessary, the process proceeds to step 118.
  • According to the third embodiment, for example, when general traffic manners change due to changes in laws and regulations or social conditions, editing the risk event determination thresholds and adjusting the determination levels make it possible to continuously provide high-level advice without newly developing the system in detail.
  • the determination accuracy is added to the output of the risk event determination unit 40 and presented to the operator via the UI 52.
  • When the determination accuracy is low, advice will not be provided, or its priority in the order of provision will be lowered.
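This accuracy-based gating can be sketched as follows. The 0.5 cut-off is purely an assumed value; the patent states only that low-accuracy determinations are withheld or demoted:

```python
# Illustrative gating of advice by determination accuracy (third embodiment).

def gate_advice(events, min_accuracy=0.5):
    """events: list of (risk_event_name, accuracy) pairs, accuracy in [0, 1].
    Low-accuracy determinations are suppressed entirely, and the remainder
    are ordered so the most reliable determinations are advised first."""
    kept = [e for e in events if e[1] >= min_accuracy]
    return sorted(kept, key=lambda e: e[1], reverse=True)
```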
  • FIG. 13 and 14 The fourth embodiment of the present disclosure will be described below with reference to FIGS. 13 and 14.
  • FIG. 13 The same reference numerals are given to the same components as those of the first embodiment, and the description of the configuration is omitted.
  • In the first embodiment, the risk event determination unit 40 is provided with the image request unit 42, and when it is determined in the first-stage determination based on the essential information that image information is necessary, image information (optional information) is requested from the drive recorder main control unit 30 and taken into consideration in the second-stage determination.
  • In the fourth embodiment, by contrast, a risk event is determined using sensor information and image information from the beginning, without distinguishing between essential information and optional information.
  • FIG. 13 is a functional block diagram of a driving assistance device 14A that mainly classifies driving assistance control by function in the drive recorder 14 according to the fourth embodiment of the present disclosure.
  • the situation recognition unit 36 sends information about the travel point of the vehicle 10 analyzed by the travel point analysis unit 36A and the behavior of the vehicle 10 analyzed by the behavior analysis unit 36B to the risk event processing control unit 38.
  • The situation recognition unit 36 acquires information from the camera group 26, sends the surrounding environment of the vehicle 10 from the surrounding environment recognition unit 36C (see FIG. 2) to the risk event determination unit 40, and sends the driver behavior of the driver 24 from the driver situation recognition unit 36D (see FIG. 2) to the risk event determination unit 40.
  • the risk event determination unit 40 determines the presence or absence of conceivable items of risk based on the travel point of the vehicle 10, the behavior of the vehicle 10, the surrounding environment of the vehicle 10, and the driving posture of the driver 24.
  • FIG. 14 is a flowchart showing a driving support control analysis routine in the driving support system according to the fourth embodiment.
  • the same step numbers are given to the same steps as those in the first embodiment (see FIG. 6A), and the description of the processing is omitted.
  • step 106 vehicle behavior is analyzed from GPS information, G information and Gyro information, and the process proceeds to step 114.
  • step 114 image information is acquired from the drive recorder main control unit 30, and the process proceeds to step 116.
  • step 116 a risk event is determined based on the travel point of the vehicle 10, the behavior of the vehicle 10, the surrounding environment of the vehicle 10, and the driving posture of the driver 24, and the process proceeds to step 118.
  • According to the fourth embodiment, it is possible to determine risk events according to the various scenes acquired while the vehicle 10 is running and to provide accurate advice for eliminating those risk events.
  • FIG. 15 and 16 The fifth embodiment of the present disclosure will be described below with reference to FIGS. 15 and 16.
  • FIG. 15 and 16 The same reference numerals are given to the same components as those of the first embodiment, and the description of the configuration is omitted.
  • In the first embodiment, the risk event determination unit 40 includes the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46; the result of determining the presence or absence of risk for each item is sent to these units, and the behavior-based scoring processing unit 44 scores each sub-item based on a predetermined calculation formula and evaluates the total value.
  • the location-specific scoring processing unit 46 scores each scene based on a predetermined calculation formula, and evaluates the score based on the total value.
  • In the fifth embodiment, on the other hand, the results determined by the risk event determination unit 40 are not distinguished or prioritized, and risk events are determined based on so-called raw data.
  • FIG. 15 is a functional block diagram of a driving assistance device 14A in which driving assistance control is mainly classified by function in the drive recorder 14 according to the fifth embodiment of the present disclosure.
  • the risk event determination unit 40 determines whether or not there is a risk in two stages.
  • the presence or absence of a conceivable item of risk is determined based on the essential information (the travel location of the vehicle 10 and the behavior of the vehicle 10).
  • the drive recorder main control section 30 is requested to send the information from the camera group 26 to the situation recognition section 36 via the image request section 42 .
  • By acquiring information from the camera group 26, the situation recognition unit 36 sends the surrounding environment of the vehicle 10 from the surrounding environment recognition unit 36C (see FIG. 2) to the risk event determination unit 40 as optional information, and the driver situation recognition unit 36D (see FIG. 2) sends the driver behavior of the driver 24 (inattentive glance, flickering glance, careless state, smartphone operation, posture collapse, unconfirmed safety, etc.) to the risk event determination unit 40.
  • The risk event determination unit 40 determines the presence or absence of risk for conceivable items based on the essential information (the travel location of the vehicle 10 and the behavior of the vehicle 10) and the optional information (the surrounding environment of the vehicle 10 and the driving posture of the driver 24).
  • the presence or absence of risk is determined based on sensor information, which has a relatively small amount of information and does not take much time to analyze, rather than using image information, which has a relatively large amount of information and takes a long time to analyze, from the beginning. Therefore, the function of the driving support device 14A can be realized without impairing the original function of the drive recorder 14.
  • the risk event determined by the risk event determination unit 40 is sent to the advice notification unit 50.
  • the advice notification unit 50 executes specific risk consulting based on the received risks, and notifies the mobile terminal device 24A or the like possessed by the driver 24.
  • FIG. 16 is a flow chart showing a driving support control analysis routine in the driving support system according to the fifth embodiment.
  • the same step numbers are given to the same steps as those in the first embodiment (see FIG. 6A), and the description of the processing is omitted.
  • step 110 If a negative determination is made in step 110, that is, if the input information to the risk event determination formula is insufficient, the process proceeds to step 112 to request image information from outside and inside the vehicle.
  • step 114 image information is acquired from the drive recorder main control unit 30, and the process proceeds to step 116.
  • step 116 the second stage, that is, risk event determination taking the optional information into account in addition to the essential information, is performed, and the process proceeds to step 124. If the determination in step 110 is affirmative, it is determined that the second stage of risk determination considering the optional information is unnecessary, and the process proceeds directly to step 124.
  • step 124 the result storage process is executed, and the process moves to step 126.
  • step 126 it is determined whether or not to continue the analysis, and if the determination is affirmative, the process returns to step 100 as described above, and if the determination is negative, this routine ends.
  • According to the fifth embodiment, it is possible to determine risk events according to the various scenes acquired while the vehicle 10 is running and to provide accurate advice for eliminating those risk events.
  • FIG. 17 A sixth embodiment of the present disclosure will be described below with reference to FIGS. 17 and 18. The same reference numerals are given to the same components as those of the first embodiment, and the description of the configuration is omitted.
  • In the first embodiment, the risk event determination unit 40 is provided with the image request unit 42, and when it is determined in the first-stage determination based on the essential information that image information is necessary, image information (optional information) is requested from the drive recorder main control unit 30 and taken into consideration in the second-stage determination.
  • In the sixth embodiment, by contrast, a risk event is determined using sensor information and image information from the beginning, without distinguishing between essential information and optional information.
  • In the first embodiment, the risk event determination unit 40 includes the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46; the behavior-based scoring processing unit 44 scores each sub-item based on a predetermined calculation formula and evaluates the total value, and the location-based scoring processing unit 46 scores each scene based on a predetermined calculation formula and evaluates the total value.
  • In the sixth embodiment, on the other hand, the results determined by the risk event determination unit 40 are not distinguished or prioritized, and risk events are determined based on so-called raw data.
  • FIG. 17 is a functional block diagram of a driving assistance device 14A in which driving assistance control is mainly classified by function in the drive recorder 14 according to the sixth embodiment of the present disclosure.
  • the situation recognition unit 36 sends information about the travel point of the vehicle 10 analyzed by the travel point analysis unit 36A and the behavior of the vehicle 10 analyzed by the behavior analysis unit 36B to the risk event processing control unit 38.
  • The situation recognition unit 36 acquires information from the camera group 26, sends the surrounding environment of the vehicle 10 from the surrounding environment recognition unit 36C (see FIG. 2) to the risk event determination unit 40, and sends the driver behavior of the driver 24 from the driver situation recognition unit 36D (see FIG. 2) to the risk event determination unit 40.
  • the risk event determination unit 40 determines the presence or absence of conceivable items of risk based on the travel point of the vehicle 10, the behavior of the vehicle 10, the surrounding environment of the vehicle 10, and the driving posture of the driver 24.
  • the risk event determined by the risk event determination unit 40 is sent to the advice notification unit 50.
  • the advice notification unit 50 executes specific risk consulting based on the received risks, and notifies the mobile terminal device 24A or the like possessed by the driver 24.
  • FIG. 18 is a flow chart showing a driving support control analysis routine in the driving support system according to the sixth embodiment.
  • the same step numbers are given to the same steps as those in the first embodiment (see FIG. 6A), and the description of the processing is omitted.
  • step 106 vehicle behavior is analyzed from GPS information, G information and Gyro information, and the process proceeds to step 114.
  • step 114 image information is acquired from the drive recorder main control unit 30, and the process proceeds to step 116.
  • step 116 a risk event is determined based on the travel point of the vehicle 10, the behavior of the vehicle 10, the surrounding environment of the vehicle 10, and the driving posture of the driver 24, and the process proceeds to step 124.
  • step 124 the result storage process is executed, and the process moves to step 126.
  • step 126 it is determined whether or not to continue the analysis, and if the determination is affirmative, the process returns to step 100 as described above, and if the determination is negative, this routine ends.
  • According to the sixth embodiment, it is possible to determine risk events according to the various scenes acquired while the vehicle 10 is running and to provide accurate advice for eliminating those risk events.
  • a risk event can be determined from the beginning using sensor information and image information without distinguishing between essential information and optional information.
  • risk events are determined based on so-called raw data without distinguishing or setting priorities for the results determined by the risk event determination unit 40.
  • The risk event corresponding to the driving scene of the vehicle 10 can be accurately determined with necessary and sufficient information without performing complicated control, which simplifies the device configuration; this can be said to be a useful configuration as an accompanying low-priced version.
  • the seventh embodiment of the present disclosure will be described below with reference to FIG.
  • the same reference numerals are given to the same components as those of the first embodiment, and the description of the configuration is omitted.
  • In the seventh embodiment, a gyro sensor 16 (indicated as "Gyro" in FIG. 19), an acceleration sensor (G sensor) 18 (indicated as "G" in FIG. 19), and a GPS receiver 20 (indicated as "GPS" in FIG. 19) are connected to the vehicle control device 12.
  • information necessary for the travel point analysis unit 36A and the behavior analysis unit 36B is obtained from the vehicle control device 12.
  • obstacle information in front of the vehicle 10 may also be acquired from the vehicle control device 12 as information from the radar group 22 .
  • The amount of information is smaller than that of the image information from the camera group 26, and obstacles that are difficult to recognize can be reliably recognized.
  • When any of the existing sensors (the gyro sensor 16, the acceleration sensor 18, and the GPS receiver 20) is present in the vehicle 10 and the drive recorder 14 is provided with this function, the necessary sensor information can be acquired from the vehicle control device 12, so the configuration can be simplified.
  • In each of the above embodiments, the drive recorder 14 is configured to execute all controls (information acquisition, situation recognition, risk determination, etc.); however, the gyro sensor 16, the acceleration sensor 18, the GPS receiver 20, the camera group 26, and the drive recorder main control unit 30 may be configured as in-vehicle hardware (drive recorder 14), the output signals from that hardware may be transmitted to the cloud, and processing corresponding to the risk event determination unit 40, the priority determination unit 48, the advice notification unit 50, and the like may be executed on the cloud. In other words, there are no restrictions on the division between the hardware mounted on the vehicle 10 and the functions processed on the cloud.
  • The program may be stored in a non-transitory tangible recording medium, and a method corresponding to the program is realized by executing the program.
  • The programs may be provided in any form, stored in non-transitory storage media such as a CD-ROM (Compact Disc Read Only Memory), a DVD-ROM (Digital Versatile Disc Read Only Memory), a USB (Universal Serial Bus) memory, or a semiconductor memory. Further, each of the above programs may be downloaded from an external device via a network.

Abstract

A driving assistance device (14) has: a first condition recognition unit (36) that acquires, as an essential information source, position information of a vehicle and behavior information of the vehicle and recognizes an essential condition for the vehicle on the basis of the essential information source; a second condition recognition unit (36, 40, 42) that acquires an arbitrarily defined information source on the basis of image information relating to the surroundings of the vehicle and/or inside a vehicle cabin and recognizes a driving condition of the vehicle on the basis of the essential information source and the arbitrarily defined information source; a determination unit (40) that analyzes a recognition result by the first condition recognition unit or a recognition result by the second condition recognition unit and determines the presence/absence of a risk event which can occur during traveling of the vehicle; and a notification unit (50) that, when the determination result by the determination unit has indicated that the risk event is present, notifies at least a driver who is driving the vehicle of advice for eliminating the risk event.

Description

Driving assistance device, driving assistance method, drive recorder, and driving assistance control program
 The present disclosure relates to a driving assistance device, a driving assistance method, a drive recorder, and a driving assistance control program that provide advice on driving operations to the driver.
Cross-reference to related applications
 This application is based on and claims the benefit of priority from Japanese Patent Application No. 2021-046575 filed on March 19, 2021, the entire contents of which are incorporated herein by reference.
 In recent years, when a vehicle is driven by a driver's operation, it has been put into practical use to use acceleration and speed information to send the driver advice for correcting behavior, such as driving operations, that is problematic with respect to predetermined safe driving operations, for example sudden acceleration and deceleration caused by abrupt accelerator and brake operation, and exceeding the legal speed limit.
 With the above advice, however, the driving scenes that a driver is poor at (for example, being late to notice a pedestrian when turning left and thus tending to brake suddenly) and the characteristic driving operations of individual drivers cannot be identified, so appropriate and specific corrective advice may not be provided for such difficult driving scenes.
 Here, Japanese Unexamined Patent Application Publication No. 2013-191230 describes a vehicle safe driving promotion system that can improve the driver's awareness of safe driving by giving the driver mental encouragement.
 More specifically, the in-vehicle device of JP 2013-191230 A determines the driver's driving situation from the safe driving threshold and economical driving threshold stored in the storage means, the driving state information acquired by the driving state acquisition means, and the safe driving evaluation history information and economical driving evaluation history information; when the driver's driving situation is within the determination threshold stored in the storage means, a praising message is notified to the driver via the notification means, and when the driver's driving situation is outside the determination threshold, advice is given to the driver via the notification means.
 This praise (message) can give the driver mental encouragement; as a result, the driver's voluntary practice of safe driving is expected to improve safe driving awareness and reduce traffic accidents.
 However, only general advice is provided, not advice on how to drive in a given scene that takes the driving state of the individual driver into account. In addition, in order to grasp the driving information of each driver in detail, details of the vehicle's surroundings and the driver's behavior may be required. In other words, depending on the scene, the amount of information needed for analysis may be small, or a huge amount of information may be required. In this case, if the maximum amount of information is always prepared so that any scene can be handled, the amount of information increases on demand, and the information processing capability of the individual vehicle (for example, the processing capability of an aftermarket drive recorder) may not be able to cope.
 An object of the present disclosure is to obtain a driving assistance control device, a driving assistance method, a drive recorder, and a driving assistance control program that can provide advice according to the driving operation of each individual driver by acquiring and analyzing necessary and sufficient driving operation information and information obtained by monitoring the surroundings of the traveling vehicle for each of the various scenes encountered while driving.
A driving assistance control device according to the present disclosure includes: a first situation recognition unit that acquires position information of a vehicle and behavior information of the vehicle as essential information sources and recognizes the essential situation of the vehicle on the basis of the essential information sources; a second situation recognition unit that acquires optional information sources based on image information of at least one of the vehicle's surroundings and the vehicle interior, and recognizes the driving situation of the vehicle on the basis of the essential and optional information sources; a determination unit that analyzes the recognition result of the first situation recognition unit, or the recognition result of the second situation recognition unit, and determines whether a risk event that can occur while the vehicle is traveling is present; and a notification unit that, when the determination result of the determination unit indicates that a risk event is present, notifies at least the driver driving the vehicle of advice for resolving the risk event.
A driving assistance method according to one aspect of the present disclosure includes: a first recognition function that acquires position information of a vehicle and behavior information of the vehicle as essential information sources and, as a first stage of situation recognition, recognizes the driving situation on the basis of the essential information sources and outputs the result to a determination unit, in accordance with the recognition items the determination unit uses to determine risk events; and a second recognition function that, when the first-stage output is insufficient for the recognition items used to determine risk events, recognizes the driving situation as a second stage of situation recognition on the basis of the essential information sources and optional information sources and outputs the result to the determination unit. When at least one of the outputs of the first and second recognition functions is analyzed to determine whether a risk event that can occur while the vehicle is traveling is present, a plurality of risk determination items are provided: for risk determination items whose input parameters are satisfied by the recognition items of the first situation recognition unit alone, the determination is made with a determination formula using only those items, while for risk determination items whose input parameters cannot be satisfied without the recognition items of the second situation recognition unit, the recognition items of the second situation recognition unit are additionally used for the determination. When the determination result indicates that a risk event is present, advice for resolving the risk event is notified at least to the driver driving the vehicle.
A drive recorder according to one aspect of the present disclosure includes: a main control unit that is mounted on a vehicle, photographs the surrounding environment including at least the area ahead of the vehicle, records the captured images while overwriting the oldest images within a predetermined capacity, and, when an emergency is detected, saves the images for a predetermined period before and after the emergency; and the driving assistance control device described above.
A driving assistance control program according to one aspect of the present disclosure causes a computer to operate as each unit of the driving assistance control device described above.
According to the present disclosure, advice tailored to each individual driver's driving operations can be provided by acquiring and analyzing, for each of the various scenes encountered while driving, the necessary and sufficient driving-operation information and information obtained by monitoring the vehicle's surroundings.
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a driving assistance system that assists traveling of a vehicle under automated driving according to a first embodiment;
FIG. 2 is a functional block diagram of the driving assistance device in the drive recorder according to the first embodiment, mainly showing the driving assistance control classified by function;
FIG. 3 is a functional block diagram showing the detailed configuration of a situation recognition unit according to the first embodiment;
FIG. 4 is a table classifying the items processed by an action-based scoring processing unit according to the first embodiment;
FIG. 5 is a table classifying the items processed by a location-based scoring processing unit according to the first embodiment;
FIG. 6A is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the first embodiment;
FIG. 6B is a flowchart showing a driving assistance control notification routine in the driving assistance system according to the first embodiment;
FIG. 7 is a time versus CPU-resource characteristic diagram of the drive recorder according to the first embodiment;
FIG. 8 is a functional block diagram showing the detailed configuration of the situation recognition unit according to a modification of the first embodiment (with CAN information added);
FIG. 9 is a functional block diagram of the driving assistance device in a drive recorder according to a second embodiment, mainly showing the driving assistance control classified by function;
FIG. 10 is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the second embodiment;
FIG. 11 is a functional block diagram of the driving assistance device in a drive recorder according to a third embodiment, mainly showing the driving assistance control classified by function;
FIG. 12 is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the third embodiment;
FIG. 13 is a functional block diagram of the driving assistance device in a drive recorder according to a fourth embodiment, mainly showing the driving assistance control classified by function;
FIG. 14 is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the fourth embodiment;
FIG. 15 is a functional block diagram of the driving assistance device in a drive recorder according to a fifth embodiment, mainly showing the driving assistance control classified by function;
FIG. 16 is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the fifth embodiment;
FIG. 17 is a functional block diagram of the driving assistance device in a drive recorder according to a sixth embodiment, mainly showing the driving assistance control classified by function;
FIG. 18 is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the sixth embodiment; and
FIG. 19 is a schematic diagram of a driving assistance system that assists traveling of a vehicle under automated driving according to a seventh embodiment.
[First Embodiment]
FIG. 1 is a plan view of a vehicle 10 to which the driving assistance system according to the first embodiment is applied.
A vehicle control device 12 and a drive recorder 14 are mounted on the vehicle 10. In addition to its original functions as a drive recorder (monitoring by photographing the exterior and interior of the vehicle, and saving images in an emergency), the drive recorder 14 functions as a driving assistance device 14A according to the first embodiment (shown by a dotted frame in FIG. 1). The driving assistance device 14A includes, for example, a CPU (Central Processing Unit) and a rewritable nonvolatile memory. The nonvolatile memory stores a program implementing the driving assistance control described later, and the CPU reads and executes the program.
The vehicle control device 12 executes control of the vehicle 10 while it is traveling, including the drive system (engine control and the like) and the electrical system.
A gyro sensor 16 (denoted "Gyro" in FIG. 1), an acceleration sensor (G sensor) 18 (denoted "G" in FIG. 1), and a GPS receiver 20 (denoted "GPS" in FIG. 1) are connected to the drive recorder 14.
The gyro sensor 16 detects azimuth information in the traveling direction of the vehicle 10, the acceleration sensor 18 detects the acceleration and deceleration of the vehicle 10, and the GPS receiver 20 detects the position information of the vehicle 10.
A radar group 22 including a plurality of millimeter-wave radars and a LIDAR is also connected to the vehicle control device 12.
The radar group 22 detects obstacles and the like ahead of the vehicle 10.
The vehicle control device 12 uses the received information to execute control including the drive system and the electrical system, or to notify the driver 24, while driving, of position information, the travel route to the destination, and the like. Note that FIG. 1 depicts the driver 24 outside the vehicle.
The drive recorder 14 includes a group of cameras that photograph the surroundings of the vehicle 10 (FIG. 1 shows a front monitoring camera 26A and a rear monitoring camera 26B as an example). The drive recorder 14 of the first embodiment also includes a vehicle-interior monitoring camera 26C that photographs the interior of the vehicle 10 (hereinafter, the front monitoring camera 26A, the rear monitoring camera 26B, and the vehicle-interior monitoring camera 26C are collectively referred to as the "camera group 26"). The camera group 26 may further include a right monitoring camera 26D that photographs the right side of the vehicle 10 and a left monitoring camera 26E that photographs the left side.
The drive recorder 14 is equipped with the driving assistance device 14A according to the first embodiment. In FIG. 1 the driving assistance device 14A is drawn with a dotted line inside the drive recorder 14; because it shares devices and the like with the drive recorder's original functions, from FIG. 2 onward the notation drive recorder 14 (14A) is used, distinguishing descriptions of the original functions of the drive recorder 14 from descriptions of the functions of the driving assistance device 14A.
FIG. 2 is a functional block diagram of the driving assistance device 14A in the drive recorder 14, mainly showing the driving assistance control classified by function. The blocks do not limit the hardware configuration of the driving assistance device 14A; if necessary, some or all of the blocks may be implemented as a driving assistance program executed on a microcomputer.
As shown in FIG. 2, the drive recorder 14 includes a drive recorder main control unit 30. The camera group 26 (the front monitoring camera 26A, the rear monitoring camera 26B, and the vehicle-interior monitoring camera 26C) is connected to the drive recorder main control unit 30, and the camera group captures images continuously, at least while the vehicle is being driven.
(Original functions of the drive recorder 14)
As an original function of the drive recorder 14, the drive recorder main control unit 30 captures images with the camera group 26 and records them while overwriting the oldest images within a predetermined capacity (for example, about one hour's worth). In other words, the camera group 26 captures images continuously, and recording proceeds over a sliding window of the predetermined period.
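The overwrite-from-oldest recording behavior described above amounts to a ring buffer over fixed-length clips. The sketch below is only an illustration of that behavior; the clip granularity and the capacity of 60 one-minute clips are assumptions for the example, not values from the disclosure.

```python
from collections import deque

class RingRecorder:
    """Keeps only the most recent clips within a fixed capacity,
    overwriting from the oldest (the normal drive-recorder mode)."""

    def __init__(self, capacity_clips):
        # A bounded deque discards the oldest entry automatically.
        self.buffer = deque(maxlen=capacity_clips)

    def record(self, clip):
        self.buffer.append(clip)

    def snapshot(self):
        """Return the currently retained clips, oldest first."""
        return list(self.buffer)

# One hour of 1-minute clips -> capacity 60 (illustrative).
rec = RingRecorder(capacity_clips=60)
for minute in range(90):              # record 90 minutes
    rec.record(f"clip_{minute:03d}")

kept = rec.snapshot()                 # only the newest 60 clips remain
```

When an emergency is detected, a snapshot like `kept` would be copied out to protected storage instead of being overwritten.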
When an emergency (for example, sudden deceleration such as hard braking) is detected (or a signal is received from the vehicle control device 12), the images for a predetermined period before and after the emergency (for example, 15 minutes each) are saved. The through image (the image being recorded) can be viewed by the driver 24 on a monitor unit 32.
The images before and after the emergency can be used, for example, to analyze the factors that caused it. The monitor unit 32 can also be used to play back recorded images.
(Driving assistance functions)
Here, the drive recorder 14 of the first embodiment also serves as the driving assistance device 14A. Based on information from the gyro sensor 16, the acceleration sensor 18, the GPS receiver 20, and the camera group 26, the driving assistance device 14A analyzes the driving situation of the driver 24, determines whether dangerous behavior unsuitable for driving (hereinafter referred to as a "risk event") has occurred, and, toward the next drive, notifies advice that mitigates the risk event. Information from the radar group 22 connected to the vehicle control device 12 may also be acquired.
The driving assistance device 14A includes a sensor information acquisition unit 34, which acquires Gyro information (azimuth), G information (acceleration/deceleration), GPS information (position), and radar-group analysis information (obstacles).
The sensor information acquisition unit 34 is connected to a situation recognition unit 36 and sends the acquired Gyro information, G information, GPS information, and obstacle detection information to the situation recognition unit 36.
The situation recognition unit 36 is connected to the drive recorder main control unit 30 and acquires the image information, captured by the camera group 26, that is used by the drive recorder main control unit 30.
As a result, the situation recognition unit 36 can aggregate the travel history and behavior of the vehicle 10 while the driver 24 is driving, together with the conditions outside and inside the vehicle (image information), and can analyze and recognize the situation during driving.
(Detailed configuration of the situation recognition unit 36)
FIG. 3 is a functional block diagram showing the detailed configuration of the situation recognition unit 36.
The situation recognition unit 36 includes a travel point analysis unit 36A, a behavior analysis unit 36B, a surrounding environment recognition unit 36C, and a driver situation recognition unit 36D.
The travel point analysis unit 36A acquires GPS information from the sensor information acquisition unit 34 and analyzes where the vehicle 10 is traveling, for example, at an intersection, in a parking lot or indoors, or on a narrow road.
The behavior analysis unit 36B acquires G information (acceleration) and Gyro information (azimuth) from the sensor information acquisition unit 34 and analyzes the behavior of the vehicle 10, for example, right/left turns, starting, stopping, vehicle speed, and steering angle.
The surrounding environment recognition unit 36C acquires vehicle-exterior image information from the drive recorder main control unit 30 and, as needed, obstacle information from the sensor information acquisition unit 34, and analyzes the surrounding environment of the vehicle 10, for example, the positions and movements of people around the vehicle, preceding vehicles, and oncoming vehicles, the white and yellow lines on the roadway, and the inter-vehicle distance.
The driver situation recognition unit 36D acquires vehicle-interior image information from the drive recorder main control unit 30 and analyzes the driving posture of the driver 24 who drives the vehicle 10, for example, gaze, face orientation, degree of eye opening/closing, skeletal pose, and objects held.
The situation recognition unit 36 sends the travel point of the vehicle 10 analyzed by the travel point analysis unit 36A and the behavior of the vehicle 10 analyzed by the behavior analysis unit 36B to a risk event processing control unit 38 (transmission of essential information).
In addition, in response to a request from the risk event processing control unit 38, the situation recognition unit 36 sends the surrounding environment of the vehicle 10 analyzed by the surrounding environment recognition unit 36C and the driving posture of the driver 24 analyzed by the driver situation recognition unit 36D to the risk event processing control unit 38 (transmission of optional information).
As shown in FIG. 2, the risk event processing control unit 38 includes a risk event determination unit 40, in which the information sent from the situation recognition unit 36 is aggregated: the essential information (the travel point and behavior of the vehicle 10) and the optional information (the surrounding environment of the vehicle 10 and the driving posture of the driver 24).
The risk event determination unit 40 has the role of determining whether a risk is present while the vehicle 10 is being driven.
A risk during driving is the degree of danger of a situation that may hinder safe traveling, expressed in terms of action patterns determined by reference to the Road Traffic Act and the like, and location patterns determined by the travel point and travel state of the vehicle 10; in the first embodiment, this degree is quantified (scored).
Here, the risk event determination unit 40 determines the presence or absence of risk in two stages.
The risk event determination unit 40 determines, on the basis of the essential information (the travel point and behavior of the vehicle 10) and the optional information (the surrounding environment of the vehicle 10, the driving posture of the driver 24), the presence or absence of risk for each item that can be assumed. The assumable items can be drawn, for example, from pre-registered risks that occurred in the past; in relation to the processing capability, assuming 100 to 200 risk items is sufficient, though the number of items is not limited. Each determination item is evaluated with a scene search formula and a determination formula.
Scene search formulas include search conditions based on location, such as intersections without traffic lights, intersections with traffic lights, and parking lots; search conditions based on behavior, such as turning right, turning left, going straight, and stopping; search conditions based on the surrounding environment, such as a preceding vehicle, a pedestrian on the left road shoulder, or a bicycle stopped at an intersection; and search conditions based on the driver's state, such as gaze, drowsiness, or talking on a smartphone. Each condition may be used alone or in combination. For the scenes matched by the scene search formula, the determination formula then determines whether a risk is present.
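One way to read the "scene search formula" is as a conjunction of per-category conditions over the recognized situation, usable alone or combined. The sketch below illustrates that reading only; the dictionary keys and values are hypothetical names, not identifiers from the disclosure.

```python
def match_scene(situation, **conditions):
    """Return True when every given search condition matches the
    recognized situation; conditions may be used alone or combined."""
    return all(situation.get(key) == value for key, value in conditions.items())

# Hypothetical recognized situation for one moment of driving.
situation = {
    "location": "unsignalized_intersection",
    "behavior": "straight",
    "environment": "preceding_vehicle",
    "driver": "gazing_ahead",
}

# A location-only condition and a combined condition, as in the text.
loc_only = match_scene(situation, location="unsignalized_intersection")
combined = match_scene(situation, location="unsignalized_intersection",
                       behavior="straight", environment="preceding_vehicle")
```

Only the scenes for which such a match succeeds would be passed on to the corresponding determination formula.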
Determination formulas include, for example, one that detects only hard braking (specifically, whether the acceleration exceeds a predetermined value such as 0.35 G) and one that detects an insufficient distance to the preceding vehicle (specifically, inter-vehicle distance < 1.5 × (own-vehicle speed)² / 9.8).
The own-vehicle speed can be calculated using the integral of the G-sensor information (corresponding to the first recognition situation), and the inter-vehicle distance to the preceding vehicle can be calculated from the size of the preceding vehicle recognized in the image captured by the front monitoring camera 26A and the hardware parameters of the camera group 26 (corresponding to the second recognition situation).
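As a rough sketch of those two computations: the speed follows from integrating longitudinal acceleration samples, and the distance from the apparent width of the preceding vehicle via the usual pinhole-camera relation. The sample values, the assumed real vehicle width of 1.8 m, and the focal length in pixels below are illustrative assumptions, not hardware parameters from the disclosure.

```python
def speed_from_accel(samples_mps2, dt, v0=0.0):
    """Integrate longitudinal acceleration (m/s^2) sampled every dt seconds."""
    v = v0
    for a in samples_mps2:
        v += a * dt
    return v

def distance_from_apparent_width(width_px, focal_px, real_width_m=1.8):
    """Pinhole model: distance = focal_length * real_width / apparent_width."""
    return focal_px * real_width_m / width_px

# 10 s of constant 1.0 m/s^2 acceleration from rest -> 10 m/s.
v = speed_from_accel([1.0] * 100, dt=0.1)

# A 1.8 m wide car imaged 90 px wide with a 1000 px focal length -> 20 m.
d = distance_from_apparent_width(width_px=90.0, focal_px=1000.0)
```

In practice the integration would drift and need correction (e.g., from GPS), and the focal length would come from camera calibration; both are simplified away here.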
In the first-stage determination by the risk event determination unit 40, the presence or absence of risk for the assumable items is determined on the basis of the essential information alone (the travel point and behavior of the vehicle 10).
First-stage scene search formulas include search conditions based on location, such as intersections without and with traffic lights, and search conditions based on behavior, such as turning right, turning left, going straight, and stopping; each may be used alone or in combination.
First-stage determination formulas include, for example, one that detects only hard braking (specifically, whether the acceleration exceeds a predetermined value such as 0.35 G) and one that detects failure to stop at a stop location (specifically, passing through an intersection with a stop sign without the vehicle speed ever reaching 0 km/h).
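The two first-stage determination formulas above can be sketched directly. The 0.35 G threshold and the never-reached-0-km/h rule come from the text; the function names and the representation of the speed trace as a list of samples are illustrative assumptions.

```python
G = 9.8  # m/s^2 per 1 G

def hard_braking(decel_mps2, threshold_g=0.35):
    """Risk when deceleration exceeds the predetermined value (0.35 G)."""
    return decel_mps2 > threshold_g * G

def failed_to_stop(speed_trace_kmh):
    """Risk when an intersection with a stop sign is passed without the
    vehicle speed ever reaching 0 km/h."""
    return 0.0 not in speed_trace_kmh

braking_risk = hard_braking(4.0)                       # 4.0 m/s^2 ~ 0.41 G -> risk
stop_risk = failed_to_stop([18.0, 9.0, 4.0, 6.0])      # never stopped -> risk
no_stop_risk = failed_to_stop([18.0, 9.0, 0.0, 6.0])   # stopped -> no risk
```

Both checks need only G-sensor-derived quantities, which is what allows the first stage to run without any image information.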
There is no problem when the presence or absence of risk can be determined from the essential information alone, but in some cases the information is insufficient and the risk determination becomes difficult. In that case, the risk event determination unit 40 requests, via an image request unit 42, that the drive recorder main control unit 30 send the information from the camera group 26 to the situation recognition unit 36.
By acquiring the information from the camera group 26, the situation recognition unit 36 sends, as optional information, the surrounding environment of the vehicle 10 from the surrounding environment recognition unit 36C (see FIG. 2) and the driving posture of the driver 24 from the driver situation recognition unit 36D (see FIG. 2) to the risk event determination unit 40.
In the second-stage determination by the risk event determination unit 40, the presence or absence of risk for the assumable items is determined on the basis of the essential information (the travel point and behavior of the vehicle 10) and the optional information (the surrounding environment of the vehicle 10, the driving posture of the driver 24).
For example, the scene search formula searches for [location] intersection without traffic lights, [behavior] going straight, [surrounding environment] preceding vehicle present, and [driver] gazing ahead. Next, to check whether the inter-vehicle distance was insufficient in the matched scene, the determination formula tests whether the condition inter-vehicle distance < 1.5 × (own-vehicle speed)² / 9.8 holds. The inter-vehicle distance is calculated from the size of the preceding vehicle obtained from the optional-information image and the hardware parameters, and the own-vehicle speed is calculated from the integral of the G value obtained from the essential information. These values are entered into the determination formula; if the condition is satisfied, a risk is determined to be present, and if not, no risk.
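Plugging numbers into that determination formula: at 60 km/h (about 16.7 m/s), the required distance is 1.5 × 16.7² / 9.8 ≈ 42.5 m, so a measured gap of 30 m would be judged a risk while 50 m would not. A minimal sketch of the check (the function and variable names are illustrative):

```python
def following_distance_risk(gap_m, speed_mps):
    """Risk when inter-vehicle distance < 1.5 * speed^2 / 9.8
    (the second-stage determination formula from the text)."""
    required_m = 1.5 * speed_mps ** 2 / 9.8
    return gap_m < required_m

speed = 60.0 / 3.6                              # 60 km/h in m/s
risky = following_distance_risk(30.0, speed)    # 30 m gap at 60 km/h
safe = following_distance_risk(50.0, speed)     # 50 m gap at 60 km/h
```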
That is, instead of using image information, which involves a relatively large amount of data and time-consuming analysis, from the start, the presence or absence of risk is first determined from sensor information, which involves a relatively small amount of data and fast analysis. This makes it possible to realize the functions of the driving assistance device 14A without impairing the original functions of the drive recorder 14.
The risk event determination unit 40 is connected to an action-based scoring processing unit 44 and a location-based scoring processing unit 46, and the determination results of the presence or absence of risk for each item are sent to these units.
It is not necessary to send the determination results for all items to both the action-based scoring processing unit 44 and the location-based scoring processing unit 46; items can be selected based on the concepts of action and location. For example, out of 160 items in total, 130 items each may be sent to the action-based scoring processing unit 44 and the location-based scoring processing unit 46.
FIG. 4 is a table classifying the items processed by the action-based scoring processing unit 44. As shown in FIG. 4, action-specific items are classified into major, medium, and minor items.
Major items include, for example, classification under the Road Traffic Act, classification under the duty of safe driving, and classification by driver state.
Medium items include, for example, classifications related to road signs, to driver actions, and to the driver's physical condition.
Minor items include classification by individual road sign, by individual driver action, by individual aspect of the driver's physical condition, and the like.
 大項目、中項目、小項目は、予めそれぞれを識別するIDが割り当てられており、小項目IDには各IDが集約されることになる。この小項目IDを見ることで、大項目の種類、中項目の種類、小項目の種類が識別可能となる。 IDs are assigned in advance to identify each of the major, medium, and minor items, and each ID is aggregated into the minor item ID. By looking at this minor item ID, it is possible to identify the major item type, the medium item type, and the minor item type.
 The behavior-based scoring processing unit 44 scores each minor item based on a predetermined calculation formula and performs the evaluation using the total value. For example, a formula may be constructed such that, with respect to behavior, a driver 24 who drives in an average manner is given 50 points out of 100, points are deducted when a risk occurs, and points are added while a risk-free state continues.
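As an illustrative sketch (not part of the original disclosure), the scoring formula just described could take the following form in Python. The 50-out-of-100 baseline comes from the description; the function name, the per-interval point values, and the clamping to the 0-100 range are assumptions introduced here for illustration.

```python
def score_behavior(risk_observations, base=50, deduction=5, addition=1):
    """Score a driver against a 50/100 baseline: deduct points for each
    observation interval in which a risk was detected, add points for each
    risk-free interval, and clamp the result to the 0-100 range.
    (Point values are hypothetical; the patent only fixes the baseline.)"""
    score = base
    for risky in risk_observations:  # one boolean per observation interval
        score += -deduction if risky else addition
    return max(0, min(100, score))
```

For example, one risky interval followed by two risk-free intervals yields 50 - 5 + 1 + 1 = 47 under these assumed point values.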
 FIG. 5 is a table showing the classification of the items processed by the location-based scoring processing unit 46. As shown in FIG. 5, the location-based items are classified into vehicle point conditions and vehicle traveling states, and a scene is defined by each combination of the two.
 Examples of vehicle point conditions include the presence or absence of facilities such as traffic lights, the shape of the road, and specific locations.
 Examples of vehicle traveling states include speed, acceleration/deceleration, location, and weather.
 As for the combinations of vehicle point conditions and vehicle traveling states, there are as many scenes as the number of types of vehicle point conditions multiplied by the number of types of vehicle traveling states, and an ID for identifying each scene is assigned in advance; by examining the ID, the scene can be identified.
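The combination scheme above can be sketched as follows (not part of the original disclosure): every (vehicle point condition, vehicle traveling state) pair is enumerated and assigned a unique scene ID from which both components can be recovered. The concrete condition lists and the ID format are assumptions for illustration.

```python
# Hypothetical condition lists; the patent does not fix these values.
POINT_CONDITIONS = ["signalized_intersection", "curve", "school_zone"]
TRAVEL_STATES = ["speed", "acceleration", "location", "weather"]

# One scene per (point condition, traveling state) pair:
# number of scenes = len(POINT_CONDITIONS) * len(TRAVEL_STATES).
SCENES = {(p, t): f"S{pi:02d}{ti:02d}"
          for pi, p in enumerate(POINT_CONDITIONS)
          for ti, t in enumerate(TRAVEL_STATES)}

def decode(scene_id):
    """Recover the point condition and traveling state from a scene ID."""
    pi, ti = int(scene_id[1:3]), int(scene_id[3:5])
    return POINT_CONDITIONS[pi], TRAVEL_STATES[ti]
```

With three point conditions and four traveling states, twelve scene IDs exist, and examining an ID alone identifies the scene, as the description states.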
 The location-based scoring processing unit 46 scores each scene based on a predetermined calculation formula and performs the evaluation using the total value. For example, a formula may be constructed such that, with respect to location, a driver 24 who drives in an average manner is given 50 points out of 100, points are deducted when a risk occurs, and points are added while a risk-free state continues.
 As shown in FIG. 2, the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46 are each connected to a priority determination unit 48.
 The priority determination unit 48 extracts the risks to be notified to the driver 24 based on the scoring results of the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46, and assigns a priority to each extracted risk. In principle, every risk is notified to the driver 24; however, when there is a limit on notifications, the risks must be notified in descending order of priority, and notifications for a predetermined number of risks are sent to the advice notification unit 50.
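One plausible reading of the priority determination unit 48 is sketched below (an illustration, not the disclosed implementation): risks are ranked by their combined behavior and location scores, a lower score indicating a more problematic scene and therefore a higher priority, and at most a predetermined number are forwarded for notification. The function name and the combination rule are assumptions.

```python
def prioritize(risks, limit):
    """risks: list of (risk_name, behavior_score, location_score) tuples.
    Rank by combined score ascending (lower score = higher priority) and
    keep at most `limit` risks for notification, mirroring the case where
    the number of notifications is limited."""
    ranked = sorted(risks, key=lambda r: r[1] + r[2])
    return [name for name, _, _ in ranked[:limit]]
```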
 The advice notification unit 50 performs specific risk consulting based on the received risks and notifies the mobile terminal device 24A or the like carried by the driver 24. The notification to the mobile terminal device 24A can be made via a web application or a native application. The notification destination is not limited to the mobile terminal device 24A; the driver 24 may be notified using the voice notification function of the drive recorder 14 or of the vehicle 10, or via the cockpit or a display of the vehicle 10.
 The operation of the first embodiment will be described below with reference to the flowcharts of FIG. 6.
 FIG. 6A is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the first embodiment, and FIG. 6B is a flowchart showing a driving assistance control notification routine in the driving assistance system according to the first embodiment.
 First, the driving assistance control analysis routine will be described with reference to FIG. 6A.
 In step 100, it is determined whether it is time to analyze the information. The information may be analyzed at regular or irregular intervals.
 When image information processing is to be executed, it is preferable to set a predetermined condition, for example. As shown in FIG. 7, the predetermined condition is set based on the availability of the hardware (CPU resources) installed in the drive recorder 14.
 In FIG. 7, the horizontal axis represents time and the vertical axis represents the CPU resource (usage rate), and the usage rate at which the functions lock is set to 95%. Under these conditions, the CPU usage varies over time, and operation continues while a margin of about 5% below the lock level is maintained.
 Here, in the driving assistance device 14A according to the first embodiment, the period for executing image processing is set to times when the CPU usage is at or below a predetermined threshold (90% in FIG. 7). That is, the image processing does not need to be executed in real time; it is sufficient that the image processing is executed, and appropriate advice is generated, by the time the driver 24 is to be notified. The notification time may be any time up to the end of the current trip, or up to the end of the next trip. As means for recognizing the notification time, the drive recorder 14 may detect the on/off state of the ignition switch and the locked/unlocked state of the doors, or the separation distance may be obtained from a short-range communication function between the mobile terminal device 24A carried by the driver 24 and the drive recorder 14.
 Note that the threshold is not limited to 90%, and may be set to any value that does not interfere with the original function of the drive recorder 14.
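The scheduling condition of FIG. 7 could be expressed as in the following sketch. The 90% threshold and the 95% lock level come from the description; the function name and the source of the CPU-usage figure are assumptions, and a real implementation would poll the recorder's operating system for the usage rate.

```python
LOCK_USAGE = 95.0             # usage rate at which recorder functions lock
IMAGE_PROC_THRESHOLD = 90.0   # image processing allowed only at or below this

def may_run_image_processing(cpu_usage, threshold=IMAGE_PROC_THRESHOLD):
    """Deferred image analysis runs only while CPU usage is at or below the
    threshold, keeping a margin below the lock level so that the drive
    recorder's original recording function is never impaired."""
    return cpu_usage <= threshold
```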
 If a negative determination is made in step 100 of FIG. 6A, the process proceeds to step 126, where it is determined whether to continue the analysis. If an affirmative determination is made in step 126, the process returns to step 100.
 If an affirmative determination is made in step 100, the process proceeds to step 102, where various sensor information is acquired; the process then proceeds to step 104, where the travel point is analyzed from the GPS information, and then proceeds to step 106.
 In step 106, the vehicle behavior is analyzed from the GPS information, the G (acceleration) information, and the gyro information, and the process proceeds to step 108.
 In step 108, the first-stage risk event determination, that is, the determination based on the essential information, is performed, and the process proceeds to step 110, where it is determined whether there is sufficient input information for the risk event determination formula.
 If a negative determination is made in step 110, that is, if it is determined that there is insufficient input information for the risk event determination formula, the process proceeds from step 110 to step 112, where image information of the vehicle exterior and interior is requested.
 In the next step 114, the image information is acquired from the drive recorder main control unit 30, and the process proceeds to step 116.
 In step 116, the second-stage risk event determination, that is, the determination based on the essential information supplemented with the optional information, is performed, and the process proceeds to step 118. If an affirmative determination is made in step 110, it is determined that the second-stage risk determination incorporating the optional information is unnecessary, and the process proceeds to step 118.
 In step 118, behavior-based scoring processing is executed for each determined risk event (see FIG. 4); the process then proceeds to step 120, where location-based scoring processing is executed for each determined risk event (see FIG. 5), and the process proceeds to step 122. Although the processing is performed in the order of step 118 and then step 120, the order may be reversed (step 120 and then step 118), or steps 118 and 120 may be processed simultaneously (in parallel).
 In step 122, the priority for notifying the risk events is determined based on the scoring results, and the process proceeds to step 124. In step 124, result storage processing is executed, and the process proceeds to step 126.
 In step 126, it is determined whether to continue the analysis; if the determination is affirmative, the process returns to step 100 as described above, and if the determination is negative, this routine ends.
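The core of steps 108 through 116 of FIG. 6A, the two-stage determination, can be condensed into the following sketch. This is a paraphrase of the flowchart, not the disclosed implementation: the function names, the dictionary fields, and the simplification of "insufficient input information" to a `None` return are all assumptions.

```python
def determine_risks(sensor, images=None):
    """First-stage determination from sensor (essential) information alone;
    returns None when the determination formula lacks inputs (step 110).
    Fields such as 'needs_image' and 'hard_brake' are hypothetical."""
    if sensor.get("needs_image") and images is None:
        return None                          # input information insufficient
    risks = []
    if sensor.get("hard_brake"):
        risks.append("sudden_braking")
    if images and images.get("cyclist_left"):
        risks.append("unsafe_left_turn")
    return risks

def analyze(sensor, request_images):
    """Steps 108-116: the second stage, which adds image (optional)
    information, runs only when the first stage lacks inputs."""
    events = determine_risks(sensor)                       # step 108
    if events is None:                                     # step 110: negative
        events = determine_risks(sensor, request_images()) # steps 112-116
    return events
```

In the first stage, `request_images` is never called, which mirrors the design goal of leaving the image pipeline untouched whenever sensor information suffices.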
 Next, the driving assistance control notification routine will be described with reference to FIG. 6B.
 In step 150, it is determined whether it is time to notify the risk events. If a negative determination is made in step 150, this routine ends. If an affirmative determination is made in step 150, the process proceeds to step 152.
 In the first embodiment, the notification time is when the ignition key is turned off; as shown in FIG. 1, the notification is timed for when the driver 24 gets out of the vehicle 10.
 Note that the notification time is not limited to when the driver gets out of the vehicle 10; it may be, for example, the next time the ignition key is turned on. Alternatively, using a short-range communication function between the drive recorder 14 and the key or the mobile terminal device 24A carried by the driver 24, the notification may be issued at another time, for example when the driver once leaves a predetermined range and then returns to it. Furthermore, even when no driver is near the vehicle, the notification may be issued at any time via a web application or a native application.
 In step 152, the risk events stored in step 124 of FIG. 6A are read out; the process proceeds to step 154, where notification processing to the mobile terminal device 24A or the like carried by the driver 24 is executed, and the process proceeds to step 156.
 In step 156, archive processing of the notified risk events is executed, and this routine ends. Note that it is not necessary to keep all risk events as a history; a selection may be made.
 (Modification of the First Embodiment)
 In the first embodiment, as shown in FIG. 3, the situation is recognized in the situation recognition unit 36 by the travel point analysis unit 36A, the behavior analysis unit 36B, the surrounding environment recognition unit 36C, and the driver situation recognition unit 36D. However, as shown in FIG. 8, a vehicle operation mode recognition unit 36E that recognizes the vehicle operation mode based on the CAN data of the vehicle 10 may be added. This makes it possible to recognize the behavior of the host vehicle (vehicle 10) in greater detail.
 (Example 1 of the First Embodiment)
 Risk event processing is executed according to the following procedures (a) to (c).
 (a) Based on the GPS information and the acceleration information, the behavior and point information of the host vehicle (vehicle 10) are derived. For example, the vehicle turns left at an intersection with a traffic light.
 (b) The image processing results are acquired.
 - The image information from the camera group 26 (vehicle exterior) is analyzed to estimate the surrounding situation. For example, five seconds before the vehicle turned left at a green light, a bicycle was on the left side of the host vehicle (vehicle 10).
 - The image information from the camera group 26 (vehicle interior) is analyzed to estimate the state of the driver 24. For example, the driver did not perform a safety check within five seconds before the left turn.
 (c) The risk event is determined using only the data of (a) and (b) for an arbitrary period before and after the left-turn scene.
 Case 1: In (b), the brakes were applied suddenly during the left turn.
 This is a risk scene in which the driver noticed the hazard late and braked suddenly, which corresponds to improper safety operation.
 Case 2: In (b), the brakes were not applied at all during the left turn.
 This is a risk scene in which the driver passed without noticing the hazard, which corresponds to obstruction of a crossing pedestrian or the like.
 Case 3: Specific advice is given based on the determination result of (c).
 In Case 1, the notification reads, 'Always confirm safety before entering the intersection, then turn left.' In Case 2, the notification reads, 'Always slow down to a speed at which you can stop, confirm safety, and then turn left.'
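The case distinction of Example 1 amounts to a lookup from the observed braking behavior during the left turn to an advice message, as in this sketch (an illustration only; the function name is an assumption and the message strings are the translations given above).

```python
def left_turn_advice(braked_suddenly, braked_at_all):
    """Map Example 1's braking observations during a left turn to advice."""
    if braked_suddenly:   # Case 1: noticed late -> improper safety operation
        return ("Always confirm safety before entering the intersection, "
                "then turn left.")
    if not braked_at_all:  # Case 2: passed unnoticed -> obstruction of crosser
        return ("Always slow down to a speed at which you can stop, "
                "confirm safety, and then turn left.")
    return None            # no risk scene detected for this turn
```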
 (Example 2 of the First Embodiment)
 After analyzing the point × host vehicle (vehicle 10) behavior, additional images are analyzed only at the times when analysis of a risk scene requiring image analysis is needed (using the optional information).
 An example that does not require image analysis (an example that can be analyzed using only the essential information) is an event of exceeding the speed limit while traveling straight on a main road.
 In this case, it suffices to recognize that the location is a main road and that the behavior of the host vehicle (vehicle 10) is traveling straight, and then to compare the speed of the host vehicle (vehicle 10) with the speed limit of the main road to detect whether the vehicle is speeding; image information is unnecessary.
 According to Example 2 of the first embodiment, since the time stamps at which image analysis is to be performed are specified, the CPU resources required for image analysis can be reduced.
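Example 2's image-free determination reduces to a comparison of the host vehicle's speed against the road's limit, as in the sketch below (the function name, the string labels, and the example limit are assumptions; in practice the location and behavior would come from the travel point and behavior analysis units).

```python
def speeding_event(location, behavior, speed_kmh, limit_kmh):
    """Detect the speeding event of Example 2 using essential information
    only: main road + traveling straight + speed above the road's limit.
    No image information is requested."""
    return (location == "main_road"
            and behavior == "straight"
            and speed_kmh > limit_kmh)
```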
 (Example 3 of the First Embodiment)
 The detected risk events are classified by location and by behavior and scored; the scene with the lowest score is then selected, and the priority of the risk events on which advice should preferentially be given is calculated.
 According to Example 3 of the first embodiment, among the many instances of risky driving, particularly problematic driving scenes are identified and notified preferentially, enabling prompt correction of the problematic driving scenes.
 [Second Embodiment]
 A second embodiment of the present disclosure will be described below with reference to FIGS. 9 and 10. Components identical to those of the first embodiment are given the same reference numerals, and descriptions of their configurations are omitted.
 A feature of the second embodiment is the addition of a function for editing (adding, deleting, changing, etc.) the determination thresholds, the risk event determination items, and the like when the determination result of the risk event determination unit 40 deviates from the expected determination, or when a new risk is identified that does not correspond to any of the preset risk event items.
 FIG. 9 is a functional block diagram of the driving assistance device 14A in the drive recorder 14 according to the second embodiment of the present disclosure, mainly showing the driving assistance control classified by function.
 A user interface (UI) 52 is connected to the driving assistance device 14A. The UI 52 is, for example, a device installed in a management department that remotely supports the driving assistance device 14A and operable by an operator. The user may also be the driver 24 instead of an operator.
 The risk event determination unit 40 sends information on the determination results to the UI 52. At the UI 52, the determination results are analyzed, and the necessity of threshold adjustment and of determination level adjustment is judged.
 When the operator judges that threshold adjustment is necessary, the operator operates the UI 52 to send threshold adjustment information to the threshold adjustment unit 54. The threshold adjustment unit 54 is connected to the situation recognition unit 36 and adjusts the thresholds used by the situation recognition unit 36 for comparison with the sensor information.
 When the operator judges that determination level adjustment is necessary, the operator operates the UI 52 to send determination level adjustment information to the determination level adjustment unit 56. The determination level adjustment unit 56 is connected to the risk event determination unit 40 and adjusts the determination criteria and the like used by the risk event determination unit 40 for risk event determination.
 The operation of the second embodiment will be described below.
 FIG. 10 is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the second embodiment. Steps identical to those of the first embodiment (see FIG. 6A) are given the same step numbers, and descriptions of their processing are omitted.
 After it is determined in step 110 that there is sufficient input information for the risk event determination formula, or after the second-stage risk event determination is executed in step 116, the determination results are transmitted to the UI 52 in step 128.
 In the next step 130, the UI 52 side judges whether adjustment of the determination results (threshold adjustment, determination level adjustment) is necessary. If it is judged in step 130 that adjustment is necessary, the process proceeds to step 132, where determination level adjustment processing and threshold adjustment processing are executed; the process then returns to step 108, and the above steps are repeated.
 If a negative determination is made in step 130, it is judged that adjustment of the determination results is unnecessary, and the process proceeds to step 118.
 According to the second embodiment, when general traffic manners change due to, for example, changes in laws and regulations or social conditions, advanced advice can continue to be provided without new development, by editing the risk event determination thresholds and the determination level adjustments.
 (Example 4 of the Second Embodiment)
 The output of the risk event determination unit 40 is presented to the operator via the UI 52.
 When there is an error in the determination content, threshold adjustment and determination level adjustment are executed according to the following cases.
 Case 1: When it is judged that there is no risk.
 When the determination target of the risk event matches (the combination of location, host vehicle (vehicle 10) behavior, and the like) but the determination threshold does not match the actual degree of risk, the determination threshold is adjusted (for example, the thresholds for inter-vehicle distance, relative speed, and the like).
 Case 2: When the determination target of the risk event is incorrect and it is judged that it should be changed to another rule designed in advance.
 For example, when there is an error in the detection result of the location or the like that is the subject of the risk event determination, the following processes 1 and 2 are executed.
 - Process 1: The detection result of the location or the like is corrected so that the determination result of the applicable risk event is output.
 - Process 2: The group of determination thresholds is corrected so that the output of the situation recognition unit 36 is consistent with the correction result of process 1.
 Case 3: When the risk event determination target is incorrect and none of the rules designed in advance apply.
 A new risk event determination rule is added.
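The three cases of Example 4 can be summarized as a dispatch over the kind of determination error, as in this sketch (purely illustrative; the rule table layout, the error-record fields, and the function name are all assumptions, and the bodies only record which adjustment each case makes).

```python
def adjust(rules, error):
    """Example 4's three cases: tune a threshold (Case 1), remap to another
    pre-designed rule (Case 2), or add a brand-new rule (Case 3)."""
    if error["kind"] == "threshold":     # Case 1: target matches, level wrong
        rules[error["rule"]]["threshold"] = error["new_threshold"]
    elif error["kind"] == "wrong_rule":  # Case 2: correct the detection target
        rules[error["rule"]]["target"] = error["correct_target"]
    elif error["kind"] == "no_rule":     # Case 3: no existing rule applies
        rules[error["rule"]] = {"target": error["target"],
                                "threshold": error["threshold"]}
    return rules
```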
 The above measures may be performed manually by the operator or the driver 24, may be processed according to a predetermined program, or may be judged by machine learning using AI based on a vast amount of past history information (so-called big data).
 [Third Embodiment]
 A third embodiment of the present disclosure will be described below with reference to FIGS. 11 and 12. Components identical to those of the first and second embodiments are given the same reference numerals, and descriptions of their configurations are omitted.
 A feature of the third embodiment is the addition of a function for editing (adding, deleting, changing, etc.) the determination thresholds, the risk event determination items, and the like when the determination result and determination accuracy of the risk event determination unit 40 deviate from the expected determination, or when a new risk is identified that does not correspond to any of the preset risk event items.
 FIG. 11 is a functional block diagram of the driving assistance device 14A in the drive recorder 14 according to the third embodiment of the present disclosure, mainly showing the driving assistance control classified by function.
 A user interface (UI) 52 is connected to the driving assistance device 14A. The UI 52 is, for example, a device installed in a management department that remotely supports the driving assistance device 14A and operable by an operator. The user may also be the driver 24 instead of an operator.
 The risk event determination unit 40 sends information on the determination results and the determination accuracy to the UI 52. At the UI 52, the determination results and determination accuracy are analyzed, and the necessity of threshold adjustment and of determination level adjustment is judged.
 When the operator judges that threshold adjustment is necessary, the operator operates the UI 52 to send threshold adjustment information to the threshold adjustment unit 54. The threshold adjustment unit 54 is connected to the situation recognition unit 36 and adjusts the thresholds used by the situation recognition unit 36 for comparison with the sensor information.
 When the operator judges that determination level adjustment is necessary, the operator operates the UI 52 to send determination level adjustment information to the determination level adjustment unit 56. The determination level adjustment unit 56 is connected to the risk event determination unit 40 and adjusts the determination criteria and the like used by the risk event determination unit 40 for risk event determination.
 The operation of the third embodiment will be described below.
 FIG. 12 is a flowchart showing a driving assistance control analysis routine in the driving assistance system according to the third embodiment. Steps identical to those of the first embodiment (see FIG. 6A) are given the same step numbers, and descriptions of their processing are omitted.
 After it is determined in step 110 that there is sufficient input information for the risk event determination formula, or after the second-stage risk event determination is executed in step 116, the determination results and determination accuracy are transmitted to the UI 52 in step 128.
 In the next step 130, the UI 52 side judges whether adjustment of the determination results and determination accuracy (threshold adjustment, determination level adjustment) is necessary. If it is judged in step 130 that adjustment is necessary, the process proceeds to step 132, where determination level adjustment processing and threshold adjustment processing are executed; the process then returns to step 108, and the above steps are repeated.
 If a negative determination is made in step 130, it is judged that adjustment of the determination results and determination accuracy is unnecessary, and the process proceeds to step 118.
 According to the third embodiment, when general traffic manners change due to, for example, changes in laws and regulations or social conditions, advanced advice can continue to be provided without new development, by editing the risk event determination thresholds and the determination level adjustments.
 In addition, according to the third embodiment, by filtering and sorting the output according to the determination accuracy of the risk event determination results, the event determination results can be confirmed efficiently on the UI 52 by checking them in ascending order of determination accuracy, starting with the least accurate.
 (Example 5 of the Third Embodiment)
 In addition to Example 4 of the second embodiment, the determination accuracy is added to the output of the risk event determination unit 40 and presented to the operator via the UI 52.
 When there is an error in the determination content, the following measures are taken in addition to those of Example 4.
 Case 1: Regarding the provision of advice.
 For risk events whose determination accuracy is at or below an arbitrary value, advice is not provided, or the priority in the order of provision is lowered.
 Case 2: Regarding the display on the UI 52.
 Risk events whose determination accuracy is at or above an arbitrary value are preferentially displayed on the UI 52.
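Example 5's two cases, suppressing low-accuracy advice and surfacing high-accuracy events on the UI 52, could be sketched as follows (illustrative only; the record layout, cutoff values, and function names are assumptions).

```python
def advice_queue(events, min_accuracy):
    """Case 1: drop risk events whose determination accuracy is at or below
    the cutoff before advice is provided (demotion would work similarly)."""
    return [e for e in events if e["accuracy"] > min_accuracy]

def ui_listing(events, highlight_accuracy):
    """Case 2: show events at or above the highlight accuracy first on the
    UI 52 (Python's sort is stable, so order within each group is kept)."""
    return sorted(events,
                  key=lambda e: e["accuracy"] >= highlight_accuracy,
                  reverse=True)
```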
 [第4の実施の形態] [Fourth embodiment]
 以下に、本開示の第4の実施の形態について、図13及び図14に従い説明する。なお、第1の実施の形態と同一構成部分については、同一の符号を付して、その構成の説明を省略する。 The fourth embodiment of the present disclosure will be described below with reference to FIGS. 13 and 14. FIG. The same reference numerals are given to the same components as those of the first embodiment, and the description of the configuration is omitted.
 In the first embodiment, the risk event determination unit 40 is provided with the image request unit 42; when the determination based on essential information (first stage) by the risk event determination unit 40 concludes that image information is necessary, image information (optional information) is requested from the drive recorder main control unit 30, and the determination is made again with the image information taken into account (second stage).
 In contrast, in the fourth embodiment, the risk event determination unit 40 makes no distinction between essential and optional information and determines risk events using sensor information and image information from the start.
 FIG. 13 is a functional block diagram of the driving assistance device 14A, showing mainly the driving assistance control in the drive recorder 14 according to the fourth embodiment of the present disclosure, classified by function.
 The situation recognition unit 36 sends the travel point of the vehicle 10 analyzed by the travel point analysis unit 36A and the behavior of the vehicle 10 analyzed by the behavior analysis unit 36B to the risk event processing control unit 38.
 The situation recognition unit 36 also acquires information from the camera group 26; the surrounding environment recognition unit 36C (see FIG. 2) sends the surrounding environment of the vehicle 10 to the risk event determination unit 40, and the driver situation recognition unit 36D (see FIG. 2) sends the driver behavior of the driver 24 (looking aside, repeatedly glancing aside, inattentiveness, smartphone operation, poor posture, failure to confirm safety, etc.) to the risk event determination unit 40.
 The risk event determination unit 40 determines the presence or absence of risk for conceivable items based on the travel point of the vehicle 10, the behavior of the vehicle 10, the surrounding environment of the vehicle 10, and the driving posture of the driver 24.
 The operation of the fourth embodiment is described below.
 FIG. 14 is a flowchart showing the driving assistance control analysis routine in the driving assistance system according to the fourth embodiment. Steps identical to those of the first embodiment (see FIG. 6A) are given the same step numbers, and their description is omitted.
 In step 106, the vehicle behavior is analyzed from the GPS, G, and Gyro information, and the process proceeds to step 114.
 In step 114, image information is acquired from the drive recorder main control unit 30, and the process proceeds to step 116.
 In step 116, a risk event is determined based on the travel point of the vehicle 10, the behavior of the vehicle 10, the surrounding environment of the vehicle 10, and the driving posture of the driver 24, and the process proceeds to step 118.
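 In code, the single-stage flow of steps 106 through 116 could look like the sketch below. The input dictionaries, field names, and the toy judgment rule are assumptions made for illustration; they stand in for the analyses actually performed by the situation recognition unit 36 and the risk event determination unit 40.

```python
def analyze_once(sensor_info, image_info):
    # Step 106: vehicle behavior from GPS / G / Gyro information
    behavior = {"accel_g": sensor_info["g"], "yaw_rate": sensor_info["gyro"]}
    travel_point = sensor_info["gps"]
    # Step 114: image information (vehicle exterior and interior)
    surroundings = image_info["exterior"]
    driver_state = image_info["interior"]
    # Step 116: judge a risk event from sensor and image information together,
    # with no essential/optional distinction (toy rule for illustration)
    if behavior["accel_g"] > 0.4 and driver_state == "inattentive":
        return {"event": "harsh_braking_while_inattentive",
                "where": travel_point, "scene": surroundings}
    return None
```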
 According to the fourth embodiment, risk events corresponding to the various scenes encountered while the vehicle 10 is traveling can be determined, and accurate advice for resolving those risk events can be provided.
 Furthermore, according to the fourth embodiment, particularly problematic driving scenes are identified among the many kinds of risky driving and are reported preferentially, enabling prompt correction of problematic driving scenes.
 [Fifth embodiment]
 The fifth embodiment of the present disclosure will be described below with reference to FIGS. 15 and 16. Components identical to those of the first embodiment are given the same reference numerals, and their description is omitted.
 In the first embodiment, the risk event determination unit 40 includes the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46 and sends the determination results on the presence or absence of risk for each item to them; the behavior-based scoring processing unit 44 scores each sub-item based on a predetermined calculation formula and evaluates the total value, and the location-based scoring processing unit 46 scores each scene based on a predetermined calculation formula and evaluates the total value.
 In contrast, in the fifth embodiment, the results determined by the risk event determination unit 40 are neither classified nor prioritized, and risk events are determined based on so-called raw data.
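 For reference, the per-item scoring that the fifth embodiment dispenses with could be sketched as below. The item names and weights are assumptions; the disclosure only specifies that each sub-item (or scene) is scored by a predetermined calculation formula and evaluated by the total.

```python
def total_score(risk_flags, weights):
    """Score each sub-item judged 'risk present' by its predetermined
    weight and evaluate by the total, as in the behavior-based and
    location-based scoring processing units 44 and 46."""
    return sum(weights[item] for item, present in risk_flags.items() if present)
```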
 FIG. 15 is a functional block diagram of the driving assistance device 14A, showing mainly the driving assistance control in the drive recorder 14 according to the fifth embodiment of the present disclosure, classified by function.
 The risk event determination unit 40 executes the determination of the presence or absence of risk in two stages.
 In the first-stage determination, the risk event determination unit 40 determines the presence or absence of risk for conceivable items based on the essential information (the travel point and the behavior of the vehicle 10).
 When the presence or absence of risk can be determined from the essential information alone, there is no problem; however, the risk determination may become difficult because of insufficient information.
 In that case, the risk event determination unit 40 requests, via the image request unit 42, that the drive recorder main control unit 30 send the information from the camera group 26 to the situation recognition unit 36.
 By acquiring the information from the camera group 26, the situation recognition unit 36 sends, as optional information, the surrounding environment of the vehicle 10 from the surrounding environment recognition unit 36C (see FIG. 2) to the risk event determination unit 40, and sends the driver behavior of the driver 24 (looking aside, repeatedly glancing aside, inattentiveness, smartphone operation, poor posture, failure to confirm safety, etc.) from the driver situation recognition unit 36D (see FIG. 2) to the risk event determination unit 40.
 In the second-stage determination, the risk event determination unit 40 determines the presence or absence of risk for conceivable items based on the essential information (the travel point and the behavior of the vehicle 10) and the optional information (the surrounding environment of the vehicle 10 and the driving posture of the driver 24).
 That is, instead of using image information, which is relatively voluminous and time-consuming to analyze, from the start, the presence or absence of risk is first determined based on sensor information, which is relatively compact and quick to analyze; the functions of the driving assistance device 14A can thus be realized without impairing the original functions of the drive recorder 14.
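 The two-stage flow above could be sketched as follows. The stage-specific rules and the `fetch_image_info` callback are assumptions; the callback stands in for the request that the image request unit 42 sends to the drive recorder main control unit 30.

```python
def stage1(essential):
    """First stage: sensor-only judgment; returns None when inconclusive."""
    if essential["accel_g"] > 0.6:
        return "harsh_braking"          # decidable from behavior alone
    if essential["accel_g"] < 0.2:
        return "no_risk"
    return None                         # information insufficient

def stage2(essential, optional):
    """Second stage: essential plus optional (image-derived) information."""
    if optional["driver_state"] == "inattentive":
        return "inattentive_driving"
    return "no_risk"

def judge_risk(essential, fetch_image_info):
    result = stage1(essential)
    if result is not None:
        return result                   # image analysis avoided entirely
    optional = fetch_image_info()       # expensive call, made only when needed
    return stage2(essential, optional)
```

 The point of the design is visible in the call pattern: clear-cut sensor readings never trigger the expensive image path.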
 The risk events determined by the risk event determination unit 40 are sent to the advice notification unit 50.
 The advice notification unit 50 executes specific risk consulting based on the received risks and sends a notification to the mobile terminal device 24A or the like carried by the driver 24.
 The operation of the fifth embodiment is described below.
 FIG. 16 is a flowchart showing the driving assistance control analysis routine in the driving assistance system according to the fifth embodiment. Steps identical to those of the first embodiment (see FIG. 6A) are given the same step numbers, and their description is omitted.
 If a negative determination is made in step 110, that is, if the input information for the risk event determination formula is judged insufficient, the process proceeds from step 110 to step 112, and image information of the vehicle exterior and interior is requested.
 In the next step 114, image information is acquired from the drive recorder main control unit 30, and the process proceeds to step 116.
 In step 116, the second-stage determination is performed, that is, a risk event is determined from the essential information together with the optional information, and the process proceeds to step 124. If an affirmative determination is made in step 110, the second-stage risk determination taking optional information into account is judged unnecessary, and the process proceeds to step 124.
 In step 124, the result storage process is executed, and the process proceeds to step 126.
 In step 126, it is determined whether to continue the analysis; if the determination is affirmative, the process returns to step 100 as described above, and if the determination is negative, this routine ends.
 According to the fifth embodiment, risk events corresponding to the various scenes encountered while the vehicle 10 is traveling can be determined, and accurate advice for resolving those risk events can be provided.
 Furthermore, according to the fifth embodiment, the timestamps at which image analysis is to be performed are specified, so the CPU resources required for image analysis can be reduced.
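 The CPU-saving effect of specifying timestamps could be illustrated like this: only the recorded frames near the timestamps of the inconclusive first-stage judgments are handed to image analysis, instead of the whole recording. The frame-index layout and the time window are assumptions for illustration.

```python
def frames_to_analyze(event_timestamps, frame_index, window_s=1.0):
    """Select only the recorded frames within window_s seconds of a
    suspected risk event; frame_index maps timestamp -> frame id."""
    return [frame
            for ts, frame in sorted(frame_index.items())
            if any(abs(ts - ev) <= window_s for ev in event_timestamps)]
```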
 [Sixth embodiment]
 The sixth embodiment of the present disclosure will be described below with reference to FIGS. 17 and 18. Components identical to those of the first embodiment are given the same reference numerals, and their description is omitted.
 (Difference 1 from the first embodiment)
 In the first embodiment, the risk event determination unit 40 is provided with the image request unit 42; when the determination based on essential information (first stage) by the risk event determination unit 40 concludes that image information is necessary, image information (optional information) is requested from the drive recorder main control unit 30, and the determination is made again with the image information taken into account (second stage).
 In contrast, in the sixth embodiment, the risk event determination unit 40 makes no distinction between essential and optional information and determines risk events using sensor information and image information from the start.
 (Difference 2 from the first embodiment)
 In the first embodiment, the risk event determination unit 40 includes the behavior-based scoring processing unit 44 and the location-based scoring processing unit 46 and sends the determination results on the presence or absence of risk for each item to them; the behavior-based scoring processing unit 44 scores each sub-item based on a predetermined calculation formula and evaluates the total value, and the location-based scoring processing unit 46 scores each scene based on a predetermined calculation formula and evaluates the total value.
 In contrast, in the sixth embodiment, the results determined by the risk event determination unit 40 are neither classified nor prioritized, and risk events are determined based on so-called raw data.
 FIG. 17 is a functional block diagram of the driving assistance device 14A, showing mainly the driving assistance control in the drive recorder 14 according to the sixth embodiment of the present disclosure, classified by function.
 The situation recognition unit 36 sends the travel point of the vehicle 10 analyzed by the travel point analysis unit 36A and the behavior of the vehicle 10 analyzed by the behavior analysis unit 36B to the risk event processing control unit 38.
 The situation recognition unit 36 also acquires information from the camera group 26; the surrounding environment recognition unit 36C (see FIG. 2) sends the surrounding environment of the vehicle 10 to the risk event determination unit 40, and the driver situation recognition unit 36D (see FIG. 2) sends the driver behavior of the driver 24 (looking aside, repeatedly glancing aside, inattentiveness, smartphone operation, poor posture, failure to confirm safety, etc.) to the risk event determination unit 40.
 The risk event determination unit 40 determines the presence or absence of risk for conceivable items based on the travel point of the vehicle 10, the behavior of the vehicle 10, the surrounding environment of the vehicle 10, and the driving posture of the driver 24.
 The risk events determined by the risk event determination unit 40 are sent to the advice notification unit 50.
 The advice notification unit 50 executes specific risk consulting based on the received risks and sends a notification to the mobile terminal device 24A or the like carried by the driver 24.
 The operation of the sixth embodiment is described below.
 FIG. 18 is a flowchart showing the driving assistance control analysis routine in the driving assistance system according to the sixth embodiment. Steps identical to those of the first embodiment (see FIG. 6A) are given the same step numbers, and their description is omitted.
 In step 106, the vehicle behavior is analyzed from the GPS, G, and Gyro information, and the process proceeds to step 114.
 In step 114, image information is acquired from the drive recorder main control unit 30, and the process proceeds to step 116.
 In step 116, a risk event is determined based on the travel point of the vehicle 10, the behavior of the vehicle 10, the surrounding environment of the vehicle 10, and the driving posture of the driver 24, and the process proceeds to step 124.
 In step 124, the result storage process is executed, and the process proceeds to step 126.
 In step 126, it is determined whether to continue the analysis; if the determination is affirmative, the process returns to step 100 as described above, and if the determination is negative, this routine ends.
 According to the sixth embodiment, risk events corresponding to the various scenes encountered while the vehicle 10 is traveling can be determined, and accurate advice for resolving those risk events can be provided.
 In addition, in the determination by the risk event determination unit 40, risk events can be determined using sensor information and image information from the start, without distinguishing between essential and optional information.
 Furthermore, in the determination by the risk event determination unit 40, the determined results are neither classified nor prioritized, and risk events are determined based on so-called raw data.
 In other words, according to the sixth embodiment, risk events corresponding to the driving scenes of the vehicle 10 can be determined accurately from necessary and sufficient information without complicated control, which makes this configuration useful as a low-cost variant with a simplified device configuration.
 [Seventh embodiment]
 The seventh embodiment of the present disclosure will be described below with reference to FIG. 19. Components identical to those of the first embodiment are given the same reference numerals, and their description is omitted.
 As shown in FIG. 19, in the seventh embodiment, the gyro sensor 16 (denoted "Gyro" in FIG. 19), the acceleration sensor (G sensor) 18 (denoted "G" in FIG. 19), and the GPS receiver 20 (denoted "GPS" in FIG. 19) are connected to the vehicle control device 12.
 In the seventh embodiment, the information required by the travel point analysis unit 36A and the behavior analysis unit 36B, that is, the information from the gyro sensor 16, the acceleration sensor 18, and the GPS receiver 20, is acquired from the vehicle control device 12. In this case, obstacle information ahead of the vehicle 10 may also be acquired from the vehicle control device 12 as information from the radar group 22. This makes it possible to reliably recognize obstacles that are otherwise hard to recognize when the information from the camera group 26 is sparse, for example at night or in rainy weather.
 According to the seventh embodiment, when the vehicle 10 already has any of the existing sensors (the gyro sensor 16, the acceleration sensor 18, and the GPS receiver 20), the drive recorder 14 need not carry that function itself and can acquire the necessary sensor information from the vehicle control device 12, so the configuration can be simplified.
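 One way to picture the seventh embodiment's reuse of the vehicle's existing sensors is the sketch below: readings already offered by the vehicle control device 12 are preferred, and the recorder's own sensors are used only as a fallback. The dictionary-based interface is an assumption made for illustration.

```python
def collect_sensor_info(from_vehicle_ecu, from_recorder):
    """Prefer values provided by the vehicle control device; fall back to
    the drive recorder's built-in sensors for anything not provided."""
    info = {}
    for key in ("gps", "g", "gyro"):
        info[key] = from_vehicle_ecu.get(key, from_recorder[key])
    # Radar-derived obstacle information, when the vehicle offers it, helps
    # recognize obstacles at night or in rain, when camera information is sparse.
    if "radar" in from_vehicle_ecu:
        info["obstacles"] = from_vehicle_ecu["radar"]
    return info
```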
 In each embodiment (including the modifications) of the present disclosure, the drive recorder 14 executes all of the control (information acquisition, situation recognition, risk determination, etc.). Alternatively, the gyro sensor 16, the acceleration sensor 18, the GPS receiver 20, the camera group 26, and the drive recorder main control unit 30 may be configured as in-vehicle hardware (the drive recorder 14), the output signals of that hardware may be transmitted to the cloud, and the processing of the risk event determination unit 40, the priority determination unit 48, the advice notification unit 50, and so on may be executed on the cloud. In other words, there is no restriction on how functions are divided between the hardware mounted on the vehicle 10 and the processing on the cloud.
 In the present embodiments, a mode in which the program is stored (installed) in advance in a non-volatile memory has been described, but the present disclosure is not limited to this. The program is stored in a non-transitory tangible recording medium, and a method corresponding to the program is executed by executing the program. The program may be provided in a form stored in a non-transitory storage medium such as a CD-ROM (Compact Disk Read Only Memory), a DVD-ROM (Digital Versatile Disk Read Only Memory), a USB (Universal Serial Bus) memory, or a semiconductor memory. Each of the above programs may also be downloaded from an external device via a network.
 Although the present disclosure has been described with reference to embodiments, it should be understood that the present disclosure is not limited to those embodiments or structures. The present disclosure also encompasses various modifications and variations within an equivalent range. In addition, various combinations and forms, as well as other combinations and forms including only one element, more elements, or fewer elements, fall within the scope and spirit of the present disclosure.

Claims (9)

  1.  A driving assistance device comprising:
     a first situation recognition unit (36) that acquires position information of a vehicle and behavior information of the vehicle as essential information sources and recognizes an essential situation of the vehicle based on the essential information sources;
     a second situation recognition unit (36, 40, 42) that acquires an optional information source based on image information of at least one of the surroundings of the vehicle and the vehicle interior, and recognizes a driving situation of the vehicle based on the essential information sources and the optional information source;
     a determination unit (40) that analyzes the recognition result of the first situation recognition unit or the recognition result of the second situation recognition unit and determines whether a risk event that can occur while the vehicle is traveling is present; and
     a notification unit (50) that, when the determination result of the determination unit indicates that the risk event is present, notifies at least a driver driving the vehicle of advice for resolving the risk event.
  2.  The driving assistance device according to claim 1, wherein the determination unit has a plurality of risk determination items and, for the input parameters of each risk determination item, performs determination with a determination formula whose input parameters are satisfied by the recognition items of the first situation recognition unit alone, and, for a risk determination item whose input parameters are not satisfied unless the recognition items of the second situation recognition unit are also used, additionally uses the recognition items of the second situation recognition unit.
  3.  The driving assistance device according to claim 1 or 2, wherein the risk events determined by the determination unit are classified, with overlap permitted, into at least behavior-type risk events related to violation items of the Road Traffic Act and location-type risk events based on scenes determined by the travel point and the travel state; scoring according to importance is executed for each of the behavior-type risk events and the location-type risk events; and the priority of notification by the notification unit is determined based on the scoring results.
  4.  The driving assistance device according to any one of claims 1 to 3, further comprising a monitoring unit that acquires the determination results of the determination unit, monitors whether the determination results are correct, and updates the determination criteria of the determination unit based on the monitoring results.
  5.  The driving assistance device according to claim 4, wherein the monitoring unit acquires determination accuracy information from the determination unit and monitors the correctness of the determination results in ascending order of determination accuracy.
  6.  The driving assistance device according to any one of claims 1 to 5, wherein travel information of the vehicle is added as an information source acquired by the first situation recognition unit.
  7.  A driving assistance method comprising:
     acquiring position information of a vehicle and behavior information of the vehicle as essential information sources and, according to the recognition items used by a determination unit to determine a risk event, providing
     a first recognition function that recognizes a driving situation based on the essential information sources as a first-stage situation recognition and outputs the result to the determination unit, and
     a second recognition function that, when the first-stage output is insufficient as the recognition items used to determine the risk event, recognizes the driving situation based on the essential information sources and an optional information source as a second-stage situation recognition and outputs the result to the determination unit;
     analyzing at least one of the output of the first recognition function and the output of the second recognition function;
     providing a plurality of risk determination items when determining whether a risk event that can occur while the vehicle is traveling is present, and, for the input parameters of each risk determination item, performing determination with a determination formula whose input parameters are satisfied by the recognition items of the first situation recognition alone, while, for a risk determination item whose input parameters are not satisfied unless the recognition items of the second situation recognition are also used, additionally using the recognition items of the second situation recognition; and
     when the determination result indicates that the risk event is present, notifying at least a driver driving the vehicle of advice for resolving the risk event.
  8.  A drive recorder comprising:
     a main control unit that is mounted on a vehicle, captures the surrounding environment including at least an area ahead of the vehicle, records the captured images while overwriting them from the oldest image within a predetermined capacity, and, when an emergency is detected, saves the images for a predetermined period before and after the emergency; and
     the driving assistance device according to any one of claims 1 to 6.
  9.  A driving assistance control program for causing a computer to operate as each unit of the driving assistance device according to any one of claims 1 to 6.
PCT/JP2022/011446 2021-03-19 2022-03-14 Driving assistance device, driving assistance method, drive recorder, and driving assistance control program WO2022196660A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-046575 2021-03-19
JP2021046575A JP2022145252A (en) 2021-03-19 2021-03-19 Driving assistance device, driving assistance method, drive recorder, driving assistance control program

Publications (1)

Publication Number Publication Date
WO2022196660A1 true WO2022196660A1 (en) 2022-09-22

Family

ID=83322319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/011446 WO2022196660A1 (en) 2021-03-19 2022-03-14 Driving assistance device, driving assistance method, drive recorder, and driving assistance control program

Country Status (2)

Country Link
JP (1) JP2022145252A (en)
WO (1) WO2022196660A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010084580A1 (en) * 2009-01-21 2010-07-29 パイオニア株式会社 Drive evaluation device, and control method, control program and storage medium for drive evaluation device
JP2013182573A (en) * 2012-03-05 2013-09-12 Fujitsu Ten Ltd Drive recorder, data recording method, and program
JP2018022220A (en) * 2016-08-01 2018-02-08 株式会社リコー Behavior data analysis system, and behavior data analysis device, and behavior data analysis method
WO2019069732A1 (en) * 2017-10-06 2019-04-11 ソニー株式会社 Information processing device, information processing method, and program


Also Published As

Publication number Publication date
JP2022145252A (en) 2022-10-03

Similar Documents

Publication Publication Date Title
US10332390B1 (en) Driving event data analysis
US9934627B1 (en) Driving event data analysis
US10515546B2 (en) Driving determination device and detection device
US10445954B2 (en) Drive event capturing based on geolocation
US10166934B2 (en) Capturing driving risk based on vehicle state and automatic detection of a state of a location
US9047773B2 (en) Exceptional road-condition warning device, system and method for a vehicle
US20240109542A1 (en) Exhaustive Driving Analytical Systems and Modelers
US11860979B2 (en) Synchronizing image data with either vehicle telematics data or infrastructure data pertaining to a road segment
JP7146516B2 (en) Driving evaluation device and in-vehicle device
SE1250310A1 (en) Procedure and system for distance adjustment during travel widened vehicle train
JP2017151694A (en) Safety confirmation diagnostic system and safety confirmation diagnostic method
US10977882B1 (en) Driver health profile
CN111526311B (en) Method and system for judging driving user behavior, computer equipment and storage medium
CN112606831A (en) Anti-collision warning information external interaction method and system for passenger car
JP2017062763A (en) Device and method for evaluating driving ability, and program for causing computer to execute the method
CN115720555A (en) Method and system for improving user alertness in an autonomous vehicle
WO2022196660A1 (en) Driving assistance device, driving assistance method, drive recorder, and driving assistance control program
WO2022196659A1 (en) Driving support device, driving support method, drive recorder, and driving support control program
WO2020121627A1 (en) Vehicle control device, vehicle, vehicle control method, and program
CN109895694B (en) Lane departure early warning method and device and vehicle
KR20210020463A (en) Method and apparatus for automatically reporting traffic rule violation vehicles using black box images
JP7276276B2 (en) Dangerous driving detection device, dangerous driving detection system, and dangerous driving detection program
JP7219015B2 (en) Vehicle operation evaluation device and vehicle operation evaluation method
KR102499056B1 (en) Method, apparatus and server to monitor driving status based vehicle route
JP2022187273A (en) Information processing device and driving evaluation system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22771405

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22771405

Country of ref document: EP

Kind code of ref document: A1