WO2024069689A1 - Driving assistance method and driving assistance device - Google Patents

Driving assistance method and driving assistance device

Info

Publication number
WO2024069689A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
driving
host vehicle
traveling
driving assistance
Application number
PCT/JP2022/035669
Other languages
French (fr)
Japanese (ja)
Inventor
幸熙 小泉
周平 江本
明 森本
綾汰 池渕
浩輔 神田
恵吾 菊池
Original Assignee
Nissan Motor Co., Ltd. (日産自動車株式会社)
Application filed by Nissan Motor Co., Ltd. (日産自動車株式会社)
Priority to PCT/JP2022/035669
Publication of WO2024069689A1


Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Definitions

  • the present invention relates to a driving assistance method and a driving assistance device.
  • Patent Document 1 discloses a surround view system for vehicles that generates a three-dimensional model of the vehicle's surroundings based on data about the vehicle's surroundings and maps visual data onto each part of the three-dimensional model, thereby reducing distortion in the generated virtual surround view.
  • the process of mapping the above-mentioned visual data onto a three-dimensional model places a large load on the processing device.
  • the above-mentioned conventional technology is unable to adequately reduce distortion in the virtual surround view in driving scenes where the conditions around the vehicle are constantly changing, and is unable to accurately recognize the driving conditions of other vehicles around the vehicle.
  • As a result, unnecessary evasive action may be taken based on the misrecognized driving conditions of other vehicles, disrupting the behavior of the vehicle.
  • the problem that this invention aims to solve is to provide a driving assistance method and driving assistance device that can reduce the impact of erroneous recognition of the driving conditions of other vehicles on the driving conditions of the vehicle itself.
  • the present invention solves the above problem as follows: when the detection ranges of multiple imaging devices mounted on the vehicle overlap in a lane adjacent to the lane in which the vehicle is traveling, and it is determined that another vehicle will enter the overlapping portion within a predetermined time, risk management control is executed to control the vehicle so as to address the risk of misrecognizing the driving state of the other vehicle from the detection results of the multiple imaging devices.
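The triggering condition summarized above can be sketched as a simple time-to-entry check. This is a hedged illustration under a constant-relative-speed, straight-lane assumption, not the patented implementation; all names and numbers are ours.

```python
def will_enter_within(other_x, other_speed, host_speed,
                      overlap_start, overlap_end, horizon_s):
    """Predict whether another vehicle reaches the overlapping portion of
    the camera detection ranges within horizon_s seconds.

    Positions are longitudinal offsets in the host-vehicle frame (m),
    speeds in m/s; constant speeds and straight lanes are assumed."""
    if overlap_start <= other_x <= overlap_end:
        return True  # already inside the overlapping portion
    rel_speed = other_speed - host_speed
    if rel_speed == 0.0:
        return False  # the longitudinal gap never closes
    # nearest edge of the overlap as seen from the other vehicle
    edge = overlap_start if other_x < overlap_start else overlap_end
    t = (edge - other_x) / rel_speed
    return 0.0 <= t <= horizon_s
```

For example, a faster vehicle 30 m behind an overlap spanning -10 m to -2 m, closing at 5 m/s, would reach it in 4 s, so a 5 s horizon would trigger the risk management control.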
  • the present invention can reduce the impact of erroneous recognition of the driving conditions of other vehicles on the driving conditions of the vehicle itself.
  • FIG. 1 is a block diagram showing an example of a driving assistance system including a driving assistance device according to the present invention.
  • FIG. 2 is a plan view showing an example of the imaging device of FIG. 1 .
  • FIG. 3 is a plan view showing an example of a detection result of another vehicle by the imaging device shown in FIG. 2 .
  • FIG. 4A is a plan view showing an example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 1).
  • FIG. 4B is a plan view showing an example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 2).
  • FIG. 4C is a plan view showing an example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 3).
  • FIG. 5A is a plan view showing another example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 1).
  • FIG. 5B is a plan view showing another example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 2).
  • FIG. 5C is a plan view showing another example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 3).
  • FIG. 6A is a plan view showing still another example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 1).
  • FIG. 6B is a plan view showing still another example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 2).
  • FIG. 6C is a plan view showing still another example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1 (part 3).
  • FIG. 7A is a flowchart showing an example of a processing procedure in the driving assistance system of FIG. 1 (part 1).
  • FIG. 7B is a flowchart showing an example of a processing procedure in the driving assistance system of FIG. 1 (part 2).
  • FIG. 7C is a flowchart showing an example of a processing procedure in the driving assistance system of FIG. 1 (part 3).
  • FIG. 1 is a block diagram showing a driving assistance system 10 according to the present invention.
  • the driving assistance system 10 is an in-vehicle system that drives a vehicle to a destination set by a vehicle occupant (including a driver) by autonomous driving control.
  • the autonomous driving control means that the driving operation of the vehicle is autonomously controlled using a driving assistance device described later, and the driving operation includes all driving operations such as acceleration, deceleration, starting, stopping, steering to the right or left, lane changing, and pulling over.
  • autonomously controlling the driving operation means that the driving assistance device controls the driving operation using a device of the vehicle.
  • the driving assistance device controls these driving operations within a predetermined range, and the driving operations that are not controlled by the driving assistance device are manually operated by the driver.
  • the driving assistance system 10 comprises an imaging device 11, a distance measuring device 12, a vehicle state detection device 13, map information 14, a vehicle position detection device 15, a navigation device 16, a vehicle control device 17, a display device 18, and a driving assistance device 19.
  • the devices that make up the driving assistance system 10 are connected by a CAN (Controller Area Network) or other in-vehicle LAN, and can send and receive information between each other.
  • the imaging device 11 is a device that recognizes objects around the vehicle using images, and is, for example, a camera equipped with an imaging element such as a CCD, an ultrasonic camera, an infrared camera, or the like.
  • a single vehicle can be provided with multiple imaging devices 11, and they can be placed, for example, in the vehicle's front grille, under the left and right door mirrors, and near the rear bumper. This makes it possible to reduce blind spots when recognizing objects around the vehicle.
  • the ranging device 12 is a device for calculating the relative distance and relative speed between the vehicle and an object, and is, for example, a radar device such as a laser radar or millimeter-wave radar (LRF, etc.), a LiDAR (light detection and ranging) unit, or a sonar device such as an ultrasonic radar.
  • a single vehicle can be provided with multiple ranging devices 12, and can be positioned, for example, at the front, right side, left side, and rear of the vehicle. This makes it possible to accurately calculate the relative distance and relative speed between the vehicle and objects around it.
  • Objects detected by the imaging device 11 and distance measuring device 12 include road lane boundaries, center lines, road markings, medians, guardrails, curbs, highway sidewalls, road signs, traffic lights, crosswalks, construction sites, accident sites, traffic restrictions, etc. Objects also include obstacles that may affect the travel of the vehicle, such as automobiles (other vehicles) other than the vehicle itself, motorcycles, bicycles, pedestrians, etc.
  • the detection results of the imaging device 11 and distance measuring device 12 are obtained at predetermined time intervals by the driving assistance device 19 as necessary. The predetermined time intervals can be set to an appropriate value depending on the processing capacity of the driving assistance device 19.
  • the detection results of the imaging device 11 and the distance measuring device 12 can be integrated or synthesized (so-called sensor fusion) by the driving assistance device 19, which makes it possible to supplement missing information about the detected object.
  • the driving assistance device 19 can calculate the position information of the object based on the self-position information, which is the position where the vehicle is traveling, acquired by the vehicle position detection device 15, and the relative position (distance and direction) between the vehicle and the object.
  • the calculated position information of the object is integrated by the driving assistance device 19 with multiple pieces of information, such as the detection results of the imaging device 11 and the distance measuring device 12 and the map information 14, to become information about the driving environment around the vehicle.
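The position calculation described above reduces to adding the sensed relative offset, rotated by the host heading, to the host's own position. A minimal sketch; the function name and frame conventions are illustrative assumptions, not taken from this document.

```python
import math

def object_global_position(host_x, host_y, host_heading_rad,
                           rel_distance, rel_bearing_rad):
    """Map position of a detected object from the host pose and the
    sensed relative distance/direction. rel_bearing_rad is measured
    from the host's heading axis (0 = straight ahead, positive =
    counterclockwise)."""
    angle = host_heading_rad + rel_bearing_rad
    return (host_x + rel_distance * math.cos(angle),
            host_y + rel_distance * math.sin(angle))
```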
  • the detection results of the imaging device 11 and the distance measuring device 12 and the map information 14 can be used to recognize objects around the vehicle and predict their movements.
  • the vehicle state detection device 13 is a device for detecting the vehicle's running state, and examples of such devices include a vehicle speed sensor, an acceleration sensor, a yaw rate sensor (e.g., a gyro sensor), a steering angle sensor, and an inertial measurement unit. There are no particular limitations on these devices, and any known device can be used. Furthermore, the placement and number of these devices can be set appropriately within a range in which the vehicle's running state can be appropriately detected. The detection results of each device are obtained by the driving assistance device 19 at specified time intervals as necessary.
  • Map information 14 is information used for generating driving routes, controlling driving operations, etc., and includes road information, facility information, and their attribute information.
  • Road information and road attribute information include information such as road width, road curvature radius, road shoulder structures, road traffic regulations (speed limit, whether lane changes are permitted), road junctions and branching points, and locations where the number of lanes increases or decreases.
  • Map information 14 is high-definition map information that allows the movement trajectory of each lane to be grasped, and includes two-dimensional position information and/or three-dimensional position information at each map coordinate, road/lane boundary information at each map coordinate, road attribute information, lane uphill/downhill information, lane identification information, connecting lane information, etc. Note that high-precision maps are also called HD (High-Definition) maps.
  • the road/lane boundary information in the high-definition map information is information that indicates the boundary between the lane in which the vehicle travels and other lanes.
  • the lane on which the vehicle travels is the road on which the vehicle travels, and the form of the lane is not particularly limited.
  • the boundary exists on both the left and right sides of the vehicle's direction of travel, and the form is not particularly limited.
  • the boundary is, for example, a road marking or a road structure. Examples of road markings include lane boundaries and center lines, and examples of road structures include medians, guard rails, curbs, tunnels, and side walls of expressways. Note that at points where the lane boundary cannot be clearly identified, such as within intersections, a boundary is set for the lane in advance. This boundary is imaginary, and is not an actual road marking or road structure.
  • Map information 14 is stored in a readable state on a recording medium provided in the driving assistance device 19, an in-vehicle device, or a server on a network.
  • the driving assistance device 19 acquires map information 14 as necessary.
  • the vehicle position detection device 15 is a positioning system for detecting the current position of the vehicle, and is not particularly limited, and any known system can be used.
  • the vehicle position detection device 15 calculates the current position of the vehicle from radio waves received from a satellite for the Global Positioning System (GPS), for example.
  • the vehicle position detection device 15 may also estimate the current position of the vehicle from vehicle speed information and acceleration information acquired from the vehicle state detection device 13, which includes a vehicle speed sensor, an acceleration sensor, and a gyro sensor, and calculate the current position of the vehicle by comparing the estimated current position with the map information 14.
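The estimation from vehicle-speed and yaw-rate information mentioned above is essentially dead reckoning; one integration step of a planar unicycle model might look like the following sketch. The model and units are illustrative assumptions; the actual estimator is not specified in this document.

```python
import math

def dead_reckon_step(x, y, heading, speed, yaw_rate, dt):
    """One Euler integration step of a planar unicycle model.
    Propagates the pose from vehicle-speed and yaw-rate samples; a
    positioning system can then correct the result against the map.
    Units: m, rad, m/s, rad/s, s."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading
```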
  • the navigation device 16 is a device that refers to the map information 14 and calculates a driving route from the current position of the vehicle detected by the vehicle position detection device 15 to a destination set by the occupants (including the driver).
  • the navigation device 16 uses road information and facility information in the map information 14 to search for a driving route for the vehicle to reach the destination from the current position.
  • the driving route includes at least information on the road the vehicle is traveling on, the driving lane, and the vehicle's driving direction, and is displayed, for example, linearly. There may be multiple driving routes depending on the search conditions.
  • the driving route calculated by the navigation device 16 is output to the driving assistance device 19.
  • the vehicle control device 17 is an on-board computer such as an electronic control unit (ECU), and electronically controls on-board equipment that governs the driving of the vehicle.
  • the vehicle control device 17 is equipped with a vehicle speed control device 171 that controls the driving speed of the vehicle, and a steering control device 172 that controls the steering operation of the vehicle.
  • the vehicle speed control device 171 and the steering control device 172 autonomously control the operation of these drive devices and steering devices in response to control signals input from the driving assistance device 19. This allows the vehicle to drive autonomously according to a set driving route.
  • Information necessary for autonomous control by the vehicle speed control device 171 and the steering control device 172 is obtained from the vehicle state detection device 13.
  • the drive devices controlled by the vehicle speed control device 171 include an electric motor and/or an internal combustion engine, which are drive sources for driving, a power transmission device including a drive shaft and an automatic transmission that transmits the output from these drive sources for driving to the drive wheels, and a drive device that controls the power transmission device.
  • the braking device controlled by the vehicle speed control device 171 is, for example, a braking device that brakes the wheels.
  • a control signal corresponding to the set driving speed is input to the vehicle speed control device 171 from the driving assistance device 19.
  • the vehicle speed control device 171 generates signals to control these drive devices based on the control signals input from the driving assistance device 19, and transmits the signals to the drive devices, thereby autonomously controlling the vehicle's driving speed.
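How such a control signal is converted into drive and brake commands is not specified here; a deliberately simple proportional sketch conveys the idea. The gain, clamping, and signal convention are illustrative assumptions, not the actual control law of the vehicle speed control device 171.

```python
def speed_control_signal(target_speed, current_speed, gain=0.5):
    """Proportional speed controller sketch. Positive output = drive
    torque request, negative output = brake request, normalized to
    [-1, 1]. Speeds in m/s."""
    u = gain * (target_speed - current_speed)
    return max(-1.0, min(1.0, u))
```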
  • the steering device controlled by the steering control device 172 is a steering device that controls the steered wheels according to the rotation angle of the steering wheel, and an example of this is a steering actuator such as a motor attached to the steering column shaft.
  • the steering control device 172 autonomously controls the operation of the steering device so that the vehicle travels while maintaining a predetermined lateral position (left-right position of the vehicle) with respect to the set travel route. For this control, at least one of the detection results of the imaging device 11 and the distance measuring device 12, the vehicle's travel state obtained by the vehicle state detection device 13, the map information 14, and the information on the current position of the vehicle obtained by the vehicle position detection device 15 is used.
  • the display device 18 is a device for providing necessary information to vehicle occupants, and is, for example, a liquid crystal display provided on the instrument panel, a projector such as a head-up display (HUD), etc.
  • the display device 18 may also be equipped with an input device for the vehicle occupants to input instructions to the driving assistance device 19. Examples of input devices include a touch panel that receives input by the user's finger or a stylus pen, a microphone that receives instructions by the user's voice, and a switch attached to the steering wheel of the vehicle.
  • the display device 18 may also be equipped with a speaker as an output device.
  • the driving assistance device 19 is a device that controls the driving of the vehicle by controlling and coordinating the devices that make up the driving assistance system 10, and drives the vehicle to a set destination.
  • the destination is set, for example, by the vehicle occupant.
  • the driving assistance device 19 is, for example, a computer, and includes a CPU (Central Processing Unit) 191, which is a processor, a ROM (Read Only Memory) 192 in which programs are stored, and a RAM (Random Access Memory) 193 that functions as an accessible storage device.
  • the CPU 191 is an operating circuit that executes the programs stored in the ROM 192 and realizes the functions of the driving assistance device 19.
  • the driving assistance device 19 has a driving assistance function of driving the vehicle to a set destination by autonomous driving control.
  • the driving assistance device 19 has, as driving assistance functions, a route generation function of generating a driving route, an environment recognition function of recognizing the driving environment around the vehicle, a determination function of making a determination necessary for executing autonomous driving control based on the recognized driving environment, and a driving control function of generating a driving trajectory and driving the vehicle along the driving trajectory.
  • the programs stored in ROM 192 include programs for realizing these functions, and these functions are realized by CPU 191 executing the programs stored in ROM 192.
  • Figure 1 shows functional blocks that realize each function extracted for convenience.
  • the support unit 20 has a driving support function that drives the vehicle to a set destination by autonomous driving control.
  • FIG. 2 is a plan view showing an example of a driving scene in which the driving support device 19 autonomously controls the driving of the vehicle by the driving support function.
  • a three-lane road extends in the vertical direction of the drawing, and the vehicle drives on the road from the bottom to the top of the drawing.
  • the lanes are designated as lanes L1, L2, and L3 in order from the left lane in the driving direction.
  • the host vehicle V1 is driving on lane L2, and is heading toward a destination (not shown) ahead that has been set by the occupant of the host vehicle V1.
  • the recognition unit 21 has an environmental recognition function that recognizes the driving environment around the vehicle.
  • the driving assistance device 19 recognizes the driving environment around the vehicle using the imaging device 11 and distance measuring device 12 through the environmental recognition function of the recognition unit 21.
  • the driving environment is information for determining whether the vehicle can maintain its current driving state or needs to change its driving state, and includes information such as the type and position of an object, the type and position of an obstacle if one exists, road conditions such as road surface conditions, and weather.
  • the driving assistance device 19 recognizes the driving environment by performing appropriate processing such as pattern matching and sensor fusion on the detection results of the imaging device 11 and distance measuring device 12.
  • the recognition unit 21 of this embodiment has the function of detecting obstacles using multiple imaging devices 11 mounted on the host vehicle V1.
  • the host vehicle V1 is equipped with a front camera that detects obstacles present in a detection range A1 in front of the host vehicle V1, and a rear camera that detects obstacles present in a detection range A2 behind the host vehicle V1.
  • the host vehicle V1 is equipped with a front wide-angle camera that detects obstacles present in a detection range B1 in front of the host vehicle V1, a rear wide-angle camera that detects obstacles present in a detection range B2 behind the host vehicle V1, a left side wide-angle camera that detects obstacles present in a detection range B3 to the left of the host vehicle V1, and a right side wide-angle camera that detects obstacles present in a detection range B4 to the right of the host vehicle V1.
  • Wide-angle cameras are equipped with wide-angle lenses, so they have a wider angle of view and a shorter focal length than normal cameras. Therefore, the detection range B1 of the front wide-angle camera is shorter in distance along the lane L2 than the detection range A1 of the front camera, and has a wider angle of view in the width direction of lane L2. Similarly, the detection range B2 of the rear wide-angle camera is shorter in distance along the lane L2 than the detection range A2 of the rear camera, and has a wider angle of view in the width direction of lane L2.
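The trade-off described above follows from the pinhole relation hfov = 2 * atan(w / (2 * f)): a shorter focal length f gives a wider horizontal angle of view for the same sensor width w. The sensor width and focal lengths below are illustrative values, not taken from this document.

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view of a pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm /
                                      (2 * focal_length_mm)))

wide_fov = horizontal_fov_deg(36.0, 18.0)    # short (wide-angle) lens
normal_fov = horizontal_fov_deg(36.0, 50.0)  # longer (normal) lens
# wide_fov exceeds normal_fov, matching the description above.
```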
  • the driving assistance device 19 uses the function of the recognition unit 21 to integrate and process the detection results of the front wide-angle camera, rear wide-angle camera, left side wide-angle camera, and right side wide-angle camera using sensor fusion to thoroughly detect obstacles present around the host vehicle V1.
  • These wide-angle cameras are arranged so that adjacent cameras' detection ranges overlap in part (for example, a range of about 10 to 15% of the field of view from the horizontal end of the detection range) so that there are no blind spots around the host vehicle V1 where obstacles cannot be detected.
  • the detection ranges of the multiple imaging devices 11 mounted on the host vehicle V1 are arranged so that they overlap in parts of the lane adjacent to the host vehicle V1's lane.
  • the overlapping parts of the detection ranges of the imaging devices 11 are referred to as overlapping parts.
  • the forward detection range B1 and the left side detection range B3 overlap in the range indicated by overlapping portion C1
  • the rear detection range B2 and the left side detection range B3 overlap in the range indicated by overlapping portion C2.
  • Both overlapping portions C1 and C2 are located in the lane L1 adjacent to the lane L2 in which the host vehicle V1 is traveling.
  • the forward detection range B1 and the right side detection range B4 overlap in the range indicated by overlapping portion C3
  • the rear detection range B2 and the right side detection range B4 overlap in the range indicated by overlapping portion C4.
  • Both overlapping portions C3 and C4 are located in the lane L3 adjacent to the lane L2 in which the host vehicle V1 is traveling.
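The overlapping portions C1 to C4 can be modeled geometrically as intersections of two cameras' angular sectors. The sketch below, with illustrative mounting positions, angles of view, and ranges (none taken from this document), tests whether a point in an adjacent lane lies inside two detection ranges at once.

```python
import math

def in_sector(px, py, cam_x, cam_y, cam_heading, half_fov, max_range):
    """True if point (px, py) lies inside a camera's detection range,
    modeled as an angular sector: mount position, facing direction,
    half angle of view (rad), and maximum range (m)."""
    dx, dy = px - cam_x, py - cam_y
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx)
    # wrap the angular difference into (-pi, pi]
    diff = (bearing - cam_heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_fov

def in_overlap(px, py, cam_a, cam_b):
    """cam_a / cam_b are (x, y, heading, half_fov, max_range) tuples."""
    return in_sector(px, py, *cam_a) and in_sector(px, py, *cam_b)
```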
  • In the overlapping portions, the state of an obstacle often cannot be accurately detected. This is because the overlapping portions lie at the edge of the wide-angle lens's angle of view, where the lens characteristics distort the photographed shape of the obstacle. In addition, if an obstacle in the adjacent lane extends in front of and behind the overlapping portion, no single camera can photograph the obstacle in its entirety, and the obstacle is instead recognized by combining the detection results of the multiple cameras mounted on the host vehicle V1; there are cases in which the state of the obstacle cannot be accurately estimated when combining those detection results.
  • the driving scene shown in Figure 3 is a driving scene in which the other vehicle V2 is driving on lane L1 in the driving scene shown in Figure 2.
  • the other vehicle V2 is a truck, and is a large vehicle different from the host vehicle V1, which is a passenger car.
  • the overall length of the other vehicle V2 is longer than the overall length of the host vehicle V1.
  • the other vehicle V2 is driving at a position on the overlapping portion C1 of the lane L1, and the body of the other vehicle V2 is located in front of and behind the overlapping portion C1.
  • the driving support device 19 recognizes that the other vehicle V2 is driving in the state of V2 shown in FIG. 3. That is, the driving support device 19 correctly recognizes that the other vehicle V2 is moving straight in the lane L1. In this case, the driving support device 19 uses the driving control function to keep the host vehicle V1 moving straight in the lane L2 by lane keeping control, and does not perform a driving operation to avoid the other vehicle V2.
  • Also, for example, when the other vehicle V2 changes lanes toward a position in front of the host vehicle V1, as shown by V2x in FIG. 3, a driving operation to avoid the other vehicle V2 is performed based on the recognized driving state of the other vehicle V2.
  • the vehicle speed control device 171 is used to decelerate the host vehicle V1
  • the steering control device 172 is used to change lanes of the host vehicle V1 from lane L2 to lane L3.
  • On the other hand, suppose that the driving support device 19 erroneously recognizes that the other vehicle V2, which is actually traveling straight, is driving in a state such as V2x shown in FIG. 3.
  • the driving support device 19 uses the driving control function to perform driving operations to avoid the other vehicle V2 based on the erroneously recognized driving state of the other vehicle V2.
  • This avoidance action is actually an unnecessary driving action to avoid the other vehicle V2 that is traveling straight ahead, and this driving action disrupts the behavior of the host vehicle V1 and causes discomfort to the occupants of the host vehicle V1.
  • the driving assistance device 19 of this embodiment therefore executes autonomous driving control to address the risk of erroneously recognizing the driving state of the other vehicle V2, in order to reduce the impact that an erroneously recognized driving state of the other vehicle V2 has on the driving state of the host vehicle V1.
  • This autonomous driving control is mainly controlled by the functions of the determination unit 22 and the control unit 23.
  • the functions of the determination unit 22 and the control unit 23 will be explained below using Figures 4A to 4C. In the following explanation, the overlapping portion of the detection ranges of the multiple imaging devices 11 mounted on the host vehicle V1 will also be simply referred to as the "overlapping portion".
  • the determination unit 22 has a determination function of determining whether the other vehicle V2 will enter an overlapping portion of the detection ranges of the multiple imaging devices 11.
  • the driving assistance device 19 uses the function of the determination unit 22 to determine whether the other vehicle V2 will enter an overlapping portion of the detection ranges of the multiple imaging devices 11, based on driving environment information around the host vehicle V1 acquired by the function of the recognition unit 21. This makes it possible to start control to reduce the risk of erroneous detection before the other vehicle V2, whose body may extend beyond the overlapping portion, enters the overlapping portion where erroneous detection is likely to occur.
  • When the driving assistance device 19 determines whether the other vehicle V2 will enter an overlapping portion of the detection ranges of the multiple imaging devices 11, it recognizes the driving states of the host vehicle V1 and the other vehicle V2 from, for example, the driving environment information acquired by the function of the recognition unit 21.
  • the driving state of the vehicle refers to the state of the vehicle's traveling direction and driving speed, and includes states such as a state in which the vehicle is traveling straight, a state in which the vehicle is steering to the right or left, a state in which the vehicle is accelerating or decelerating, and a state in which the vehicle is traveling at a constant speed.
  • the driving state of the vehicle also includes the state of the driving operation performed by the vehicle. Examples include a state in which the vehicle's turn indicators are flashing, a state in which the vehicle's headlights are turned on, etc.
  • the driving assistance device 19 uses the function of the determination unit 22, acquires information such as the driving speed, acceleration, yaw rate, steering angle, and steering wheel rotation angle of the host vehicle V1 from various sensors of the host vehicle state detection device 13, and recognizes the current driving state of the host vehicle V1.
  • the driving assistance device 19 may acquire road information from the map information 14, acquire the current position of the host vehicle V1 from the host vehicle position detection device 15, acquire the driving route from the navigation device 16, and recognize the traveling direction and/or driving speed of the host vehicle V1 from the shape of the road and/or the driving route at the current position of the host vehicle V1.
  • the driving assistance device 19 uses the functions of the determination unit 22 to, for example, acquire image data from the imaging device 11, extract and identify obstacles by pattern matching, and recognize the type, position, and state of the obstacle. It also acquires information obtained by scanning the area around the vehicle V1 from the distance measuring device 12, and recognizes the position and direction of the obstacle from this information. If it recognizes that the obstacle is another vehicle V2 from the image data acquired from the imaging device 11, it recognizes the degree to which the vehicle body is tilted (i.e., the degree to which it is being steered) from its shape. It also acquires the position and relative speed of the other vehicle V2 with respect to the vehicle V1 from the scan results of the distance measuring device 12. Then, it recognizes the driving position, direction of travel, and driving speed of the other vehicle V2 based on these detection results.
  • the driving assistance device 19 determines whether the other vehicle V2 will enter the overlapping portion based on the recognized driving states of the host vehicle V1 and the other vehicle V2, specifically, based on the positional relationship between the host vehicle V1 and the other vehicle V2, the speed difference between them, and the traveling directions of the two vehicles.
  • when the other vehicle V2 is traveling behind the host vehicle V1, if the traveling speed of the other vehicle V2 is faster than that of the host vehicle V1 and the other vehicle V2 is heading toward the overlapping portion, it is determined that the other vehicle V2 will enter the overlapping portion.
  • when the other vehicle V2 is traveling behind the host vehicle V1 and the traveling speed of the other vehicle V2 is equal to or lower than that of the host vehicle V1, it is determined that the other vehicle V2 will not enter the overlapping portion.
  • when the other vehicle V2 is traveling ahead of the host vehicle V1, if the traveling speed of the other vehicle V2 is equal to or higher than that of the host vehicle V1, it is determined that the other vehicle V2 will not enter the overlapping portion.
  • when the other vehicle V2 is traveling ahead of the host vehicle V1, if the traveling speed of the other vehicle V2 is slower than that of the host vehicle V1 and the host vehicle V1 (in particular, the overlapping portion) is closing on the other vehicle V2, it is determined that the other vehicle V2 will enter the overlapping portion.
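The four entry-determination rules above can be summarized in a minimal Python sketch. The function name, string labels, and boolean flag below are hypothetical illustrations for this document, not identifiers from the disclosed device:

```python
def will_enter_overlap(rel_position: str, v_host: float, v_other: float,
                       heading_toward_overlap: bool) -> bool:
    """Sketch of the entry decision described above (hypothetical helper).

    rel_position: 'behind' or 'ahead' -- position of the other vehicle V2
        relative to the host vehicle V1.
    v_host, v_other: traveling speeds of V1 and V2 (same units).
    heading_toward_overlap: True when the vehicles' traveling directions
        bring V2 (or the overlapping portion itself) toward the overlap.
    """
    if rel_position == 'behind':
        # V2 behind V1: enters only if it is faster and heading toward
        # the overlapping portion; equal-or-slower speed means no entry.
        return v_other > v_host and heading_toward_overlap
    if rel_position == 'ahead':
        # V2 ahead of V1: enters only if V1 is closing on it, i.e. V2
        # is slower; equal-or-faster speed means no entry.
        return v_other < v_host and heading_toward_overlap
    return False
```

Note the symmetry: in both cases entry is predicted exactly when the speed difference closes the gap toward the overlapping portion of the detection ranges.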
  • the method by which the driving assistance device 19 determines whether the other vehicle V2 will enter the overlapping portion is not limited to the above, and other methods may be used.
  • the driving assistance device 19 may predict the driving state of the host vehicle V1 and the driving state of the other vehicle V2 after a predetermined time, and determine whether the other vehicle V2 will enter the overlapping portion within a predetermined time based on the predicted driving states of the host vehicle V1 and the other vehicle V2.
  • the predetermined time can be set to an appropriate value within a range in which autonomous driving control that addresses the risk of misrecognizing the driving state can be initiated before the other vehicle V2 actually enters the overlapping portion, for example 10 to 20 seconds. If the predetermined time is shorter than this, the start of such control will be delayed, resulting in larger changes in the behavior of the host vehicle V1. Conversely, if the predetermined time is longer than this, the driving state cannot be predicted accurately, and there is a risk that the control will be executed in driving scenes where it is not necessary.
  • when predicting the running state of the host vehicle V1 after a predetermined time, the driving assistance device 19 recognizes the current running state of the host vehicle V1 based on information obtained from the host vehicle state detection device 13. In addition, the driving assistance device 19 obtains the control signals output to the drive device and/or steering device from the vehicle control device 17, and recognizes how the traveling direction and/or running speed of the host vehicle V1 will be controlled (changed). Based on these, it predicts how the running state of the host vehicle V1 will change after the predetermined time.
  • the driving assistance device 19 may obtain road information from the map information 14, obtain the current position of the host vehicle V1 from the host vehicle position detection device 15, obtain the running route from the navigation device 16, and predict the traveling direction and/or running speed of the host vehicle V1 after a predetermined time from the shape of the road ahead of the current position of the host vehicle V1 and/or the running route.
  • when predicting the driving state of the other vehicle V2 after a predetermined time, the driving assistance device 19 obtains information obtained by scanning the surroundings of the host vehicle V1 from the distance measuring device 12, and recognizes the position and direction of the obstacle from that information.
  • the driving assistance device 19 repeats the process of recognizing the position and direction of the obstacle from the scan results of the distance measuring device 12 multiple times (e.g., three or more times) at time intervals shorter than the predetermined time, recognizes the tendency of changes in the obstacle position, and predicts the state of the obstacle after the predetermined time (i.e. the driving state of the other vehicle V2) from that tendency.
  • the driving assistance device 19 determines whether the other vehicle V2 will enter the overlapping portion within a predetermined time based on the predicted driving state of the subject vehicle V1 and the driving state of the other vehicle V2. Specifically, it determines whether the other vehicle V2 will enter the overlapping portion based on the positional relationship between the subject vehicle V1 and the other vehicle V2 after the predetermined time. This makes it possible to determine whether the other vehicle V2 will enter the overlapping portion even if the driving states of both the subject vehicle V1 and the other vehicle V2 change.
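The trend-based prediction described above (three or more scans at intervals shorter than the predetermined time, followed by extrapolation of the change tendency) can be sketched as a constant-velocity extrapolation. The sample format and helper name are assumptions for illustration only:

```python
def predict_position(samples, horizon):
    """Extrapolate an obstacle's position `horizon` seconds ahead from
    timestamped (t, x, y) scan samples -- three or more, per the text.

    The average velocity over the observed window stands in for the
    'tendency of changes in the obstacle position'.
    """
    if len(samples) < 3:
        raise ValueError("need at least three scan samples")
    (t0, x0, y0) = samples[0]
    (tn, xn, yn) = samples[-1]
    dt = tn - t0
    vx, vy = (xn - x0) / dt, (yn - y0) / dt  # observed change tendency
    # Project the latest position forward by the predetermined time.
    return (xn + vx * horizon, yn + vy * horizon)
```

The predicted position of the other vehicle V2 would then be compared with the calculated position of the host vehicle V1 (and the overlapping portion attached to it) to decide whether V2 enters the overlap within the predetermined time.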
  • the driving scene shown in FIG. 4A is a driving scene in which another vehicle V2 is present in lane L3 in the driving scene shown in FIG. 2.
  • the other vehicle V2 is traveling at position P1 behind the host vehicle V1, and the driving assistance device 19 of the host vehicle V1 recognizes the other vehicle V2 present in the detection range A2 of the rear camera through the function of the recognition unit 21.
  • the host vehicle V1 is traveling at a constant speed due to lane keeping control, and the other vehicle V2 is traveling straight at a faster driving speed than the host vehicle V1, so the other vehicle V2 will overtake the host vehicle V1 within a predetermined time.
  • the driving assistance device 19 acquires vehicle speed information of the host vehicle V1 from the vehicle speed sensor (host vehicle state detection device 13) and calculates the traveling position of the host vehicle V1 when constant speed traveling by lane keeping control continues for a predetermined time (10 to 20 seconds).
  • the driving assistance device 19 also acquires image data from the imaging device 11 and recognizes by pattern matching that the obstacle is a truck (other vehicle V2) and that it is traveling in lane L3.
  • from the detection results of the rear camera (imaging device 11), the driving assistance device 19 recognizes that the other vehicle V2 is traveling straight without turning. In addition, the relative speed of the other vehicle V2 with respect to the host vehicle V1 is acquired.
  • the driving assistance device 19 calculates the traveling speed of the other vehicle V2 from the relative speed of the other vehicle V2 with respect to the host vehicle V1 and predicts where the other vehicle V2 traveling straight will be traveling after a predetermined time. In the driving scene shown in FIG. 4A, the other vehicle V2 will overtake the host vehicle V1 within a predetermined time, and the driving assistance device 19 determines that the other vehicle V2 will enter the overlapping portions C3 and C4 within the predetermined time based on the relationship between the calculated driving position of the host vehicle V1 and the predicted driving position of the other vehicle V2.
  • predicting the driving state of the subject vehicle V1 and the driving state of the other vehicle V2 after a predetermined time, and determining whether the other vehicle V2 will enter the overlapping portion within the predetermined time based on the predicted driving states of the subject vehicle V1 and the other vehicle V2, are not essential components of the present invention, and may be provided as necessary.
  • when it is determined, by the function of the determination unit 22, that the other vehicle V2 will enter the overlapping portion, the control unit 23 has a function of executing autonomous driving control that addresses the risk of misrecognizing the driving state of the other vehicle V2 from the detection results of the multiple imaging devices 11 mounted on the host vehicle V1.
  • Autonomous driving control that addresses the risk of misrecognizing the driving state is, for example, autonomous driving control for suppressing the possibility of misrecognizing the driving state of the other vehicle V2 traveling in the overlapping portion, as shown in FIG. 3, or for suppressing the effect of misrecognizing the driving state on the host vehicle V1.
  • autonomous driving control that controls the host vehicle V1 to address the risk of misrecognizing the driving state of the other vehicle V2 from the detection results of the imaging devices 11 mounted on the host vehicle V1 will also be referred to as risk handling control.
  • the driving assistance device 19 autonomously controls the driving of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion.
  • the driving assistance device 19 may autonomously control the driving of the host vehicle V1 so that the driving state of the other vehicle V2, which is erroneously recognized from the detection result of the imaging device 11, does not affect the driving state of the host vehicle V1.
  • Autonomously controlling the travel of the vehicle V1 so that the other vehicle V2 does not enter the overlapping portion includes setting the travel speed of the vehicle V1 so that the other vehicle V2 is not included in the detection range of the imaging device 11 (particularly the overlapping portion of the detection range), setting the inter-vehicle distance between the vehicle V1 and the other vehicle V2 so that the other vehicle V2 is not included in the detection range of the imaging device 11 (particularly the overlapping portion of the detection range), and changing the lane of the vehicle V1 to an adjacent lane.
  • autonomously controlling the travel of the vehicle V1 so that the erroneously recognized travel state of the other vehicle V2 does not affect the travel state of the vehicle V1 includes autonomously controlling the travel of the vehicle V1 without using the detection result of the imaging device 11, and autonomously controlling the travel of the vehicle V1 by setting the weighting of the detection result of the imaging device 11 to a small value.
  • the driving support device 19 executes risk response control according to the driving scene, and may execute an appropriate combination of the above-mentioned multiple risk response controls. Specifically, the driving support device 19 executes appropriate risk response control based on the positional relationship between the host vehicle V1 and the other vehicle V2, the speed difference between the host vehicle V1 and the other vehicle V2, the traveling direction of the host vehicle V1 and the other vehicle V2, the driving behavior of the host vehicle V1 and the other vehicle V2, etc.
  • the driving assistance device 19 determines whether the other vehicle V2 is traveling behind the host vehicle V1. If it determines that the other vehicle V2 is traveling behind the host vehicle V1, the driving assistance device 19 autonomously controls the driving of the host vehicle V1 without using the detection result of the imaging device 11, or by reducing the weighting of that detection result. On the other hand, if it determines that the other vehicle V2 is traveling ahead of the host vehicle V1, or that the host vehicle V1 and the other vehicle V2 are traveling side by side, it determines whether the host vehicle V1 will overtake or pass the other vehicle V2. If it determines that the host vehicle V1 will not overtake or pass, it autonomously controls the driving of the host vehicle V1 so that the other vehicle V2 is not included in the overlapping portion.
  • if it determines that the host vehicle V1 will overtake or pass the other vehicle V2, the host vehicle V1 sets a traveling speed faster than a predetermined traveling speed set in advance, and performs the overtaking or passing without using the detection result of the imaging device 11 or by reducing the weighting of that detection result.
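The selection among risk response controls described above can be sketched as a small decision function. The mode labels and argument names are hypothetical, and a real implementation would issue actuator commands rather than return strings:

```python
def choose_risk_response(rel_position: str, will_overtake_or_pass: bool) -> str:
    """Sketch of the risk-response selection described above.

    Returns one of three hypothetical control modes:
      'ignore_camera'       -- control without (or with down-weighted)
                               imaging-device results
      'keep_out_of_overlap' -- adjust speed / inter-vehicle distance so the
                               other vehicle V2 stays out of the overlap
      'fast_overtake'       -- overtake above the preset speed while ignoring
                               or down-weighting the imaging-device results
    """
    if rel_position == 'behind':
        # V2 behind V1: do not let the camera's possible misrecognition
        # influence the host vehicle's control.
        return 'ignore_camera'
    # V2 ahead of V1, or the two vehicles traveling side by side.
    if will_overtake_or_pass:
        return 'fast_overtake'
    return 'keep_out_of_overlap'
```

In practice these modes may also be combined, as the text notes, depending on the positional relationship, speed difference, traveling directions, and driving behavior of the two vehicles.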
  • a driving scene in which the host vehicle V1 overtakes another vehicle V2 refers to, for example, a driving scene in which the other vehicle V2 is a preceding vehicle of the host vehicle V1 traveling in the same lane as the host vehicle V1, and the traveling speed of the other vehicle V2 is slow, and the autonomous driving control of the host vehicle V1 cannot be continued, so the host vehicle V1 changes lanes to an adjacent lane to overtake the other vehicle V2.
  • the traveling of the host vehicle V1 may be autonomously controlled without using the detection result of the imaging device 11 or by reducing the weighting of the detection result.
  • in the driving scene shown in FIG. 4A, the driving assistance device 19 determines that the other vehicle V2 will enter the overlapping portions C3 and C4 within a predetermined time and that the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion C4 along the lane L3, and therefore executes risk response control using the function of the control unit 23. First, the driving assistance device 19 determines whether the other vehicle V2 is traveling behind the host vehicle V1; based on the positional relationship shown in the figure, it determines that the other vehicle V2 is traveling behind the host vehicle V1.
  • the driving assistance device 19 autonomously controls the running of the host vehicle V1 without using the detection results of the imaging device 11, or autonomously controls the running of the host vehicle V1 by setting a small weighting for the detection results of the imaging device 11.
  • the running of the host vehicle V1 is autonomously controlled without using the detection results of the imaging device 11.
  • the driving assistance device 19 performs lane keeping control for the host vehicle V1 without using the detection results of the imaging device 11. Specifically, lane keeping control is performed using the detection results of the distance measuring device 12, which includes a radar device, sonar, etc., instead of the imaging device 11.
  • alternatively, among the imaging devices 11, the detection results of a normal camera other than a wide-angle camera may be used, because a normal camera carries no risk of misrecognition due to overlapping portions.
  • Position P2 shown in Figure 4C is a position where the other vehicle V2 has passed through overlapping parts C3 and C4 and is included in the detection range A1 of the front camera, so there is no risk that the driving assistance device 19 will erroneously recognize the driving state of the other vehicle V2.
  • the weighting of the detection results of the imaging device 11 is set relatively low compared to the detection results of other detection devices such as the distance measuring device 12 so that the detection results of the imaging device 11 (particularly the wide-angle camera) do not affect the traveling state of the vehicle V1 (i.e., traveling speed and steering operation).
  • the weighting of the detection results of the imaging device 11 may be set low, or the weighting of the detection results of other detection devices such as the distance measuring device 12 may be set high.
  • the weighting coefficient can be set to an appropriate value within a range in which the detection results of the imaging device 11 do not affect the traveling state of the vehicle V1.
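One way to realize the relative weighting described above is a weighted average of per-sensor estimates (for example, of the other vehicle's relative speed). The function and the default weight values below are illustrative assumptions, not values taken from the disclosure:

```python
def fuse_estimates(camera_value: float, ranging_value: float,
                   camera_weight: float = 0.1,
                   ranging_weight: float = 0.9) -> float:
    """Down-weighting the imaging device 11: a weighted average in which the
    camera weight is set small relative to the ranging-sensor weight, so a
    possibly misrecognized camera estimate barely moves the fused value.

    Setting camera_weight to 0 corresponds to not using the imaging
    device's detection result at all.
    """
    total = camera_weight + ranging_weight
    return (camera_weight * camera_value + ranging_weight * ranging_value) / total
```

As the text notes, the same effect can be obtained either by lowering the camera weight or by raising the weight of the other detection devices; only the ratio matters in this formulation.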
  • FIG. 5A is a plan view showing an example of a driving scene in which risk response control is executed.
  • the driving scene shown in FIG. 5A is a driving scene in which the host vehicle V1 and other vehicles V2 and V3 are driving on the road shown in FIG. 2, with the host vehicle V1 driving at position P3 in lane L2, and ahead of it, the other vehicle V2 is driving on lane L1, and the other vehicle V3 is driving on lane L2.
  • the host vehicle V1 drives at a constant speed due to lane keeping control, and the other vehicles V2 and V3 are traveling straight at a driving speed slower than the host vehicle V1. In other words, the host vehicle V1 catches up with the other vehicle V3 within a predetermined time.
  • the driving speed of the subject vehicle V1 is faster than the driving speed of the other vehicles V2 and V3, so the distance between the subject vehicle V1 and the other vehicles V2 and V3 decreases over time.
  • the driving assistance device 19 recognizes the other vehicles V2 and V3 that are included in the detection range A1 from the detection results of the front camera using the function of the recognition unit 21.
  • the driving support device 19 uses the function of the determination unit 22, acquires vehicle speed information of the host vehicle V1 from the vehicle speed sensor, and calculates the traveling position of the host vehicle V1 when constant speed traveling by lane keeping control is continued for a predetermined time.
  • the driving support device 19 also acquires image data from the imaging device 11, and recognizes by pattern matching that the obstacles are a truck (other vehicle V2) and a passenger car (other vehicle V3), that the other vehicle V2 is traveling in lane L1, and that the other vehicle V3 is traveling in lane L2.
  • from the detection results of the front camera (imaging device 11), the driving assistance device 19 predicts the traveling state of the other vehicles V2 and V3 after a predetermined time.
  • the host vehicle V1 will catch up with the other vehicles V2 and V3 within a predetermined time, so the driving assistance device 19 determines that the other vehicle V2 will enter the overlapping portion C1 within the predetermined time based on the relationship between the calculated driving position of the host vehicle V1 and the predicted driving positions of the other vehicles V2 and V3.
  • the driving assistance device 19 uses the function of the control unit 23 to determine whether or not the other vehicle V2 is traveling ahead of the host vehicle V1. In the driving scene shown in FIG. 5B, it is determined that the other vehicle V2 is traveling ahead of the host vehicle V1 based on the positional relationship shown in the figure. In this case, the driving assistance device 19 determines whether or not the host vehicle V1 will overtake or pass the other vehicle V2. For example, the driving assistance device 19 determines whether or not the host vehicle V1 will overtake the other vehicle V2 after a predetermined time based on the driving states of the host vehicle V1 and the other vehicle V2 predicted by the function of the determination unit 22. Alternatively or in addition to this, the driving assistance device 19 may determine whether or not it is necessary to overtake the other vehicle V2 in order to continue autonomous driving control of the host vehicle V1.
  • the driving assistance device 19 autonomously controls the driving of the host vehicle V1 so that the other vehicle V2 is not included in the overlapping portion C1.
  • the driving assistance device 19 sets the distance D3 shown in FIG. 5C as the inter-vehicle distance between the host vehicle V1 and the other vehicle V3.
  • the driving assistance device 19 sets the distance D4 shown in FIG. 5C as the virtual inter-vehicle distance between the position P6 corresponding to the host vehicle V1 in the adjacent lane L1 and the other vehicle V2.
  • the distance D3, which is the distance between the host vehicle V1 and the other vehicle V3, can be set to an appropriate value within a range in which the host vehicle V1 can avoid contact with the other vehicle V3.
  • the distance D4, which is the virtual distance between the host vehicle V1 and the other vehicle V2, can be set to an appropriate value within a range in which the other vehicle V2 can be kept from entering the overlapping portion C1.
  • the driving assistance device 19 can avoid the risk of misrecognizing the driving state of the other vehicle V2 by performing autonomous driving control (following control) that maintains the virtual distance between the host vehicle V1 and the other vehicle V2 and the distance between the host vehicle V1 and the other vehicle V3.
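The dual-gap following control described above, which maintains both the real inter-vehicle distance D3 (to the other vehicle V3 in the host lane) and the virtual inter-vehicle distance D4 (to the other vehicle V2 in the adjacent lane), can be sketched as a proportional speed adjustment that obeys the tighter of the two constraints. The controller form and gain are assumptions for illustration:

```python
def target_speed(v_host: float, gap_in_lane: float, d3: float,
                 virtual_gap_adjacent: float, d4: float, k: float = 0.5) -> float:
    """Slow down in proportion to how far each gap has fallen below its
    target: D3 for the in-lane vehicle V3, D4 for the virtual gap to V2
    in the adjacent lane. The more violated constraint governs.
    """
    err_lane = min(0.0, gap_in_lane - d3)                # < 0 when too close to V3
    err_virtual = min(0.0, virtual_gap_adjacent - d4)    # < 0 when V2 nears the overlap
    # Apply the tighter (more negative) error; never command a negative speed.
    return max(0.0, v_host + k * min(err_lane, err_virtual))
```

When both gaps meet their targets the host speed is unchanged; when either shrinks below its target, the host vehicle decelerates, which keeps V2 out of the overlapping portion C1 while also avoiding contact with V3.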
  • FIG. 6A is a plan view showing an example of a driving scene in which risk response control is executed.
  • the driving scene shown in FIG. 6A is a driving scene in which the host vehicle V1 and another vehicle V2 are traveling on the road shown in FIG. 2, with the host vehicle V1 traveling at position P7 in lane L2 and the other vehicle V2 traveling ahead of it in lane L1.
  • the host vehicle V1 travels at a constant speed due to lane keeping control, and the other vehicle V2 travels straight at a slower driving speed than the host vehicle V1.
  • the host vehicle V1 overtakes the other vehicle V2 within a predetermined time.
  • the driving support device 19 recognizes another vehicle V2 included in the detection range A1 from the detection results of the front camera using the function of the recognition unit 21.
  • the driving support device 19 uses the function of the determination unit 22 to calculate the driving state of the host vehicle V1 after a predetermined time in a manner similar to that of the driving scene shown in FIG. 5A, and predicts the driving state of the other vehicle V2 after a predetermined time.
  • the subject vehicle V1 will overtake the other vehicle V2 within a predetermined time, and the driving assistance device 19 determines that the other vehicle V2 will enter the overlapping portion C1 within the predetermined time based on the relationship between the calculated driving position of the subject vehicle V1 and the predicted driving position of the other vehicle V2.
  • the driving assistance device 19 uses the function of the control unit 23 to determine whether or not the other vehicle V2 is traveling ahead of the host vehicle V1.
  • the other vehicle V2 is traveling ahead of the host vehicle V1, so it is determined that the other vehicle V2 is traveling ahead of the host vehicle V1.
  • the driving assistance device 19 determines whether or not the host vehicle V1 will overtake or pass the other vehicle V2.
  • the host vehicle V1 will overtake the other vehicle V2 within a predetermined time, so it is determined that the host vehicle V1 will overtake the other vehicle V2.
  • the driving assistance device 19 sets a traveling speed faster than a predetermined traveling speed, and overtakes the other vehicle V2 without using the detection result of the imaging device 11 or by setting the weighting of the detection result to be small.
  • the predetermined traveling speed is, for example, a traveling speed set by the occupant of the host vehicle V1, and a traveling speed faster than the predetermined traveling speed is, for example, a traveling speed 5 to 25 km/h faster than that set traveling speed.
  • the driving assistance device 19 can shorten the time during which erroneous recognition of the traveling state of the other vehicle V2 affects the traveling state of the vehicle V1.
  • the driving assistance device 19 sets a driving speed faster than that set by the occupant of the vehicle V1, and at the same time, sets a small weighting on the detection results of the imaging device 11.
  • the vehicle V1 is then caused to travel at a constant speed along the travel trajectory T3 from position P5 to position P6.
  • the processing of the detection results of the imaging device 11 in this case is the same as in the case of the driving scenes shown in FIGS. 4A to 4C.
  • Position P9 shown in Figure 6C is a position where the other vehicle V2 passes through overlapping parts C1 and C2 and is included in the detection range A2 of the rear camera, so there is no risk that the driving assistance device 19 will erroneously recognize the driving state of the other vehicle V2.
  • risk response control in each driving scene shown in Figures 4A to 4C, 5A to 5C, and 6A to 6C is merely an example, and risk response control other than the risk response control described above may be executed in each driving scene.
  • the driving assistance device 19 of this embodiment it is not essential to execute risk response control for all of the driving scenes shown in Figures 4A to 4C, 5A to 5C, and 6A to 6C, and the driving assistance device 19 may execute risk response control for some of the driving scenes.
  • the driving assistance device 19 may calculate the overall length of the other vehicle V2, i.e., the length in the direction of travel of the other vehicle V2, from the detection results of the various cameras (imaging devices 11) and the detection results of the distance measuring device 12, such as radar, LiDAR, or sonar. Then, it may be determined whether the calculated overall length of the other vehicle V2 is longer than the length of the overlapping portion in the direction along the adjacent lane. Also, instead of the length of the overlapping portion in the direction along the adjacent lane, the length of the portion of the overlapping portion that is on the adjacent lane in the direction along the adjacent lane may be used.
  • the driving assistance device 19 performs image analysis, including edge extraction and shape recognition, on the image data acquired from the imaging device 11 to calculate the overall length of the other vehicle V2. For example, in the driving scene shown in FIG. 4A, the driving assistance device 19 performs image analysis on the image data of the detection range A2 to calculate the overall length D1 of the other vehicle V2. In addition, for the overlapping portion C4 that exists in the lane L3 adjacent to the lane L2 in which the host vehicle V1 is traveling, the length D2 along the lane L3 is registered in advance in the ROM 192 of the driving assistance device 19. The driving assistance device 19 compares the overall length D1 of the other vehicle V2 with the length D2 of the overlapping portion C4 along the lane L3, finds that D1 is longer, and therefore determines that the overall length D1 is longer than the length D2.
  • the driving assistance device 19 performs image analysis on the image data acquired from the imaging device 11 and calculates the overall length D1 of the other vehicle V2.
  • the length D2a in the direction along the lane L1 of the portion of the overlapping portion C1 that exists on the lane L1 is registered in advance in the ROM 192 of the driving assistance device 19.
  • comparing the two lengths, the driving assistance device 19 finds that the overall length D1 is longer, and therefore determines that the overall length D1 is longer than the length D2a.
  • the driving assistance device 19 calculates the overall length D1 of the other vehicle V2 in the same manner as in the driving scene shown in FIG. 5B.
  • the length D2 of the overlapping portion C1 in the direction along the lane L1 is registered in advance in the ROM 192 of the driving assistance device 19, so the driving assistance device 19 compares the overall length D1 of the other vehicle V2 with the length D2 of the overlapping portion C1 in the direction along the lane L1.
  • the overall length D1 is longer, so the driving assistance device 19 determines that the overall length D1 is longer than the length D2.
  • the total length of the other vehicle V2 does not necessarily need to be determined accurately, but it is sufficient to be able to calculate the range of the total length. In other words, it is sufficient to be able to determine whether the total length of the other vehicle V2 is longer or shorter than the length of the overlapping portion where the other vehicle V2 enters in the direction along the adjacent lane. Furthermore, the length of the overlapping portion in the direction along the adjacent lane is set as a fixed value for each overlapping portion when the imaging device 11 (particularly the wide-angle camera) is installed, and is registered in advance in the driving assistance device 19.
  • calculating the total length of the other vehicle V2 and determining whether the calculated total length of the other vehicle V2 is longer than the length of the overlapping portion in the direction along the adjacent lane are not essential components of the present invention, and may be added or omitted as necessary.
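As the text notes, an exact overall length is unnecessary; a bounded estimate suffices to decide whether the other vehicle is longer than the overlapping portion along the adjacent lane. A minimal sketch with a hypothetical helper and a three-valued result:

```python
from typing import Optional

def length_exceeds_overlap(length_low: float, length_high: float,
                           overlap_length: float) -> Optional[bool]:
    """Decide from a (low, high) bound on the other vehicle's overall length
    whether it exceeds the overlap's registered length along the adjacent lane.

    Returns True / False when the bound is decisive, and None when the
    overlap length falls inside the bound (undecided -- more data needed).
    """
    if length_low > overlap_length:
        return True      # even the shortest plausible length exceeds the overlap
    if length_high <= overlap_length:
        return False     # even the longest plausible length fits in the overlap
    return None
```

The overlap length itself is a fixed value per overlapping portion, determined when the wide-angle cameras are installed and registered in advance (in the embodiment, in the ROM 192).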
  • the driving assistance device 19 may determine the vehicle type of the other vehicle V2 using the function of the determination unit 22, and execute risk response control according to the determined vehicle type. Specifically, the driving assistance device 19 may determine whether the other vehicle V2 is a large vehicle or not, and execute risk response control using the function of the control unit 23 if it is determined that the other vehicle V2 is a large vehicle. Note that determining whether the other vehicle V2 is a large vehicle or not, and executing risk response control using the function of the control unit 23 if it is determined that the other vehicle V2 is a large vehicle, are not essential components of the present invention, and may be added or omitted as necessary.
  • Examples of vehicle types include private passenger cars such as light cars, small cars (compact cars), and regular cars, as well as manned or unmanned taxis, buses, and trucks.
  • a large vehicle is a vehicle whose overall length is longer than that of the host vehicle V1, for example, a vehicle whose overall length is at least 1.25 times that of the host vehicle V1.
  • Examples of vehicle types that fall under large vehicles include commercial vehicles such as buses and trucks that transport passengers or cargo.
  • the other vehicle V2 is a large vehicle such as a bus or truck, the driving assistance device 19 executes risk response control, and if the other vehicle V2 is a small private passenger car such as a compact car, it does not execute risk response control.
  • the control unit 23 may determine whether or not there is a single imaging device that can detect the entire other vehicle V2 among the multiple imaging devices 11 mounted on the vehicle V1. When it determines that there is no single imaging device 11 that can detect the entire other vehicle V2, risk response control may be executed. This is because if the entire body of the other vehicle V2 is detected by a single imaging device 11, the risk of misrecognizing the traveling state of the other vehicle V2 can be reduced.
  • determining whether or not there is a single imaging device that can detect the entire other vehicle V2 among the multiple imaging devices 11 mounted on the vehicle V1, and executing risk response control when it is determined that there is no single imaging device 11 that can detect the entire other vehicle V2 are not essential configurations for the present invention, and may be added or omitted as necessary.
  • the driving assistance device 19 may use the function of the control unit 23 to obtain the brightness value of each pixel from an image captured by the imaging device 11, calculate the average brightness value of the image by dividing the sum of the brightness values by the total number of pixels, and execute risk response control if the average brightness value is equal to or greater than a predetermined value. This is because, when a strong light source such as sunlight or reflected light enters the camera, the brightness values of the acquired image become saturated, the entire image becomes bright (white), and the shape of an obstacle cannot be accurately recognized from the contrast of the image. In other words, if the entire image is washed out to white, the driving state of the other vehicle V2 cannot be accurately recognized.
  • The predetermined value can be set to an appropriate value (for example, 200 or more) within a range in which the traveling state of the other vehicle V2 can be accurately recognized. Note that obtaining the brightness value of each pixel from a captured image, calculating the average brightness value of the image by dividing the sum of the brightness values by the total number of pixels, and executing risk response control if the average brightness value is equal to or greater than the predetermined value are not essential components of the present invention, and may be added or omitted as necessary.
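As a concrete illustration, the average-brightness check described above can be sketched as follows. This is a minimal sketch, assuming 8-bit pixel values and the example threshold of 200 mentioned above; the function names are illustrative, not the implementation of the driving assistance device 19.

```python
def average_brightness(pixels):
    """Average brightness: sum of all per-pixel brightness values
    divided by the total number of pixels."""
    total = sum(sum(row) for row in pixels)
    count = sum(len(row) for row in pixels)
    return total / count

def brightness_triggers_risk_control(pixels, threshold=200):
    # A washed-out (saturated) image hides obstacle contours, so a high
    # average brightness is treated as a cue to execute risk response control.
    return average_brightness(pixels) >= threshold
```

With a fully saturated 8-bit image the check fires; with a dark image it does not.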
  • FIGS. 7A to 7C are an example of a flowchart showing the information processing executed in the driving assistance system 10 of this embodiment. The processing described below is executed at predetermined time intervals by the CPU 191, which is the processor of the driving assistance device 19. Note that the flowcharts shown in FIGS. 7A to 7C are premised on a driving scene in which the host vehicle V1 is traveling on a road under lane keeping control.
  • In step S1 of FIG. 7A, the recognition unit 21 detects another vehicle V2 using the multiple imaging devices 11 mounted on the host vehicle V1.
  • In step S2, it is determined from the detection result whether or not another vehicle V2 is present around the host vehicle V1. If no other vehicle V2 is present around the host vehicle V1, the process proceeds to step S3, where normal autonomous driving control is executed, and then proceeds to step S20 of FIG. 7C. On the other hand, if another vehicle V2 is present around the host vehicle V1, the process proceeds to step S4, where the determination unit 22 predicts the traveling states of the host vehicle V1 and the other vehicle V2 after a predetermined time.
  • In step S6, the control unit 23 determines whether the host vehicle V1 will overtake the other vehicle V2. If it is determined that the host vehicle V1 will not overtake the other vehicle V2, the process proceeds to step S3. On the other hand, if it is determined that the host vehicle V1 will overtake the other vehicle V2, the process proceeds to step S18 in FIG. 7C. Note that step S6 is not essential to the present invention and may be provided as necessary.
  • In step S5, it is determined whether the host vehicle V1 and the other vehicle V2 will not be traveling in the same lane after the predetermined time. If it is determined that the other vehicle V2 will not enter the overlapping portion within the predetermined time, the process proceeds to step S3. On the other hand, if it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, the process proceeds to step S8, where the function of the determination unit 22 is used to calculate the total length D1 of the other vehicle V2 and the length D2 of the overlapping portion in the direction along the adjacent lane.
  • In step S9 of FIG. 7B, the function of the determination unit 22 is used to determine whether or not the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion. If the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion, the process proceeds to step S14 of FIG. 7C. On the other hand, if the overall length D1 of the other vehicle V2 is equal to or less than the length D2 of the overlapping portion, the process proceeds to step S10.
  • In step S10, the function of the determination unit 22 is used to determine whether or not the other vehicle V2 is a large vehicle. If it is determined that the other vehicle V2 is a large vehicle, the process proceeds to step S14 in FIG. 7C. On the other hand, if it is determined that the other vehicle V2 is not a large vehicle, the process proceeds to step S11.
  • In step S11, the function of the determination unit 22 is used to determine whether or not the other vehicle V2 can be detected in its entirety by a single imaging device 11. If the other vehicle V2 cannot be detected in its entirety by a single imaging device 11, the process proceeds to step S14 in FIG. 7C. On the other hand, if the other vehicle V2 can be detected in its entirety by a single imaging device 11, the process proceeds to step S12.
  • In step S12, the average brightness value of the image is calculated using the function of the determination unit 22, and in the following step S13, it is determined whether or not the average brightness value is equal to or greater than the predetermined value. If the average brightness value is less than the predetermined value, the process proceeds to step S3 in FIG. 7A. In contrast, if the average brightness value is equal to or greater than the predetermined value, the process proceeds to step S14 in FIG. 7C. Note that steps S9 to S13 are not essential steps of the present invention, and may be provided as necessary.
  • In step S14 of FIG. 7C, the control unit 23 determines whether or not the other vehicle V2 is traveling ahead of the host vehicle V1. If it is determined that the other vehicle V2 is not traveling ahead of the host vehicle V1, the process proceeds to step S15, where the traveling of the host vehicle V1 is autonomously controlled without using the detection result of the imaging device 11 or by reducing the weighting of the detection result. The process then proceeds to step S20.
  • In step S16, it is determined whether the host vehicle V1 will overtake the other vehicle V2. If it is determined that the host vehicle V1 will not overtake the other vehicle V2, the process proceeds to step S17, where the traveling of the host vehicle V1 is autonomously controlled so that the other vehicle V2 is not included in the overlapping portion of the detection ranges.
  • In step S18, a traveling speed higher than a predetermined traveling speed is set, and in the following step S19, the overtaking or passing is performed without using the detection result of the imaging device 11 or by setting the weighting of the detection result to a small value. The process then proceeds to step S20.
  • In step S20, the function of the support unit 20 is used to determine whether the host vehicle V1 has reached the destination. If it is determined that the host vehicle V1 has reached the destination, execution of the routine is terminated, and the display device 18 is used to prompt the driver of the host vehicle V1 to switch to manual driving. In contrast, if it is determined that the host vehicle V1 has not reached the destination, the process returns to step S1 in FIG. 7A.
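The branching of steps S1 to S20 above can be condensed into code roughly as follows. This is a simplified, non-authoritative sketch: the boolean inputs, function name, and return labels are illustrative assumptions, and the exact step ordering of FIGS. 7A to 7C is collapsed into one decision function.

```python
def select_control_action(
    other_vehicle_present,    # S2: another vehicle V2 detected around V1
    will_enter_overlap,       # S5: V2 will enter the overlapping detection range
    will_overtake,            # S6/S16: host vehicle V1 will overtake/pass V2
    length_exceeds_overlap,   # S9: total length D1 > overlap length D2
    is_large_vehicle,         # S10: V2 is a large vehicle (bus, truck, etc.)
    fits_single_camera,       # S11: one imaging device can capture all of V2
    image_washed_out,         # S13: average brightness >= predetermined value
    v2_is_ahead,              # S14: V2 is traveling ahead of V1
):
    """Return a label for the control branch the flowchart would take."""
    if not other_vehicle_present:
        return "normal_control"                      # S3
    if will_overtake:
        return "overtake_ignoring_detection"         # S18-S19
    if not will_enter_overlap:
        return "normal_control"                      # S3
    # S9-S13: check whether the misrecognition risk is actually present
    risk = (length_exceeds_overlap or is_large_vehicle
            or not fits_single_camera or image_washed_out)
    if not risk:
        return "normal_control"
    # S14-S17: choose the risk response depending on V2's position
    if not v2_is_ahead:
        return "control_ignoring_detection"          # S15
    return "keep_v2_out_of_overlap"                  # S17
```

For instance, a large vehicle entering the overlap ahead of V1, with no overtaking planned, maps to the branch that keeps V2 out of the overlapping portion.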
  • Here, manual driving refers to a mode in which the driving assistance device 19 does not perform autonomous control of the driving operations, and the traveling of the vehicle is instead controlled through the driver's operations.
  • The driving assistance device 19 and the driving assistance method according to the present invention can be used in any of the following cases: autonomous control of only the traveling speed of the vehicle, autonomous control of only the steering operation of the vehicle, and autonomous control of both the traveling speed and the steering operation of the vehicle.
  • The driving assistance device 19 and the driving assistance method according to the present invention can be used not only for autonomous driving control, but also to assist the driver's driving operations during manual driving.
  • As described above, in the driving assistance method of this embodiment, the processor determines whether or not the other vehicle V2 will enter the overlapping portion of the detection ranges within a predetermined time, and if it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, executes risk response control to control the host vehicle V1 so as to address the risk of erroneously recognizing the traveling state of the other vehicle V2 from the detection results of the plurality of imaging devices 11. This makes it possible to suppress the effect of erroneous recognition of the traveling state of the other vehicle V2 on the traveling state of the host vehicle V1.
  • In this embodiment, the risk response control includes at least one of: autonomously controlling the traveling of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion; and autonomously controlling the traveling of the host vehicle V1 so that the traveling state of the other vehicle V2 erroneously recognized from the detection results does not affect the traveling state of the host vehicle V1.
  • In this embodiment, autonomously controlling the traveling of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion includes setting the traveling speed of the host vehicle V1 and/or the distance between the host vehicle V1 and the other vehicle V2 so that the other vehicle V2 is not included in the detection ranges, and changing lanes of the host vehicle V1 to the adjacent lane. Autonomously controlling the traveling of the host vehicle V1 so that the erroneously recognized traveling state of the other vehicle V2 does not affect the traveling state of the host vehicle V1 includes autonomously controlling the traveling of the host vehicle V1 without using the detection results, and autonomously controlling the traveling of the host vehicle V1 with the weighting of the detection results set to a small value. This makes it possible to suppress, according to the traveling scene, the effect of erroneous recognition of the traveling state of the other vehicle V2 on the traveling state of the host vehicle V1.
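As a rough illustration of "not using the detection result" versus "setting the weighting of the detection result to a small value," one possible scheme is a weighted fusion of per-camera position estimates. The weighted-average form below is an assumption for illustration only, not the fusion method of the embodiment.

```python
def fuse_estimates(estimates, weights):
    """Weighted average of per-camera position estimates (x, y).

    A weight of 0 corresponds to not using that camera's detection result;
    a small weight corresponds to down-weighting it relative to the others.
    """
    total_w = sum(weights)
    if total_w == 0:
        return None  # no usable detection: fall back to other sensors
    x = sum(w * e[0] for e, w in zip(estimates, weights)) / total_w
    y = sum(w * e[1] for e, w in zip(estimates, weights)) / total_w
    return (x, y)
```

Zeroing one camera's weight makes the fused estimate coincide with the remaining camera, which is the effect the risk response control aims for when one detection is suspect.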
  • In this embodiment, the processor determines whether the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion in the direction along the adjacent lane, and if it determines that the overall length D1 is longer than the length D2 of the overlapping portion, executes the risk response control. This makes it possible to more accurately predict whether the other vehicle V2 will enter the overlapping portion.
  • In this embodiment, the processor determines whether the other vehicle V2 is a large vehicle, and if it determines that the other vehicle V2 is a large vehicle, executes the risk response control. This allows risk response control to be executed according to the vehicle type of the other vehicle V2.
  • In this embodiment, when the processor determines that the other vehicle V2 is not a large vehicle, it determines whether or not there is, among the multiple imaging devices 11, a single imaging device 11 that can detect the other vehicle V2 in its entirety, and when it determines that there is no such imaging device 11, it executes the risk response control. This makes it possible to refrain from executing risk response control when the risk of erroneously recognizing the traveling state of the other vehicle V2 is low.
  • In this embodiment, the processor acquires the brightness value of each pixel from the image captured using the imaging device 11, calculates the average brightness value of the image by dividing the sum of the brightness values by the total number of pixels, and executes the risk response control if the average brightness value is equal to or greater than a predetermined value. This makes it possible to execute risk response control when the shape of an obstacle cannot be accurately recognized from the contrast of the image.
  • In this embodiment, the processor determines whether the other vehicle V2 is traveling behind the host vehicle V1, and if it determines that the other vehicle V2 is traveling behind the host vehicle V1, autonomously controls the traveling of the host vehicle V1 without using the detection results or by reducing the weighting of the detection results. This makes it possible to execute risk response control according to the traveling scene.
  • In this embodiment, the processor determines whether the other vehicle V2 is traveling ahead of the host vehicle V1; if it determines that the other vehicle V2 is traveling ahead of the host vehicle V1, it determines whether the host vehicle V1 will overtake or pass the other vehicle V2, and if it determines that the host vehicle V1 will not overtake or pass the other vehicle V2, it autonomously controls the traveling of the host vehicle V1 so that the other vehicle V2 is not included in the overlapping portion. This makes it possible to execute risk response control according to the driving scene.
  • In this embodiment, when the processor determines that the host vehicle V1 will overtake or pass the other vehicle V2, it sets a traveling speed faster than a predetermined traveling speed set in advance, and performs the overtaking or passing without using the detection results or by setting the weighting of the detection results to a small value. This makes it possible to execute risk response control according to the driving scene.
  • The driving assistance device 19 of this embodiment includes: a plurality of imaging devices 11 mounted on the host vehicle V1, whose detection ranges overlap in a lane adjacent to the host vehicle lane in which the host vehicle V1 is traveling; a determination unit 22 that determines whether or not another vehicle V2 will enter the overlapping portion of the detection ranges within a predetermined time; and a control unit 23 that, if it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, executes risk response control to control the host vehicle V1 so as to address the risk of erroneously recognizing the traveling state of the other vehicle V2 from the detection results of the plurality of imaging devices 11. This makes it possible to suppress the effect of erroneous recognition of the traveling state of the other vehicle V2 on the traveling state of the host vehicle V1.
  • The driving assistance method and driving assistance device 19 according to the present invention include the following embodiments (1) to (11).
  • Embodiment (1): A driving assistance method in which, when the detection ranges of multiple imaging devices 11 mounted on the host vehicle V1 overlap in a lane adjacent to the host vehicle lane, it is determined whether another vehicle V2 will enter the overlapping portion of the detection ranges within a predetermined time, and if it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, risk response control is executed to control the host vehicle V1 so as to address the risk of erroneously recognizing the traveling state of the other vehicle V2 from the detection results of the multiple imaging devices 11.
  • Embodiment (2): The risk response control includes at least one of autonomously controlling the traveling of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion, and autonomously controlling the traveling of the host vehicle V1 so that the traveling state of the other vehicle V2 erroneously recognized from the detection results does not affect the traveling state of the host vehicle V1.
  • Embodiment (3): Autonomously controlling the traveling of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion includes setting the traveling speed of the host vehicle V1 and/or the distance between the host vehicle V1 and the other vehicle V2 so that the other vehicle V2 is not included in the detection ranges, and changing lanes of the host vehicle V1 to the adjacent lane; and autonomously controlling the traveling of the host vehicle V1 so that the erroneously recognized traveling state of the other vehicle V2 does not affect the traveling state of the host vehicle V1 includes autonomously controlling the traveling of the host vehicle V1 without using the detection results, and autonomously controlling the traveling of the host vehicle V1 with a small weighting set for the detection results.
  • Embodiment (4): It is determined whether the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion in the direction along the adjacent lane, and if it is determined that the overall length D1 is longer than the length D2 of the overlapping portion, the risk response control is executed.
  • Embodiment (5): It is determined whether the other vehicle V2 is a large vehicle, and if it is determined that the other vehicle V2 is a large vehicle, the risk response control is executed.
  • Embodiment (6): Among the multiple imaging devices 11, it is determined whether or not there is a single imaging device 11 that can detect the entirety of the other vehicle V2, and when it is determined that there is no single imaging device 11 that can detect the entirety of the other vehicle V2, the risk response control is executed.
  • Embodiment (7): The luminance value of each pixel is obtained from an image captured using the imaging device 11, the sum of the luminance values is divided by the total number of pixels to calculate an average luminance value of the image, and if the average luminance value is equal to or greater than a predetermined value, the risk response control is executed.
  • Embodiment (8): It is determined whether the other vehicle V2 is traveling behind the host vehicle V1, and if it is determined that the other vehicle V2 is traveling behind the host vehicle V1, the traveling of the host vehicle V1 is autonomously controlled without using the detection results or by reducing the weighting of the detection results.
  • Embodiment (9): It is determined whether the other vehicle V2 is traveling ahead of the host vehicle V1; if it is determined that the other vehicle V2 is traveling ahead of the host vehicle V1, it is determined whether the host vehicle V1 will overtake or pass the other vehicle V2, and if it is determined that the host vehicle V1 will not overtake or pass the other vehicle V2, the traveling of the host vehicle V1 is autonomously controlled so that the other vehicle V2 is not included in the overlapping portion.
  • Embodiment (10): It is determined whether the other vehicle V2 is traveling ahead of the host vehicle V1; if it is determined that the other vehicle V2 is traveling ahead of the host vehicle V1, it is determined whether the host vehicle V1 will overtake or pass the other vehicle V2, and if it is determined that the host vehicle V1 will overtake or pass the other vehicle V2, a traveling speed higher than a predetermined traveling speed set in advance is set, and the overtaking or passing is performed without using the detection results or by setting the weighting of the detection results to a small value.
  • Embodiment (11): A driving assistance device 19 including: a plurality of imaging devices 11 mounted on the host vehicle V1, whose detection ranges overlap in a lane adjacent to the host vehicle lane in which the host vehicle V1 is traveling; a determination unit 22 that determines whether another vehicle V2 will enter the overlapping portion of the detection ranges within a predetermined time; and a control unit 23 that, if it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, executes risk response control to control the host vehicle V1 so as to address the risk of erroneously recognizing the traveling state of the other vehicle V2 from the detection results of the plurality of imaging devices 11.
  • Embodiment (1) relates to a driving assistance method and may be combined with embodiments (2) to (10). Specific combinations include: a combination of embodiment (1) with any one to all of embodiments (2) and (4) to (10); and a combination of embodiments (1) to (3) with any one to all of embodiments (4) to (10).
  • Embodiment (11) relates to a driving assistance device 19 and may be combined with embodiments (2) to (10). Specific combinations include: a combination of embodiment (11) with any one to all of embodiments (2) and (4) to (10); and a combination of embodiment (11), embodiments (2) to (3), and any one to all of embodiments (4) to (10).
  • 10: Driving assistance system, 11: Imaging device, 12: Distance measuring device, 13: Vehicle state detection device, 14: Map information, 15: Vehicle position detection device, 16: Navigation device, 17: Vehicle control device, 171: Vehicle speed control device, 172: Steering control device, 18: Display device, 19: Driving assistance device, 191: CPU (processor), 192: ROM, 193: RAM, 20: Support unit, 21: Recognition unit, 22: Determination unit, 23: Control unit, A1, A2: Detection ranges, B1 to B4: Detection ranges, C1 to C4: Overlapping portions, D1: Total length, D2, D2a: Lengths, D3, D4: Distances, L1 to L3: Lanes, P1 to P9: Positions, T1 to T3: Travel trajectories, V1: Host vehicle, V2, V2x, V3: Other vehicles

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided are a driving assistance method and a driving assistance device (19) that perform risk handling control in which, when a portion of detection ranges of a plurality of imaging devices (11) that are mounted on a host vehicle (V1) overlap in a lane adjacent to a host vehicle lane in which the host vehicle (V1) travels, it is determined whether or not another vehicle (V2) will enter the overlapping portion of the detection ranges within a prescribed time, and when it has been determined that said other vehicle (V2) will enter the overlapping portion within the prescribed time, the host vehicle (V1) is controlled to handle the risk that the state of travel of said other vehicle (V2) will be incorrectly recognized from the detection results of the plurality of imaging devices (11).

Description

Driving assistance method and driving assistance device
The present invention relates to a driving assistance method and a driving assistance device.
A surround view system for vehicles is known that generates a three-dimensional model of the vehicle's surrounding environment based on data on the vehicle's surroundings, and maps visual data onto each part of the three-dimensional model, thereby reducing distortion in the generated virtual surround view (Patent Document 1).
JP 2019-200781 A
The process of mapping the above-mentioned visual data onto a three-dimensional model places a large load on the processing device. The above conventional technology therefore cannot adequately reduce distortion in the virtual surround view in driving scenes where the conditions around the host vehicle change from moment to moment, and cannot accurately recognize the traveling states of other vehicles around the host vehicle. As a result, unnecessary avoidance maneuvers are performed based on the misrecognized traveling states of other vehicles, and the behavior of the host vehicle is disturbed.
The problem to be solved by the present invention is to provide a driving assistance method and a driving assistance device capable of suppressing the effect that erroneous recognition of the traveling state of another vehicle has on the traveling state of the host vehicle.
The present invention solves the above problem by executing risk response control to control the host vehicle so as to address the risk of misrecognizing the traveling state of another vehicle from the detection results of multiple imaging devices, when the detection ranges of the multiple imaging devices mounted on the host vehicle partially overlap in a lane adjacent to the host vehicle lane in which the host vehicle is traveling and it is determined that the other vehicle will enter the overlapping portion within a predetermined time.
According to the present invention, it is possible to suppress the effect that erroneous recognition of the traveling state of another vehicle has on the traveling state of the host vehicle.
A block diagram showing an example of a driving assistance system including a driving assistance device according to the present invention.
A plan view showing an example of the imaging devices of FIG. 1.
A plan view showing an example of a detection result of another vehicle by the imaging devices shown in FIG. 2.
Plan views (parts 1 to 3) showing an example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1.
Plan views (parts 1 to 3) showing another example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1.
Plan views (parts 1 to 3) showing still another example of a driving scene in which driving assistance is performed by the driving assistance system shown in FIG. 1.
Flowcharts (parts 1 to 3) showing an example of the processing procedure in the driving assistance system of FIG. 1.
An embodiment of the present invention will now be described with reference to the drawings. Note that the following description assumes that vehicles travel on the left side of the road, as in countries with left-hand traffic laws. In countries with right-hand traffic laws, vehicles travel on the right side of the road, so the following description should be read with right and left reversed.
[Configuration of driving assistance system]
FIG. 1 is a block diagram showing a driving assistance system 10 according to the present invention. The driving assistance system 10 is an in-vehicle system that drives the vehicle, by autonomous driving control, to a destination set by a vehicle occupant (including the driver). Autonomous driving control means autonomously controlling the driving operations of the vehicle using the driving assistance device described later; these driving operations include all operations such as acceleration, deceleration, starting, stopping, steering to the right or left, lane changing, and pulling over. Autonomously controlling the driving operations means that the driving assistance device controls the driving operations using the devices of the vehicle. The driving assistance device controls these driving operations within a predetermined range; driving operations not controlled by the driving assistance device are performed manually by the driver.
As shown in FIG. 1, the driving assistance system 10 includes an imaging device 11, a distance measuring device 12, a vehicle state detection device 13, map information 14, a vehicle position detection device 15, a navigation device 16, a vehicle control device 17, a display device 18, and a driving assistance device 19. The devices constituting the driving assistance system 10 are connected by a CAN (Controller Area Network) or other in-vehicle LAN and can exchange information with one another.
The imaging device 11 is a device that recognizes objects around the vehicle from images, and is, for example, a camera equipped with an imaging element such as a CCD, an ultrasonic camera, or an infrared camera. Multiple imaging devices 11 can be provided on a single vehicle; for example, they can be placed in the vehicle's front grille, under the left and right door mirrors, and near the rear bumper. This reduces blind spots when recognizing objects around the vehicle.
The distance measuring device 12 is a device for calculating the relative distance and relative speed between the vehicle and an object, and is, for example, a radar device or sonar such as a laser radar, a millimeter wave radar (LRF or the like), a LiDAR (light detection and ranging) unit, or an ultrasonic radar. Multiple distance measuring devices 12 can be provided on a single vehicle; for example, they can be placed at the front, right side, left side, and rear of the vehicle. This makes it possible to accurately calculate the relative distance and relative speed between the vehicle and objects around it.
 Objects detected by the imaging device 11 and the distance measuring device 12 include lane boundary lines, center lines, road markings, medians, guardrails, curbs, expressway sidewalls, road signs, traffic lights, crosswalks, construction sites, accident sites, and traffic restrictions. Objects also include obstacles that may affect the travel of the host vehicle, such as other automobiles, motorcycles, bicycles, and pedestrians. The detection results of the imaging device 11 and the distance measuring device 12 are acquired by the driving assistance device 19 at predetermined time intervals as necessary, and the interval can be set to an appropriate value according to the processing capacity of the driving assistance device 19.
 The detection results of the imaging device 11 and the distance measuring device 12 can also be integrated or synthesized by the driving assistance device 19 (so-called sensor fusion) to supplement missing information about a detected object. For example, the driving assistance device 19 can calculate the position of an object from the self-position information acquired by the vehicle position detection device 15 (the position at which the vehicle is traveling) and the relative position (distance and direction) of the object with respect to the vehicle. The calculated object position is then integrated with other information, such as the detection results of the imaging device 11 and the distance measuring device 12 and the map information 14, to form driving environment information for the surroundings of the vehicle. The detection results and the map information 14 can also be used to recognize objects around the vehicle and predict their movements.
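The position calculation described above, combining the host vehicle's self-position with an object's relative distance and direction, can be sketched in Python as follows. The function name and the flat two-dimensional pose model are illustrative assumptions, not part of the embodiment:

```python
import math

def object_global_position(host_x, host_y, host_heading_rad,
                           rel_distance, rel_bearing_rad):
    """Convert a (distance, bearing) measurement relative to the host
    vehicle's pose into a global (x, y) position for the object."""
    # The bearing is measured from the host's heading; rotate it into
    # the global frame and translate by the host position.
    angle = host_heading_rad + rel_bearing_rad
    obj_x = host_x + rel_distance * math.cos(angle)
    obj_y = host_y + rel_distance * math.sin(angle)
    return obj_x, obj_y

# Host at (100.0, 50.0) with heading 0 rad; object 20 m dead ahead.
print(object_global_position(100.0, 50.0, 0.0, 20.0, 0.0))  # (120.0, 50.0)
```

In a real system the resulting coordinates would then be fused with the map information 14 and the other detection results, as the paragraph above describes.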
 The vehicle state detection device 13 detects the running state of the vehicle and includes, for example, a vehicle speed sensor, an acceleration sensor, a yaw rate sensor (for example, a gyro sensor), a steering angle sensor, and an inertial measurement unit. These devices are not particularly limited, and known devices can be used. Their arrangement and number can be set as appropriate within a range in which the running state of the vehicle can be properly detected. The detection result of each device is acquired by the driving assistance device 19 at predetermined time intervals as necessary.
 The map information 14 is used for generating a travel route, controlling travel operations, and the like, and includes road information, facility information, and their attribute information. The road information and road attribute information include the road width, the curvature radius of the road, shoulder structures, road traffic regulations (speed limits, whether lane changes are permitted), road merging and branching points, and positions where the number of lanes increases or decreases. The map information 14 is high-definition map information from which the movement trajectory of each lane can be grasped, and includes two-dimensional and/or three-dimensional position information at each map coordinate, road and lane boundary information at each map coordinate, road attribute information, lane up/down information, lane identification information, and connecting-lane information. Such a high-precision map is also called an HD (High-Definition) map.
 The road and lane boundary information of the high-definition map information indicates the boundary between the track on which the vehicle travels and everything else. The track is the path along which the vehicle travels, and its form is not particularly limited. Boundaries exist on both the left and right of the vehicle's direction of travel, and their form is likewise not limited: a boundary may be, for example, a road marking such as a lane boundary line or center line, or a road structure such as a median, a guardrail, a curb, a tunnel, or an expressway sidewall. At points where the track boundary cannot be clearly identified, such as inside an intersection, a boundary is set for the track in advance; this boundary is imaginary and is not an actually existing road marking or road structure.
 The map information 14 is stored in a readable state on a recording medium provided in the driving assistance device 19, in an in-vehicle device, or on a server on a network. The driving assistance device 19 acquires the map information 14 as necessary.
 The vehicle position detection device 15 is a positioning system for detecting the current position of the host vehicle; it is not particularly limited, and a known system can be used. For example, the vehicle position detection device 15 calculates the current position of the vehicle from radio waves received from GPS (Global Positioning System) satellites. Alternatively, it may estimate the current position from the vehicle speed and acceleration information acquired from the vehicle speed sensor, acceleration sensor, and gyro sensor of the vehicle state detection device 13, and calculate the current position by matching the estimate against the map information 14.
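The dead-reckoning estimate mentioned here, integrating speed and gyro readings before matching against the map information 14, might look like the following minimal sketch. The planar single-pose model and all names are illustrative, not the embodiment's implementation:

```python
import math

def dead_reckon(x, y, heading_rad, speed_mps, yaw_rate_radps, dt):
    """One dead-reckoning step: advance the pose estimate from a
    wheel-speed reading and a gyro (yaw-rate) reading over time step dt.
    A real system would fuse this with GNSS fixes and map matching."""
    heading = heading_rad + yaw_rate_radps * dt
    x += speed_mps * math.cos(heading) * dt
    y += speed_mps * math.sin(heading) * dt
    return x, y, heading

# Driving straight (heading 0 rad) at 10 m/s for 1 s moves 10 m along x.
print(dead_reckon(0.0, 0.0, 0.0, 10.0, 0.0, 1.0))  # (10.0, 0.0, 0.0)
```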
 The navigation device 16 refers to the map information 14 and calculates a travel route from the current position of the vehicle detected by the vehicle position detection device 15 to a destination set by an occupant (including the driver). Using the road information and facility information in the map information 14, the navigation device 16 searches for a travel route along which the vehicle can reach the destination from its current position. The travel route includes at least the road on which the vehicle travels, the travel lane, and the travel direction of the vehicle, and is displayed, for example, as a line. Depending on the search conditions, a plurality of travel routes may exist. The travel route calculated by the navigation device 16 is output to the driving assistance device 19.
 The vehicle control device 17 is an on-board computer such as an electronic control unit (ECU) and electronically controls the on-board equipment that governs the travel of the vehicle. The vehicle control device 17 comprises a vehicle speed control device 171 that controls the travel speed of the vehicle and a steering control device 172 that controls the steering operation of the vehicle. The vehicle speed control device 171 and the steering control device 172 autonomously control the operation of the drive devices and the steering device in response to control signals input from the driving assistance device 19, allowing the vehicle to travel autonomously along the set travel route. The information necessary for this autonomous control, such as the travel speed, acceleration, steering angle, and attitude of the vehicle, is acquired from the vehicle state detection device 13.
 The drive devices controlled by the vehicle speed control device 171 include an electric motor and/or an internal combustion engine serving as the travel drive source, a power transmission device including a drive shaft and an automatic transmission that transmit the output of the travel drive source to the drive wheels, and a drive device that controls the power transmission device. The vehicle speed control device 171 also controls a braking device, for example one that brakes the wheels. A control signal corresponding to the set travel speed is input to the vehicle speed control device 171 from the driving assistance device 19. Based on this control signal, the vehicle speed control device 171 generates signals for controlling the drive devices and transmits them to the drive devices, thereby autonomously controlling the travel speed of the vehicle.
 The steering device controlled by the steering control device 172, on the other hand, controls the steered wheels in accordance with the rotation angle of the steering wheel, and includes, for example, a steering actuator such as a motor attached to the steering column shaft. Based on a control signal input from the driving assistance device 19, the steering control device 172 autonomously controls the operation of the steering device so that the vehicle travels while maintaining a predetermined lateral position (the left-right position of the vehicle) with respect to the set travel route. For this control, at least one of the following is used: the detection results of the imaging device 11 and the distance measuring device 12, the running state of the vehicle acquired by the vehicle state detection device 13, the map information 14, and the current position of the vehicle acquired by the vehicle position detection device 15.
 The display device 18 provides necessary information to the occupants of the vehicle and is, for example, a liquid crystal display provided in the instrument panel or a projector such as a head-up display (HUD). The display device 18 may also include an input device with which an occupant inputs instructions to the driving assistance device 19, such as a touch panel operated by the user's finger or a stylus pen, a microphone that acquires the user's spoken instructions, or a switch attached to the steering wheel of the vehicle. The display device 18 may further include a speaker as an output device.
 The driving assistance device 19 controls and coordinates the devices constituting the driving assistance system 10 so as to control the travel of the vehicle and drive it to a set destination. The destination is set, for example, by an occupant of the vehicle. The driving assistance device 19 is, for example, a computer comprising a CPU (Central Processing Unit) 191 as a processor, a ROM (Read Only Memory) 192 in which programs are stored, and a RAM (Random Access Memory) 193 functioning as an accessible storage device. The CPU 191 is an operating circuit that executes the programs stored in the ROM 192 to realize the functions of the driving assistance device 19.
 The driving assistance device 19 has a driving assistance function of driving the vehicle to the set destination by autonomous travel control. As driving assistance functions, it has a route generation function that generates a travel route, an environment recognition function that recognizes the driving environment around the vehicle, a determination function that makes the determinations necessary for executing autonomous travel control based on the recognized driving environment, and a travel control function that generates a travel trajectory and drives the vehicle along it. The programs stored in the ROM 192 include programs for realizing these functions, and the functions are realized when the CPU 191 executes those programs. FIG. 1 shows, for convenience, the functional blocks extracted to realize each function.
[Functions of each functional block]
 The functions of the functional blocks shown in FIG. 1, namely the support unit 20, the recognition unit 21, the determination unit 22, and the control unit 23, are described below.
 The support unit 20 has a driving assistance function of driving the vehicle to the set destination by autonomous travel control. FIG. 2 is a plan view showing an example of a driving scene in which the driving assistance device 19 autonomously controls the travel of the vehicle by the driving assistance function. In the driving scene shown in FIG. 2, a three-lane road extends in the vertical direction of the drawing, and the vehicle travels on the road from the bottom toward the top of the drawing. As shown in FIG. 2, the lanes are designated L1, L2, and L3 in order from the left in the travel direction. In this scene, the host vehicle V1 is traveling in lane L2 toward a destination ahead (not shown) set by an occupant of the host vehicle V1.
 The recognition unit 21 has an environment recognition function of recognizing the driving environment around the vehicle. Using this function, the driving assistance device 19 recognizes the driving environment around the vehicle with the imaging device 11 and the distance measuring device 12. The driving environment is information for determining whether the vehicle can maintain its current running state or needs to change it, and includes, for example, the type and position of objects, the type and position of any obstacles, road conditions such as the road surface state, and the weather. The driving assistance device 19 recognizes the driving environment by applying appropriate processing, such as pattern matching and sensor fusion, to the detection results of the imaging device 11 and the distance measuring device 12.
 The recognition unit 21 of the present embodiment has a function of detecting obstacles using a plurality of imaging devices 11 mounted on the host vehicle V1. For example, as shown in FIG. 2, the host vehicle V1 includes a front camera that detects obstacles present in a detection range A1 ahead of the host vehicle V1 and a rear camera that detects obstacles present in a detection range A2 behind it. In addition, the host vehicle V1 includes a front wide-angle camera that detects obstacles in a detection range B1 ahead of the host vehicle V1, a rear wide-angle camera that detects obstacles in a detection range B2 behind it, a left-side wide-angle camera that detects obstacles in a detection range B3 on its left, and a right-side wide-angle camera that detects obstacles in a detection range B4 on its right.
 Because a wide-angle camera is equipped with a wide-angle lens, it has a wider angle of view and a shorter focal length than a normal camera. Consequently, the detection range B1 of the front wide-angle camera extends a shorter distance along lane L2 than the detection range A1 of the front camera but covers a wider angle in the width direction of lane L2. Similarly, the detection range B2 of the rear wide-angle camera extends a shorter distance along lane L2 than the detection range A2 of the rear camera but covers a wider angle in the width direction of lane L2.
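The stated relation between focal length and angle of view follows from the standard rectilinear-lens formula FOV = 2·atan(w / 2f). The sketch below uses made-up sensor and lens values, not those of any camera in the embodiment:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view of a simple rectilinear lens model."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Same sensor behind a short (wide-angle) lens versus a longer lens:
# the shorter focal length yields the wider angle of view.
wide = horizontal_fov_deg(6.4, 2.0)    # roughly 116 degrees
normal = horizontal_fov_deg(6.4, 8.0)  # roughly 44 degrees
print(wide > normal)  # True
```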
 Using the function of the recognition unit 21, the driving assistance device 19 integrates and processes the detection results of the front, rear, left-side, and right-side wide-angle cameras by sensor fusion so as to detect obstacles around the host vehicle V1 without gaps. These wide-angle cameras are arranged so that the detection ranges of adjacent cameras partly overlap (for example, over about 10 to 15% of the angle of view from the horizontal edge of each detection range), so that no blind spot in which obstacles cannot be detected arises around the host vehicle V1. In the present embodiment, the detection ranges of the plurality of imaging devices 11 mounted on the host vehicle V1 are arranged to overlap in the lanes adjacent to the lane in which the host vehicle V1 travels. Hereinafter, a portion where the detection ranges of the imaging devices 11 overlap is referred to as an overlapping portion.
 For example, in the driving scene shown in FIG. 2, the front detection range B1 and the left-side detection range B3 overlap in the range indicated by overlapping portion C1, and the rear detection range B2 and the left-side detection range B3 overlap in the range indicated by overlapping portion C2. Both overlapping portions C1 and C2 lie in lane L1, adjacent to lane L2 in which the host vehicle V1 travels. Likewise, the front detection range B1 and the right-side detection range B4 overlap in the range indicated by overlapping portion C3, and the rear detection range B2 and the right-side detection range B4 overlap in the range indicated by overlapping portion C4. Both overlapping portions C3 and C4 lie in lane L3, adjacent to lane L2.
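One simple way to model such overlapping portions is to treat each camera's detection range as a sector and count how many sectors cover a given point. The geometry below is purely illustrative and does not reproduce the actual mounting positions or angles of view of the embodiment:

```python
import math

def in_fov(cam_x, cam_y, cam_heading_rad, fov_rad, max_range, px, py):
    """True if point (px, py) lies inside a camera's sector-shaped
    detection range (heading +/- fov/2, out to max_range)."""
    dx, dy = px - cam_x, py - cam_y
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx) - cam_heading_rad
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return abs(bearing) <= fov_rad / 2

def in_overlap(cams, px, py):
    """True if at least two cameras' detection ranges cover the point."""
    return sum(in_fov(*cam, px, py) for cam in cams) >= 2

# Front wide-angle camera facing +x and left-side camera facing +y,
# both at the origin (illustrative values only).
cams = [(0.0, 0.0, 0.0, math.radians(150), 30.0),
        (0.0, 0.0, math.pi / 2, math.radians(150), 30.0)]
print(in_overlap(cams, 10.0, 10.0))  # True: the 45-degree point is seen by both
```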
 It is known that in the overlapping portions C1, C2, C3, and C4 described above, the state of an obstacle often cannot be detected accurately. One reason is that an overlapping portion lies at the edge of the wide-angle lens's angle of view, so the characteristics of the lens distort the photographed shape of an obstacle there. Another is that when an obstacle in the adjacent lane extends across the front and rear of an overlapping portion, no single camera can photograph the entire obstacle, and the obstacle must be recognized by integrating the detection results of the plurality of cameras mounted on the host vehicle V1; in that integration, the state of the obstacle cannot always be estimated accurately.
 The driving scene shown in FIG. 3 is the driving scene of FIG. 2 with another vehicle V2 traveling in lane L1. The other vehicle V2 is a truck, a large vehicle unlike the host vehicle V1, which is a passenger car; that is, the overall length of the other vehicle V2 is greater than that of the host vehicle V1. In the scene shown in FIG. 3, the other vehicle V2 is traveling at a position on the overlapping portion C1 of lane L1, and its body extends across the front and rear of the overlapping portion C1.
 For example, in the driving scene shown in FIG. 3, if the driving assistance device 19 recognizes the running state of the other vehicle V2 by integrating image data acquired from the front camera with distance data to the obstacle acquired from the distance measuring device, it recognizes that the other vehicle V2 is traveling in the state labeled V2 in FIG. 3; that is, it correctly recognizes that the other vehicle V2 is traveling straight in lane L1. In this case, the driving assistance device 19, by its travel control function, keeps the host vehicle V1 traveling straight in lane L2 under lane-keeping control and performs no maneuver to avoid the other vehicle V2. If, on the other hand, the other vehicle V2 changes lanes toward a position ahead of the host vehicle V1, as indicated by V2x in FIG. 3, the driving assistance device 19 performs a maneuver to avoid the other vehicle V2 based on its recognized running state: for example, it decelerates the host vehicle V1 using the vehicle speed control device 171 and, instead of or in addition to this, changes lanes from lane L2 to lane L3 using the steering control device 172.
 Thus, when the running state of the other vehicle V2 is recognized using a single imaging device 11 together with a detection device such as the distance measuring device 12, no misrecognition arises from overlapping detection ranges of the plurality of imaging devices 11 mounted on the host vehicle V1. In contrast, in the driving scene shown in FIG. 3, when the recognition unit 21 recognizes the running state of the other vehicle V2 by integrating image data acquired from the front wide-angle camera and the left-side wide-angle camera, the driving assistance device 19 recognizes the other vehicle V2 as traveling, for example, in the state labeled V2x in FIG. 3. That is, when the running state of the other vehicle V2 is recognized by integrating the image data of the plurality of imaging devices 11 mounted on the host vehicle V1, the other vehicle V2, which is actually traveling straight, is erroneously recognized as steering to the right and changing lanes from lane L1 to lane L2. In that case, the driving assistance device 19, by its travel control function, performs a maneuver to avoid the other vehicle V2 based on the misrecognized running state. This avoidance maneuver is unnecessary, since the other vehicle V2 is in fact traveling straight; it disturbs the behavior of the host vehicle V1 and gives the occupants of the host vehicle V1 a sense of unease.
 Therefore, in order to suppress the influence that a misrecognized running state of the other vehicle V2 would have on the running state of the host vehicle V1, the driving assistance device 19 of the present embodiment executes autonomous travel control that addresses the risk of misrecognizing the running state of the other vehicle V2. This autonomous travel control is governed mainly by the functions of the determination unit 22 and the control unit 23, which are described below with reference to FIGS. 4A to 4C. In the following description, an overlapping portion of the detection ranges of the plurality of imaging devices 11 mounted on the host vehicle V1 is also referred to simply as an "overlapping portion."
 As its determination function, the determination unit 22 determines whether the other vehicle V2 will enter an overlapping portion of the detection ranges of the plurality of imaging devices 11. The driving assistance device 19 makes this determination, by the function of the determination unit 22, based on the driving environment information around the host vehicle V1 acquired by the function of the recognition unit 21. This makes it possible to start control that suppresses the risk of misdetection before another vehicle V2 that may cross the overlapping range enters the overlapping portion, where misdetection is likely to occur.
 When determining whether the other vehicle V2 will enter an overlapping portion of the detection ranges of the plurality of imaging devices 11, the driving assistance device 19 recognizes, for example, the running states of the host vehicle V1 and the other vehicle V2 from the driving environment information acquired by the function of the recognition unit 21. The running state of a vehicle is the state of its travel direction and travel speed, and includes states such as traveling straight, steering to the right or left, accelerating or decelerating, and traveling at a constant speed. The running state also includes the state of the vehicle's travel operations, for example flashing a turn signal or illuminating the headlights.
 Regarding the running state of the host vehicle V1, the driving assistance device 19, by the function of the determination unit 22, acquires information such as the travel speed, acceleration, yaw rate, steering angle, and steering-wheel rotation angle of the host vehicle V1 from the various sensors of the vehicle state detection device 13 and recognizes the current running state of the host vehicle V1. Instead of or in addition to this, the driving assistance device 19 may acquire road information from the map information 14, the current position of the host vehicle V1 from the vehicle position detection device 15, and the travel route from the navigation device 16, and recognize the travel direction and/or travel speed of the host vehicle V1 from the shape of the road at the current position and/or from the travel route.
 In contrast, when predicting the driving state of the other vehicle V2, the driving assistance device 19 uses the function of the determination unit 22 to, for example, acquire image data from the imaging device 11, extract and identify obstacles by pattern matching, and recognize the type, position, and state of each obstacle. It also acquires information obtained by scanning the surroundings of the host vehicle V1 from the distance measuring device 12, and recognizes the position and direction of an obstacle from that information. If it recognizes from the image data acquired from the imaging device 11 that the obstacle is another vehicle V2, it recognizes from its shape how far the vehicle body is tilted (that is, how far it is steering). It also acquires the position and relative speed of the other vehicle V2 with respect to the host vehicle V1 from the scan results of the distance measuring device 12. It then recognizes the traveling position, traveling direction, and traveling speed of the other vehicle V2 based on these detection results.
 The driving assistance device 19 determines whether the other vehicle V2 will enter the overlapping portion based on the recognized driving states of the host vehicle V1 and the other vehicle V2. Specifically, it makes this determination based on the positional relationship between the host vehicle V1 and the other vehicle V2, the speed difference between them, and their traveling directions.
 For example, when the other vehicle V2 is traveling behind the host vehicle V1, if the traveling speed of the other vehicle V2 is higher than that of the host vehicle V1 and the other vehicle V2 is heading toward the overlapping portion, it is determined that the other vehicle V2 will enter the overlapping portion. Conversely, when the other vehicle V2 is traveling behind the host vehicle V1 and its traveling speed is equal to or lower than that of the host vehicle V1, it is determined that the other vehicle V2 will not enter the overlapping portion. Likewise, when the other vehicle V2 is traveling ahead of the host vehicle V1 and its traveling speed is equal to or higher than that of the host vehicle V1, it is determined that the other vehicle V2 will not enter the overlapping portion. Conversely, when the other vehicle V2 is traveling ahead of the host vehicle V1, its traveling speed is lower than that of the host vehicle V1, and the host vehicle V1 (in particular the overlapping portion) is heading toward the other vehicle V2, it is determined that the other vehicle V2 will enter the overlapping portion.
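 The position- and speed-based rules above can be sketched as follows. The publication contains no program code; the function name, arguments, and decision structure are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch of the overlap-entry determination rules.
# All names are assumptions; speeds are in any consistent unit.

def will_enter_overlap(other_is_behind: bool,
                       other_speed: float,
                       host_speed: float,
                       other_heading_to_overlap: bool,
                       host_heading_to_other: bool) -> bool:
    """Decide whether the other vehicle V2 will enter the overlapping
    portion of the camera detection ranges of the host vehicle V1."""
    if other_is_behind:
        # A faster vehicle behind, heading toward the overlap, will enter it;
        # one at equal or lower speed will not.
        return other_speed > host_speed and other_heading_to_overlap
    # Other vehicle ahead: the host (and hence its overlap region) closes in
    # only when the host is faster and heading toward the other vehicle.
    return other_speed < host_speed and host_heading_to_other
```

For instance, a vehicle behind at 30 m/s heading toward the overlap, with the host at 25 m/s, yields an "enters overlap" result; the same vehicle at 20 m/s does not.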
 The method by which the driving assistance device 19 determines whether the other vehicle V2 will enter the overlapping portion is not limited to the above; other methods may be used. For example, the driving assistance device 19 may predict the driving states of the host vehicle V1 and the other vehicle V2 after a predetermined time, and determine, based on the predicted driving states, whether the other vehicle V2 will enter the overlapping portion within the predetermined time.
 The predetermined time can be set to any appropriate value within a range that allows autonomous driving control addressing the risk of misrecognizing the driving state to be started before the other vehicle V2 actually enters the overlapping portion, for example 10 to 20 seconds. If the predetermined time is shorter than this, the start of that autonomous driving control is delayed and the change in the behavior of the host vehicle V1 becomes larger. Conversely, if the predetermined time is longer than this, the driving state can no longer be predicted accurately, and the autonomous driving control may be executed in driving scenes where it is unnecessary.
 When predicting the driving state of the host vehicle V1 after the predetermined time, the driving assistance device 19 recognizes the current driving state of the host vehicle V1 based on information acquired from the host vehicle state detection device 13. In addition, it acquires the control signals output to the drive device and/or steering device from the vehicle control device 17, and recognizes how the traveling direction and/or traveling speed of the host vehicle V1 will be controlled (that is, how they will change). Based on these, it predicts how the driving state of the host vehicle V1 will change after the predetermined time. Alternatively, the driving assistance device 19 may acquire road information from the map information 14, the current position of the host vehicle V1 from the host vehicle position detection device 15, and the travel route from the navigation device 16, and predict the traveling direction and/or traveling speed of the host vehicle V1 after the predetermined time from the shape of the road ahead of the current position of the host vehicle V1 and/or the travel route.
 In contrast, when predicting the driving state of the other vehicle V2 after the predetermined time, the driving assistance device 19 acquires information obtained by scanning the surroundings of the host vehicle V1 from the distance measuring device 12, and recognizes the position and direction of an obstacle from that information. The driving assistance device 19 repeats this recognition process multiple times (for example, three or more times) at time intervals shorter than the predetermined time, recognizes the trend of changes in the obstacle position, and predicts the state of the obstacle after the predetermined time (that is, the driving state of the other vehicle V2) from that trend.
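 One way to turn repeated scan samples into a prediction, consistent with the trend-based description above, is a least-squares linear extrapolation. This is only a minimal sketch under that assumption; the publication does not specify the fitting method, and all names are illustrative.

```python
# Minimal sketch: fit a line to repeated (time, position) samples of the
# obstacle and extrapolate its position "horizon" seconds past the last
# sample. A real system would do this per axis and per obstacle.

def predict_position(samples, horizon):
    """samples: list of (t, x) observations taken at short intervals.
    horizon: seconds beyond the last sample (the predetermined time).
    Returns the extrapolated x from a least-squares line fit."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_x = sum(x for _, x in samples) / n
    num = sum((t - mean_t) * (x - mean_x) for t, x in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den                      # estimated velocity along x
    last_t = samples[-1][0]
    return mean_x + slope * (last_t + horizon - mean_t)
```

With samples at 0 s, 1 s, and 2 s showing 2 m of advance per second, the position 10 s after the last sample extrapolates to 24 m.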
 The driving assistance device 19 then determines whether the other vehicle V2 will enter the overlapping portion within the predetermined time based on the predicted driving states of the host vehicle V1 and the other vehicle V2. Specifically, it determines whether the other vehicle V2 will enter the overlapping portion from the positional relationship between the host vehicle V1 and the other vehicle V2 after the predetermined time. This makes it possible to determine whether the other vehicle V2 will enter the overlapping portion even when the driving states of both the host vehicle V1 and the other vehicle V2 change.
 The prediction of the driving state will now be explained concretely using FIG. 4A. The driving scene shown in FIG. 4A is the driving scene shown in FIG. 2 with another vehicle V2 present in lane L3. The other vehicle V2 is traveling at position P1 behind the host vehicle V1, and the driving assistance device 19 of the host vehicle V1 recognizes, through the function of the recognition unit 21, the other vehicle V2 present in the detection range A2 of the rear camera. It is further assumed that the host vehicle V1 is traveling at a constant speed under lane keeping control, that the other vehicle V2 is traveling straight at a higher speed than the host vehicle V1, and that the other vehicle V2 will overtake the host vehicle V1 within the predetermined time.
 In this case, the driving assistance device 19 acquires vehicle speed information of the host vehicle V1 from the vehicle speed sensor (host vehicle state detection device 13) and calculates the traveling position of the host vehicle V1 after constant-speed traveling under lane keeping control has continued for the predetermined time (10 to 20 seconds). The driving assistance device 19 also acquires image data from the imaging device 11 and recognizes by pattern matching that the obstacle is a truck (the other vehicle V2) traveling in lane L3. In addition, from the detection result of the rear camera (imaging device 11), it recognizes that the other vehicle V2 is traveling straight without steering. Furthermore, from the scan result of the distance measuring device 12, it acquires the relative speed of the other vehicle V2 with respect to the host vehicle V1. The driving assistance device 19 calculates the traveling speed of the other vehicle V2 from this relative speed and predicts where the straight-traveling other vehicle V2 will be after the predetermined time. In the driving scene shown in FIG. 4A, the other vehicle V2 overtakes the host vehicle V1 within the predetermined time, so the driving assistance device 19 determines, from the relationship between the calculated traveling position of the host vehicle V1 and the predicted traveling position of the other vehicle V2, that the other vehicle V2 will enter the overlapping portions C3 and C4 within the predetermined time.
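 The FIG. 4A prediction reduces to simple kinematics. The publication gives no concrete values, so the speeds, gap, and horizon below are assumed numbers chosen only to make the arithmetic visible:

```python
# Worked example for the FIG. 4A scene with assumed numbers: host vehicle V1
# at constant speed under lane keeping control, other vehicle V2 straight
# and faster in lane L3, starting behind V1.

host_speed = 25.0    # m/s, host vehicle V1 (assumed)
other_speed = 27.0   # m/s, other vehicle V2 (assumed)
initial_gap = 15.0   # m, V2 starts this far behind V1 (assumed)
horizon = 15.0       # s, predetermined time, within the 10-20 s range

host_pos = host_speed * horizon                    # V1 position after horizon
other_pos = -initial_gap + other_speed * horizon   # V2 position, same origin

# V2 ends up ahead of V1 within the horizon, so on the way it must pass
# through the side overlap portions C3 and C4 of the camera ranges.
overtakes = other_pos > host_pos
```

With these numbers V2 gains 2 m/s on V1, closing the 15 m gap in 7.5 s and ending 15 m ahead, so the overlap-entry determination is affirmative.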
 Note that predicting the driving states of the host vehicle V1 and the other vehicle V2 after the predetermined time, and determining from the predicted driving states whether the other vehicle V2 will enter the overlapping portion within the predetermined time, are not essential components of the present invention and may be provided as necessary.
 When the determination unit 22 determines that the other vehicle V2 will enter the overlapping portion, the control unit 23 has a function of executing autonomous driving control that addresses the risk of misrecognizing the driving state of the other vehicle V2 from the detection results of the multiple imaging devices 11 mounted on the host vehicle V1. Autonomous driving control that addresses this risk is, for example, control that suppresses the possibility of misrecognizing the driving state of the other vehicle V2 traveling in the overlapping portion, as shown in FIG. 3, or that suppresses the effect of such a misrecognition on the driving state of the host vehicle V1 even if it occurs. Hereinafter, autonomous driving control that controls the host vehicle V1 so as to address the risk of misrecognizing the driving state of the other vehicle V2 from the detection results of the imaging devices 11 mounted on the host vehicle V1 is also referred to as risk handling control.
 As risk handling control, the driving assistance device 19 autonomously controls the traveling of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion. Alternatively, or in addition, as risk handling control, the driving assistance device 19 may autonomously control the traveling of the host vehicle V1 so that a driving state of the other vehicle V2 misrecognized from the detection results of the imaging devices 11 does not affect the driving state of the host vehicle V1.
 Autonomously controlling the traveling of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion includes setting the traveling speed of the host vehicle V1 so that the other vehicle V2 is not included in the detection ranges of the imaging devices 11 (in particular, the overlapping portion of the detection ranges), setting the inter-vehicle distance between the host vehicle V1 and the other vehicle V2 so that the other vehicle V2 is not included in those detection ranges (in particular, the overlapping portion), and causing the host vehicle V1 to change lanes to an adjacent lane. Autonomously controlling the traveling of the host vehicle V1 so that a misrecognized driving state of the other vehicle V2 does not affect the driving state of the host vehicle V1 includes autonomously controlling the traveling of the host vehicle V1 without using the detection results of the imaging devices 11, and autonomously controlling it with the weighting of those detection results set to a small value.
 The driving assistance device 19 executes risk handling control according to the driving scene, and may execute an appropriate combination of the multiple risk handling controls described above. Specifically, the driving assistance device 19 executes appropriate risk handling control based on, for example, the positional relationship between the host vehicle V1 and the other vehicle V2, the speed difference between them, their traveling directions, and their driving operations.
 As an example, the driving assistance device 19 determines whether the other vehicle V2 is traveling behind the host vehicle V1. If it determines that the other vehicle V2 is traveling behind the host vehicle V1, it autonomously controls the traveling of the host vehicle V1 without using the detection results of the imaging devices 11 or with the weighting of those detection results reduced. If, on the other hand, it determines that the other vehicle V2 is traveling ahead of the host vehicle V1 or that the host vehicle V1 and the other vehicle V2 are traveling side by side, it determines whether the host vehicle V1 will overtake or pass the other vehicle V2. If it determines that the host vehicle V1 will not overtake or pass, it autonomously controls the traveling of the host vehicle V1 so that the other vehicle V2 is not included in the overlapping portion. If it determines that the host vehicle V1 will overtake or pass, it sets a traveling speed higher than the preset traveling speed and performs the overtaking or passing without using the detection results of the imaging devices 11 or with the weighting of those detection results reduced.
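 The branching described in this example can be summarized as a small selector. This is a paraphrased sketch only: the control labels and decision structure are assumptions layered on the prose, not terminology from the publication.

```python
# Sketch of the example risk-handling selection logic. Labels are
# illustrative stand-ins for the controls described in the text.

def select_risk_control(other_behind: bool,
                        will_pass_or_overtake: bool) -> str:
    if other_behind:
        # Other vehicle behind: keep driving, but ignore or down-weight
        # the camera results while V2 crosses the overlapping portion.
        return "downweight_camera"
    if not will_pass_or_overtake:
        # Other vehicle ahead or alongside, no pass planned: adjust speed
        # and gaps so V2 stays out of the overlapping portion.
        return "keep_out_of_overlap"
    # Pass or overtake planned: do it above the preset speed with the
    # camera results ignored or down-weighted.
    return "fast_pass_downweight_camera"
```

A dispatcher in the vehicle controller would then map each label to the corresponding speed, gap, or sensor-weighting command.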
 In this embodiment, a driving scene in which the host vehicle V1 passes the other vehicle V2 assumes, for example, a scene in which the other vehicle V2 is a preceding vehicle traveling in the same lane as the host vehicle V1, the traveling speed of the other vehicle V2 is low, and the autonomous driving control of the host vehicle V1 cannot be continued, so the host vehicle V1 changes lanes to an adjacent lane and passes the other vehicle V2. Also, in the above example, when it is determined that the host vehicle V1 and the other vehicle V2 are traveling side by side, the traveling of the host vehicle V1 may be autonomously controlled without using the detection results of the imaging devices 11 or with the weighting of those detection results reduced.
 In the driving scene shown in FIG. 4A, the driving assistance device 19 has determined that the other vehicle V2 will enter the overlapping portions C3 and C4 within the predetermined time and that the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion C4 along lane L3, and therefore executes risk handling control using the function of the control unit 23. First, the driving assistance device 19 determines whether the other vehicle V2 is traveling behind the host vehicle V1. In the driving scene shown in FIG. 4A, based on the positional relationship shown in the figure, the driving assistance device 19 determines that the other vehicle V2 is traveling behind the host vehicle V1.
 In this case, the driving assistance device 19 autonomously controls the traveling of the host vehicle V1 without using the detection results of the imaging devices 11, or does so with the weighting of those detection results set to a small value. In the driving scene shown in FIG. 4A, the traveling of the host vehicle V1 is autonomously controlled without using the detection results of the imaging devices 11. As shown in FIG. 4B, while the other vehicle V2 travels along the travel trajectory T1 from position P1 to position P2, the driving assistance device 19 performs lane keeping control of the host vehicle V1 without using the detection results of the imaging devices 11. Specifically, lane keeping control is performed using the detection results of the distance measuring device 12, which includes a radar device, sonar, and the like, instead of the imaging devices 11. However, among the imaging devices 11, the detection results of ordinary cameras other than the wide-angle cameras may still be used, because they carry no risk of misrecognition due to overlapping portions.
 Then, as shown in FIG. 4C, after the other vehicle V2 has traveled to position P2, the driving assistance device 19 autonomously controls the traveling of the host vehicle V1 as usual, using the detection results of the imaging devices 11 including the wide-angle cameras. Position P2 shown in FIG. 4C is a position where the other vehicle V2 has passed through the overlapping portions C3 and C4 and is included in the detection range A1 of the front camera, so there is no risk that the driving assistance device 19 will misrecognize the driving state of the other vehicle V2.
 When the traveling of the host vehicle V1 is autonomously controlled with the weighting of the detection results of the imaging devices 11 set to a small value, the weighting of the detection results of the imaging devices 11 (in particular, the wide-angle cameras) is set relatively small compared with the detection results of other detection devices such as the distance measuring device 12, so that the detection results of the imaging devices 11 do not affect the driving state of the host vehicle V1 (that is, its traveling speed and steering operation). In other words, the weighting of the detection results of the imaging devices 11 may be set small, or the weighting of the detection results of other detection devices such as the distance measuring device 12 may be set large. The weighting coefficients can be set to any appropriate values within a range in which the detection results of the imaging devices 11 do not affect the driving state of the host vehicle V1.
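 The down-weighting described above amounts to a weighted fusion of per-sensor estimates. A minimal sketch, assuming a simple normalized weighted mean (the publication does not specify the fusion scheme, and all values are illustrative):

```python
# Minimal sketch of weighted sensor fusion: each sensor contributes an
# estimate (e.g., a lateral position in meters) with a weight; reducing
# the camera weight keeps its estimate from shifting the fused state.

def fuse(estimates):
    """estimates: list of (value, weight). Returns the weighted mean."""
    total = sum(w for _, w in estimates)
    return sum(v * w for v, w in estimates) / total

# Normal operation: camera and ranging sensor weighted equally.
normal = fuse([(10.0, 0.5), (12.0, 0.5)])   # camera says 10.0, radar 12.0

# Risk handling: camera weight reduced so the radar estimate dominates.
risk = fuse([(10.0, 0.05), (12.0, 0.95)])
```

Setting the camera weight to exactly zero reproduces the "without using the detection results of the imaging devices 11" variant as a special case.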
 Next, risk handling control in a different driving scene will be explained using FIGS. 5A to 5C.
 FIG. 5A is a plan view showing an example of a driving scene in which risk handling control is executed. The driving scene shown in FIG. 5A is one in which the host vehicle V1 and other vehicles V2 and V3 are traveling on the road shown in FIG. 2: the host vehicle V1 is traveling at position P3 in lane L2, and ahead of it, the other vehicle V2 is traveling in lane L1 and the other vehicle V3 in lane L2. In this driving scene, the host vehicle V1 is traveling at a constant speed under lane keeping control, and the other vehicles V2 and V3 are traveling straight at speeds lower than that of the host vehicle V1. That is, the host vehicle V1 will catch up with the other vehicle V3 within the predetermined time.
 In the driving scene shown in FIG. 5A, the traveling speed of the host vehicle V1 is higher than those of the other vehicles V2 and V3, so the inter-vehicle distances between the host vehicle V1 and the other vehicles V2 and V3 decrease over time. When the host vehicle V1 reaches position P4 shown in FIG. 5B, the driving assistance device 19 recognizes, through the function of the recognition unit 21, the other vehicles V2 and V3 included in the detection range A1 from the detection results of the front camera.
 Next, the driving assistance device 19 uses the function of the determination unit 22 to acquire vehicle speed information of the host vehicle V1 from the vehicle speed sensor, and calculates the traveling position of the host vehicle V1 after constant-speed traveling under lane keeping control has continued for the predetermined time. The driving assistance device 19 also acquires image data from the imaging devices 11 and recognizes by pattern matching that the obstacles are a truck (the other vehicle V2) and a passenger car (the other vehicle V3), that the other vehicle V2 is traveling in lane L1, and that the other vehicle V3 is traveling in lane L2. In addition, from the detection results of the front camera (imaging device 11), it recognizes that the other vehicles V2 and V3 are traveling straight without steering. Furthermore, from the scan results of the distance measuring device 12, it acquires the relative speeds of the other vehicles V2 and V3 with respect to the host vehicle V1. Based on this information, the driving assistance device 19 predicts the driving states of the other vehicles V2 and V3 after the predetermined time.
 In the driving scene shown in FIG. 5B, the host vehicle V1 will catch up with the other vehicles V2 and V3 within the predetermined time, so the driving assistance device 19 determines, from the relationship between the calculated traveling position of the host vehicle V1 and the predicted traveling positions of the other vehicles V2 and V3, that the other vehicle V2 will enter the overlapping portion C1 within the predetermined time.
 Next, the driving assistance device 19 uses the function of the control unit 23 to determine whether the other vehicle V2 is traveling ahead of the host vehicle V1. In the driving scene shown in FIG. 5B, based on the positional relationship shown in the figure, it determines that the other vehicle V2 is traveling ahead of the host vehicle V1. In this case, the driving assistance device 19 determines whether the host vehicle V1 will overtake or pass the other vehicle V2. For example, the driving assistance device 19 determines whether the host vehicle V1 will overtake the other vehicle V2 within the predetermined time based on the driving states of the host vehicle V1 and the other vehicle V2 predicted by the function of the determination unit 22. Alternatively, or in addition, the driving assistance device 19 may determine whether it is necessary to pass the other vehicle V2 in order to continue the autonomous driving control of the host vehicle V1.
 In the driving scene shown in FIG. 5B, after approaching the other vehicle V3 the host vehicle V1 only needs to follow the other vehicle V3, and there is no need to pass the other vehicle V3 in order to continue the autonomous driving control (following control) of the host vehicle V1, so it is determined that the host vehicle V1 will not pass the other vehicle V2. In this case, the driving assistance device 19 autonomously controls the traveling of the host vehicle V1 so that the other vehicle V2 is not included in the overlapping portion C1. For example, the driving assistance device 19 sets the distance D3 shown in FIG. 5C as the inter-vehicle distance between the host vehicle V1 and the other vehicle V3. In addition (for example, simultaneously), it sets the distance D4 shown in FIG. 5C as the virtual inter-vehicle distance between the other vehicle V2 and the position P6 in the adjacent lane L1 corresponding to the host vehicle V1.
 The distance D3, the inter-vehicle distance between the host vehicle V1 and the other vehicle V3, can be set to any appropriate value within a range that allows the host vehicle V1 to avoid contact with the other vehicle V3. The distance D4, the virtual inter-vehicle distance between the host vehicle V1 and the other vehicle V2, can be set to any appropriate value within a range that keeps the other vehicle V2 from entering the overlapping portion C1. By executing autonomous driving control (following control) that maintains both the virtual inter-vehicle distance to the other vehicle V2 and the inter-vehicle distance to the other vehicle V3, the driving assistance device 19 can avoid the risk of misrecognizing the driving state of the other vehicle V2.
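 The dual-gap following control above can be sketched as a speed command that honors whichever gap (D3 to V3, or the virtual D4 to V2) is closer to violation. The proportional law, gain, and all names are assumptions for illustration; the publication specifies only that both distances are maintained.

```python
# Sketch of dual-gap following control: maintain the inter-vehicle
# distance D3 to the preceding vehicle V3 in the own lane and the
# virtual distance D4 to V2 in the adjacent lane, so that V2 never
# reaches the overlap portion C1. Gains and units are illustrative.

def target_speed(host_speed, gap_v3, d3, gap_v2, d4, k=0.5):
    """Return a speed command. Reduce speed in proportion to the worst
    gap shortfall; never command a negative speed."""
    shortfall = min(gap_v3 - d3, gap_v2 - d4)  # negative when a gap is short
    if shortfall >= 0:
        return host_speed                      # both gaps satisfied
    return max(0.0, host_speed + k * shortfall)
```

For example, with both gaps satisfied the current speed is kept; if the gap to V3 falls 2 m short of D3, the command drops by 1 m/s at the assumed gain.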
 Next, risk handling control in yet another driving scene will be explained using FIGS. 6A to 6C.
 FIG. 6A is a plan view showing an example of a driving scene in which risk handling control is executed. The driving scene shown in FIG. 6A is one in which the host vehicle V1 and another vehicle V2 are traveling on the road shown in FIG. 2: the host vehicle V1 is traveling at position P7 in lane L2, and ahead of it, the other vehicle V2 is traveling in lane L1. In this driving scene, the host vehicle V1 is traveling at a constant speed under lane keeping control, and the other vehicle V2 is traveling straight at a speed lower than that of the host vehicle V1. That is, the host vehicle V1 will overtake the other vehicle V2 within the predetermined time.
 In the driving scene shown in FIG. 6A, the driving assistance device 19 uses the function of the recognition unit 21 to recognize the other vehicle V2 included in the detection range A1 from the detection results of the front camera. Next, using the function of the determination unit 22, the driving assistance device 19 calculates the driving state of the host vehicle V1 after a predetermined time and predicts the driving state of the other vehicle V2 after the predetermined time, in the same manner as in the driving scene shown in FIG. 5A.
 In the driving scene shown in FIG. 6A, the host vehicle V1 will pass the other vehicle V2 within the predetermined time, so the driving assistance device 19 determines, from the relationship between the calculated driving position of the host vehicle V1 and the predicted driving position of the other vehicle V2, that the other vehicle V2 will enter the overlapping portion C1 within the predetermined time.
 Next, the driving assistance device 19 uses the function of the control unit 23 to determine whether the other vehicle V2 is traveling ahead of the host vehicle V1. In the driving scene shown in FIG. 6A, the other vehicle V2 is traveling ahead of the host vehicle V1, so it is determined that the other vehicle V2 is traveling ahead of the host vehicle V1. In this case, the driving assistance device 19 determines whether the host vehicle V1 will pass or overtake the other vehicle V2. In the driving scene shown in FIG. 6A, the host vehicle V1 will pass the other vehicle V2 within the predetermined time, so it is determined that the host vehicle V1 will pass the other vehicle V2.
 In this case, the driving assistance device 19 sets a traveling speed faster than a predetermined traveling speed set in advance, and passes the other vehicle V2 without using the detection results of the imaging devices 11 or with the weighting of those detection results set small. The predetermined traveling speed is, for example, a traveling speed set by an occupant of the host vehicle V1, and the faster traveling speed is, for example, 5 to 25 km/h faster than the occupant-set speed. By setting an appropriate traveling speed faster than the predetermined speed, within a range in which the occupant of the host vehicle V1 does not feel uncomfortable, the driving assistance device 19 can shorten the time during which misrecognition of the driving state of the other vehicle V2 affects the driving state of the host vehicle V1.
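 As a rough sketch, the speed setting just described can be expressed as follows. This is an illustrative assumption, not the patent's specified implementation; the function name, the default boost, and the clamping of the boost to the stated 5 to 25 km/h range are choices made here for clarity.

```python
# Illustrative sketch (not the patent's implementation) of choosing a
# passing speed faster than the occupant-set speed, with the increase
# clamped to the 5-25 km/h range mentioned in the text.

def passing_speed_kmh(set_speed_kmh: float, boost_kmh: float = 10.0) -> float:
    """Return a target speed for passing the other vehicle.

    The boost over the occupant-set speed is kept within 5-25 km/h so
    that the occupant does not feel uncomfortable.
    """
    boost = min(max(boost_kmh, 5.0), 25.0)  # clamp to the stated range
    return set_speed_kmh + boost
```

 For example, with an occupant-set speed of 80 km/h and the default boost, the target speed becomes 90 km/h.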
 For example, as shown in FIG. 6B, the driving assistance device 19 sets a traveling speed faster than the speed set by the occupant of the host vehicle V1 and, at the same time, sets the weighting of the detection results of the imaging devices 11 small. It then causes the host vehicle V1 to travel at a constant speed along the travel trajectory T3 from position P5 to position P6. The processing of the detection results of the imaging devices 11 in this case is the same as in the driving scenes shown in FIGS. 4A to 4C.
 As shown in FIG. 6C, when the other vehicle V2 has traveled to position P9, the driving assistance device 19 autonomously controls the traveling of the host vehicle V1 as usual, using the detection results of the imaging devices 11 including the wide-angle cameras. Position P9 shown in FIG. 6C is a position at which the other vehicle V2 has passed through the overlapping portions C1 and C2 and is included in the detection range A2 of the rear camera, so there is no risk of the driving assistance device 19 misrecognizing the driving state of the other vehicle V2.
 Note that the risk response control in the driving scenes shown in FIGS. 4A to 4C, 5A to 5C, and 6A to 6C is merely illustrative, and risk response control other than that described above may be executed in each driving scene. Furthermore, it is not essential for the driving assistance device 19 of this embodiment to execute risk response control in all of the driving scenes shown in FIGS. 4A to 4C, 5A to 5C, and 6A to 6C; the driving assistance device 19 may execute risk response control in only some of the driving scenes.
 A basic embodiment of the present invention has been described above. When the driving assistance device 19 determines that the other vehicle V2 will enter an overlapping portion of the detection ranges, it may calculate the overall length of the other vehicle V2, that is, the length of the other vehicle V2 in its traveling direction, from the detection results of the various cameras (imaging devices 11) and the detection results of the distance measuring devices 12 such as radar, LiDAR, and sonar. It may then determine whether the calculated overall length of the other vehicle V2 is longer than the length of the overlapping portion in the direction along the adjacent lane. Instead of the length of the overlapping portion in the direction along the adjacent lane, the length, in that direction, of the part of the overlapping portion that lies on the adjacent lane may be used.
 The driving assistance device 19 calculates the overall length of the other vehicle V2 by, for example, performing image analysis, including processing such as edge extraction and shape recognition, on the image data acquired from the imaging devices 11. For example, in the driving scene shown in FIG. 4A, the driving assistance device 19 performs image analysis on the image data of the detection range A2 to calculate the overall length D1 of the other vehicle V2. In addition, for the overlapping portion C4 in the lane L3 adjacent to the lane L2 in which the host vehicle V1 is traveling, the length D2 along the lane L3 is registered in advance in the ROM 192 of the driving assistance device 19. Comparing the overall length D1 of the other vehicle V2 with the length D2 of the overlapping portion C4 along the lane L3, the driving assistance device 19 finds that D1 is longer and therefore determines that the overall length D1 is longer than the length D2.
 In the driving scene shown in FIG. 5B, the driving assistance device 19 performs image analysis on the image data acquired from the imaging devices 11 and calculates the overall length D1 of the other vehicle V2. In addition, for the overlapping portion C1 in the lane L1 adjacent to the lane L2 in which the host vehicle V1 is traveling, the length D2a, in the direction along the lane L1, of the part of the overlapping portion C1 that lies on the lane L1 is registered in advance in the ROM 192 of the driving assistance device 19. Comparing the overall length D1 of the other vehicle V2 with the length D2a, the driving assistance device 19 finds that D1 is longer and therefore determines that the overall length D1 is longer than the length D2a.
 In the driving scene shown in FIG. 6A, the driving assistance device 19 calculates the overall length D1 of the other vehicle V2 in the same manner as in the driving scene shown in FIG. 5B. In addition, for the overlapping portion C1 in the lane L1 adjacent to the lane L2 in which the host vehicle V1 is traveling, the length D2 of the overlapping portion C1 in the direction along the lane L1 is registered in advance in the ROM 192 of the driving assistance device 19, so the driving assistance device 19 compares the overall length D1 of the other vehicle V2 with the length D2. In the driving scene shown in FIG. 6A, the overall length D1 is longer, so the driving assistance device 19 determines that the overall length D1 is longer than the length D2.
 The overall length of the other vehicle V2 does not necessarily need to be determined precisely; it suffices to calculate a range for the overall length, that is, to be able to determine whether the overall length of the other vehicle V2 is longer or shorter than the length, in the direction along the adjacent lane, of the overlapping portion that the other vehicle V2 enters. The length of the overlapping portion in the direction along the adjacent lane is set as a single fixed value for each overlapping portion when the imaging devices 11 (in particular, the wide-angle cameras) are installed, and is registered in the driving assistance device 19 in advance. Note that calculating the overall length of the other vehicle V2 when it is determined that the other vehicle V2 will enter an overlapping portion of the detection ranges, and determining whether the calculated overall length is longer than the length of the overlapping portion in the direction along the adjacent lane, are not essential features of the present invention and may be added or omitted as necessary.
 The driving assistance device 19 may determine the vehicle type of the other vehicle V2 using the function of the determination unit 22 and execute risk response control according to the determined vehicle type. Specifically, the driving assistance device 19 may determine whether the other vehicle V2 is a large vehicle and, when it determines that the other vehicle V2 is a large vehicle, execute risk response control using the function of the control unit 23. Note that determining whether the other vehicle V2 is a large vehicle, and executing risk response control using the function of the control unit 23 when it is determined to be a large vehicle, are not essential features of the present invention and may be added or omitted as necessary.
 Examples of vehicle types include private passenger cars such as kei cars, small cars (compact cars), and standard-sized cars, as well as manned or unmanned taxis, buses, and trucks. A large vehicle is a vehicle whose overall length is longer than that of the host vehicle V1, for example, 1.25 times or more the overall length of the host vehicle V1. Among vehicle types, commercial vehicles that carry passengers or cargo, such as buses and trucks, correspond to large vehicles. For example, the driving assistance device 19 executes risk response control when the other vehicle V2 is a large vehicle such as a bus or a truck, and does not execute risk response control when the other vehicle V2 is a small private passenger car such as a compact car.
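 The large-vehicle criterion given above reduces to a simple length-ratio test. The sketch below is an assumption for illustration only: the function name is hypothetical, and the 1.25 ratio is the example value from the text.

```python
# Illustrative sketch of the large-vehicle criterion: the other vehicle
# counts as large when its overall length is at least `ratio` times the
# host vehicle's overall length (the text gives 1.25 as an example).

def is_large_vehicle(other_len_m: float, host_len_m: float,
                     ratio: float = 1.25) -> bool:
    """True when the other vehicle's overall length is >= ratio x host's."""
    return other_len_m >= ratio * host_len_m
```

 With a 4.7 m host vehicle, a 12 m truck is classified as large while a 4 m compact car is not.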
 When the driving assistance device 19 determines that the other vehicle V2 is not a large vehicle, it may use the function of the control unit 23 to determine whether, among the plurality of imaging devices 11 mounted on the host vehicle V1, there is a single imaging device that can detect the whole of the other vehicle V2. When it determines that no single imaging device 11 can detect the whole of the other vehicle V2, it may execute risk response control. This is because, when the entire body of the other vehicle V2 is detected by one imaging device 11, the risk of misrecognizing the driving state of the other vehicle V2 is reduced. Note that making this determination when the other vehicle V2 is not a large vehicle, and executing risk response control when no single imaging device 11 can detect the whole of the other vehicle V2, are not essential features of the present invention and may be added or omitted as necessary.
 The driving assistance device 19 may use the function of the control unit 23 to acquire the luminance value of each pixel from an image captured by the imaging device 11, calculate the average luminance value of the image by dividing the sum of the luminance values by the total number of pixels, and execute risk response control when the average luminance value is equal to or greater than a predetermined value. This is because, when a strong light source such as sunlight, or its reflection, enters the camera, the luminance values of the acquired image saturate, the entire image becomes bright (white), and the shape of an obstacle can no longer be accurately recognized from the contrast of the image; that is, when the entire image is washed out, the driving state of the other vehicle V2 cannot be accurately recognized. The predetermined value can be set to an appropriate value (for example, 200 or more) within a range in which the driving state of the other vehicle V2 can be accurately recognized. Note that acquiring the luminance value of each pixel from a captured image, calculating the average luminance value, and executing risk response control when the average luminance value is equal to or greater than the predetermined value are not essential features of the present invention and may be added or omitted as necessary.
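 The average-luminance check described above can be sketched directly: sum the per-pixel luminance values, divide by the pixel count, and compare against the threshold. The 8-bit luminance scale and the example threshold of 200 follow the text; the function names are illustrative assumptions.

```python
# Minimal sketch of the average-luminance check from the text. Pixels
# are 8-bit luminance values (0-255); the default threshold of 200 is
# the example value given in the text.

def average_luminance(pixels: list[int]) -> float:
    """Sum of per-pixel luminance values divided by the pixel count."""
    return sum(pixels) / len(pixels)

def glare_suspected(pixels: list[int], threshold: float = 200.0) -> bool:
    """True when the image is washed out enough that risk response
    control should be executed (average luminance at or above threshold)."""
    return average_luminance(pixels) >= threshold
```

 In practice the pixel list would come from a flattened camera frame; the threshold comparison is the only decision the control needs.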
[Processing in the system]
 The procedure by which the driving assistance device 19 processes information will now be described with reference to FIGS. 7A to 7C. FIGS. 7A to 7C are an example of a flowchart of the information processing executed in the driving assistance system 10 of this embodiment. The processing described below is executed at predetermined time intervals by the CPU 191, the processor of the driving assistance device 19. The flowcharts shown in FIGS. 7A to 7C presuppose a driving scene in which the host vehicle V1 travels on a road under lane keeping control.
 First, in step S1 of FIG. 7A, the function of the recognition unit 21 detects another vehicle V2 using the plurality of imaging devices 11 mounted on the host vehicle V1. In step S2, it is determined from the detection results whether another vehicle V2 exists around the host vehicle V1. If no other vehicle V2 exists around the host vehicle V1, the process proceeds to step S3, normal autonomous driving control is executed, and the process proceeds to step S20 of FIG. 7C. If, on the other hand, another vehicle V2 exists around the host vehicle V1, the process proceeds to step S4, and the function of the determination unit 22 predicts the driving states of the host vehicle V1 and the other vehicle V2 after a predetermined time.
 If it is determined in step S5, from the driving states of the host vehicle V1 and the other vehicle V2 after the predetermined time, that the two vehicles will be traveling in the same lane after the predetermined time, the process proceeds to step S6, and the function of the control unit 23 determines whether the host vehicle V1 will overtake the other vehicle V2. If it is determined that the host vehicle V1 will not overtake the other vehicle V2, the process proceeds to step S3. If, on the other hand, it is determined that the host vehicle V1 will overtake the other vehicle V2, the process proceeds to step S18 of FIG. 7C. Note that step S6 is not essential to the present invention and may be provided as necessary.
 If, on the other hand, it is determined in step S5 that the host vehicle V1 and the other vehicle V2 will not be traveling in the same lane after the predetermined time, the process proceeds to step S7, and it is determined whether the other vehicle V2 will enter the overlapping portion within the predetermined time. If it is determined that the other vehicle V2 will not enter the overlapping portion within the predetermined time, the process proceeds to step S3. If, on the other hand, it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, the process proceeds to step S8, and the function of the determination unit 22 calculates the overall length D1 of the other vehicle V2 and the length D2 of the overlapping portion in the direction along the adjacent lane.
 Following step S8, in step S9 of FIG. 7B, the function of the determination unit 22 determines whether the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion. If the overall length D1 is longer than the length D2, the process proceeds to step S14 of FIG. 7C. If, on the other hand, the overall length D1 is equal to or less than the length D2, the process proceeds to step S10.
 In step S10, the function of the determination unit 22 determines whether the other vehicle V2 is a large vehicle. If it is determined that the other vehicle V2 is a large vehicle, the process proceeds to step S14 of FIG. 7C. If it is determined that the other vehicle V2 is not a large vehicle, the process proceeds to step S11. In step S11, the function of the determination unit 22 determines whether the whole of the other vehicle V2 can be detected by a single imaging device 11. If the whole of the other vehicle V2 cannot be detected by a single imaging device 11, the process proceeds to step S14 of FIG. 7C; if it can, the process proceeds to step S12.
 In step S12, the function of the determination unit 22 calculates the average luminance value of the image, and in the following step S13, it is determined whether the average luminance value is equal to or greater than a predetermined value. If the average luminance value is less than the predetermined value, the process proceeds to step S3 of FIG. 7A; if it is equal to or greater than the predetermined value, the process proceeds to step S14 of FIG. 7C. Note that steps S9 to S13 are not essential to the present invention and may be provided as necessary.
 Following step S13, in step S14 of FIG. 7C, the function of the control unit 23 determines whether the other vehicle V2 is traveling ahead of the host vehicle V1. If it is determined that the other vehicle V2 is not traveling ahead of the host vehicle V1, the process proceeds to step S15, and the traveling of the host vehicle V1 is autonomously controlled without using the detection results of the imaging devices 11 or with the weighting of those detection results reduced. The process then proceeds to step S20.
 If, on the other hand, it is determined that the other vehicle V2 is traveling ahead of the host vehicle V1, the process proceeds to step S16, and it is determined whether the host vehicle V1 will pass the other vehicle V2. If it is determined that the host vehicle V1 will not pass the other vehicle V2, the process proceeds to step S17, and the traveling of the host vehicle V1 is autonomously controlled so that the other vehicle V2 is not included in the overlapping portion of the detection ranges. If, on the other hand, it is determined that the host vehicle V1 will pass the other vehicle V2, the process proceeds to step S18, a traveling speed faster than the predetermined traveling speed set in advance is set, and in the following step S19, the passing or overtaking is executed without using the detection results of the imaging devices 11 or with the weighting of those detection results set small. The process then proceeds to step S20.
 In step S20, the function of the support unit 20 determines whether the host vehicle V1 has reached the destination. If it is determined that the host vehicle V1 has reached the destination, execution of the routine ends, and the display device 18 is used to prompt the driver of the host vehicle V1 to drive manually. If it is determined that the host vehicle V1 has not reached the destination, the process returns to step S1 of FIG. 7A. Manual driving means that the driving assistance device 19 does not perform autonomous control of the driving operation and the traveling of the vehicle is controlled by the driver's operation. The driving assistance device 19 and the driving assistance method according to the present invention can be used in any of the following cases: autonomous control of only the traveling speed of the vehicle, autonomous control of only the steering operation of the vehicle, and autonomous control of both. Moreover, they can be used not only for autonomous driving control but also to assist the driver's driving operation during manual driving.
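 The branching of steps S1 through S19 can be condensed into the following sketch. Every predicate in the dictionary is a hypothetical stand-in for the recognition, prediction, and judgment processing of the units 21 to 23 (none of these names appear in the patent); the returned string names the control branch selected for one processing cycle, and step S20 (the destination check) wraps the whole loop.

```python
# Condensed sketch of the decision flow of FIGS. 7A-7C. The dictionary
# keys are assumed predicate results; step numbers from the flowchart
# are noted in comments.

def select_control(s: dict) -> str:
    if not s["other_vehicle_present"]:                  # S2: no -> S3
        return "normal"
    if s["same_lane_after_t"]:                          # S5
        if s["will_overtake"]:                          # S6: yes -> S18-S19
            return "speed_up_and_deweight_cameras"
        return "normal"                                 # S6: no -> S3
    if not s["enters_overlap_within_t"]:                # S7: no -> S3
        return "normal"
    # S9-S13: optional screening; any flagged condition keeps the risk path
    risky = (s["longer_than_overlap"]                   # S9
             or s["is_large_vehicle"]                   # S10
             or not s["single_camera_covers_whole"]     # S11
             or s["avg_luminance_high"])                # S12-S13
    if not risky:
        return "normal"                                 # -> S3
    if not s["ahead_of_host"]:                          # S14: no -> S15
        return "deweight_cameras"
    if not s["will_pass"]:                              # S16: no -> S17
        return "keep_other_vehicle_out_of_overlap"
    return "speed_up_and_deweight_cameras"              # S18-S19
```

 The sketch makes the screening structure visible: the S9 to S13 checks only decide whether the risk path is taken at all, while S14 and S16 decide which of the three risk responses applies.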
[Embodiments of the invention]
 As described above, this embodiment provides a driving assistance method executed by a processor, in which parts of the detection ranges of the plurality of imaging devices 11 mounted on the host vehicle V1 overlap in a lane adjacent to the lane in which the host vehicle V1 travels. The processor determines whether another vehicle V2 will enter the overlapping portion of the detection ranges within a predetermined time and, when it determines that the other vehicle V2 will enter the overlapping portion within the predetermined time, executes risk response control that controls the host vehicle V1 so as to address the risk of misrecognizing the driving state of the other vehicle V2 from the detection results of the plurality of imaging devices 11. This suppresses the effect that misrecognition of the driving state of the other vehicle V2 has on the driving state of the host vehicle V1.
 According to the driving assistance method of this embodiment, the risk response control includes at least one of autonomously controlling the traveling of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion, and autonomously controlling the traveling of the host vehicle V1 so that a driving state of the other vehicle V2 misrecognized from the detection results does not affect the driving state of the host vehicle V1. This makes it possible to suppress, according to the driving scene, the effect that misrecognition of the driving state of the other vehicle V2 has on the driving state of the host vehicle V1.
 According to the driving assistance method of this embodiment, autonomously controlling the traveling of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion includes setting the traveling speed of the host vehicle V1 and/or the inter-vehicle distance between the host vehicle V1 and the other vehicle V2 so that the other vehicle V2 is not included in the detection range, and causing the host vehicle V1 to change lanes to the adjacent lane. Autonomously controlling the traveling of the host vehicle V1 so that a misrecognized driving state of the other vehicle V2 does not affect the driving state of the host vehicle V1 includes autonomously controlling the traveling of the host vehicle V1 without using the detection results, and autonomously controlling the traveling of the host vehicle V1 with the weighting of the detection results set small. This makes it possible to suppress, according to the driving scene, the effect that misrecognition of the driving state of the other vehicle V2 has on the driving state of the host vehicle V1.
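 One way to picture "setting the weighting of the detection results small" is weighted-average fusion of per-sensor estimates. The patent does not specify the fusion method, so the sketch below is purely an assumption for illustration: function and parameter names are hypothetical, and the weighted average stands in for whatever fusion the device actually performs.

```python
# Illustrative sketch (an assumption, not the patent's specified method)
# of how a small weighting on camera detections limits their influence:
# fuse per-sensor position estimates by a weighted average.

def fuse_positions(estimates: list[tuple[float, float]]) -> float:
    """Weighted average of (position_m, weight) estimates.

    Setting a camera's weight small, or zero, makes a position that was
    misrecognized in the overlapping portion barely move the fused value.
    """
    total_weight = sum(w for _, w in estimates)
    if total_weight == 0:
        raise ValueError("at least one estimate needs a nonzero weight")
    return sum(p * w for p, w in estimates) / total_weight
```

 With equal weights, a misrecognized camera estimate pulls the fused position strongly; with its weight set to zero, the fused position follows the remaining sensors, which is the effect the control aims for.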
 According to the driving assistance method of this embodiment, the processor determines whether the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion in the direction along the adjacent lane and, when it determines that the overall length D1 is longer than the length D2, executes the risk response control. This makes it possible to predict more accurately whether the other vehicle V2 will enter the overlapping portion.
 According to the driving assistance method of this embodiment, the processor determines whether the other vehicle V2 is a large vehicle and, when it determines that the other vehicle V2 is a large vehicle, executes the risk response control. This makes it possible to execute risk response control according to the vehicle type of the other vehicle V2.
 また、本実施形態の運転支援方法によれば、前記プロセッサは、前記他車両V2が前記大型車両でないと判定した場合は、前記複数の撮像装置11のうち、前記他車両V2の全体を検出できる単一の撮像装置11が存在するか否かを判定し、前記他車両V2の全体を検出できる前記単一の撮像装置11が存在しないと判定したときは、前記リスク対処制御を実行する。これにより、他車両V2の走行状態を誤認識するリスクが低い場合にリスク対処制御を実行することを抑制できる。 Furthermore, according to the driving assistance method of this embodiment, when the processor determines that the other vehicle V2 is not the large vehicle, it determines whether or not there is a single imaging device 11 among the multiple imaging devices 11 that can detect the other vehicle V2 in its entirety, and when it determines that there is not a single imaging device 11 that can detect the other vehicle V2 in its entirety, it executes the risk response control. This makes it possible to suppress the execution of risk response control when the risk of erroneously recognizing the traveling state of the other vehicle V2 is low.
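Taken together, the three checks in the preceding paragraphs (overall length D1 versus overlap length D2, large-vehicle classification, and whether any single imaging device 11 can capture the whole vehicle) amount to a simple decision rule. The following is a non-normative sketch; the function and argument names are invented for illustration.

```python
def should_execute_risk_control(d1: float, d2: float,
                                is_large_vehicle: bool,
                                single_camera_sees_whole: bool) -> bool:
    """Decide whether to run the risk response control for the other vehicle V2."""
    if d1 > d2:           # overall length exceeds the overlap length along the lane
        return True
    if is_large_vehicle:  # large vehicles always trigger the control
        return True
    # Not a large vehicle: trigger only if no single imaging device 11
    # can detect the entire vehicle.
    return not single_camera_sees_whole
```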
 また、本実施形態の運転支援方法によれば、前記プロセッサは、前記撮像装置11を用いて撮影した画像から各画素の輝度値を取得し、前記輝度値の総和を前記画素の総数で除して前記画像の平均輝度値を算出し、前記平均輝度値が所定値以上である場合は、前記リスク対処制御を実行する。これにより、画像のコントラストから障害物の形状を正確に認識できない場合にリスク対処制御を実行できる。 Furthermore, according to the driving assistance method of this embodiment, the processor acquires the brightness value of each pixel from the image captured using the imaging device 11, calculates the average brightness value of the image by dividing the sum of the brightness values by the total number of pixels, and executes the risk response control if the average brightness value is equal to or greater than a predetermined value. This makes it possible to execute risk response control when the shape of an obstacle cannot be accurately recognized from the contrast of the image.
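The brightness check above reduces to a mean over all pixels: when the mean is high (for example, strong backlight washing out contrast), the risk response control runs. A minimal sketch, assuming an 8-bit luminance scale and an illustrative threshold value:

```python
def average_luminance(pixels: list[list[int]]) -> float:
    """Sum of per-pixel luminance values divided by the total pixel count."""
    values = [v for row in pixels for v in row]
    return sum(values) / len(values)

def glare_triggers_risk_control(pixels: list[list[int]],
                                threshold: float = 200.0) -> bool:
    """True when the image is so bright overall that obstacle shapes may be
    unrecoverable from its contrast (the threshold is an assumed example)."""
    return average_luminance(pixels) >= threshold
```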
 また、本実施形態の運転支援方法によれば、前記プロセッサは、前記他車両V2が前記自車両V1の後方を走行しているか否かを判定し、前記他車両V2が前記自車両V1の後方を走行していると判定した場合は、前記検出結果を用いずに又は前記検出結果の重み付けを小さくして前記自車両V1の走行を自律制御する。これにより、走行シーンに応じたリスク対処制御を実行できる。 Furthermore, according to the driving assistance method of this embodiment, the processor determines whether the other vehicle V2 is traveling behind the host vehicle V1, and if it determines that the other vehicle V2 is traveling behind the host vehicle V1, autonomously controls the traveling of the host vehicle V1 without using the detection result or by reducing the weighting of the detection result. This makes it possible to execute risk response control according to the traveling scene.
 また、本実施形態の運転支援方法によれば、前記プロセッサは、前記他車両V2が前記自車両V1の前方を走行しているか否かを判定し、前記他車両V2が前記自車両V1の前方を走行していると判定した場合は、前記自車両V1が前記他車両V2の追い抜き又は追い越しを行うか否かを判定し、前記自車両V1が前記追い抜き又は前記追い越しを行なわないと判定したときは、前記他車両V2が前記重複部分に含まれないように前記自車両V1の走行を自律制御する。これにより、走行シーンに応じたリスク対処制御を実行できる。 Furthermore, according to the driving assistance method of this embodiment, the processor determines whether the other vehicle V2 is traveling ahead of the host vehicle V1, and if it determines that the other vehicle V2 is traveling ahead of the host vehicle V1, it determines whether the host vehicle V1 will overtake or pass the other vehicle V2, and if it determines that the host vehicle V1 will not overtake or pass the other vehicle V2, it autonomously controls the traveling of the host vehicle V1 so that the other vehicle V2 is not included in the overlapping portion. This makes it possible to execute risk response control according to the driving scene.
 また、本実施形態の運転支援方法によれば、前記プロセッサは、前記自車両V1が前記追い抜き又は前記追い越しを行なうと判定したときは、予め設定された所定の走行速度より速い走行速度を設定するとともに、前記検出結果を用いずに又は前記検出結果の重み付けを小さく設定して前記追い抜き又は前記追い越しを行う。これにより、走行シーンに応じたリスク対処制御を実行できる。 Furthermore, according to the driving assistance method of this embodiment, when the processor determines that the host vehicle V1 will perform the overtaking or passing, it sets a traveling speed faster than a predetermined traveling speed that has been set in advance, and performs the overtaking or passing without using the detection result or by setting the weighting of the detection result to be small. This makes it possible to execute risk response control according to the driving scene.
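The scene-dependent branching in the preceding paragraphs (other vehicle behind; ahead without overtaking; ahead with overtaking) can be sketched as a small dispatcher. The returned labels are illustrative names chosen for this sketch, not terminology from the method itself.

```python
def select_risk_action(other_behind: bool, other_ahead: bool,
                       will_overtake_or_pass: bool) -> str:
    """Map the driving scene to one of the risk responses described above."""
    if other_behind:
        # Rear vehicle: drive on without the detection result, or down-weight it.
        return "ignore_or_downweight_detection"
    if other_ahead and not will_overtake_or_pass:
        # Keep the other vehicle V2 out of the overlapping portion.
        return "keep_out_of_overlap"
    if other_ahead and will_overtake_or_pass:
        # Raise the set speed and pass while ignoring/down-weighting the result.
        return "speed_up_and_downweight_detection"
    return "no_risk_control"
```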
 また、本実施形態によれば、自車両V1が走行する自車線の隣接車線において検出範囲の一部が重複している、自車両V1に搭載された複数の撮像装置11と、他車両V2が、所定時間内に前記検出範囲の重複部分に進入するか否かを判定する判定部22と、前記他車両V2が、前記所定時間内に前記重複部分に進入すると判定した場合は、前記複数の撮像装置11の検出結果から前記他車両V2の走行状態を誤認識するリスクに対処するように自車両V1を制御するリスク対処制御を実行する制御部23と、を備える、運転支援装置19が提供される。これにより、他車両V2の走行状態の誤認識が自車両V1の走行状態に与える影響を抑制できる。 Furthermore, according to this embodiment, a driving assistance device 19 is provided that includes: a plurality of imaging devices 11 mounted on the host vehicle V1, the detection ranges of which partially overlap in a lane adjacent to the lane in which the host vehicle V1 is traveling; a determination unit 22 that determines whether or not another vehicle V2 will enter the overlapping portion of the detection ranges within a predetermined time; and a control unit 23 that, if it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, executes risk response control to control the host vehicle V1 so as to address the risk of erroneously recognizing the traveling state of the other vehicle V2 from the detection results of the plurality of imaging devices 11. This makes it possible to suppress the impact of erroneous recognition of the traveling state of the other vehicle V2 on the traveling state of the host vehicle V1.
[実施態様の組み合わせ]
 本発明に係る運転支援方法及び運転支援装置19は、以下の実施態様(1)~(11)を含む。
[Combination of embodiments]
The driving assistance method and driving assistance device 19 according to the present invention include the following embodiments (1) to (11).
 実施態様(1):自車両V1が走行する自車線の隣接車線において、自車両V1に搭載された複数の撮像装置11の検出範囲の一部が重複している、運転支援方法において、他車両V2が、所定時間内に前記検出範囲の重複部分に進入するか否かを判定し、前記他車両V2が、前記所定時間内に前記重複部分に進入すると判定した場合は、前記複数の撮像装置11の検出結果から前記他車両V2の走行状態を誤認識するリスクに対処するように自車両V1を制御するリスク対処制御を実行する、運転支援方法。 Embodiment (1): A driving assistance method in which the detection ranges of multiple imaging devices 11 mounted on the vehicle V1 overlap in a lane adjacent to the vehicle V1's own lane, the driving assistance method determines whether another vehicle V2 will enter the overlapping portion of the detection ranges within a predetermined time, and if it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, executes risk management control to control the vehicle V1 so as to address the risk of erroneously recognizing the driving state of the other vehicle V2 from the detection results of the multiple imaging devices 11.
 実施態様(2):前記リスク対処制御が、前記他車両V2が前記重複部分に進入しないように前記自車両V1の走行を自律制御することと、前記検出結果から誤認識した前記他車両V2の走行状態が、前記自車両V1の走行状態に影響しないように前記自車両V1の走行を自律制御することと、のうち少なくとも一方を含む。  Implementation (2): The risk response control includes at least one of autonomously controlling the driving of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion, and autonomously controlling the driving of the host vehicle V1 so that the driving state of the other vehicle V2, which is erroneously recognized from the detection result, does not affect the driving state of the host vehicle V1.
 実施態様(3):前記他車両V2が前記重複部分に進入しないように前記自車両V1の走行を自律制御することは、前記他車両V2が前記検出範囲に含まれないように前記自車両V1の走行速度及び/又は前記自車両V1と前記他車両V2との車間距離を設定することと、前記自車両V1を前記隣接車線に車線変更させることと、を含み、誤認識した前記他車両V2の走行状態が、前記自車両V1の走行状態に影響しないように前記自車両V1の走行を自律制御することは、前記検出結果を用いないで前記自車両V1の走行を自律制御することと、前記検出結果の重み付けを小さく設定して前記自車両V1の走行を自律制御することと、を含む。  Implementation (3): Autonomously controlling the travel of the host vehicle V1 so that the other vehicle V2 does not enter the overlapping portion includes setting the travel speed of the host vehicle V1 and/or the distance between the host vehicle V1 and the other vehicle V2 so that the other vehicle V2 is not included in the detection range, and changing lanes of the host vehicle V1 to the adjacent lane, and autonomously controlling the travel of the host vehicle V1 so that the erroneously recognized travel state of the other vehicle V2 does not affect the travel state of the host vehicle V1 includes autonomously controlling the travel of the host vehicle V1 without using the detection result, and autonomously controlling the travel of the host vehicle V1 by setting a small weighting for the detection result.
 実施態様(4):前記他車両V2の全長D1が、前記重複部分の、前記隣接車線に沿う方向の長さD2より長いか否かを判定し、前記全長D1が、前記重複部分の前記長さD2より長いと判定した場合は、前記リスク対処制御を実行する。  Implementation example (4): Determine whether the overall length D1 of the other vehicle V2 is longer than the length D2 of the overlapping portion in the direction along the adjacent lane, and if it is determined that the overall length D1 is longer than the length D2 of the overlapping portion, execute the risk response control.
 実施態様(5):前記他車両V2が大型車両であるか否かを判定し、前記他車両V2が大型車両であると判定した場合は、前記リスク対処制御を実行する。 Implementation (5): Determine whether the other vehicle V2 is a large vehicle, and if it is determined that the other vehicle V2 is a large vehicle, execute the risk response control.
 実施態様(6):前記複数の撮像装置11のうち、前記他車両V2の全体を検出できる単一の撮像装置11が存在するか否かを判定し、前記他車両V2の全体を検出できる前記単一の撮像装置11が存在しないと判定したときは、前記リスク対処制御を実行する。 Implementation (6): Among the multiple imaging devices 11, it is determined whether or not there is a single imaging device 11 that can detect the entirety of the other vehicle V2, and when it is determined that there is no single imaging device 11 that can detect the entirety of the other vehicle V2, the risk response control is executed.
 実施態様(7):前記撮像装置11を用いて撮影した画像から各画素の輝度値を取得し、前記輝度値の総和を前記画素の総数で除して前記画像の平均輝度値を算出し、前記平均輝度値が所定値以上である場合は、前記リスク対処制御を実行する。  Implementation example (7): The luminance value of each pixel is obtained from an image captured using the imaging device 11, the sum of the luminance values is divided by the total number of pixels to calculate an average luminance value of the image, and if the average luminance value is equal to or greater than a predetermined value, the risk response control is executed.
 実施態様(8):前記他車両V2が前記自車両V1の後方を走行しているか否かを判定し、前記他車両V2が前記自車両V1の後方を走行していると判定した場合は、前記検出結果を用いずに又は前記検出結果の重み付けを小さくして前記自車両V1の走行を自律制御する。  Implementation example (8): It is determined whether the other vehicle V2 is traveling behind the host vehicle V1, and if it is determined that the other vehicle V2 is traveling behind the host vehicle V1, the traveling of the host vehicle V1 is autonomously controlled without using the detection result or by reducing the weighting of the detection result.
 実施態様(9):前記他車両V2が前記自車両V1の前方を走行しているか否かを判定し、前記他車両V2が前記自車両V1の前方を走行していると判定した場合は、前記自車両V1が前記他車両V2の追い抜き又は追い越しを行うか否かを判定し、前記自車両V1が前記追い抜き又は前記追い越しを行なわないと判定したときは、前記他車両V2が前記重複部分に含まれないように前記自車両V1の走行を自律制御する。  Implementation example (9): Determine whether the other vehicle V2 is traveling ahead of the host vehicle V1, and if it is determined that the other vehicle V2 is traveling ahead of the host vehicle V1, determine whether the host vehicle V1 will overtake or pass the other vehicle V2, and if it is determined that the host vehicle V1 will not overtake or pass the other vehicle V2, autonomously control the traveling of the host vehicle V1 so that the other vehicle V2 is not included in the overlapping portion.
 実施態様(10):前記他車両V2が前記自車両V1の前方を走行しているか否かを判定し、前記他車両V2が前記自車両V1の前方を走行していると判定した場合は、前記自車両V1が前記他車両V2の追い抜き又は追い越しを行うか否かを判定し、前記自車両V1が前記追い抜き又は前記追い越しを行なうと判定したときは、予め設定された所定の走行速度より速い走行速度を設定するとともに、前記検出結果を用いずに又は前記検出結果の重み付けを小さく設定して前記追い抜き又は前記追い越しを行う。  Implementation (10): Determine whether the other vehicle V2 is traveling ahead of the host vehicle V1, and if it is determined that the other vehicle V2 is traveling ahead of the host vehicle V1, determine whether the host vehicle V1 will overtake or pass the other vehicle V2, and if it is determined that the host vehicle V1 will overtake or pass the other vehicle V2, set a traveling speed higher than a predetermined traveling speed set in advance, and perform the overtaking or pass without using the detection result or by setting the weighting of the detection result to a small value.
 実施態様(11):自車両V1が走行する自車線の隣接車線において検出範囲の一部が重複している、自車両V1に搭載された複数の撮像装置11と、他車両V2が、所定時間内に前記検出範囲の重複部分に進入するか否かを判定する判定部22と、前記他車両V2が、前記所定時間内に前記重複部分に進入すると判定した場合は、前記複数の撮像装置11の検出結果から前記他車両V2の走行状態を誤認識するリスクに対処するように自車両V1を制御するリスク対処制御を実行する制御部23と、を備える、運転支援装置19。 Embodiment (11): A driving assistance device 19 including: a plurality of imaging devices 11 mounted on the host vehicle V1, the detection ranges of which partially overlap in a lane adjacent to the lane in which the host vehicle V1 is traveling; a determination unit 22 that determines whether another vehicle V2 will enter the overlapping portion of the detection ranges within a predetermined time; and a control unit 23 that, if it is determined that the other vehicle V2 will enter the overlapping portion within the predetermined time, executes risk response control to control the host vehicle V1 so as to address the risk of erroneously recognizing the traveling state of the other vehicle V2 from the detection results of the plurality of imaging devices 11.
 実施態様(1)は運転支援方法に係るものであり、実施態様(2)~(10)と組み合わせてもよい。具体的な組み合わせとしては以下に示すものが挙げられる。
・実施態様(1)と実施態様(2)及び(4)~(10)のうちいずれか1つとの組み合わせ
・実施態様(1)と実施態様(2)及び(4)~(10)のうちいずれか2つとの組み合わせ
・実施態様(1)と実施態様(2)及び(4)~(10)のうちいずれか3つとの組み合わせ
・実施態様(1)と実施態様(2)及び(4)~(10)のうちいずれか4つとの組み合わせ
・実施態様(1)と実施態様(2)及び(4)~(10)のうちいずれか5つとの組み合わせ
・実施態様(1)と実施態様(2)及び(4)~(10)のうちいずれか6つとの組み合わせ
・実施態様(1)と実施態様(2)及び(4)~(10)のうちいずれか7つとの組み合わせ
・実施態様(1)と実施態様(2)及び(4)~(10)の組み合わせ
・実施態様(1)~(3)と実施態様(4)~(10)のうちいずれか1つとの組み合わせ
・実施態様(1)~(3)と実施態様(4)~(10)のうちいずれか2つとの組み合わせ
・実施態様(1)~(3)と実施態様(4)~(10)のうちいずれか3つとの組み合わせ
・実施態様(1)~(3)と実施態様(4)~(10)のうちいずれか4つとの組み合わせ
・実施態様(1)~(3)と実施態様(4)~(10)のうちいずれか5つとの組み合わせ
・実施態様(1)~(3)と実施態様(4)~(10)のうちいずれか6つとの組み合わせ
・実施態様(1)~(10)の組み合わせ
The embodiment (1) relates to a driving assistance method, and may be combined with the embodiments (2) to (10). Specific combinations include the following:
・A combination of embodiment (1) with any one of embodiments (2) and (4) to (10)
・A combination of embodiment (1) with any two of embodiments (2) and (4) to (10)
・A combination of embodiment (1) with any three of embodiments (2) and (4) to (10)
・A combination of embodiment (1) with any four of embodiments (2) and (4) to (10)
・A combination of embodiment (1) with any five of embodiments (2) and (4) to (10)
・A combination of embodiment (1) with any six of embodiments (2) and (4) to (10)
・A combination of embodiment (1) with any seven of embodiments (2) and (4) to (10)
・A combination of embodiment (1) with embodiments (2) and (4) to (10)
・A combination of embodiments (1) to (3) with any one of embodiments (4) to (10)
・A combination of embodiments (1) to (3) with any two of embodiments (4) to (10)
・A combination of embodiments (1) to (3) with any three of embodiments (4) to (10)
・A combination of embodiments (1) to (3) with any four of embodiments (4) to (10)
・A combination of embodiments (1) to (3) with any five of embodiments (4) to (10)
・A combination of embodiments (1) to (3) with any six of embodiments (4) to (10)
・A combination of embodiments (1) to (10)
 実施態様(11)は運転支援装置19に係るものであり、実施態様(2)~(10)と組み合わせてもよい。具体的な組み合わせとしては以下に示すものが挙げられる。
・実施態様(11)と実施態様(2)及び(4)~(10)のうちいずれか1つとの組み合わせ
・実施態様(11)と実施態様(2)及び(4)~(10)のうちいずれか2つとの組み合わせ
・実施態様(11)と実施態様(2)及び(4)~(10)のうちいずれか3つとの組み合わせ
・実施態様(11)と実施態様(2)及び(4)~(10)のうちいずれか4つとの組み合わせ
・実施態様(11)と実施態様(2)及び(4)~(10)のうちいずれか5つとの組み合わせ
・実施態様(11)と実施態様(2)及び(4)~(10)のうちいずれか6つとの組み合わせ
・実施態様(11)と実施態様(2)及び(4)~(10)のうちいずれか7つとの組み合わせ
・実施態様(11)と実施態様(2)及び(4)~(10)の組み合わせ
・実施態様(11)及び(2)~(3)と実施態様(4)~(10)のうちいずれか1つとの組み合わせ
・実施態様(11)及び(2)~(3)と実施態様(4)~(10)のうちいずれか2つとの組み合わせ
・実施態様(11)及び(2)~(3)と実施態様(4)~(10)のうちいずれか3つとの組み合わせ
・実施態様(11)及び(2)~(3)と実施態様(4)~(10)のうちいずれか4つとの組み合わせ
・実施態様(11)及び(2)~(3)と実施態様(4)~(10)のうちいずれか5つとの組み合わせ
・実施態様(11)及び(2)~(3)と実施態様(4)~(10)のうちいずれか6つとの組み合わせ
・実施態様(2)~(11)の組み合わせ
The embodiment (11) relates to a driving support device 19, and may be combined with the embodiments (2) to (10). Specific combinations include the following:
・A combination of embodiment (11) with any one of embodiments (2) and (4) to (10)
・A combination of embodiment (11) with any two of embodiments (2) and (4) to (10)
・A combination of embodiment (11) with any three of embodiments (2) and (4) to (10)
・A combination of embodiment (11) with any four of embodiments (2) and (4) to (10)
・A combination of embodiment (11) with any five of embodiments (2) and (4) to (10)
・A combination of embodiment (11) with any six of embodiments (2) and (4) to (10)
・A combination of embodiment (11) with any seven of embodiments (2) and (4) to (10)
・A combination of embodiment (11) with embodiments (2) and (4) to (10)
・A combination of embodiments (11) and (2) to (3) with any one of embodiments (4) to (10)
・A combination of embodiments (11) and (2) to (3) with any two of embodiments (4) to (10)
・A combination of embodiments (11) and (2) to (3) with any three of embodiments (4) to (10)
・A combination of embodiments (11) and (2) to (3) with any four of embodiments (4) to (10)
・A combination of embodiments (11) and (2) to (3) with any five of embodiments (4) to (10)
・A combination of embodiments (11) and (2) to (3) with any six of embodiments (4) to (10)
・A combination of embodiments (2) to (11)
10…運転支援システム
 11…撮像装置
 12…測距装置
 13…自車状態検出装置
 14…地図情報
 15…自車位置検出装置
 16…ナビゲーション装置
 17…車両制御装置
  171…車速制御装置
  172…操舵制御装置
 18…表示装置
 19…運転支援装置
  191…CPU(プロセッサ)
  192…ROM
  193…RAM
20…支援部
 21…認識部
 22…判定部
 23…制御部
A1,A2…検出範囲
B1,B2,B3,B4…検出範囲
C1,C2,C3,C4…重複部分
D1…全長
D2,D2a…長さ
D3,D4…距離
L1,L2,L3…車線
P1,P2,P3,P4,P5,P6,P7,P8,P9…位置
T1,T2,T3…走行軌跡
V1…自車両
V2,V2x,V3…他車両
REFERENCE SIGNS LIST
10... Driving assistance system
 11... Imaging device
 12... Distance measuring device
 13... Vehicle state detection device
 14... Map information
 15... Vehicle position detection device
 16... Navigation device
 17... Vehicle control device
  171... Vehicle speed control device
  172... Steering control device
 18... Display device
 19... Driving assistance device
  191... CPU (processor)
  192... ROM
  193... RAM
20... Support unit
 21... Recognition unit
 22... Determination unit
 23... Control unit
A1, A2... Detection range
B1, B2, B3, B4... Detection range
C1, C2, C3, C4... Overlapping portion
D1... Overall length
D2, D2a... Length
D3, D4... Distance
L1, L2, L3... Lane
P1, P2, P3, P4, P5, P6, P7, P8, P9... Position
T1, T2, T3... Travel trajectory
V1... Host vehicle
V2, V2x, V3... Other vehicle

Claims (11)

  1.  自車両が走行する自車線の隣接車線において、前記自車両に搭載された複数の撮像装置の検出範囲の一部が重複している、プロセッサにより実行される運転支援方法において、
     前記プロセッサは、
     他車両が前記検出範囲の重複部分に進入するか否かを判定し、
     前記他車両が前記重複部分に進入すると判定した場合は、前記複数の撮像装置の検出結果から前記他車両の走行状態を誤認識するリスクに対処するように前記自車両を制御するリスク対処制御を実行する、運転支援方法。
    A driving assistance method executed by a processor, in which detection ranges of a plurality of image capture devices mounted on a host vehicle partially overlap in a lane adjacent to a host vehicle lane in which the host vehicle is traveling,
    The processor,
    determining whether another vehicle will enter the overlapping portion of the detection ranges;
    A driving assistance method, comprising: if it is determined that the other vehicle will enter the overlapping portion, executing risk management control to control the host vehicle so as to address the risk of erroneously recognizing the driving state of the other vehicle from the detection results of the multiple imaging devices.
  2.  前記リスク対処制御は、前記他車両が前記重複部分に進入しないように前記自車両の走行を自律制御することと、前記検出結果から誤認識した前記他車両の走行状態が、前記自車両の走行状態に影響しないように前記自車両の走行を自律制御することと、のうち少なくとも一方を含む、請求項1に記載の運転支援方法。 The driving assistance method according to claim 1, wherein the risk response control includes at least one of autonomously controlling the driving of the host vehicle so that the other vehicle does not enter the overlapping portion, and autonomously controlling the driving of the host vehicle so that the driving state of the other vehicle, which is erroneously recognized from the detection result, does not affect the driving state of the host vehicle.
  3.  前記他車両が前記重複部分に進入しないように前記自車両の走行を自律制御することは、
     前記他車両が前記検出範囲に含まれないように前記自車両の走行速度及び/又は前記自車両と前記他車両との車間距離を設定することと、
     前記自車両を前記隣接車線に車線変更させることと、を含み、
     誤認識した前記他車両の走行状態が、前記自車両の走行状態に影響しないように前記自車両の走行を自律制御することは、
     前記検出結果を用いないで前記自車両の走行を自律制御することと、
     前記検出結果の重み付けを小さく設定して前記自車両の走行を自律制御することと、を含む、請求項2に記載の運転支援方法。
    Autonomous control of the traveling of the host vehicle so that the other vehicle does not enter the overlapping portion is
    setting a travel speed of the host vehicle and/or a vehicle distance between the host vehicle and the other vehicle so that the other vehicle is not included in the detection range;
    changing the host vehicle to the adjacent lane;
    Autonomously controlling the traveling of the vehicle so that the erroneously recognized traveling state of the other vehicle does not affect the traveling state of the vehicle,
    Autonomously controlling the traveling of the host vehicle without using the detection result;
    The driving assistance method according to claim 2 , further comprising: autonomously controlling the traveling of the host vehicle by setting a small weighting for the detection result.
  4.  前記プロセッサは、
     前記他車両の全長が、前記重複部分の、前記隣接車線に沿う方向の長さより長いか否かを判定し、
     前記全長が、前記重複部分の前記長さより長いと判定した場合は、前記リスク対処制御を実行する、請求項1~3のいずれか一項に記載の運転支援方法。
    The processor,
    determining whether or not a total length of the other vehicle is longer than a length of the overlapping portion in a direction along the adjacent lane;
    The driving support method according to any one of claims 1 to 3, further comprising the step of: executing the risk treatment control when it is determined that the total length is longer than the length of the overlapping portion.
  5.  前記プロセッサは、
     前記他車両が大型車両であるか否かを判定し、
     前記他車両が大型車両であると判定した場合は、前記リスク対処制御を実行する、請求項1~4のいずれか一項に記載の運転支援方法。
    The processor,
    determining whether the other vehicle is a large vehicle;
    The driving support method according to any one of claims 1 to 4, further comprising the step of: executing the risk handling control when it is determined that the other vehicle is a large vehicle.
  6.  前記プロセッサは、
     前記他車両が前記大型車両でないと判定した場合は、前記複数の撮像装置のうち、前記他車両の全体を検出できる単一の撮像装置が存在するか否かを判定し、
     前記他車両の全体を検出できる前記単一の撮像装置が存在しないと判定したときは、前記リスク対処制御を実行する、請求項5に記載の運転支援方法。
    The processor,
    When it is determined that the other vehicle is not the large vehicle, it is determined whether or not there is a single imaging device among the plurality of imaging devices that can detect the other vehicle in its entirety;
    The driving support method according to claim 5 , further comprising the step of: executing the risk handling control when it is determined that there is no single imaging device capable of detecting the whole of the other vehicle.
  7.  前記プロセッサは、
     前記撮像装置を用いて撮影した画像から各画素の輝度値を取得し、
     前記輝度値の総和を前記画素の総数で除して前記画像の平均輝度値を算出し、
     前記平均輝度値が所定値以上である場合は、前記リスク対処制御を実行する、請求項1~6のいずれか一項に記載の運転支援方法。
    The processor,
    Acquire a luminance value of each pixel from an image captured by the imaging device;
    Calculating an average luminance value of the image by dividing the sum of the luminance values by the total number of pixels;
    The driving support method according to any one of claims 1 to 6, further comprising the step of: executing the risk handling control when the average luminance value is equal to or greater than a predetermined value.
  8.  前記プロセッサは、
     前記他車両が前記自車両の後方を走行しているか否かを判定し、
     前記他車両が前記自車両の後方を走行していると判定した場合は、前記検出結果を用いずに又は前記検出結果の重み付けを小さくして前記自車両の走行を自律制御する、請求項1~7のいずれか一項に記載の運転支援方法。
    The processor,
    determining whether the other vehicle is traveling behind the host vehicle;
    When it is determined that the other vehicle is traveling behind the vehicle, the driving assistance method according to any one of claims 1 to 7, wherein the driving of the vehicle is autonomously controlled without using the detection result or by reducing the weighting of the detection result.
  9.  前記プロセッサは、
     前記他車両が前記自車両の前方を走行しているか否かを判定し、
     前記他車両が前記自車両の前方を走行していると判定した場合は、前記自車両が前記他車両の追い抜き又は追い越しを行うか否かを判定し、
     前記自車両が前記追い抜き又は前記追い越しを行なわないと判定したときは、前記他車両が前記重複部分に含まれないように前記自車両の走行を自律制御する、請求項1~8のいずれか一項に記載の運転支援方法。
    The processor,
    determining whether the other vehicle is traveling ahead of the host vehicle;
    When it is determined that the other vehicle is traveling ahead of the host vehicle, it is determined whether the host vehicle will overtake or pass the other vehicle;
    The driving assistance method according to any one of claims 1 to 8, wherein, when it is determined that the host vehicle will not overtake or be overtaken, the driving of the host vehicle is autonomously controlled so that the other vehicle is not included in the overlapping portion.
  10.  前記プロセッサは、
     前記自車両が前記追い抜き又は前記追い越しを行なうと判定したときは、予め設定された所定の走行速度より速い走行速度を設定するとともに、前記検出結果を用いずに又は前記検出結果の重み付けを小さく設定して前記追い抜き又は前記追い越しを行う、請求項9に記載の運転支援方法。
    The processor,
    10. The driving assistance method according to claim 9, wherein, when it is determined that the host vehicle will perform the overtaking or passing, a travel speed higher than a predetermined travel speed set in advance is set, and the host vehicle performs the overtaking or passing without using the detection result or by setting a weighting of the detection result to a small value.
  11.  自車両が走行する自車線の隣接車線において検出範囲の一部が重複している、前記自車両に搭載された複数の撮像装置と、
     他車両が前記検出範囲の重複部分に進入するか否かを判定する判定部と、
     前記他車両が前記重複部分に進入すると判定した場合は、前記複数の撮像装置の検出結果から前記他車両の走行状態を誤認識するリスクに対処するように前記自車両を制御するリスク対処制御を実行する制御部と、を備える、運転支援装置。
    A plurality of imaging devices mounted on a vehicle, the imaging devices having detection ranges that partially overlap each other in lanes adjacent to a lane in which the vehicle is traveling;
    a determination unit that determines whether or not another vehicle will enter the overlapping portion of the detection ranges; and
    a control unit that, when it is determined that the other vehicle will enter the overlapping portion, executes risk management control to control the host vehicle so as to address the risk of erroneously recognizing the driving state of the other vehicle from the detection results of the multiple imaging devices.
PCT/JP2022/035669 2022-09-26 2022-09-26 Driving assistance method and driving assistance device WO2024069689A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/035669 WO2024069689A1 (en) 2022-09-26 2022-09-26 Driving assistance method and driving assistance device


Publications (1)

Publication Number Publication Date
WO2024069689A1 true WO2024069689A1 (en) 2024-04-04

Family

ID=90476573

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/035669 WO2024069689A1 (en) 2022-09-26 2022-09-26 Driving assistance method and driving assistance device

Country Status (1)

Country Link
WO (1) WO2024069689A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06281456A (en) * 1993-03-29 1994-10-07 Mitsubishi Heavy Ind Ltd Obstacle detecting device
JP2020166409A (en) * 2019-03-28 2020-10-08 株式会社デンソーテン In-vehicle device, in-vehicle system and surroundings monitoring method
JP2021154935A (en) * 2020-03-27 2021-10-07 パナソニックIpマネジメント株式会社 Vehicle simulation system, vehicle simulation method and computer program



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22960754

Country of ref document: EP

Kind code of ref document: A1