CN114987460A - Method and apparatus for blind spot assist of vehicle - Google Patents

Method and apparatus for blind spot assist of vehicle

Info

Publication number
CN114987460A
Authority
CN
China
Prior art keywords
vehicle
information
blind spot
blind
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210433836.5A
Other languages
Chinese (zh)
Inventor
李和安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mercedes Benz Group AG
Original Assignee
Mercedes Benz Group AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mercedes Benz Group AG
Priority to CN202210433836.5A (published as CN114987460A)
Publication of CN114987460A
Priority to DE102023001629.2A (published as DE102023001629A1)
Legal status: Pending

Classifications

    • B60W: Conjoint control of vehicle sub-units of different type or different function; control systems specially adapted for hybrid vehicles; road vehicle drive control systems for purposes not related to the control of a particular sub-unit
    • B60W30/18159: Traversing an intersection
    • B60W30/18145: Cornering
    • B60W30/0953: Predicting travel path or likelihood of collision, the prediction being responsive to vehicle dynamic parameters
    • B60W30/0956: Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W40/10: Estimation or calculation of non-directly measurable driving parameters related to vehicle motion
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143: Alarm means
    • B60W2050/146: Display means
    • B60W2552/53: Road markings, e.g. lane marker or crosswalk
    • B60W2556/45: External transmission of data to or from the vehicle
    • B60W2556/65: Data transmitted between vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the field of automatic guidance of vehicles. The invention provides a method for blind spot assistance for a vehicle, the method comprising the steps of: S1: acquiring surrounding environment information of the vehicle and motion state information of the vehicle; S2: determining, based on the surrounding environment information and the motion state information, whether the vehicle is in a blind spot assistance scene; S3: if the vehicle is in a blind spot assistance scene, determining field-of-view compensation information based on vehicle-to-vehicle and/or vehicle-to-infrastructure communication, said field-of-view compensation information being used to compensate for the field-of-view information that the vehicle is missing due to an obstructed view. The invention also provides an apparatus for blind spot assistance of a vehicle and a machine-readable storage medium. Because the scenes in which blind spot assistance is provided are screened by considering both the motion information and the environment information of the vehicle, the number of unnecessary information pushes and false alarms can be reduced, and a more efficient blind spot assistance function is realized.

Description

Method and apparatus for blind spot assist of vehicle
Technical Field
The invention relates to a method for blind spot assistance for a vehicle, to a device for blind spot assistance for a vehicle and to a machine-readable storage medium.
Background
With the development of the transportation industry and rising economic levels, vehicle ownership has increased steadily, and with it the number of traffic accidents. So-called "ghost probe" accidents are a frequent type of such accidents: for example, a vehicle driving straight ahead waits for a red light at an intersection next to a large vehicle on an adjacent lane, so that its view to the left and right is blocked. Once the light turns green, a dangerous situation can arise, because vehicles or vulnerable road users may still be crossing the intersection in front of the vehicle.
To address this, a method for assisting the driving of a vehicle has been proposed in the prior art, in which the visibility of the surroundings from the vehicle is evaluated on the basis of vehicle perception data and a blind spot detection function is triggered if the visibility falls below a threshold value. A perception assistance method for an autonomous vehicle is also known, in which the driving environment is perceived using vehicle sensors and a blind spot is recognized; in response to the recognition of the blind spot, an image covering the blind spot is received from an image capturing device disposed within a predetermined distance from the blind spot.
However, these solutions still have disadvantages. In particular, they consider only environmental information when analyzing the vehicle's field of view and activating blind spot detection, which may trigger the blind spot assistance mechanism too frequently and thus interfere with the driver's normal driving.
Against this background, an improved blind spot assistance strategy is desired that eliminates the safety hazards caused by blind spots in the field of view by means of a more precise triggering mechanism.
Disclosure of Invention
The present invention is directed to a method for blind spot assist of a vehicle, an apparatus for blind spot assist of a vehicle, and a machine-readable storage medium that solve at least some of the problems in the prior art.
According to a first aspect of the present invention, there is provided a method for blind spot assist of a vehicle, the method comprising the steps of:
S1: acquiring surrounding environment information of a vehicle and motion state information of the vehicle;
S2: determining, based on the surrounding environment information and the motion state information, whether the vehicle is in a blind spot assistance scene; and
S3: if the vehicle is in a blind spot assistance scene, determining field-of-view compensation information based on vehicle-to-vehicle and/or vehicle-to-infrastructure communication, said field-of-view compensation information being used to compensate for the field-of-view information that the vehicle is missing due to an obstructed view.
The invention is based in particular on the following technical concept: when identifying the scene in which the vehicle is located, not only the surrounding environment but also the motion state of the vehicle itself is considered, so that field-of-view compensation information is obtained only in the special scenes in which a blind spot seriously affects driving safety, and the driving behavior of the vehicle is not interfered with indiscriminately in ordinary situations. The number of unnecessary information pushes and false alarms is thereby significantly reduced, and the function is matched precisely to the actual need.
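For orientation, the following minimal sketch shows how the three steps could be chained in software (Python; the injected callables and all names are illustrative assumptions, not the claimed implementation):

```python
from typing import Callable, Optional

def blind_spot_assist_cycle(sense_environment: Callable[[], dict],
                            read_motion_state: Callable[[], dict],
                            is_assist_scene: Callable[[dict, dict], bool],
                            request_off_board: Callable[[], list],
                            extract_compensation: Callable[[list], dict]) -> Optional[dict]:
    """One pass of the S1-S3 pipeline; helper callables are injected for illustration."""
    environment = sense_environment()             # S1: cameras, radar, lidar, map
    motion = read_motion_state()                  # S1: speed, heading, ignition, plan
    if not is_assist_scene(environment, motion):  # S2: screen scenes first
        return None                               # normal driving: no pushes, no alarms
    off_board = request_off_board()               # S3: V2V/V2I detection data
    return extract_compensation(off_board)        # field-of-view compensation info
```

Because step S2 gates step S3 in this structure, compensation information is only ever requested in the screened scenes, which is exactly what reduces unnecessary pushes and false alarms.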
Optionally, it is determined in step S2 that the vehicle is in the blind spot assistance scene in the following cases:
the vehicle stops in front of an intersection area and/or a zebra crossing area due to the signal indication of a traffic light, and a large vehicle is present alongside the vehicle on an adjacent lane; or
the vehicle is about to drive straight across an intersection area and/or a zebra crossing area that has no traffic light, and a large vehicle is present alongside the vehicle on an adjacent lane.
The following technical advantage is achieved in particular here: although traffic lights govern the right of way in intersection areas and zebra crossing areas, red-light running by pedestrians and vehicles cannot be ruled out completely; moreover, although regulations stipulate that vehicles should slow down when approaching these areas, a vehicle whose view is obstructed may fail to recognize pedestrians crossing the road in time. By focusing the blind spot assistance strategy on these dangerous scenes, the vehicle can obtain more accurate and effective early-warning information.
Optionally, the step S3 includes:
requesting off-board detection information from other vehicles and/or infrastructure located in the blind spot assistance scene, the off-board detection information including information about the surrounding environment detected by the other vehicles and/or infrastructure; and
extracting the field-of-view compensation information from the off-board detection information.
The following technical advantage is achieved in particular here: through the Internet-of-Vehicles-based information interaction strategy, the perception range of the vehicle can be expanded with over-the-horizon information, eliminating the safety hazards caused by blind zones in the field of view.
Optionally, the other vehicles include one or more traffic objects that obscure the vehicle's field of view in a blind spot assistance scene.
The following technical advantage is achieved in particular here: compared with other surrounding vehicles, the detection data of the very vehicles causing the occlusion generally covers the environmental information in the blind zone completely, so that the content of the vehicle's blind zone can be restored better, improving the efficiency and reliability of blind spot detection.
Optionally, the method further comprises the steps of:
displaying, on a display unit of the vehicle, a surround view image of the vehicle surroundings created by means of the vehicle's own detection data; and
superimposing the field-of-view compensation information on the surround view image in the form of a real image and/or in the form of virtual reality.
The following technical advantage is achieved in particular here: by supplementing the blind zone image in the vehicle's original surround view image, the driver can grasp the traffic situation in the occluded area more intuitively and thus make decisions more accurately, which enhances driving safety.
Optionally, the method further comprises the steps of:
predicting, from the field-of-view compensation information, a potential motion trajectory of an occluded object located in a blind zone of the vehicle's field of view;
predicting a potential travel trajectory of the vehicle under the assumption of a determined motion state of the vehicle; and
controlling the vehicle's continued travel in the intersection area and/or the zebra crossing area according to the potential motion trajectory of the occluded object and the potential travel trajectory of the vehicle.
The following technical advantage is achieved in particular here: even if the driver fails to react to the blind spot detection result in time, the vehicle can still be intervened upon to avoid traffic accidents, which improves the safety of the overall scheme.
Optionally, in a case where there is a spatiotemporal intersection between the potential motion trajectory of the occluded object and the potential travel trajectory of the vehicle, controlling the vehicle to continue traveling includes:
in the event that the vehicle is about to resume starting due to a signal indication of a traffic light, controlling the vehicle to start at a speed below a preset threshold or controlling the vehicle to temporarily remain in a stopped state until the occluded object is no longer in the potential travel trajectory of the vehicle; and/or
in the event that the vehicle is about to cross an intersection area and/or a zebra crossing area, controlling the vehicle to decelerate and stop until the occluded object is no longer in the potential travel trajectory of the vehicle.
The following technical advantage is achieved in particular here: by adopting a control strategy tailored to the scene, traffic accidents caused by blind zones can be prevented effectively, and system safety is improved.
Optionally, the method further comprises the steps of:
displaying, on a display unit of the vehicle, only occluded objects for which a risk of collision with the vehicle is higher than a threshold; and/or
changing the display form of the occluded object on the display unit and/or issuing an acoustic alarm to the driver when the risk of collision between the occluded object and the vehicle is above a threshold.
The following technical advantage is achieved in particular here: by differentiating how occluded objects in the blind zone are displayed according to their risk level, the driver or passengers can be warned more conspicuously, further improving driving safety.
According to a second aspect of the present invention, there is provided an apparatus for blind spot assistance of a vehicle, the apparatus being for performing the method according to the first aspect of the present invention, the apparatus comprising:
an acquisition module configured to be able to acquire surrounding environment information of a vehicle and motion state information of the vehicle;
a determination module configured to determine whether a vehicle is in a blind spot assistance scene based on ambient environment information and motion state information; and
an evaluation module configured to determine, if the vehicle is in a blind spot assistance scene, field-of-view compensation information based on vehicle-to-vehicle communication and/or vehicle-to-infrastructure communication, the field-of-view compensation information being used to compensate for the field-of-view information that the vehicle is missing due to an obstructed view.
According to a third aspect of the present invention, there is provided a machine-readable storage medium having stored thereon a computer program for performing the method according to the first aspect of the present invention when run on a computer.
Drawings
The principles, features and advantages of the present invention will be better understood by describing the invention in more detail below with reference to the accompanying drawings. The drawings comprise:
FIG. 1 shows a block diagram of an apparatus for blind spot assist of a vehicle according to an exemplary embodiment of the present invention;
FIG. 2 shows a flowchart of a method for blind spot assist of a vehicle according to an exemplary embodiment of the present invention;
FIG. 3 shows a flow chart of one method step in FIG. 2;
FIGS. 4a and 4b show schematic diagrams of the use of the method according to the invention in an exemplary application scenario; and
fig. 5 shows a schematic representation of the use of the method according to the invention in a further exemplary application scenario.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and exemplary embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the scope of the invention.
Fig. 1 shows a block diagram of an apparatus for blind spot assist of a vehicle according to an exemplary embodiment of the present invention.
As shown in fig. 1, a vehicle 100 comprises a device 1 according to the invention. The vehicle 100 further includes, for example, a panoramic visual perception system comprising a front view camera 11, a left view camera 12, a rear view camera 13 and a right view camera 14, as well as a radar sensor 15, a lidar sensor 16, a communication interface 17 and a motion state sensor 18. With these on-board environment sensors, the vehicle 100 can perform various functions such as back-up assistance, obstacle detection and road structure recognition, for example to support partially or fully autonomous driving. The Internet-of-Vehicles-based communication interface 17 can receive traffic information from other traffic participants, infrastructure and/or road supervision platforms, and can also share the traffic information collected by the vehicle 100 with other traffic participants. It should be noted here that the vehicle 100 may include sensors of other types and numbers besides those shown in fig. 1; the invention is not particularly limited in this respect.
In order to provide blind spot assistance for the vehicle 100, the device 1 comprises, for example, an acquisition module 10, a determination module 20 and an evaluation module 30, which are communicatively connected to one another.
The acquisition module 10 is configured to acquire surrounding environment information of the vehicle 100 and motion state information of the vehicle 100. For this purpose, the acquisition module 10 may, for example, be connected to the panoramic visual perception system of the vehicle 100 in order to receive images of the road ahead of and/or beside the vehicle 100; these images are then analyzed in the acquisition module 10 by means of trained object classifiers and/or artificial neural networks, and the following information is extracted: whether the traffic scene in which the vehicle 100 is located is an intersection area, whether a zebra crossing area lies ahead, whether a large vehicle is present nearby, which signal phase the traffic light is in, and so on. Furthermore, the acquisition module 10 is also connected, for example, to the radar sensor 15 and the lidar sensor 16 of the vehicle 100, so that the image analysis results can be verified or supplemented by means of sensor fusion techniques. To obtain the surrounding environment information, the acquisition module 10 may also be connected to a positioning/navigation unit (not shown) in order to obtain the position of the vehicle 100 on a map and, in combination with the road information stored in the map, to determine whether the vehicle 100 is currently located in an intersection area or whether the surrounding road elements include a zebra crossing. In order to determine the motion information of the vehicle 100, the acquisition module 10 is connected, for example, to a motion state sensor 18, which is configured, for example, as a wheel speed sensor, an acceleration sensor or an inertial sensor; by means of the motion state sensor 18, information such as the speed, acceleration, direction of motion and geographical position of the vehicle 100 itself can be obtained.
The determination module 20 receives the surrounding environment information and the motion state information of the vehicle 100 from the acquisition module 10 and determines from this information whether the vehicle 100 is in a blind spot assistance scene. To this end, the determination module 20 classifies the scene in which the vehicle is located, for example by considering environmental factors and the speed range of the vehicle 100, and checks whether the current scene matches one or more pre-stored preset scenes.
The determination module 20 is connected to the evaluation module 30 in order to provide it with the result of the scene analysis. If it is confirmed that the vehicle 100 is in the blind spot assistance scene, the field-of-view compensation information is determined in the evaluation module 30 based on vehicle-to-vehicle communication and/or vehicle-to-infrastructure communication. For this purpose, the evaluation module 30 is connected, for example, to the communication interface 17, which makes it possible to request their respective detection data from the other traffic participants and the infrastructure surrounding the vehicle 100; these detection data can then be used by the evaluation module 30 to compensate for the missing parts of the vehicle's own field of view. As shown in fig. 1, the evaluation module 30 is also connected to a head-up display (HUD) 19 of the vehicle 100, so that not only can a surround view image generated by means of the vehicle's own panoramic visual perception system be projected, but an image of the blind zone can also be superimposed on it, or moving objects in the blind zone can be marked. Furthermore, the evaluation module 30 can also be connected, for example, to the lateral and longitudinal guidance mechanisms 41, 42 of the vehicle 100 in order to be able to control the guidance of the vehicle 100 in a blind spot assistance scene.
Fig. 2 shows a flowchart of a method for blind spot assistance for a vehicle according to an exemplary embodiment of the invention. The method exemplarily comprises the steps S1-S4 and may be implemented, for example, using the device 1 shown in fig. 1.
In step S1, the surrounding environment information of the vehicle and the motion state information of the vehicle are acquired.
Here, the motion state information of the vehicle includes data reflecting the current motion of the vehicle, such as its speed, acceleration, ignition state, direction of motion, position and steering angle. In addition, the motion state information may also comprise the driving intention or planned trajectory of the vehicle, which can be read from a navigation/positioning unit and includes, for example, whether the vehicle intends to drive straight through or turn at the intersection area.
The surrounding environment information of the vehicle includes, for example: road elements in the surrounding environment (e.g. traffic lights, zebra crossings), the geometry and the class of other traffic objects in the surrounding environment, the traffic scene in which the vehicle is located (intersection area, overpass, school area).
In step S2, it is determined whether the vehicle is in the blind spot assist scene based on the surrounding environment information and the motion state information.
In the sense of the present invention, a blind spot assistance scene does not refer solely to a scene in which a blind spot is present in the driver's field of view or in the vehicle's perception range, but rather is understood in particular to be a filtered scene: in this scenario, the presence of blind spots poses a particularly high risk to the driving safety of the vehicle due to the particularity of the motion state of the vehicle itself and the surrounding traffic patterns.
As an example, it is determined that the vehicle is in a blind spot assistance scene when:
- the vehicle stops in front of the intersection area and/or zebra crossing area due to the signal indication of the traffic light, and a large vehicle is present alongside the vehicle on an adjacent lane; or
- the vehicle is about to drive straight across an intersection area and/or zebra crossing area that has no traffic light, and a large vehicle is present alongside the vehicle on an adjacent lane.
In the first scenario, the vehicle is stationary or rolling at low speed, which can be determined on the one hand by comparing the vehicle speed with a predetermined threshold value (e.g. 10 km/h) and on the other hand by checking the ignition status of the vehicle. In an intersection area or a zebra crossing area there are usually many pedestrians or cyclists crossing the road, and if the vehicle's view is blocked in this situation, it is difficult to infer the traffic situation on the side road from the dynamically changing gaps between vehicles, because the vehicle's own position is fixed. If the vehicle then suddenly starts from standstill, a serious accident may result.
In the second scenario, the vehicle is driving straight through an intersection or zebra crossing area that has no traffic light. Since no traffic light assigns the right of way, the vehicle is required to observe pedestrian movement at the roadside attentively and to yield in time when necessary. If blind zones arise on the left and right sides of the vehicle in this situation, the vehicle and a pedestrian crossing the road cannot see each other, and a safety accident can easily result.
In this step, whether a nearby vehicle is a large vehicle can be determined, for example, from its length and width in the horizontal direction or its extent in the vertical direction. Specifically, an extracted candidate contour can be compared with a pre-stored reference contour, for example by means of a trained object classifier or an artificial neural network; if the degree of agreement between the two is sufficient, the vehicle concerned is determined to be a large vehicle.
The criteria and conditions of the blind spot assistance scenes mentioned here can be stored in advance as reference data locally in the vehicle or on a cloud server, for example, so that they can be called up or read whenever a check is performed. In addition to the two exemplary blind spot assistance scenes listed above, the corresponding blind spot assistance function can also be switched on or off manually by the driver as needed. Moreover, the blind spot assistance function can be switched off automatically when the vehicle leaves the predefined blind spot assistance scene.
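A minimal sketch of such a scene check is given below (Python; the threshold value, field names and data structures are illustrative assumptions derived from the two scenes described above):

```python
from dataclasses import dataclass

STANDSTILL_SPEED_KMH = 10.0  # assumed threshold, cf. the 10 km/h example above

@dataclass
class MotionState:
    speed_kmh: float
    stopped_by_signal: bool    # halted due to a red light
    driving_straight: bool     # planned trajectory continues straight ahead

@dataclass
class Environment:
    at_intersection_or_crosswalk: bool
    has_traffic_light: bool
    large_vehicle_alongside: bool  # large vehicle detected on an adjacent lane

def in_blind_spot_assistance_scene(env: Environment, motion: MotionState) -> bool:
    """Step S2: both environment and motion state must match one of the two scenes."""
    if not (env.at_intersection_or_crosswalk and env.large_vehicle_alongside):
        return False
    # Scene 1: stopped (or barely rolling) in front of the area due to a signal
    stopped_at_light = (env.has_traffic_light and motion.stopped_by_signal
                        and motion.speed_kmh < STANDSTILL_SPEED_KMH)
    # Scene 2: about to drive straight across an unsignalized area
    crossing_unsignalized = (not env.has_traffic_light and motion.driving_straight)
    return stopped_at_light or crossing_unsignalized
```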
If it is determined that the vehicle is not in the blind spot assistance scene, the corresponding blind spot assistance function is not provided in step S3', i.e. no field-of-view compensation information is sought and no intervention is made in the normal driving of the vehicle.
If it is determined that the vehicle is in the blind spot assistance scene, field-of-view compensation information is determined in step S3 based on vehicle-to-vehicle communication and/or vehicle-to-infrastructure communication; it serves to compensate for the field-of-view information that the vehicle is missing due to its obstructed view.
The field-of-view compensation information can take diverse forms and formats, including, for example: an image of the vehicle's blind zone, or the category, position, speed and/or potential motion trajectory of an occluded object located in the blind zone.
If the vehicle is guided at least partially automatically, the currently planned driving behavior of the vehicle can be adjusted by means of the field-of-view compensation information, for example with respect to its navigation trajectory, lateral guidance and longitudinal guidance.
If the vehicle is guided manually, the field-of-view compensation information can be provided to the driver acoustically and/or optically as a warning, so that the driver can make reasonable driving decisions taking into account the supplementary traffic information from the blind zones.
Next, in optional step S4, a surround view image of the vehicle surroundings created by means of the vehicle's own detection data is first displayed on the display unit of the vehicle, and the field-of-view compensation information is then superimposed on this surround view image in the form of a real image and/or in the form of virtual reality.
In this step, a three-dimensional projection of the vehicle's surroundings can be provided, for example, by means of the vehicle's own panoramic visual perception system and displayed on the dashboard screen or HUD of the vehicle. In the original surround view image, the traffic situation behind a large vehicle parked alongside cannot be displayed. As one example, left and right blind zone images of the vehicle can be generated from the field-of-view compensation information and presented directly on the display unit in an auxiliary window. As another example, the actual blind zone image can be perspective-transformed or virtualized and then overlaid on the portion of the original surround view image that is occluded by an adjacent vehicle or building, so that the driver perceives the content of the occluded area from a perspective corresponding to his or her actual view.
Furthermore, occluded objects in the blind zone can also be marked in different ways in this step according to their collision risk level. As one example, only occluded objects whose collision risk with the vehicle is above a threshold are displayed on the display unit of the vehicle, so that the number of displayed moving objects is reduced and the driver is not disturbed unnecessarily. As another example, the display form of the occluded object on the display unit can be changed (e.g. enclosing a dangerous object with a bounding box or displaying a warning sign near it) and/or an acoustic warning can be issued to the driver when the collision risk of the occluded object with the vehicle is above a threshold.
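As a rough illustration of this overlay and marking step, the sketch below warps a received blind zone image into the ego surround view and highlights only high-risk objects (OpenCV-based Python; the homography corner points, risk values and drawing parameters are illustrative assumptions):

```python
import cv2
import numpy as np

def overlay_blind_zone(surround_img, blind_img, src_quad, dst_quad, alpha=0.6):
    """Perspective-transform the blind zone image onto the occluded region."""
    H, _ = cv2.findHomography(np.float32(src_quad), np.float32(dst_quad))
    h, w = surround_img.shape[:2]
    warped = cv2.warpPerspective(blind_img, H, (w, h))
    mask = (warped.sum(axis=2) > 0)[:, :, None]   # pixels covered by the warp
    blend = (alpha * warped + (1.0 - alpha) * surround_img).astype(np.uint8)
    return np.where(mask, blend, surround_img)

def mark_risky_objects(img, objects, risk_threshold=0.5):
    """Draw bounding boxes only around occluded objects above the risk threshold."""
    for obj in objects:  # obj: {"bbox": (x, y, w, h), "risk": float in [0, 1]}
        if obj["risk"] > risk_threshold:
            x, y, w, h = obj["bbox"]
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(img, "WARNING", (x, max(y - 5, 10)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return img
```

Alpha-blending rather than hard replacement keeps the original surround view visible underneath, which matches the idea that the compensation is supplementary information rather than a substitute view.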
Fig. 3 shows a flow chart of one method step in fig. 2. In the exemplary embodiment, method step S3 in FIG. 2 includes, for example, steps S31-S38. This example is based on the following assumptions: the vehicle stops in front of the stop line of the intersection area, for example, due to a traffic light.
In step S31, off-board detection information is requested from another vehicle located in the blind spot assistance scene that obstructs the view of the vehicle. Here, "other vehicles" refers in particular to traffic objects on lanes adjacent to the vehicle which obstruct its view, including for example the above-mentioned "large vehicles".
In step S32, off-board detection information is requested from the infrastructure located in the blind spot assistance scene. Here, the "infrastructure" includes, for example, signboards, street lamps and traffic lights around the vehicle that have environment-sensing and information-interaction functions.
"off-board detection information" is understood to be information about the surroundings detected by other vehicles and/or infrastructure. Depending on the type and number of sensors that the other vehicles and infrastructure have, the off-board detection information is present, for example, in the form of image data, video data, point cloud data, ultrasound data. Furthermore, the off-board detection data may further comprise, if the other vehicles and the infrastructure further comprise respective signal processing units: the position, velocity (as a vector, i.e. including absolute value and direction), direction of motion, acceleration and category information of the occluded object in the blind zone (passenger car, minibus, truck, bus, pedestrian, two-wheel vehicle).
In step S33, the field-of-view compensation information is extracted from the off-board detection information, and occluded objects in the blind zone are identified and continuously monitored on this basis. If the received off-board detection data exists in the form of images or video streams, these can be analyzed in order to identify moving objects and track them. If the received off-board detection data consists of characteristic information about the occluded objects, their motion pattern can be analyzed directly from that information.
In step S34, it is checked whether the traffic signal light changes from red to green.
If the traffic light does not yet indicate that passing is permitted, the method can remain in this step and continue this check.
If the traffic light indicates or is about to indicate that passing is permitted, the potential motion trajectory of the occluded object can be predicted in step S35 using the data observed within the 2 seconds before the traffic light turns green. By considering the motion state within this time window, the behavior of the occluded object can be estimated more accurately.
Next, in step S36, a potential travel trajectory of the vehicle can be predicted under the assumption of a determined motion state. For example, it can be assumed that the vehicle starts with a constant speed or constant acceleration, from which its potential travel trajectory over a defined future time period can be calculated.
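A constant-acceleration rollout of the kind described here can be written compactly (Python/NumPy; the horizon and sampling step are illustrative assumptions):

```python
import numpy as np

def predict_trajectory(p0, v0, a=(0.0, 0.0), horizon_s=4.0, dt=0.1):
    """Sample positions p(t) = p0 + v0*t + 0.5*a*t^2 every dt over the horizon."""
    t = np.arange(0.0, horizon_s + dt, dt)[:, None]               # (N, 1) time stamps
    p0, v0, a = (np.asarray(x, dtype=float) for x in (p0, v0, a))
    positions = p0 + v0 * t + 0.5 * a * t**2                      # (N, 2) x/y positions
    return t.ravel(), positions
```

The same function can serve both predictions: applied to the occluded object's observed state (step S35) and to the ego vehicle under the assumed start (step S36), so that both trajectories share one time grid.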
In step S37, it is checked whether there is a spatiotemporal intersection between the motion trajectory of the occluded object and the potential travel trajectory of the vehicle. Here, "spatiotemporal intersection" is understood to mean not merely that the travel trajectory of the vehicle and the motion trajectory of the other traffic object intersect or overlap in space, but also that the vehicle and the other traffic object reach the intersection point at the same time.
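Using the rollouts from the previous sketch, the spatiotemporal check then reduces to asking whether both trajectories come close at the same sampled instant (the 2 m safety radius is an illustrative assumption):

```python
import numpy as np

def spatiotemporal_intersection(ego_traj, obj_traj, safety_radius_m=2.0):
    """True if ego and object are within safety_radius at the same time step.

    Both trajectories are (N, 2) position arrays sampled on the same time grid,
    so index i means "same instant": spatial overlap alone is not enough.
    """
    dists = np.linalg.norm(ego_traj - obj_traj, axis=1)
    return bool(np.any(dists < safety_radius_m))
```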
If no such collision risk is detected, the check in step S37 can be maintained and performed continuously, while the vehicle is allowed to start normally.
If a spatiotemporal intersection is found, the vehicle can be controlled in step S38 to start at a speed below a preset threshold or to remain temporarily in a stopped state until the occluded object is no longer in the potential travel trajectory of the vehicle.
In one embodiment, not shown, the vehicle can also be controlled to decelerate and stop when it is about to cross an intersection area and/or zebra crossing area, until the occluded object is no longer in the vehicle's potential travel trajectory.
Fig. 4a and 4b show schematic diagrams of the use of the method according to the invention in an exemplary application scenario.
In the scenario shown in fig. 4a, the vehicle 100 is traveling on a first road 501 and is about to reach the intersection 500. Here, the vehicle 100 intends, for example, to drive straight through the intersection 500 and will therefore continue on a second road 502 after passing it.
A traffic light 510 is arranged at this intersection 500; in the scene shown in fig. 4a it is currently red (no straight-ahead passage). In response to this signal indication, the vehicle 100 and the other vehicles 200, 300 on its adjacent lanes each stop at the stop line in front of the intersection 500.
In order to implement the blind spot assistance function for the vehicle 100 in the intersection area 500, it must first be determined, from the surrounding environment information and the motion state information of the vehicle 100, whether the vehicle 100 is in the blind spot assistance scene. To this end, the vehicle 100 detects the surrounding road environment by means of an on-board camera having the field of view 110 and recognizes the following information: the vehicle 100 is currently in the intersection area 500, the zebra crossing 330 is located a short distance in front of the vehicle 100, and the large truck 200 and van 300 are parked on the adjacent lanes to the right and left of the vehicle 100, respectively. At the same time, the vehicle 100 recognizes by means of the motion state sensor that it is stationary. In this case it can then be determined that the vehicle 100 is in the predefined blind spot assistance scene.
In this scenario, a pedestrian 401 and a cyclist 402 crossing the road are occluded by the large vehicles 200, 300 on the lanes adjacent to the vehicle 100, and these moving objects 401, 402 are therefore invisible to the vehicle 100.
In the scenario shown in fig. 4b, the vehicle 100 is traveling straight in a certain direction and is about to reach the zebra crossing area 503. Such a zebra crossing area 503 shown in fig. 4b is only for non-motor vehicles to pass in a direction transverse to the main road, and no traffic signal light is provided in such a zebra crossing area 503 because the traffic flow of the main road is not high in this example. Generally, the vehicle 100 is required to decelerate and let go before passing through the zebra crossing region 503, and the vehicle can normally pass through without a pedestrian.
As can be seen from fig. 4b, the vehicle 100 is about to arrive at the zebra crossing area 503 while the truck 200 occupies the adjacent lane to its right. Because of the truck's length, the driver's line of sight 110 of the vehicle 100 is blocked by the truck 200, and because of the truck's height, the driver of the vehicle 100 cannot see over it to observe the pedestrian 401 crossing the road.
In this case too, it can be determined from the surrounding environment information and the motion state information of the vehicle 100 that the vehicle 100 is in the blind spot assistance scene.
Fig. 5 shows a schematic representation of the use of the method according to the invention in another exemplary application scenario. With reference to fig. 5, it is described how blind spot assistance is achieved with the aid of vehicle networking communication technology when the vehicle 100 is located in the blind spot assistance scene shown in fig. 4a.
As shown in fig. 5, the vehicle 100 requests from the neighboring vehicles 200, 300 their respective captured images of the surroundings. For example, the truck 200 on the right adjacent lane can be asked for the images taken by its front-view and right-view cameras, and the van 300 on the left adjacent lane for the images taken by its front-view and left-view cameras. Fig. 5 shows exemplary detection ranges 210, 220 of the other vehicles 200, 300 that are relevant for the blind spot assistance of the vehicle 100. Further, the vehicle 100 requests from a plurality of cameras 521, 522, 523 arranged at the intersection 500 the images of the road environment that they capture; the detection range 520 of one camera 521 is shown by way of example in fig. 5. After the vehicle 100 has received these image data, they can be analyzed in order to identify the speed, position and category of the occluded objects 401, 402 located in the blind zone.
In addition to requesting image data, the vehicle 100 can, for example, also request motion information and category information about moving objects in the surroundings directly from the other vehicles 200, 300 and the infrastructure 521 in order to complete the blind spot assistance more efficiently.
In this example, the traffic light 510 is about to turn green, but the pedestrian 401 and the cyclist 402 have not yet reached the other side of the road, so a collision could occur if the vehicle 100 started immediately. In this case, the collision risk can be estimated, for example, from the predicted motion trajectories of the pedestrian 401 and the cyclist 402 and the potential travel trajectory of the vehicle 100 itself; when the collision risk is above a threshold, these moving objects 401, 402 can be highlighted in the surround view image already displayed in the vehicle 100, or the driver of the vehicle 100 can additionally be warned by an acoustic signal.
Although specific embodiments of the invention have been described herein in detail, they have been presented for purposes of illustration only and are not to be construed as limiting the scope of the invention. Various substitutions, alterations, and modifications may be conceived of without departing from the spirit and scope of the invention.

Claims (10)

1. A method for blind spot assistance for a vehicle (100), the method comprising the steps of:
S1: acquiring surrounding environment information of the vehicle (100) and motion state information of the vehicle (100);
S2: determining, based on the surrounding environment information and the motion state information, whether the vehicle (100) is in a blind spot assistance scene; and
S3: if the vehicle (100) is in a blind spot assistance scene, determining field-of-view compensation information based on vehicle-to-vehicle and/or vehicle-to-infrastructure communication, said field-of-view compensation information being used to compensate for the field-of-view information that the vehicle (100) is missing due to an obstructed view.
2. The method according to claim 1, wherein it is determined in step S2 that the vehicle (100) is in a blind spot assistance scene if:
the vehicle (100) stops in front of an intersection area (500) and/or a zebra crossing area (503) due to a signal indication of a traffic light (510), and a large vehicle is present alongside the vehicle (100) on an adjacent lane; or
the vehicle (100) is about to drive straight across an intersection area (500) and/or a zebra crossing area (503) that has no traffic light (510), and a large vehicle is present alongside the vehicle (100) on an adjacent lane.
3. The method according to claim 1 or 2, wherein the step S3 comprises:
requesting off-board detection information from other vehicles (200, 300) and/or infrastructure (521) located in the blind spot assistance scene, the off-board detection information including information about the surroundings detected by the other vehicles (200, 300) and/or infrastructure (521); and
extracting the field-of-view compensation information from the off-board detection information.
4. The method of claim 3, wherein the other vehicles (200, 300) include one or more traffic objects that obstruct a field of view of the vehicle (100) in a blind spot assistance scene.
5. The method according to any one of claims 1 to 4, wherein the method further comprises the steps of:
displaying, on a display unit (19) of the vehicle (100), a surround view image of the surroundings of the vehicle (100) created by means of the vehicle's (100) own detection data; and
superimposing the field-of-view compensation information on the surround view image in the form of a real image and/or in the form of virtual reality.
6. The method according to any one of claims 1 to 5, wherein the method further comprises the steps of:
predicting a potential motion trajectory of an occluded object (401, 402) located in a field of view blind zone of the vehicle (100) from the field of view compensation information;
predicting a potential travel trajectory of the vehicle (100) on the assumption of a determined state of motion of the vehicle (100); and
controlling the vehicle (100) to continue to travel in the intersection area (500) and/or the zebra crossing area (503) according to the potential motion trajectory of the occluded object (401, 402) and the potential travel trajectory of the vehicle (100).
7. The method according to claim 6, wherein controlling the continued travel of the vehicle (100) in case of a spatiotemporal intersection of the potential motion trajectory of the occluded object (401, 402) and the potential travel trajectory of the vehicle (100) comprises:
in case the vehicle (100) is about to resume starting due to a signal indication of a traffic light (510), controlling the vehicle (100) to start at a speed below a preset threshold or controlling the vehicle (100) to temporarily remain in a stopped state until the occluded object (401, 402) is no longer in the potential travel trajectory of the vehicle (100); and/or
in case the vehicle (100) is about to cross an intersection area (500) and/or a zebra crossing area (503), controlling the vehicle (100) to decelerate and stop until the occluded object (401, 402) is no longer in the potential travel trajectory of the vehicle (100).
8. The method according to any one of claims 1 to 7, wherein the method further comprises the steps of:
displaying on a display unit (19) of the vehicle (100) only occluded objects (401, 402) having a risk of collision with the vehicle (100) above a threshold; and/or
changing the display form of the occluded object (401, 402) on the display unit (19) and/or issuing an acoustic warning to the driver when the risk of collision of the occluded object (401, 402) with the vehicle (100) is above a threshold.
9. An apparatus (1) for blind spot assistance for a vehicle (100), the apparatus (1) being configured to perform the method according to any one of claims 1 to 8, the apparatus (1) comprising:
an acquisition module (10), the acquisition module (10) being configured to be able to acquire surrounding environment information of a vehicle (100) and motion state information of the vehicle (100);
a determination module (20), the determination module (20) being configured to be able to determine whether the vehicle (100) is in a blind spot assistance scene from the ambient environment information and the motion state information; and
an evaluation module (30), wherein the evaluation module (30) is configured to be able to determine, if the vehicle (100) is in a blind spot assistance scene, field-of-view compensation information based on vehicle-to-vehicle communication and/or vehicle-to-infrastructure communication, the field-of-view compensation information being used to compensate for the field-of-view information that the vehicle (100) is missing due to an obstructed view.
10. A machine-readable storage medium on which a computer program is stored which, when run on a computer, performs the method according to any one of claims 1 to 8.
CN202210433836.5A 2022-04-24 2022-04-24 Method and apparatus for blind spot assist of vehicle Pending CN114987460A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210433836.5A CN114987460A (en) 2022-04-24 2022-04-24 Method and apparatus for blind spot assist of vehicle
DE102023001629.2A DE102023001629A1 (en) 2022-04-24 2023-04-24 Method and device for blind spot assistance in a vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210433836.5A CN114987460A (en) 2022-04-24 2022-04-24 Method and apparatus for blind spot assist of vehicle

Publications (1)

Publication Number Publication Date
CN114987460A 2022-09-02

Family

ID=83026072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210433836.5A Pending CN114987460A (en) 2022-04-24 2022-04-24 Method and apparatus for blind spot assist of vehicle

Country Status (2)

Country Link
CN (1) CN114987460A (en)
DE (1) DE102023001629A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024178741A1 (en) * 2023-03-02 2024-09-06 Hong Kong Applied Science and Technology Research Institute Company Limited System and method for road monitoring
CN116321072A (en) * 2023-03-13 2023-06-23 阿里云计算有限公司 Data compensation method and device based on perception failure
CN116321072B (en) * 2023-03-13 2024-01-23 阿里云计算有限公司 Data compensation method and device based on perception failure
CN116923288A (en) * 2023-06-26 2023-10-24 桂林电子科技大学 Intelligent detection system and method for driving ghost probe
CN116923288B (en) * 2023-06-26 2024-04-12 桂林电子科技大学 Intelligent detection system and method for driving ghost probe
CN116798272A (en) * 2023-08-23 2023-09-22 威海爱思特传感技术有限公司 Road crossroad blind area vehicle early warning system and method based on vehicle communication
CN116798272B (en) * 2023-08-23 2023-11-28 威海爱思特传感技术有限公司 Road crossroad blind area vehicle early warning system and method based on vehicle communication
CN117218619A (en) * 2023-11-07 2023-12-12 安徽中科星驰自动驾驶技术有限公司 Lane recognition method and system for automatic driving vehicle

Also Published As

Publication number Publication date
DE102023001629A1 (en) 2023-10-26

Similar Documents

Publication Publication Date Title
US11938967B2 (en) Preparing autonomous vehicles for turns
US11155249B2 (en) Systems and methods for causing a vehicle response based on traffic light detection
CN109427199B (en) Augmented reality method and device for driving assistance
US9723243B2 (en) User interface method for terminal for vehicle and apparatus thereof
CN114987460A (en) Method and apparatus for blind spot assist of vehicle
JP4967015B2 (en) Safe driving support device
US9507345B2 (en) Vehicle control system and method
WO2016186039A1 (en) Automobile periphery information display system
KR102000929B1 (en) Mirror replacement system for a vehicle
WO2020201796A1 (en) Vehicle control method and vehicle control device
CN109733283B (en) AR-based shielded barrier recognition early warning system and recognition early warning method
CN112771592B (en) Method for warning a driver of a motor vehicle, control device and motor vehicle
US20190286125A1 (en) Transportation equipment and traveling control method therefor
JP2020093766A (en) Vehicle control device, control system and control program
CN113165649A (en) Controlling vehicle to turn through multiple lanes
JP7409265B2 (en) In-vehicle display device, method and program
CN114537398A (en) Method and device for assisting a vehicle in driving at an intersection
JP2010012904A (en) Drive assist device
WO2022162909A1 (en) Display control device and display control method
CN116935695A (en) Collision warning system for a motor vehicle with an augmented reality head-up display
JP7359099B2 (en) Mobile object interference detection device, mobile object interference detection system, and mobile object interference detection program
CN115131749A (en) Image processing apparatus, image processing method, and computer-readable storage medium
CN114523905A (en) System and method for displaying detection and track prediction of targets around vehicle
WO2023194793A1 (en) Information providing device and information providing method
JP7432198B2 (en) Situation awareness estimation system and driving support system

Legal Events

Date Code Title Description
PB01 Publication