CN108422932B - Driving assistance system, method and vehicle - Google Patents

Driving assistance system, method and vehicle

Info

Publication number
CN108422932B
CN108422932B (application CN201710081070.8A)
Authority
CN
China
Prior art keywords
vehicle
visual representation
road
sensor
driving assistance
Prior art date
Legal status
Active
Application number
CN201710081070.8A
Other languages
Chinese (zh)
Other versions
CN108422932A (en)
Inventor
王丛
Current Assignee
Audi AG
Original Assignee
Audi AG
Priority date
Filing date
Publication date
Application filed by Audi AG
Priority to CN201710081070.8A
Publication of CN108422932A
Application granted
Publication of CN108422932B
Legal status: Active
Anticipated expiration


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10: Details characterised by the type of camera system used
    • B60R2300/105: Details characterised by the type of camera system used, using multiple cameras
    • B60R2300/80: Details characterised by the intended use of the viewing arrangement
    • B60R2300/802: Details characterised by the intended use of the viewing arrangement, for monitoring and displaying vehicle exterior blind spot views

Abstract

The present disclosure provides a driving assistance system, a driving assistance method, and a vehicle. The driving assistance system includes: one or more sensors configured to collect sensor data reflecting road conditions; a processing system configured to generate a first visual representation of road conditions within a blind area for the vehicle at a current location, based at least in part on sensor data collected by the sensors at a previous location of the vehicle; and a display device configured to present the first visual representation. This scheme helps guide driving operations and avoid scraping against the road surface in the blind area.

Description

Driving assistance system, method and vehicle
Technical Field
The present disclosure relates to the field of vehicles, and more particularly, to a driving assistance system, method, and vehicle equipped with the driving assistance system.
Background
Safe driving of the vehicle relies on the driver's observation and estimation of the surroundings. Based on the knowledge of the road condition, the driver determines and performs an appropriate driving operation.
However, the driver's awareness of road conditions may be inaccurate. Furthermore, during vehicle travel there are blind areas caused by occlusion from vehicle components (e.g., the hood). The driver cannot learn the condition of the road surface in a blind area in time, which may cause the vehicle to collide with or scrape against an object in the blind area (e.g., an obstacle or a protrusion on the road), affecting driving safety and possibly damaging the vehicle itself.
Disclosure of Invention
To overcome one or more of the drawbacks of the prior art, the present disclosure discloses a driving assistance system and method for a vehicle, which may allow timely feedback of road conditions in a blind area to a driver.
In one aspect of the present disclosure, there is provided a driving assistance system including: one or more sensors configured to acquire sensor data reflecting road conditions; a processing system configured to generate a first visual representation of a road condition within a blind zone for a vehicle at a current location based at least in part on sensor data collected by a sensor at a previous location of the vehicle; and a display device configured to present the first visual representation.
In a second aspect of the present disclosure, a vehicle is provided that includes the driving assistance system described above.
In a third aspect of the present disclosure, there is provided a driving assistance method including: collecting sensor data reflecting road conditions; generating a first visual representation for road conditions within a blind area of the vehicle at a current location based at least in part on sensor data collected by the sensor at a previous location of the vehicle; and presenting the first visual representation.
In a fourth aspect of the present disclosure, there is provided a driving assistance apparatus including: an acquisition module configured to acquire sensor data reflecting a road condition; a generation module configured to generate a first visual representation of road conditions within a blind zone for a vehicle at a current location based at least in part on sensor data collected at a previous location of the vehicle; and a display module configured to provide for display of the first visual representation.
In a fifth aspect of the present disclosure, there is provided a driving assistance system including: one or more sensors configured to acquire sensor data reflecting road conditions; a processing system configured to generate a visual representation of road conditions within a blind zone for the vehicle at a current location based at least in part on sensor data collected by the sensor at a previous location of the vehicle; and a communication device configured to provide data comprising the visual representation to the wearable display device for presentation of the visual representation by the wearable display device.
According to some embodiments of the present disclosure, a visual representation of road conditions within the blind area at the current location is generated from sensor data collected at previous locations, and the visual representation is presented to the driver. This helps guide driving operations and avoid scraping against the road surface in the blind area.
Drawings
In order to clearly illustrate the technical solutions in the embodiments of the present disclosure, a brief description of drawings required in the description of the embodiments is given below. It is apparent that the drawings described below are some embodiments of the present disclosure, and based on these drawings, other drawings can be obtained by those of ordinary skill in the art without any inventive effort.
FIG. 1 illustrates an exemplary vehicle blind spot;
FIG. 2 illustrates an exemplary system in which the present disclosure may be implemented;
FIG. 3 shows a schematic diagram depicting the principles of the present disclosure;
FIGS. 4-5 show schematic diagrams of image ranges to be processed for different driving routes;
FIG. 6 shows a schematic diagram of a visual representation according to an embodiment of the present disclosure;
FIG. 7 illustrates an exemplary road profile;
FIG. 8 illustrates an exemplary visual representation based on a road profile;
fig. 9 shows an exemplary flowchart of a driving assistance method according to one embodiment of the present disclosure;
fig. 10 shows a block diagram of a driving assistance apparatus according to an embodiment of the present disclosure.
Detailed Description
Specific embodiments of the present disclosure will be given below with reference to the accompanying drawings and examples. The particular embodiments described herein are merely illustrative of the present disclosure and are in no way intended to limit the disclosure or its applications or uses. It should be understood that the drawings show only relevant portions of the present disclosure for ease of description.
SUMMARY
As described above, the vehicle may collide or scrape against objects due to the presence of blind areas and inaccurate driver awareness of road conditions. The problem is more serious when the vehicle travels on a complex road surface (e.g., a road under construction). If no additional underbody guard components are installed, the vehicle may suffer damage.
Some vehicles are equipped with a camera, for example, a camera on the roof of the vehicle. Such a camera may, for example, capture images of the surroundings. The monitoring range of the camera may cover a certain part of the area outside the driver's view. In the prior art, images captured by such cameras are fed back to the driver in real time as the vehicle is moving. This enables the driver to obtain information about the road conditions in some additional areas out of his field of view.
However, there are areas that the camera cannot detect. As shown in fig. 1, in one exemplary scenario, a camera 101 located in front of and above the vehicle may scan an area 102 in front of the vehicle. However, due to the obstruction of the hood, the area 103 cannot be detected by the camera 101 in real time. This area includes a portion near the front of the vehicle, and a portion under the hood and the vehicle body.
That is, even if the image captured by the camera is fed back to the driver in real time, the driver cannot perceive the environment (e.g., the road flatness) within the area 103 in real time. In such a case, the driver may need to rely on memory, estimating the road conditions in the blind area at the current position from images observed or fed back at a previous position, which may be inaccurate.
It may be advantageous to provide the road conditions within the area 103 to the driver in real time to guide the driver's operation. The area 103 is closer to the vehicle than the area 102, and thus road conditions within the area 103 may be more meaningful to the driver in some cases, for example, when the vehicle is traveling slowly over a complex road surface (e.g., a road under construction) to avoid undulations on the road surface or to attempt to stop at a location on the complex road surface.
For the sake of illustration, only the blind area in the front region of the vehicle is described above. It should be understood that there are also blind areas on the sides of the vehicle, for example due to the body pillars.
Embodiments of the present disclosure are directed to providing a driving assistance system capable of visually feeding back to the driver, in real time, the road conditions in the detection blind area 103 of the camera 101 shown in fig. 1 at the current position. In a further embodiment, such feedback may further be used to guide the driver's operation.
Example System
The various concepts presented throughout this disclosure may be implemented in a variety of vehicles, including sport-utility vehicles (SUVs), passenger cars, hybrid electric vehicles (HEVs), battery electric vehicles, trucks, and the like. However, those skilled in the art will appreciate that these are provided for illustrative purposes only, and that one or more aspects of the present disclosure may be implemented or included in one or more other types of vehicles.
Referring now to FIG. 2, a vehicle 20 including various vehicle electronic modules, sub-systems, and/or components is illustrated. As shown in fig. 2, in some embodiments, the driving assistance system 200 may include: a sensor assembly 21, a processing system 22, and a display system 23. As will be described below, in a further embodiment, the driving assistance system 200 may further include a communication device 24, and a user interface 25. It should be understood that the driving assistance system 200 is only one example of a driving assistance system, and the driving assistance system 200 may have more or fewer components than illustrated, or a different configuration of components.
The processing system 22 may include a memory 222 and a processor 221. The memory 222 may be volatile memory, such as random-access memory (RAM), static RAM (SRAM), or dynamic RAM (DRAM); non-volatile memory, such as read-only memory (ROM), flash memory, or a magnetic disk; or some combination of the two. The memory 222 may be used to store program instructions that are executable by the processor 221. In addition, the memory 222 may also include a database 2221. As will be described below, the database 2221 may include sensor data collected by the sensor assembly 21 during a time period before the vehicle reaches the current location. The database 2221 may also include other information related to the vehicle 20, such as the model, speed, acceleration, size, and maximum steering angle of the vehicle 20. In particular, some or all of the functions described in this disclosure may be implemented when such sensor data and information are processed by the processor 221 according to the stored instructions.
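By way of illustration only, a record in the database 2221 might pair each sensor sample with the vehicle pose at capture time. The following is a minimal Python sketch; all field names (e.g., odometer_m, ground_clearance_m) are hypothetical and not prescribed by this disclosure:

    from dataclasses import dataclass, field
    from typing import Any, Optional

    @dataclass
    class SensorRecord:
        # Vehicle pose at capture time (hypothetical field names).
        odometer_m: float                 # cumulative distance travelled
        x_m: float                        # position in a local ground frame
        y_m: float
        heading_rad: float
        image: Optional[Any] = None       # camera frame, if any
        profile: dict = field(default_factory=dict)  # radar/ultrasonic height data

    @dataclass
    class VehicleInfo:
        model: str
        width_m: float
        wheelbase_m: float
        max_steering_angle_rad: float
        ground_clearance_m: float

Keying records by the odometer reading makes it straightforward to later retrieve the samples covering a given stretch of road.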
The processor 221 may be a general-purpose processor, such as a central processing unit (CPU), microcontroller unit (MCU), digital signal processor (DSP), or graphics processing unit (GPU), configured to perform some or all of the functions described herein by executing program instructions stored in the memory 222. Additionally or alternatively, the processor 221 may also include programmable hardware elements, such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and the like. As will be described below, in embodiments of the present disclosure, the processor 221 may generate a visual representation of road conditions within a blind area based on sensor data collected by the sensor assembly 21.
The sensor assembly 21 may include one or more of an image sensor 211, a radar sensor 212, an ultrasonic sensor 213, and a laser sensor 214.
The image sensor 211 is used to capture images of the surroundings of the vehicle. In one exemplary arrangement, the image sensor 211 may include one or more cameras mounted at different locations on the vehicle. For example, the image sensor may include a front-facing camera disposed at the front end of the vehicle, a top-view camera disposed above the vehicle, and/or cameras disposed on both sides and at the rear end of the vehicle. These cameras can be used to acquire images of the environment in different directions.
Similarly to the image sensor 211, the radar sensor 212 and the ultrasonic sensor 213 may also be disposed at various positions of the vehicle. In one exemplary arrangement, the radar sensor 212 and the ultrasonic sensor 213 may be arranged on the front and rear sides of the vehicle, respectively, to acquire profile characteristics of a location on the road.
In one embodiment, the driving assistance system 200 may also include sensors (not shown in fig. 2) that provide various information related to the vehicle 20. Such sensors may include one or more of a speed sensor, an acceleration sensor, a steering angle sensor, and a GPS receiver. As will be described below, the speed provided by the speed sensor may be used for processing of the images, and the vehicle position provided by the GPS may be used to determine the driving route of the vehicle.
The user interface 25 may include various devices that allow a user (e.g., a driver) to interact with the vehicle. In an exemplary embodiment, the user interface 25 may include a touch-sensitive display screen, a keypad, a joystick, a microphone, buttons, and the like. The user may select options, enter information, adjust the presentation of the visual representation, and configure any components and functions through the interface 25.
Display system 23 may include various devices that provide visual representations to a user, the content of which may include, for example, images, videos, text, or animations. As will be described below, in embodiments of the present disclosure, the display system 23 may be used to provide visual feedback to the user of road conditions within a blind area. In addition, the display system 23 may also provide the user with a recommended driving route generated based on the road condition. The display system 23 may be integrated on the vehicle, for example as a touch sensitive display screen that may receive user input. In one embodiment, the display system 23 may also include a projection device 231 that may, for example, project a visual representation on the windshield of the vehicle or other location that is readily visible to the driver. In another embodiment, the display system 23 may also include a heads up display device (HUD). The HUD is able to provide a visual representation to the user without the user having to change his gaze direction.
Additionally or alternatively, display system 23 may also include a wearable display device that is separate from the vehicle. In one embodiment, the wearable display device may be a head-mounted display (HMD) 233. The HMD 233 may present visual representations generated by the processing system 22 in the user's field of view, for example through augmented reality techniques; such visual representations may be presented as superimposed on the road, or on vehicle components such as the windshield and hood.
In embodiments using a separate HMD 233, the driving assistance system 200 may also include a communication device 24. The communication device 24 is suitably configured to communicate information between the HMD 233 and other components of the driving assistance system 200. The information includes, for example, the visual representation generated by the processing system 22 based on the sensor data, to be rendered by the HMD 233. In some embodiments, the communication device 24 may communicate information via WLAN, Bluetooth, or the like.
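By way of illustration of the transport only, the communication device 24 could send each visual representation to the HMD 233 as a length-prefixed, JPEG-encoded frame over a WLAN socket. This is a hypothetical sketch; the disclosure does not fix a wire format:

    import socket
    import struct

    def send_frame(sock: socket.socket, jpeg_bytes: bytes) -> None:
        """Send one JPEG-encoded visual representation, length-prefixed."""
        sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)

    def recv_frame(sock: socket.socket) -> bytes:
        """Receive one length-prefixed frame on the HMD side."""
        def recv_exact(n: int) -> bytes:
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                buf += chunk
            return buf
        (length,) = struct.unpack("!I", recv_exact(4))
        return recv_exact(length)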
Generation of visual representations
Reference is now made to fig. 3. Fig. 3 illustrates one exemplary scenario for describing the principles of the present disclosure. Some of the features of fig. 3 will be described in connection with the driving assistance system 200 of fig. 2.
As shown in fig. 3, the vehicle 300 is traveling forward on the road surface. An image sensor 211 (e.g., a camera) disposed on the roof of the vehicle 300 captures images of the area ahead of the vehicle in real time. In this exemplary scenario, as the vehicle 300 travels to the location 30, the image sensor 211 captures images within the area 301 in front of the vehicle.
The vehicle 300 continues to travel to location 31 forward of location 30. At position 31, the camera 211 has a detection blind area 302. The driver of the vehicle desires to obtain a visual representation of the road conditions within the blind zone 302. In an embodiment of the present disclosure, such a visual representation will be generated by the processor 221 based on images acquired by the camera 211 at previous locations.
As a specific example, as shown in fig. 3, a blind area 302 of a vehicle 300 at a position 31 is covered by a monitoring area 301 of a camera at a position 30. In such a case, when the vehicle 300 is traveling to the location 31, the processor 221 may generate a visual representation of the road condition within the blind area 302 at the location 31 based on the images of the corresponding portion of the area 301 captured by the image sensor 211 at the location 30.
It should be understood that the exemplary process described above is only provided to illustrate the principles and concepts of embodiments of the present disclosure. In practice, a visual representation of the blind area 302 at location 31 may be obtained by combining images acquired at a plurality of previous locations. This may be accomplished, for example, through image processing and computer vision techniques, the details of which are well known to those skilled in the art and are omitted here to avoid unnecessarily obscuring the disclosure.
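As a non-limiting sketch of one such combination, suppose each camera frame has already been rectified to a top-down (bird's-eye) view at a known ground resolution and tagged with the odometer reading at capture time; the blind-zone view can then be assembled by shifting buffered frames by the distance travelled. The helper below assumes row 0 of each rectified frame is the road point nearest the vehicle; all parameter names and values are illustrative:

    import numpy as np

    PX_PER_M = 50  # assumed ground resolution of the rectified frames (pixels/meter)

    def blind_zone_view(buffered, current_odo_m, cam_near_m, zone_near_m, zone_len_m):
        """Assemble a top-down view of the blind zone from past bird's-eye frames.

        buffered     -- list of (odometer_m, frame) pairs, oldest first; each
                        frame covers the road starting cam_near_m ahead of
                        where it was captured, nearest road point in row 0
        zone_near_m  -- distance from the current position to the near edge
                        of the blind zone, along the direction of travel
        """
        height = int(zone_len_m * PX_PER_M)
        width = buffered[-1][1].shape[1]
        mosaic = np.zeros((height, width, 3), dtype=np.uint8)
        zone_start_m = current_odo_m + zone_near_m   # road coordinate of the zone edge
        for odo_m, frame in buffered:                # newer frames overwrite older ones
            frame_start_m = odo_m + cam_near_m       # road coordinate of frame row 0
            shift = int(round((frame_start_m - zone_start_m) * PX_PER_M))
            lo = max(shift, 0)                       # mosaic rows this frame can fill
            hi = min(shift + frame.shape[0], height)
            if lo < hi:
                mosaic[lo:hi] = frame[lo - shift:hi - shift]
        return mosaic

In practice, steering would also rotate the frames, and image stitching or homography estimation (as noted above) would replace the pure longitudinal translation assumed here.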
Referring now to fig. 4 and 5, fig. 4 and 5 show schematic views of image portions to be processed for different driving routes, respectively. The driving route (e.g., turn or go straight) may be predicted by the navigation application based on the location of the vehicle.
As shown in fig. 4, the vehicle 401 will remain straight. For a vehicle 401 traveling straight ahead, providing a visual representation of road conditions only within the area corresponding to the width of the vehicle 401 already enables safe driving. Thus, the processing system only needs to store and process road images within the vehicle's width. This reduces the amount of data to be stored and the computation required for image processing.
However, for the vehicle 501 in fig. 5 that is about to turn (e.g., turn left), the areas on both sides of the vehicle must also be considered. In addition to the area within the width of the vehicle body, it is desirable to visualize the road conditions within a certain angle toward the turning direction of the vehicle.
In an embodiment, a visual representation is provided for road conditions within an area corresponding to a maximum steering angle of the vehicle 501. The width W of the region may be determined based on the maximum steering angle of the vehicle 501.
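As an illustrative geometric model only (assuming Ackermann steering; the disclosure does not prescribe a formula), W could be estimated as follows:

    import math

    def swept_width(vehicle_width_m, wheelbase_m, max_steer_rad, look_ahead_m):
        """Estimate the lateral extent W swept by a turning vehicle.

        Uses the Ackermann approximation: the turning radius of the rear-axle
        midpoint is R = wheelbase / tan(steering angle).
        """
        radius_m = wheelbase_m / math.tan(max_steer_rad)  # tightest turning radius
        # Lateral offset of the path after advancing look_ahead_m along the arc.
        lateral_m = radius_m - math.sqrt(max(radius_m**2 - look_ahead_m**2, 0.0))
        return vehicle_width_m + lateral_m                # body width plus lateral drift

The wheelbase and maximum steering angle could be read from the vehicle information stored in the database 2221.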
The visual representation for road conditions within a blind area at the current location has been described above. In some embodiments, a visual representation of the road conditions within the monitored area of the sensor (e.g., camera) at the current location may also be generated. Unlike the blind area, the visual representation within the monitored area is generated based on sensor data (e.g., images) acquired in real time at the current location.
Referring to FIG. 6, a schematic diagram of a visual representation is shown, according to one embodiment of the present disclosure. In this embodiment, in addition to the first visual representation for road conditions within the blind area 601 at the current location, a second visual representation for road conditions within the monitoring area 602 of the camera is generated. In a further embodiment, the first visual representation and the second visual representation may also be composited.
In another embodiment, in addition to the first and second visual representations described above, the processing system may generate a third visual representation of road conditions for a past distance based on the stored images of the past distance 603, and composite the first, second, and third visual representations.
In one embodiment, the processing system stores images captured within the last 100 m the vehicle has traveled and generates a visual representation of road conditions within 100 m behind the vehicle based on the stored images. Such a visual representation may be advantageous when reversing.
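A distance-keyed buffer of this kind might be maintained as sketched below, assuming each frame arrives tagged with the current odometer reading (the names are illustrative):

    from collections import deque

    KEEP_BEHIND_M = 100.0  # retain sensor data for the last 100 m of travel

    class TravelBuffer:
        def __init__(self):
            self._frames = deque()   # (odometer_m, frame) pairs, oldest first

        def add(self, odometer_m, frame):
            self._frames.append((odometer_m, frame))
            # Drop everything recorded more than 100 m behind the vehicle.
            while self._frames and odometer_m - self._frames[0][0] > KEEP_BEHIND_M:
                self._frames.popleft()

        def frames(self):
            return list(self._frames)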
In some embodiments, the parameters used by the camera to capture images may also be adjusted based on the travel speed of the vehicle. As an example, when the vehicle is traveling on a complex road surface, typically at low speed (e.g., 0-50 km/h), the camera may be set to capture images at a higher resolution to provide a high-definition, more accurate visual representation. When the vehicle is on a relatively flat road surface, where speed is generally higher (e.g., 50-120 km/h), the camera may be set to capture lower-resolution images to improve response speed. Accordingly, the resolution of the provided visual representation may be lower.
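Such a policy reduces to a simple speed-based lookup; the thresholds below mirror the speeds given above, while the resolution values are purely illustrative:

    def capture_resolution(speed_kmh):
        """Pick a capture resolution from vehicle speed (illustrative values)."""
        if speed_kmh <= 50:          # complex road surface, slow travel
            return (1920, 1080)      # high resolution for an accurate representation
        return (1280, 720)           # flat road, favour response speed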
Embodiments have been described above for generating a visual representation of road conditions within a blind area based on road images captured by a camera. Additionally or alternatively, in some embodiments, the visual representations described above may also be generated based on data acquired by one or more of a radar sensor, an ultrasonic sensor, and a laser sensor. These sensors can capture the profile of locations on the road. In this context, the profile may include the flatness of the road surface, as well as the dimensions (e.g., height, width) of various features (e.g., depressions, bumps) on the road.
Referring now to FIG. 7, an exemplary road profile 700 that may be collected by the sensors is shown. As shown in fig. 7, the horizontal plane containing the contact point between the vehicle's tires and the road surface is set as the reference plane 701 for evaluating the flatness of the road surface. In the example, position 703 is determined to be above the reference plane and may thus be indicated as a protrusion; the road surface at position 704 is determined to be below the reference plane, so position 704 may be identified as a pothole. In a further embodiment, the locations at which scraping is likely to occur may also be determined based on the chassis height and the flatness of the road surface.
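A sketch of such a classification is given below, assuming the profile has been sampled into a height map in meters relative to the reference plane 701; the tolerance and clearance values are illustrative assumptions, not values taken from this disclosure:

    import numpy as np

    def classify_profile(height_map, flat_tol_m=0.02, clearance_m=0.15):
        """Label each sample as flat (0), protrusion (1) or pothole (-1),
        and flag protrusions tall enough to scrape the chassis.

        height_map   -- 2-D array of heights relative to the reference plane
        clearance_m  -- assumed chassis ground clearance
        """
        labels = np.zeros_like(height_map, dtype=np.int8)
        labels[height_map > flat_tol_m] = 1       # protrusion
        labels[height_map < -flat_tol_m] = -1     # pothole / depression
        scrape_risk = height_map >= clearance_m   # likely to contact the underbody
        return labels, scrape_risk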
Similar to the images captured by the image sensor, the profile of the road surface may also be visualized, and positions with different heights may be indicated differently in the visual representation. FIG. 8 illustrates an exemplary road-profile-based visual representation. As shown in fig. 8, different colors are used to indicate locations on the road surface having different heights. In one embodiment, locations corresponding to level road are represented in gray, locations corresponding to bumps in red, and locations corresponding to dimples in blue. Furthermore, the change in height may be indicated by a change in color depth.
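The color coding described above reduces to a per-sample mapping, sketched below; beyond the gray/red/blue assignment, the exact intensity scaling is a design choice and not something this disclosure mandates:

    import numpy as np

    def height_to_color(height_map, max_abs_m=0.3, flat_tol_m=0.02):
        """Map heights to RGB: gray for flat road, red for bumps, blue for
        dimples; color depth scales with the magnitude of the height."""
        h, w = height_map.shape
        img = np.full((h, w, 3), 128, dtype=np.uint8)              # flat road: gray
        intensity = np.clip(np.abs(height_map) / max_abs_m, 0.0, 1.0) * 255
        bumps = height_map > flat_tol_m
        dips = height_map < -flat_tol_m
        img[bumps] = 0
        img[dips] = 0
        img[..., 0][bumps] = intensity[bumps].astype(np.uint8)     # red channel
        img[..., 2][dips] = intensity[dips].astype(np.uint8)       # blue channel
        return img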
In other embodiments, in the visual representation, a contour map indicating the height variation of the location on the road may also be presented.
In some embodiments, the recommended driving route may also be generated based on the flatness of the road surface. As shown in fig. 8, the generated recommended driving route 805 avoids bumps and depressions on the road surface, which is advantageous for improving safety.
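By way of illustration, one way to compute such a route (not necessarily the method of this disclosure) is a dynamic-programming search over a cost grid derived from the height map, where rows correspond to distance ahead and columns to lateral position:

    import numpy as np

    def recommend_route(cost, max_step=1):
        """Pick one column per row through a cost grid, minimizing accumulated
        cost with a bound on lateral movement per row (simple DP)."""
        n_rows, n_cols = cost.shape
        total = cost[0].astype(float)
        back = np.zeros((n_rows, n_cols), dtype=int)
        for r in range(1, n_rows):
            new_total = np.empty(n_cols)
            for c in range(n_cols):
                lo, hi = max(c - max_step, 0), min(c + max_step, n_cols - 1)
                prev = lo + int(np.argmin(total[lo:hi + 1]))
                back[r, c] = prev
                new_total[c] = total[prev] + cost[r, c]
            total = new_total
        # Trace the cheapest path back from the best final column.
        route = [int(np.argmin(total))]
        for r in range(n_rows - 1, 0, -1):
            route.append(back[r, route[-1]])
        return route[::-1]   # column index (lateral position) for each row

The cost grid could be, for example, the absolute height at each sample, with a large penalty added where scrape risk was flagged by classify_profile above.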
Presentation of visual representations
In some embodiments, the visual representation described above may be projected onto the hood of the vehicle via a projection device such that the visual representation presented on the hood appears to be a true representation of the road surface in a blind area obscured by the hood.
In some embodiments, the visual representation may be presented on the front window of the vehicle via the HUD.
In some embodiments, data including the visual representation may be sent to a wearable display device (e.g., an HMD) via a communication device, so that the HMD overlays the visual representation on the road or on vehicle components such as the windshield and hood.
In some embodiments, the visual representation may also be presented via a man-machine interface (MMI), such as a touch-sensitive display screen.
Example procedure
Fig. 9 shows an exemplary flowchart of a driving assistance method according to an embodiment of the present disclosure. The driving assistance method 900 may be performed, for example, on the system shown in fig. 2.
In block 901, sensor data reflecting road conditions is collected.
The sensor data may include images reflecting road conditions captured by an image sensor such as a camera. Additionally or alternatively, the sensor data may also include profile data acquired by radar sensors or ultrasonic sensors.
In block 902, a visual representation of road conditions within a blind zone for a vehicle at a current location is generated based at least in part on sensor data collected by sensors at previous locations of the vehicle.
In some embodiments, in addition to the first visual representation for the road condition within the blind zone, a second visual representation for the road condition within the monitored area is generated based on sensor data collected by the sensor at the current location.
In some embodiments, in addition to the first and second visual representations described above, a third visual representation is generated based on sensor data for a past distance (e.g., 100m) stored in memory.
In a further embodiment, the first visual representation may also be composited with the second visual representation and/or the third visual representation.
In block 903, a visual representation is presented.
In some embodiments, the first visual representation is projected on a hood of the vehicle by a projection device. In other embodiments, the first visual representation is presented on a front window of the vehicle through a heads-up display. In still other embodiments, the visual representation is presented in the driver's field of view by a wearable display device via a communication means.
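Schematically, blocks 901-903 compose into a loop such as the following sketch, which reuses the hypothetical helpers introduced above (TravelBuffer, blind_zone_view); the sensors and display objects stand in for whatever hardware interfaces the vehicle provides, and all parameter values are assumptions:

    def driving_assistance_step(sensors, buffer, display):
        # Block 901: collect sensor data at the current position.
        odometer_m = sensors.read_odometer()      # hypothetical interface
        frame = sensors.capture_birdseye()        # rectified top-down frame
        buffer.add(odometer_m, frame)

        # Block 902: generate the blind-zone view from previously buffered data.
        view = blind_zone_view(buffer.frames(), odometer_m,
                               cam_near_m=2.0,    # assumed camera coverage offset
                               zone_near_m=0.0,   # blind zone starts at the bumper
                               zone_len_m=2.0)    # assumed blind-zone length

        # Block 903: present the visual representation.
        display.show(view)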
Fig. 10 shows a block diagram of a driving assistance apparatus 1000 according to an embodiment of the present disclosure. The apparatus 1000 comprises: an acquisition module 1001, a generation module 1002, and a display module 1003. The apparatus 1000 may be implemented by various hardware, software, or a combination thereof in the driving assistance system shown in fig. 2.
The acquisition module 1001 is configured to acquire sensor data reflecting road conditions.
In some embodiments, the acquisition module 1001 acquires road images acquired in real time by one or more image sensors (e.g., cameras) disposed on the vehicle.
In other embodiments, the acquisition module 1001 acquires profile features of locations on the road collected by radar sensors or ultrasonic sensors disposed on the vehicle.
The generation module 1002 is configured to generate a visual representation of road conditions within a blind zone for a vehicle at a current location based at least in part on sensor data collected at a previous location.
In some embodiments, in addition to the first visual representation for the road condition within the blind zone, a second visual representation for the road condition within the monitored area is generated based on sensor data collected by the sensor at the current location.
In some embodiments, in addition to the first and second visual representations described above, a third visual representation is generated based on sensor data for a past distance (e.g., 100m) stored in memory.
In a further embodiment, the first visual representation may also be composited with the second visual representation and/or the third visual representation.
The display module 1003 is configured to provide for the display of visual representations.
In some embodiments, the display module 1003 may project the visual representation described above onto the hood of the vehicle via a projection device, such that the visual representation presented on the hood appears to be a real representation of the road surface in a blind area obscured by the hood.
In some embodiments, the display module 1003 may present a visual representation on the front window of the vehicle via the HUD.
In some embodiments, the display module 1003 may send data including the visual representation to a wearable display device (e.g., an HMD) via a communication device, so that the HMD overlays the visual representation on the road or on vehicle components such as the windshield and hood.
In some embodiments, the display module 1003 may also present visual representations via a touch-sensitive display screen.
Those skilled in the art will appreciate that the units of the apparatus disclosed herein may be distributed in the apparatus of the embodiments, or located, with corresponding changes, in one or more apparatuses different from those of the embodiments. The units of the above embodiments may be integrated into one unit or further divided into a plurality of sub-units.
While the disclosure has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the disclosure is not limited to the disclosed embodiments, but is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the broadest interpretation.

Claims (14)

1. A driving assistance system comprising:
a plurality of sensors configured to collect sensor data reflecting road conditions, the sensors including:
an image sensor configured to acquire an image of a surrounding environment; and
a radar sensor and/or an ultrasonic sensor configured to acquire a profile of one or more locations on a roadway;
wherein, in the road profile, the horizontal plane in which a contact point of a tire of a vehicle with the road surface lies is set as a reference plane for evaluating the flatness of the road surface;
a processing system configured to generate a first visual representation of road conditions within a blind area for a vehicle at a current location based at least in part on sensor data collected by the sensor at a previous location of the vehicle; and
a display device configured to present the first visual representation.
2. The driving assistance system according to claim 1, wherein the processing system is further configured to:
generating a second visual representation of a road condition within a monitoring area for the sensor based on sensor data acquired at a current location; and
compositing the first and second visual representations for display by the display device.
3. The driving assistance system according to claim 2, wherein the processing system is further configured to:
storing sensor data for past distances;
generating a third visual representation of road conditions for the past distance based on the sensor data for the past distance; and
compositing the first visual representation, the second visual representation, and the third visual representation for display by the display device.
4. The driving assistance system according to claim 1, wherein the processing system is further configured to:
generating, for a vehicle traveling straight ahead, the visual representation based on sensor data for a region corresponding to a vehicle width; and
for a vehicle that is turning, generating the visual representation based on sensor data for a region corresponding to a maximum steering angle of the vehicle.
5. The driving assistance system according to claim 1, wherein a parameter used to acquire the sensor data is adjusted based on a speed of the vehicle.
6. The driving assistance system according to claim 1, wherein the display device is further configured to: positions on the road having different contours are indicated differently in the presented visual representation.
7. The driving assistance system according to claim 6, wherein the display device is further configured to distinguishably indicate positions on the road having different contours by:
using different colors to indicate positions having different heights relative to the reference plane; or
presenting, in the visual representation, a contour map indicating the height variation of the positions.
8. The driving assistance system according to claim 6 or 7, wherein the display device is further configured to: displaying a virtual driving route generated by the processing system based on the contour.
9. The driving assistance system of claim 1, wherein the display device comprises a projection device configured to project the first visual representation on a hood of the vehicle.
10. The driving assistance system according to any one of claims 1-7 and 9, wherein the display device includes a head-up display configured to present a visual representation of a road condition on a front window of the vehicle.
11. A vehicle comprising the driving assistance system according to any one of claims 1 to 10.
12. A driving assistance method comprising:
collecting, by sensors, sensor data reflecting road conditions, the sensors including:
an image sensor configured to acquire an image of a surrounding environment; and
a radar sensor and/or an ultrasonic sensor configured to acquire a profile of one or more locations on a roadway;
wherein, in the road profile, the horizontal plane in which a contact point of a tire of a vehicle with the road surface lies is set as a reference plane for evaluating the flatness of the road surface;
generating a first visual representation of road conditions within a blind zone for the vehicle at a current location based at least in part on sensor data collected by the sensor at a previous location of the vehicle; and
presenting the first visual representation.
13. A driving assistance apparatus comprising:
an acquisition module configured to acquire sensor data reflecting road conditions collected by sensors, the sensors including:
an image sensor configured to acquire an image of a surrounding environment; and
a radar sensor and/or an ultrasonic sensor configured to acquire a profile of one or more locations on a roadway;
wherein, in the road profile, the horizontal plane in which a contact point of a tire of a vehicle with the road surface lies is set as a reference plane for evaluating the flatness of the road surface;
a generation module configured to generate a first visual representation of road conditions within a blind area for a vehicle at a current location based at least in part on sensor data collected at a previous location; and
a display module configured to provide for display of the first visual representation.
14. A driving assistance system comprising:
a plurality of sensors configured to collect sensor data reflecting road conditions, the sensors including:
an image sensor configured to acquire an image of a surrounding environment; and
a radar sensor and/or an ultrasonic sensor configured to acquire a profile of one or more locations on a roadway;
wherein, in the road profile, the horizontal plane in which a contact point of a tire of a vehicle with the road surface lies is set as a reference plane for evaluating the flatness of the road surface;
a processing system configured to generate a visual representation of road conditions within a blind area for a vehicle at a current location based at least in part on sensor data collected by the sensor at a previous location of the vehicle; and
a communication device configured to transmit data comprising the visual representation to a wearable display device for presentation of the visual representation by the wearable display device.
CN201710081070.8A 2017-02-15 2017-02-15 Driving assistance system, method and vehicle Active CN108422932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710081070.8A CN108422932B (en) 2017-02-15 2017-02-15 Driving assistance system, method and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710081070.8A CN108422932B (en) 2017-02-15 2017-02-15 Driving assistance system, method and vehicle

Publications (2)

Publication Number Publication Date
CN108422932A CN108422932A (en) 2018-08-21
CN108422932B (en) 2021-06-01

Family

ID=63155349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710081070.8A Active CN108422932B (en) 2017-02-15 2017-02-15 Driving assistance system, method and vehicle

Country Status (1)

Country Link
CN (1) CN108422932B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111660924A (en) * 2019-12-06 2020-09-15 摩登汽车有限公司 Automobile steering lamp control system and method
CN113119863B (en) * 2020-01-15 2023-04-07 未来(北京)黑科技有限公司 Whole vehicle window imaging system
CN111427331B (en) * 2020-03-24 2022-03-04 新石器慧通(北京)科技有限公司 Perception information display method and device of unmanned vehicle and electronic equipment
CN114619964A (en) * 2022-04-20 2022-06-14 芜湖汽车前瞻技术研究院有限公司 Display system and intelligent vehicle of intelligence passenger cabin

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103237685A (en) * 2010-12-30 2013-08-07 明智汽车公司 Apparatus and method for displaying a blind spot
CN103609100A (en) * 2011-06-09 2014-02-26 爱信精机株式会社 Image generation device
CN106163872A (en) * 2014-04-09 2016-11-23 捷豹路虎有限公司 For showing equipment and the method for information
CN105564309A (en) * 2014-10-14 2016-05-11 中兴通讯股份有限公司 Method of realizing perspective visual line blind area and driving auxiliary glasses
CN106167045A (en) * 2015-05-21 2016-11-30 Lg电子株式会社 Human pilot auxiliary device and control method thereof

Also Published As

Publication number Publication date
CN108422932A (en) 2018-08-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant