CN108422932A - Driving assistance system, method and vehicle - Google Patents
- Publication number
- CN108422932A CN108422932A CN201710081070.8A CN201710081070A CN108422932A CN 108422932 A CN108422932 A CN 108422932A CN 201710081070 A CN201710081070 A CN 201710081070A CN 108422932 A CN108422932 A CN 108422932A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- visual representation
- road surface
- driving assistance
- sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/105—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/802—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mechanical Engineering (AREA)
- Traffic Control Systems (AREA)
Abstract
The present disclosure provides a driving assistance system and method for a vehicle, and a vehicle equipped with the system. The driving assistance system includes: one or more sensors configured to acquire sensor data reflecting road surface conditions; a processing system configured to generate, based at least in part on sensor data acquired by the sensors at a previous position of the vehicle, a first visual representation of the road surface conditions in a blind zone of the vehicle at its current position; and a display device configured to present the first visual representation. This scheme helps guide driving operations and avoid scraping against the road surface in the blind zone.
Description
Technical field
The present disclosure relates to the field of vehicles and, more particularly, to a driving assistance system and method, and to a vehicle equipped with the driving assistance system.
Background technology
Safe driving depends on the driver's observation and assessment of the surrounding environment. Based on his or her perception of the road surface conditions, the driver determines and performs appropriate driving operations.

However, the driver's perception of road surface conditions may be inaccurate. Moreover, while the vehicle is moving, blind zones exist because vehicle components (for example, the hood) block the view. The driver cannot learn the road surface conditions in a blind zone in time, which may cause the vehicle to collide with or scrape against objects in the blind zone (for example, obstacles or bumps on the road), compromising driving safety and possibly damaging the vehicle itself.
Summary of the invention
To overcome one or more of the defects in the prior art, the present disclosure discloses a driving assistance system and method for a vehicle that can feed the road surface conditions in a blind zone back to the driver in time.

In one aspect of the present disclosure, a driving assistance system is provided, including: one or more sensors configured to acquire sensor data reflecting road surface conditions; a processing system configured to generate, based at least in part on sensor data acquired by the sensors at a previous position of the vehicle, a first visual representation of the road surface conditions in a blind zone of the vehicle at its current position; and a display device configured to present the first visual representation.
In a second aspect of the disclosure, a vehicle including the above driving assistance system is provided.
In a third aspect of the disclosure, a driving assistance method is provided, including: acquiring sensor data reflecting road surface conditions; generating, based at least in part on sensor data acquired by the sensors at a previous position of the vehicle, a first visual representation of the road surface conditions in a blind zone of the vehicle at its current position; and presenting the first visual representation.
In a fourth aspect of the disclosure, a driving assistance apparatus is provided, including: an acquisition module configured to acquire sensor data reflecting road surface conditions; a generation module configured to generate, based at least in part on sensor data acquired at a previous position of the vehicle, a first visual representation of the road surface conditions in a blind zone of the vehicle at its current position; and a display module configured to provide a display of the first visual representation.
In a fifth aspect of the disclosure, a driving assistance system is provided, including: one or more sensors configured to acquire sensor data reflecting road surface conditions; a processing system configured to generate, based at least in part on sensor data acquired by the sensors at a previous position of the vehicle, a visual representation of the road surface conditions in a blind zone of the vehicle at its current position; and a communication device configured to supply data including the visual representation to a wearable display device, so that the visual representation is presented by the wearable display device.
According to some embodiments of the disclosure, a visual representation of the road surface conditions in a blind zone at the current position is generated from sensor data acquired at previous positions and is presented to the driver. This helps guide driving operations and avoid scraping against the road surface in the blind zone.
Description of the drawings
To clearly show the technical solutions in the embodiments of the disclosure, a brief introduction to the drawings needed in the description of the embodiments is given below. Evidently, the drawings described below illustrate only some embodiments of the disclosure; based on these drawings, those of ordinary skill in the art can derive other drawings without any creative effort.
Fig. 1 shows an illustrative vehicle blind zone;

Fig. 2 shows an exemplary system in which the disclosure may be implemented;

Fig. 3 shows a schematic diagram illustrating the principles of the disclosure;

Figs. 4-5 show schematic diagrams of the image ranges to be processed for different drive routes;

Fig. 6 shows a schematic diagram of a visual representation according to an embodiment of the disclosure;

Fig. 7 shows an illustrative road surface profile;

Fig. 8 shows an exemplary visual representation based on a road surface profile;

Fig. 9 shows an exemplary flowchart of a driving assistance method according to an embodiment of the disclosure;

Fig. 10 shows a block diagram of a driving assistance apparatus according to an embodiment of the disclosure.
Detailed description
Specific embodiments of the disclosure are described below with reference to the drawings and examples. The particular embodiments described herein are intended only to explain the disclosure, not to limit the disclosure or its applications or uses. It should be understood that, for ease of description, the drawings illustrate only the portions relevant to the disclosure.
Overview
As described above, a vehicle may collide or scrape because of blind zones and the driver's inaccurate perception of road surface conditions. The problem is more serious when the vehicle travels on a complex road surface (for example, a road under construction). If no additional underbody protection is installed, the vehicle can suffer damage.

Some vehicles are equipped with cameras, for example a camera on the vehicle roof. Such a camera can acquire images of the surrounding environment, and its monitoring range may cover certain sub-regions outside the driver's field of view. In the prior art, while the vehicle is moving, the images acquired by such a camera are fed back to the driver in real time. This gives the driver information about the road surface conditions in some additional areas outside his or her field of view.
However, there are regions that the camera cannot detect. As shown in Fig. 1, in an illustrative scenario a camera 101 located at the upper front of the vehicle can scan a region 102 in front of the vehicle. But because the hood blocks the view, region 103 cannot be detected by camera 101 in real time. This sub-region includes the part of the road immediately in front of the vehicle and the parts under the hood and the vehicle body.
That is, even if the images acquired by the camera are fed back to the driver in real time, the driver still cannot perceive the environment in region 103 (for example, the surface flatness) in real time. In such circumstances, the driver may have to rely on memory to estimate the road surface conditions in the blind zone at the current position from observations or feedback images obtained at previous positions, which may be inaccurate.

It may therefore be advantageous to supply the road surface conditions in region 103 to the driver in real time to guide his or her operations. Compared with region 102, region 103 is closer to the vehicle, so in some cases the road surface conditions in region 103 may be more meaningful to the driver, for example when the vehicle travels slowly on a complex road surface (for example, a road under construction) to dodge undulations in the surface, or when attempting to park at a position on a complex road surface.
For ease of illustration, only the blind zone in the region in front of the vehicle has been described above. It should be understood that blind zones also exist at the sides of the vehicle, for example blind zones caused by the pillars.

Embodiments of the disclosure aim to provide a driving assistance system in which a camera such as camera 101 in Fig. 1 can, at the current position, visually feed back to the driver the road surface conditions in blind zone 103 in real time. In further embodiments, such feedback can also be used to guide the driver's operations.
Example system
The various concepts presented throughout this disclosure can be implemented in a variety of vehicles, including sports cars, sedans, sport utility vehicles (SUVs), hybrid electric vehicles (HEVs), battery electric vehicles, trucks, and so on. Those skilled in the art will understand, however, that these are provided for illustration purposes only, and that one or more aspects of the disclosure can be implemented or incorporated in one or more other types of vehicles.
Referring now to Fig. 2, a vehicle 20 including various vehicle electronic modules, subsystems, and/or components is shown. As shown in Fig. 2, in some embodiments a driving assistance system 200 may include a sensor module 21, a processing system 22, and a display system 23. As will be described below, in further embodiments the driving assistance system 200 may also include a communication device 24 and a user interface 25. It should be understood that the driving assistance system 200 is only one example of a driving assistance system; it may have more or fewer components than illustrated, or be configured with different components.
The processing system 22 may include a memory 222 and a processor 221. The memory 222 can be volatile memory such as random access memory (RAM), static RAM (SRAM), or dynamic RAM (DRAM); nonvolatile memory such as read-only memory (ROM), flash memory, or magnetic disk; or some combination of both. The memory 222 can be used to store program instructions executable by the processor 221. In addition, the memory 222 may also include a database 2221. As will be described below, the database 2221 may include the sensor data acquired by the sensor module 21 during the period before the vehicle reaches the current position. The database 2221 may also include other information related to the vehicle 20, for example the model, speed, acceleration, dimensions, and maximum steering angle of the vehicle 20. In particular, when some or all of this sensor data and information is processed by the processor 221 according to the stored instructions, the functions described in this disclosure may be implemented.
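As a rough sketch of how database 2221 might hold the sensor data acquired before the current position, the following keeps a position-tagged rolling buffer of frames. The patent does not specify the storage structure; the class name, buffer size, and query method here are illustrative assumptions.

```python
from collections import deque

class SensorHistory:
    """Rolling buffer of sensor frames tagged with the vehicle position at
    which each was captured (hypothetical structure; the disclosure only
    states that data acquired before the current position is stored)."""

    def __init__(self, max_frames=256):
        self.frames = deque(maxlen=max_frames)  # oldest entries evicted first

    def record(self, position_m, frame):
        # Tag each frame with the longitudinal position where it was taken.
        self.frames.append((position_m, frame))

    def frames_covering(self, start_m, end_m):
        # Return frames captured inside the given interval, e.g. the
        # stretch of road now hidden in the blind zone.
        return [f for p, f in self.frames if start_m <= p <= end_m]
```

A processing system could then query `frames_covering()` for the interval currently occluded by the hood and composite those frames into the first visual representation.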
The processor 221 can be a general-purpose processor such as a central processing unit (CPU), a microcontroller unit (MCU), a digital signal processor (DSP), or a GPU, configured to implement some or all of the functions described herein by executing program instructions stored in the memory 222. Additionally or alternatively, the processor 221 can also include programmable hardware elements, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). As will be described below, in embodiments of the disclosure the processor 221 can generate a visual representation of the road conditions in a blind zone based on the sensor data collected by the sensor module 21.
The sensor module 21 may include one or more of an image sensor 211, a radar sensor 212, an ultrasonic sensor 213, and a laser sensor 214.
The image sensor 211 is used to acquire images of the vehicle's surroundings. In an illustrative arrangement, the image sensor 211 may include cameras or video cameras mounted at different locations on the vehicle, or a combination thereof. For example, the image sensor may include a front camera arranged at the front of the vehicle, a top-view camera arranged above the vehicle, and/or cameras arranged at the sides and rear of the vehicle. These cameras can be used to acquire images of the environment in different directions.
Like the image sensor 211, the radar sensor 212 and the ultrasonic sensor 213 can be arranged at various positions on the vehicle. In an illustrative arrangement, the radar sensor 212 and the ultrasonic sensor 213 can be arranged at the front and rear of the vehicle, respectively, to acquire the contour features of positions on the road.
In an embodiment, the driving assistance system 200 may also include sensors (not shown) that provide various information related to the vehicle 20. Such sensors may include one or more of a speed sensor, an acceleration sensor, a steering angle sensor, and a GPS. As will be described below, the speed provided by the speed sensor can be used for image processing, and the vehicle position provided by the GPS can be used to determine the drive route of the vehicle.
The user interface 25 may include various devices that allow a user (for example, the driver) to interact with the vehicle. In exemplary embodiments, the user interface 25 may include a touch-sensitive display screen, a keyboard, a joystick, a microphone, buttons, and so on. Through the interface 25, the user can select options, input information, adjust the presentation mode of the visual representation, and configure any components and functions.
The display system 23 may include various devices that provide a visual representation to the user; the content of the visual representation can include, for example, images, video, text, or animation. As will be described below, in embodiments of the disclosure the display system 23 can be used to provide the user with visual feedback on the road surface conditions in a blind zone. In addition, the display system 23 can also provide the user with a recommended drive route generated based on the road surface conditions. The display system 23 can be integrated into the vehicle, for example as a touch-sensitive display screen that can receive user input. In an embodiment, the display system 23 can also include a projection device 231 that can, for example, project the visual representation onto the windshield of the vehicle or onto another position easily visible to the driver. In another embodiment, the display system 23 can also include a head-up display (HUD). The HUD can provide the visual representation to the user without requiring the user to change his or her gaze direction.
Additionally or alternatively, the display system 23 can also include a wearable display device separate from the vehicle. In an embodiment, the wearable display device can be a head-mounted display (HMD) 233. The HMD 233 can, for example, use augmented reality technology to present the visual representation generated by the processing system 22 in the user's field of view; such a visual representation can, for example, be rendered superimposed on the road or on vehicle components such as the windshield and the hood.

In embodiments using a separate HMD 233, the driving assistance system 200 can also include a communication device 24. The communication device 24 is suitably configured to transmit information between the HMD 233 and the other components of the driving assistance system 200. The information includes, for example, the visual representation to be presented by the HMD 233, generated by the processing system 22 based on the sensor data. In some embodiments, the communication device 24 can transmit information over WLAN, Bluetooth, or the like.
Generation of the visual representation
Referring now to Fig. 3, which shows an exemplary scenario for describing the principles of the disclosure. Some features in Fig. 3 will be described in conjunction with the driving assistance system 200 in Fig. 2.
As shown in Fig. 3, a vehicle 300 is travelling forward on a road surface. An image sensor 211 (for example, a camera) arranged on top of the vehicle 300 acquires images of the region in front of the vehicle in real time. In this exemplary scenario, when the vehicle 300 reaches position 30, the image sensor 211 acquires an image of the region 301 in front of the vehicle.

The vehicle 300 continues to position 31, ahead of position 30. At position 31, the camera 211 has a blind zone 302. The driver of the vehicle expects to obtain a visual representation of the road surface conditions in blind zone 302. In embodiments of the disclosure, such a visual representation is generated by the processor 221 based on images acquired by the camera 211 at previous positions.

As a specific example, as shown in Fig. 3, the blind zone 302 of the vehicle 300 at position 31 is covered by the region 301 monitored by the camera at position 30. In this case, when the vehicle 300 reaches position 31, the processor 221 can generate a visual representation of the road surface conditions in blind zone 302 at position 31 based on the image of the corresponding part of region 301 acquired by the image sensor 211 at position 30.
It should be understood that the above example process is provided only to illustrate the principles and concepts of embodiments of the disclosure. In practice, the visual representation of blind zone 302 at position 31 can be obtained by compositing images acquired at multiple previous positions. This can be achieved, for example, by image processing and computer vision techniques; the details of these techniques are well known to those skilled in the art and are omitted from this disclosure to avoid obscuring it unnecessarily.
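The compositing step can be sketched as follows under strong simplifying assumptions that the disclosure does not itself make: each frame is already rectified to a top-down (bird's-eye) view of the road, and the vehicle advances a fixed number of image rows between consecutive frames, so the ground strip nearest the bumper in each frame is exactly the strip that later disappears under the hood. A real system would use homography-based ground-plane warping driven by odometry.

```python
import numpy as np

def synthesize_blind_zone(frames, rows_per_step, blind_rows):
    """Composite a view of the road now hidden in the blind zone from
    earlier frames (oldest first). Sketch only: assumes rectified
    top-down frames and a constant advance of rows_per_step rows/frame."""
    # The near-ground strip of each earlier frame; concatenating them with
    # the oldest strip nearest the bumper reconstructs the hidden patch.
    strips = [frame[-rows_per_step:, :] for frame in frames]
    mosaic = np.vstack(strips)
    return mosaic[:blind_rows]          # crop to the blind-zone extent
```

With real image data the stacking would be replaced by warped blending, but the bookkeeping — which previously seen ground now lies in the blind zone — is the same.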
Referring now to Figs. 4 and 5, which respectively illustrate the image portions to be processed for different drive routes. The drive route (for example, turning or going straight) can be predicted by a navigation application based on the position of the vehicle.

As illustrated in Fig. 4, a vehicle 401 will keep going straight. For the straight-driving vehicle 401, providing a visual representation of the road surface conditions only in the region corresponding to the width of the vehicle 401 is enough to ensure safe driving. The processing system therefore only needs to store and process the road images within the vehicle-width range. This advantageously reduces the amount of data stored and the computation required for image processing.
However, for the vehicle 501 in Fig. 5, which is about to turn (for example, turn left), the regions on both sides of the vehicle must also be considered. In addition to the region within the body width, it is desirable to visualize the road surface conditions within a certain angle toward the turning direction.

In an embodiment, a visual representation is provided for the road surface conditions in a region corresponding to the maximum steering angle of the vehicle 501. The width W of this region can be determined based on the maximum steering angle of the vehicle 501.
The visual representation of the road surface conditions in the blind zone at the current position has been described above. In some embodiments, a visual representation of the road surface conditions in the region monitored by the sensor (for example, the camera) at the current position can also be generated. Unlike the blind zone, the visual representation of the monitored region is generated based on sensor data (for example, images) acquired in real time at the current position.
Referring to Fig. 6, a schematic diagram of a visual representation according to an embodiment of the disclosure is shown. In this embodiment, in addition to a first visual representation of the road surface conditions in the blind zone 601 at the current position, a second visual representation of the road surface conditions in the region 602 monitored by the camera is also generated. In a further embodiment, the first visual representation and the second visual representation can also be composited together.

In another embodiment, in addition to the first and second visual representations described above, the processing system can also generate, based on stored images of a past distance 603, a third visual representation of the road surface conditions over that past distance, and composite the first, second, and third visual representations.
In an embodiment, the processing system stores the images within the last 100 m the vehicle has travelled, and generates, based on the stored images, a visual representation of the road surface conditions within 100 m behind the vehicle. Such a visual representation may be advantageous during reversing.
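A distance-bounded buffer of this kind can be sketched as below: frames older than the window (for example 100 m) are evicted as the odometer advances. The eviction policy and data layout are illustrative choices; the disclosure only states the 100 m range.

```python
from collections import deque

class RoadHistory:
    """Keep only the frames covering the last `window_m` metres driven,
    so a rear view can be rendered when reversing (sketch)."""

    def __init__(self, window_m=100.0):
        self.window_m = window_m
        self.frames = deque()  # (odometer_m, frame), oldest first

    def record(self, odometer_m, frame):
        self.frames.append((odometer_m, frame))
        # Drop frames that fell out of the trailing window.
        while self.frames and odometer_m - self.frames[0][0] > self.window_m:
            self.frames.popleft()
```

Bounding the buffer by distance rather than frame count keeps memory use independent of speed and frame rate.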
In some embodiments, the parameters the camera uses to acquire images can also be adjusted based on the vehicle's travel speed. As an example, when the vehicle travels on a complex road surface, its speed is generally low (for example, 0-50 km/h); the camera can then be set to acquire images at a higher resolution to provide a high-definition, more accurate visual representation. When the vehicle is on a relatively smooth road surface, its speed is generally higher (for example, 50-120 km/h), and the camera can be set to acquire images at a lower resolution to improve response speed. Correspondingly, the resolution of the provided visual representation may be lower.
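The speed-dependent parameter selection reduces to a simple banded lookup. The 50 km/h threshold comes from the text; the concrete resolutions are assumptions for illustration.

```python
def camera_resolution(speed_kmh):
    """Pick a capture resolution from vehicle speed (sketch).
    Bands follow the 0-50 / 50-120 km/h example in the text;
    the (width, height) values are illustrative."""
    if speed_kmh <= 50:
        return (1920, 1080)  # slow, complex road: finer detail
    return (1280, 720)       # faster road: lower latency
```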
Embodiments that generate the visual representation of the road surface conditions in the blind zone based on the road images acquired by the camera have been described above. Additionally or alternatively, in some embodiments the above visual representation can also be generated based on data acquired by one or more of a radar sensor, an ultrasonic sensor, and a laser sensor. These sensors can acquire the profile of positions on the road. Here, the profile may include the flatness of the road surface and the dimensions (for example, height, width) of various features on the road (for example, potholes, bumps).
Referring now to Fig. 7, an illustrative road surface profile 700 that can be acquired by the sensors is shown. As shown in Fig. 7, the horizontal plane containing the contact points between the vehicle's tires and the road surface is set as a reference plane 701 for evaluating the flatness of the road surface. In this example, position 703 is determined to be above the reference plane and can thus be indicated as a bump; the road surface at position 704 is determined to be below the reference plane, and position 704 can thus be determined to be a pothole. In a further embodiment, the height of the chassis and the flatness of the road surface can also be used to determine at which positions scraping may occur.
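The classification against the tire-contact reference plane, plus the scrape check against ground clearance, can be sketched as below. The flatness tolerance `tol_mm` is an assumed parameter; the disclosure only describes the above/below-plane decision and the chassis-height comparison.

```python
def classify_profile(heights_mm, clearance_mm, tol_mm=5.0):
    """Label each sampled road point relative to the tire contact plane
    (height 0) and flag points a vehicle with the given ground clearance
    could scrape. Returns a list of (label, scrape_risk) pairs. Sketch."""
    labels = []
    for h in heights_mm:
        if h > tol_mm:
            kind = "bump"
        elif h < -tol_mm:
            kind = "pothole"
        else:
            kind = "flat"
        # A bump taller than the chassis clearance risks scraping.
        labels.append((kind, h > clearance_mm))
    return labels
```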
Like the images acquired by the image sensor, the road surface profile can also be visualized, and positions with different heights can be indicated distinctively in the visual representation. Fig. 8 shows an illustrative visual representation based on a road surface profile. As shown in Fig. 8, different colors indicate positions with different heights on the road surface. In an embodiment, positions corresponding to a flat surface are indicated in grey, positions corresponding to bumps are indicated in red, and positions corresponding to potholes are indicated in blue. In addition, the variation in height can be indicated by varying the color intensity.

In other examples, the visual representation can also include a profile diagram indicating the height variations of positions on the road.
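The grey/red/blue scheme with intensity scaled by height can be written as a small color-mapping function. The exact scaling and the flat-surface tolerance are illustrative assumptions; the text specifies only the three colors and that height variation modulates color depth.

```python
def height_to_rgb(h_mm, max_mm=200.0, tol_mm=5.0):
    """Map a road-point height to an (R, G, B) color: grey for flat,
    red for bumps, blue for potholes, intensity scaled by |height|.
    Scaling constants are assumed for illustration."""
    if abs(h_mm) <= tol_mm:
        return (128, 128, 128)           # flat: grey
    s = min(abs(h_mm) / max_mm, 1.0)     # normalize to [0, 1]
    level = int(round(255 * s))
    return (level, 0, 0) if h_mm > 0 else (0, 0, level)
```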
In some embodiments, a recommended drive route can also be generated based on the flatness of the road surface. As shown in Fig. 8, the generated recommended drive route 805 avoids the bumps and potholes on the road surface, which helps improve safety.
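One simple way such a route could be chosen — not stated in the disclosure, purely an illustrative cost-based sketch — is to slide a vehicle-width corridor across a height map of the road ahead and pick the lateral placement with the least surface roughness.

```python
import numpy as np

def recommend_lane_offset(height_map, vehicle_cols):
    """Return the leftmost column index of the vehicle-wide corridor with
    the least roughness (sum of |height|) in a rows-ahead x columns-across
    height map. Sketch of recommended-route generation."""
    h = np.abs(np.asarray(height_map, dtype=float))
    costs = [h[:, c:c + vehicle_cols].sum()
             for c in range(h.shape[1] - vehicle_cols + 1)]
    return int(np.argmin(costs))
```

A full planner would also enforce steering continuity between successive road segments, but the corridor-cost idea captures why route 805 threads between the bump and the pothole in Fig. 8.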
Presentation of the visual representation
In some embodiments, the above visual representation can be projected onto the hood of the vehicle via the projection device, so that the visual representation presented on the hood appears to be a true view of the road surface in the blind zone blocked by the hood.

In some embodiments, the visual representation can be presented on the front window of the vehicle via the HUD.

In some embodiments, data including the visual representation can be sent via the communication device to a wearable display device (for example, an HMD), so that the HMD superimposes the visual representation on the road or on vehicle components such as the windshield and the hood.

In some embodiments, the visual representation can also be presented via a man-machine interface (MMI) such as a touch-sensitive display screen.
Example process
Fig. 9 shows an exemplary flowchart of a driving assistance method according to an embodiment of the disclosure. The driving assistance method 900 can be executed, for example, in the system shown in Fig. 2.

In block 901, sensor data reflecting road surface conditions is acquired.

The sensor data may include images reflecting road surface conditions acquired by an image sensor such as a camera. Additionally or alternatively, the sensor data can also include profile data acquired by a radar sensor or an ultrasonic sensor.

In block 902, a visual representation of the road surface conditions in a blind zone of the vehicle at its current position is generated based at least in part on the sensor data acquired by the sensors at a previous position of the vehicle.

In some embodiments, in addition to the first visual representation of the road surface conditions in the blind zone, a second visual representation of the road surface conditions in the monitored region is also generated based on sensor data acquired by the sensors at the current position. In some embodiments, in addition to the first and second visual representations described above, a third visual representation can also be generated based on sensor data for a past distance (for example, 100 m) stored in the memory. In a further embodiment, the first visual representation can also be composited with the second visual representation and/or the third visual representation.

In block 903, the visual representation is presented.

In some embodiments, the first visual representation is projected onto the hood of the vehicle by the projection device. In other embodiments, the first visual representation is presented on the front window of the vehicle by the head-up display. In still other embodiments, the visual representation is presented in the driver's field of view by a wearable display device via the communication device.
Figure 10 shows the block diagram of drive assistance device 1000 according to an embodiment of the present disclosure.Device 1000 includes:It adopts
Collect module 1001, generation module 1002 and display module 1003.The device 1000 can be assisted by driving shown in Figure 2
Various hardware, software in system, or combinations thereof realize.
Acquisition module 1001 is configured as the sensing data of acquisition reflection condition of road surface.
In some embodiments, acquisition module 1001 obtains one or more imaging sensors by being arranged on vehicle
The road image that (for example, camera) acquires in real time.
In further embodiments, acquisition module 1001 is obtained is passed by radar sensor, the ultrasonic wave being arranged on vehicle
The contour feature of position on the road of sensor acquisition.
The generation module 1002 is configured to generate, based at least in part on sensor data acquired at a previous position, a visual representation of the road surface conditions in a blind zone for the vehicle at its current position.
In some embodiments, in addition to the first visual representation of the road surface conditions in the blind zone, a second visual representation of the road surface conditions in the monitored region is generated based on sensor data acquired by the sensors at the current position.
In some embodiments, in addition to the first and second visual representations described above, a third visual representation is generated based on sensor data for a past distance (for example, 100 m) stored in a memory.
In further embodiments, the first visual representation may also be synthesized with the second visual representation and/or the third visual representation.
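One compact way to express the first/second/third representations and their synthesis is sketched below. The dictionary-of-road-positions data model, the blind-zone interval of 0.5-4 m ahead of the vehicle, and the 100 m past distance are assumptions for illustration, not taken from the patent:

```python
def synthesize_views(buffered: dict, live: dict, current_pos_m: float,
                     blind_start_m: float = 0.5, blind_end_m: float = 4.0,
                     past_m: float = 100.0):
    """buffered: road position (m) -> reading captured at a *previous* vehicle position.
    live: road position (m) -> reading the sensors see from the current position."""
    # First representation: the road interval hidden by the bonnet, rebuilt from old data.
    first = {p: v for p, v in buffered.items()
             if current_pos_m + blind_start_m <= p <= current_pos_m + blind_end_m}
    # Second representation: the currently monitored region (live sensor data).
    second = dict(live)
    # Third representation: the past distance already driven over.
    third = {p: v for p, v in buffered.items()
             if current_pos_m - past_m <= p < current_pos_m}
    # Composite keyed by road position; live data takes precedence on overlap.
    composite = {**third, **first, **second}
    return first, second, third, composite
```

The composite dictionary corresponds to the synthesized view handed to the display module.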
The display module 1003 is configured to provide display of the visual representation.
In some embodiments, the display module 1003 may project the above visual representation onto the bonnet of the vehicle via a projection device, so that the visual representation presented on the bonnet appears to be a true presentation of the road surface in the blind zone blocked by the bonnet.
In some embodiments, the display module 1003 may present the visual representation on the front window of the vehicle via a HUD.
In some embodiments, the display module 1003 may transmit data including the visual representation, via a communication device, to a wearable display device (for example, an HMD); the HMD superimposes the visual representation on the road surface, the windshield, and vehicle parts such as the engine hood.
In some embodiments, the display module 1003 may also present the visual representation via a touch-sensitive display screen.
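The presentation paths above can be modelled as interchangeable sinks behind a single dispatch routine. The sink names and the preference order below are hypothetical, not drawn from the patent:

```python
from typing import Callable, Dict

class DisplayModule:
    """Routes a rendered visual representation to the first available output device."""

    def __init__(self):
        self.sinks: Dict[str, Callable] = {}

    def register(self, name: str, sink: Callable) -> None:
        """Register an output, e.g. "projector", "hud", "hmd", or "touchscreen"."""
        self.sinks[name] = sink

    def present(self, representation,
                preference=("projector", "hud", "hmd", "touchscreen")) -> str:
        for name in preference:
            sink = self.sinks.get(name)
            if sink is not None:
                sink(representation)  # hand the frame to the chosen device
                return name
        raise RuntimeError("no display sink available")
```

A vehicle with only a HUD would register just that sink, and `present` falls through to it.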
Those skilled in the art will understand that the units in the devices disclosed herein may be distributed among the devices of the embodiments, or may alternatively be located in one or more devices different from those of the embodiments. The units of the foregoing embodiments may be integrated into one unit, or may be further divided into multiple subunits.
Although the disclosure has been described in connection with what are considered the most practical and preferred embodiments, those skilled in the art will understand that the disclosure is not limited to the disclosed embodiments, but is intended to cover various arrangements falling within its broadest interpretation, so as to encompass all such modifications and equivalent arrangements.
Claims (15)
1. A driving assistance system, comprising:
one or more sensors configured to acquire sensor data reflecting road surface conditions;
a processing system configured to generate, based at least in part on sensor data acquired by the sensors at a previous position of a vehicle, a first visual representation of the road surface conditions in a blind zone for the vehicle at its current position; and
a display device configured to present the first visual representation.
2. The driving assistance system according to claim 1, wherein the processing system is further configured to:
generate, based on sensor data acquired by the sensors at the current position, a second visual representation of the road surface conditions in a monitored region; and
synthesize the first visual representation and the second visual representation for display by the display device.
3. The driving assistance system according to claim 2, wherein the processing system is further configured to:
store sensor data for a past distance;
generate, based on the sensor data for the past distance, a third visual representation of the road surface conditions over the past distance; and
synthesize the first, second, and third visual representations for display by the display device.
4. The driving assistance system according to claim 1, wherein the processing system is further configured to:
for a vehicle travelling straight, generate the visual representation based on sensor data for a region corresponding to the width of the vehicle; and
for a turning vehicle, generate the visual representation based on sensor data for a region corresponding to the maximum steering angle of the vehicle.
5. The driving assistance system according to claim 1, wherein a parameter used for acquiring the sensor data is adjusted based on the speed of the vehicle.
6. The driving assistance system according to any one of claims 1-5, wherein the sensors comprise one or more of the following types of sensors:
an image sensor configured to acquire images of the surrounding environment; and
a radar sensor and/or an ultrasonic sensor configured to acquire the contours of one or more positions on the road.
7. The driving assistance system according to claim 6, wherein the display device is further configured to distinctively indicate, on the presented visual representation, positions on the road that have different contours.
8. The driving assistance system according to claim 7, wherein the display device is further configured to distinctively indicate the positions on the road having different contours by:
using different colors to indicate positions having different heights relative to a reference plane; or
presenting, on the visual representation, a contour map of the height variation of the positions.
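For illustration only (the claim fixes no particular colour scheme), the colour-based indication above could map each road point's height relative to the reference plane onto a green-to-red gradient for bumps and a green-to-blue gradient for dips; the 200 mm range is an assumed calibration:

```python
def height_to_color(height_mm: float, max_abs_mm: float = 200.0) -> tuple:
    """RGB colour for a road point: green when level, shading to red for raised
    contours (bumps) and to blue for depressed contours (dips)."""
    h = max(-max_abs_mm, min(max_abs_mm, height_mm)) / max_abs_mm  # clamp to [-1, 1]
    if h >= 0:
        return (int(255 * h), int(255 * (1 - h)), 0)  # level -> raised: green to red
    return (0, int(255 * (1 + h)), int(255 * -h))     # level -> dip: green to blue
```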
9. The driving assistance system according to any one of claims 6-8, wherein the display device is further configured to display a virtual driving route generated by the processing system based on the contours.
10. The driving assistance system according to claim 1, wherein the display device comprises a projection device configured to project the first visual representation onto the bonnet of the vehicle.
11. The driving assistance system according to any one of claims 1-10, wherein the display device comprises a head-up display configured to present the visual representation of the road surface conditions on the front window of the vehicle.
12. A vehicle, comprising the driving assistance system according to any one of claims 1-11.
13. A driving assistance method, comprising:
acquiring sensor data reflecting road surface conditions;
generating, based at least in part on the sensor data acquired at a previous position of a vehicle, a first visual representation of the road surface conditions in a blind zone for the vehicle at its current position; and
presenting the first visual representation.
14. A drive assistance device, comprising:
an acquisition module configured to acquire sensor data reflecting road surface conditions;
a generation module configured to generate, based at least in part on sensor data acquired at a previous position, a first visual representation of the road surface conditions in a blind zone for a vehicle at its current position; and
a display module configured to provide display of the first visual representation.
15. A driving assistance system, comprising:
one or more sensors configured to acquire sensor data reflecting road surface conditions;
a processing system configured to generate, based at least in part on sensor data acquired by the sensors at a previous position of a vehicle, a visual representation of the road surface conditions in a blind zone for the vehicle at its current position; and
a communication device configured to transmit data including the visual representation to a wearable display device, so that the visual representation is presented by the wearable display device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710081070.8A CN108422932B (en) | 2017-02-15 | 2017-02-15 | Driving assistance system, method and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108422932A true CN108422932A (en) | 2018-08-21 |
CN108422932B CN108422932B (en) | 2021-06-01 |
Family
ID=63155349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710081070.8A Active CN108422932B (en) | 2017-02-15 | 2017-02-15 | Driving assistance system, method and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108422932B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111427331A (en) * | 2020-03-24 | 2020-07-17 | 新石器慧通(北京)科技有限公司 | Perception information display method and device of unmanned vehicle and electronic equipment |
CN111660924A (en) * | 2019-12-06 | 2020-09-15 | 摩登汽车有限公司 | Automobile steering lamp control system and method |
CN113119863A (en) * | 2020-01-15 | 2021-07-16 | 未来(北京)黑科技有限公司 | Full car window imaging system |
CN114619964A (en) * | 2022-04-20 | 2022-06-14 | 芜湖汽车前瞻技术研究院有限公司 | Intelligent-cockpit display system and intelligent vehicle |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103237685A (en) * | 2010-12-30 | 2013-08-07 | 明智汽车公司 | Apparatus and method for displaying a blind spot |
CN103609100A (en) * | 2011-06-09 | 2014-02-26 | 爱信精机株式会社 | Image generation device |
CN105564309A (en) * | 2014-10-14 | 2016-05-11 | 中兴通讯股份有限公司 | Method for seeing through a sight-line blind area, and driving assistance glasses |
CN106163872A (en) * | 2014-04-09 | 2016-11-23 | 捷豹路虎有限公司 | Apparatus and method for displaying information |
CN106167045A (en) * | 2015-05-21 | 2016-11-30 | Lg电子株式会社 | Driver assistance apparatus and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||