CN111348055A - Driving assistance method, driving assistance system, computing device, and storage medium - Google Patents
- Publication number
- CN111348055A (application CN201811565912.8A)
- Authority
- CN
- China
- Prior art keywords
- information
- vehicle
- driving
- road
- predicted
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/143—Alarm means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a driving assistance method, a driving assistance system, a computing device, and a storage medium. The method comprises the following steps: acquiring vehicle travel information of a vehicle; determining a predicted travel state after the vehicle has traveled for a set time period or a set distance, based on the vehicle travel information; and outputting prediction information including the predicted travel state. This makes it convenient to guide the driver in driving the vehicle safely.
Description
Technical Field
The present disclosure relates to the field of vehicle driving, and in particular, to a driving assistance method, a driving assistance system, a computing device, and a storage medium.
Background
In recent years, driving assistance techniques have attracted attention and have developed in many directions. For example, with the development of augmented reality (AR) technology, driving assistance schemes that use augmented reality have emerged.
Augmented Reality (AR) is a technology that calculates the position and angle of a camera image in real time and superimposes corresponding images and video. AR navigation is software that presents a planned path in the video feed of a vehicle's forward-facing camera.
At present, mainstream AR navigation products on the market have limited functions. They are either AR extensions of traditional navigation, showing the user only AR guidance such as straight-ahead and turn guidelines, or AR extensions of conventional ADAS (Advanced Driver Assistance Systems), showing the user only AR warning information such as forward-collision warnings and line-pressing warnings. No product integrates the guidance and warning functions well, none displays the safe driving area ahead for the driver, and none effectively assists the driver in driving safely.
Therefore, there is still a need for a driving assistance scheme that can assist the driver in driving safely.
Disclosure of Invention
An object of the present disclosure is to provide a driving assistance scheme that makes it convenient for the driver to drive the vehicle safely, thereby improving the safe-driving experience.
According to a first aspect of the present disclosure, there is provided a driving assistance method including: acquiring vehicle travel information of a vehicle; determining a predicted travel state after traveling for a set time period or a set distance based on the vehicle travel information; and outputting prediction information including the predicted travel state.
Optionally, the predicted driving state comprises a location to be reached by the vehicle after driving for at least one set time period or at least one set distance, and the predicted driving state is presented using a space occupying marker corresponding to the location.
Optionally, the driving assistance method may further include: acquiring road information of the vehicle; and predicting a travel risk based on the predicted travel state and the road information, wherein the prediction information includes the travel risk.
Optionally, the driving risk occurrence object of the driving risk includes at least one of: lane edges, road edges, pedestrians, other vehicles, obstacles, and fixed facilities; and/or the driving risk comprises at least one of: lane departure, road departure, collision with a pedestrian or another vehicle, and collision with an obstacle or a fixed facility.
Optionally, the driving risk is presented by combining the space-occupying marker, which corresponds to the predicted position of the vehicle at the predicted time of the driving risk's occurrence, with the driving risk occurrence object or an image thereof.
Optionally, the step of presenting the prediction information may include: superimposing the prediction information on the corresponding road image or at the corresponding position in the real road scene.
Optionally, the driving assistance method may further include: rendering the prediction information so as to superimpose it on the corresponding road image or at the corresponding position in the real road scene, wherein the rendering parameters used to render each piece of prediction information are related to its degree of safety.
Optionally, the vehicle running information is acquired through an on-board sensor; and/or acquiring the road information through a vehicle-mounted camera.
Optionally, the step of acquiring the road information may include: acquiring the road information from live video of the road ahead of the traveling vehicle, captured by the vehicle-mounted camera.
Optionally, the vehicle travel information may include at least one of: current position information, current driving direction information, driving speed information, steering control information and power control information; and/or the road information includes lane line information, road curvature information, obstacle information, stationary facility information, and pedestrian and other vehicle information located in front of or near the vehicle on the road.
According to a second aspect of the present disclosure, there is also provided a prompting method, including: acquiring traveling information of an object traveling on a road; determining a predicted travel state after traveling for a set time period or a set distance based on the travel information of the object; outputting prediction information, the prediction information including the predicted travel state.
According to a third aspect of the present disclosure, there is also provided a driving assistance system including: the driving information acquisition module is used for acquiring vehicle driving information of the vehicle; a driving state prediction module for determining a predicted driving state after driving for a set time period or a set distance based on the vehicle driving information; and an output module for outputting prediction information including the predicted travel state.
Optionally, the driving assistance system may further include: the road information acquisition module is used for acquiring the road information of the vehicle; and the driving risk prediction module is used for predicting driving risks based on the predicted driving state and the road information, wherein the predicted information comprises the driving risks.
Optionally, the driving assistance system may further include: and the rendering module is used for rendering the prediction information so as to display the prediction information in an overlapping manner on a corresponding road image or at a position corresponding to a road real scene, wherein rendering parameters used for rendering the prediction information are respectively related to the safety degree of the prediction information.
According to a fourth aspect of the present disclosure, there is also provided a driving assistance system including: one or more vehicle travel information sensors for sensing vehicle travel information of the vehicle; a data processing device for determining a predicted travel state after traveling for a set period of time or a set distance based on the vehicle travel information; and an output device for outputting prediction information to a user, the prediction information including the predicted travel state.
Optionally, the output device is a rendering and display device; and/or the output device is an audio output device.
Optionally, the driving assistance system may further include a vehicle-mounted camera for capturing real-time images of the road, wherein the data processing device analyzes the real-time images to acquire road information for the vehicle and predicts the driving risk based on the predicted travel state and the road information, the prediction information including the driving risk.
According to a fifth aspect of the present disclosure, there is also provided a computing device comprising: a processor; and a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any of the above.
According to a sixth aspect of the present disclosure, there is also provided a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any of the above.
Thus, by combining the vehicle travel information and the road information during actual driving, the driving conditions of the vehicle within a predetermined future time, such as the position it will reach and any driving risks, are predicted, and the predictions are output to the user as images, animation, or voice. In this way, the safe driving area ahead of the vehicle and the possible risks are presented to the driver, guiding the driver to drive safely and helping the driver discover risks in time and respond effectively.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 shows a schematic block diagram of the structure of a driving assistance system according to one embodiment of the present disclosure.
Fig. 2A-2E show schematic diagrams of presenting prediction information in different scenarios.
Fig. 3 shows a schematic flow diagram of a driving assistance method according to an embodiment of the present disclosure.
Fig. 4 shows a schematic flow diagram of a driving assistance method according to another embodiment of the present disclosure.
Fig. 5 shows a schematic block diagram of a driving assistance system according to one embodiment of the present disclosure.
FIG. 6 illustrates a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
[ term interpretation ]
Navigation engine: software that plans a travel path from a given starting point and destination and then displays the planned path on a map.
AR: Augmented Reality, a technology that calculates the position and angle of a camera image in real time and superimposes corresponding images and video.
AR navigation: software that presents the planned path in the video feed of the vehicle's forward-facing camera.
Image recognition technology: a technique for processing, analyzing, and understanding images with a computer in order to recognize targets and objects of various kinds.
Navigation mode: a driving mode in which driving operations are performed based on navigation information generated by a navigation tool.
Cruise mode: a driving mode in which the user drives the vehicle according to map information, personal preference, or experience, without relying on a navigation tool.
[ scheme overview ]
The present disclosure provides a driving assistance scheme that, during driving, predicts the travel state and driving risk of the vehicle at a future moment based on the vehicle's travel information and road information, and outputs the predictions to the user as images, animation, or voice. For example, the relevant prediction information (or other information corresponding to it) may be superimposed on the corresponding road image, or displayed at the corresponding position in the real road scene to realize an AR display, or broadcast to the user by voice, thereby assisting the driver in driving safely.
The driving assistance scheme can be combined with a navigation mode and/or a cruise mode, and the experience of safe driving is improved when good navigation and/or cruise service is provided for a driver.
The driving assistance scheme of the present disclosure will be further described with reference to the accompanying drawings and embodiments.
[ Driving assistance System ]
Fig. 1 shows a schematic block diagram of the structure of a driving assistance system according to one embodiment of the present disclosure. The driving assistance system 100 may be mounted in an in-vehicle operation system to provide a driving assistance service to a user. Optionally, the driving assistance system 100 may also implement the driving assistance scheme of the present disclosure in combination with image recognition, lane positioning, AR, and other technologies.
FIG. 1 illustrates a number of modules that may be involved in some embodiments. It should be understood that not all of these modules are necessary to implement the technical solution of the present disclosure. For example, as will be apparent from reading the following detailed description of embodiments, not all of these modules may be required to participate in some scenarios.
As shown in fig. 1, the driving assistance system 100 of the present disclosure may include a variety of information collecting devices 110 (which may include, for example, one or more vehicle travel information sensors 111, an in-vehicle camera 112, etc.), a data processing device 120, and an output device 130.
Here, the information collection device 110 collects information about the traveling vehicle and its surrounding environment. The data processing device 120 analyzes and processes the collected information and, based on it, determines the vehicle's future driving conditions after traveling for a set time period or a set distance, such as the travel state during actual driving or the driving risks that may exist on the actual road, thereby obtaining the relevant prediction information. The output device 130 outputs the relevant prediction information to the user.
As one example of the present disclosure, the output device may be a rendering and display device. The rendering and display device, in particular its display module, may be a display screen or a projection device, so that the relevant prediction information is displayed, or projected onto the vehicle's front windshield, superimposed on the corresponding road image or at the corresponding position in the real road scene.
The output device can also be an audio output device, and the determined prediction information can be output to the driver in a voice broadcast mode so as to remind the driver of safe driving.
In this way, an AR display is realized, assisting the driver in driving safely while providing a more intuitive and friendly driving experience. Moreover, the safe driving area ahead or the driving risk can also be broadcast by voice, further facilitating safe driving.
As one example of the present disclosure, the predicted information of the prediction may include a predicted travel state, and the predicted travel state may include, for example, a position to be reached by the vehicle after traveling for at least one set time period or at least one set distance.
The information collecting device 110 may include an in-vehicle sensor, for example, may include one or more vehicle travel information sensors 111 for sensing vehicle travel information of the vehicle. The data processing device 120 may determine a predicted travel state after traveling for a set period of time or a set distance, for example, based on the vehicle travel information. The output device 130 may output the predicted travel state to the user.
The vehicle driving information may include various operating condition information during the driving of the vehicle, including but not limited to current position information of the vehicle, current driving direction information, driving speed information, steering control information, power control information, and temperatures of various media.
The vehicle travel information sensors may include various vehicle sensors, including but not limited to sensors for measuring temperature, pressure, flow, position, gas concentration, speed, light brightness, humidity, distance, and so on, to collect various operating-condition information while the vehicle is driving. These sensors can be installed at corresponding positions on the vehicle, such as the clutch, brake, accelerator, throttle valve, engine, crankshaft pulley or chain, camshaft, distributor, transmission, steering gear, steering wheel, suspension, ABS, and the like.
The set time period or the set distance may be a set time period or a set length of a trip to be traveled by the vehicle from the present time or the present position within a predetermined time in the future. The predetermined time in the future may be a relatively long time in the future or may be a relatively short time in the future. The future predetermined time may be related to the real-time operating condition (e.g., vehicle speed) of the vehicle, or may be a preset fixed time. In other words, the data processing device 120 may determine the predicted travel state after the vehicle continues to travel from the current time or current position for the set period of time or the set distance in accordance with the current travel state.
As an example, combined with the vehicle speed information, at slow speed (for example, below 20 km/h) the predetermined future time may be relatively long (for example, 5 minutes), and the at least one set time period may be, for example, 1 minute, 2 minutes, 3 minutes, 4 minutes, or 5 minutes. The at least one set distance may be 10, 20, 30, 40, or 50 meters.
At fast speed (for example, at or above 40 km/h), the predetermined future time may be short (for example, 5s), and the at least one set time period may be 1s, 2s, 3s, 4s, or 5s. The at least one set distance may be 5, 10, 15, 20, or 25 meters.
In a practical application scenario, the actual travel state of the vehicle may first be determined from the collected vehicle-condition information, after which the data processing device 120 makes the relevant prediction based on the vehicle's actual condition. The present disclosure does not limit the predicted period, the predicted time, or even the predicted object (e.g., the upcoming position). In one example of the present disclosure, a fixed future predetermined time of 5s may be set in advance, and the at least one set time period may preferably be 1s, 2s, and 3s. In another example, a fixed set distance may be preset, for example 5, 10, or 15 meters.
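The prediction step described above can be sketched with a simple motion model. The following Python sketch is illustrative only: the function names, the choice of a constant-speed, constant-yaw-rate model, and the horizon values are assumptions, since the disclosure does not prescribe a particular prediction model.

```python
import math

def predict_positions(x, y, heading_rad, speed_mps, yaw_rate_rps=0.0,
                      horizons=(1.0, 2.0, 3.0)):
    """Dead-reckon the vehicle position at each future horizon (seconds)."""
    predictions = []
    for t in horizons:
        if abs(yaw_rate_rps) < 1e-6:
            # Straight-line motion (cf. the straight-road scenario of Fig. 2A).
            px = x + speed_mps * t * math.cos(heading_rad)
            py = y + speed_mps * t * math.sin(heading_rad)
        else:
            # Circular-arc motion (cf. the curved-road scenario of Fig. 2B).
            r = speed_mps / yaw_rate_rps
            px = x + r * (math.sin(heading_rad + yaw_rate_rps * t)
                          - math.sin(heading_rad))
            py = y - r * (math.cos(heading_rad + yaw_rate_rps * t)
                          - math.cos(heading_rad))
        predictions.append((t, px, py))
    return predictions

def choose_horizons(speed_mps):
    # Speed-dependent horizons mirroring the slow/fast examples above:
    # slow (< 20 km/h) uses minute-scale horizons, fast uses 1s/2s/3s.
    if speed_mps < 20 / 3.6:
        return (60.0, 120.0, 180.0)
    return (1.0, 2.0, 3.0)
```

For instance, a vehicle at the origin heading along the x-axis at 10 m/s with no yaw rate is predicted 10, 20, and 30 meters ahead at 1s, 2s, and 3s respectively.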
The output device 130 can then output the determined prediction information to the user. For example, when the output device 130 is a rendering and display device, the rendering and display device can render and present prediction information predicted by the data processing device 120. In which the rendering and display device 130 may also render and present the guide information corresponding to the prediction information. When the output device 130 is an audio output device, the audio output device can broadcast the prediction information to the user in a voice broadcast manner.
In a preferred example, the rendering and display device may include a rendering module and a display module. The rendering module can render the prediction information or the guide information corresponding to the prediction information, and the display module can display the prediction information rendered by the rendering module or the guide information corresponding to the prediction information in an overlapping manner on the corresponding road image or the position corresponding to the road real scene. Wherein a space occupying marker (e.g., directional arrow) corresponding to the predicted location may be rendered and displayed to present the predicted driving state.
Therefore, the driver is guided to safely drive along the current road by predicting the position to be reached by the vehicle in the future and presenting the space occupying mark corresponding to the predicted position.
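To superimpose a space-occupying marker at the image location corresponding to a predicted position, the predicted point must be mapped into the camera image. One common way to do this is a pinhole-camera projection; the sketch below assumes the intrinsics K and the camera pose R, t relative to the vehicle frame come from calibration, since the disclosure leaves the AR registration method open.

```python
import numpy as np

def project_to_image(points_vehicle, K, R, t):
    """Project 3-D points in the vehicle frame to pixel coordinates.

    points_vehicle: (N, 3) points; K: 3x3 intrinsics;
    R, t: rotation and translation from vehicle frame to camera frame.
    """
    pts = np.asarray(points_vehicle, dtype=float)   # (N, 3)
    cam = R @ pts.T + t.reshape(3, 1)               # camera-frame coordinates
    uv = K @ cam                                    # homogeneous pixel coords
    return (uv[:2] / uv[2]).T                       # (N, 2) pixel coordinates
```

The display module would then draw the directional-arrow marker at each returned pixel location, superimposed on the road image or projected onto the windshield.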
In addition, when a vehicle travels on a road, there may be unknown driving risks that affect the driver's safety, such as lane departure, road departure, collision with pedestrians or other vehicles, and collision with obstacles or fixed facilities. Likewise, a variety of objects may cause driving risks, such as lane edges, road edges, pedestrians, other vehicles, obstacles, and fixed facilities.
As another example of the present disclosure, the data processing device 120 as described above may also perform prediction of a travel risk, and the prediction information may include the predicted travel risk.
Specifically, the driving risks that may exist while the vehicle is traveling can be predicted by collecting, in real time, information about the actual road and the surrounding environment in which the vehicle is currently driving, combined with the predicted travel state of the vehicle obtained as above.
In which road information and/or surrounding environment information may be acquired, for example, by the in-vehicle camera 112 included in the information collection apparatus 110.
In particular, in-vehicle camera 112 may be used to capture real-time images and/or live-action video of the vehicle surroundings. The data processing equipment can acquire road information from real-time images and/or live-action videos of the road in front of the vehicle, which are acquired by the vehicle-mounted camera. The road information may include, but is not limited to, lane line information, road curvature information, obstacle information, stationary facility information, and pedestrian and other vehicle information located in front of or near the vehicle on the road, and the like.
Thereafter, the data processing device may predict the driving risk based on the predicted travel state and the road information. The predicted driving risk may likewise be superimposed on the corresponding road image or at the corresponding position in the real road scene. Alternatively, the predicted driving risk may be broadcast to the user by voice; in that case, the driving risk with the highest severity may be reported first.
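As a much-simplified illustration of this risk-prediction step, the sketch below compares each predicted position against detected lane boundaries and the distance to a leading vehicle, then selects the most severe predicted risk for voice broadcast. All names, thresholds, and the flat-road lateral-coordinate model are illustrative assumptions, not the patent's method.

```python
def assess_risk(pred_positions, lane_left_y, lane_right_y,
                lead_vehicle_x=None, safe_gap_m=10.0):
    """Classify each predicted (t, x, y) position as safe/warning/danger.

    A lateral position at or beyond a lane line yields a warning
    (cf. the line-pressing scenario of Fig. 2C); closing within
    safe_gap_m of a lead vehicle yields a danger (cf. Fig. 2D).
    """
    results = []
    for t, x, y in pred_positions:
        if lead_vehicle_x is not None and lead_vehicle_x - x < safe_gap_m:
            level = "danger"      # predicted collision risk
        elif y <= lane_left_y or y >= lane_right_y:
            level = "warning"     # predicted lane-line pressing
        else:
            level = "safe"
        results.append((t, level))
    return results

def highest_priority_alert(results):
    # For voice broadcast: report the most severe predicted risk first.
    order = {"danger": 2, "warning": 1, "safe": 0}
    return max(results, key=lambda r: order[r[1]])
```

With lane lines at y = ±1.75 m, a trajectory drifting to y = 2.0 m by the third horizon would produce a "warning" at 3s, and that alert would be broadcast first.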
Thus, by taking real-time road conditions into account, the driving risks that may arise while the vehicle is traveling are predicted and the driver is warned, so that the risk can be avoided promptly and effectively, making safe driving more convenient and improving the driver's experience.
In addition, in different scenarios the rendering and display device can render the prediction information with different rendering parameters, so that the different pieces of prediction information superimposed on the road image or at the corresponding position in the real road scene realize different functions for the driver.
As one example of the present disclosure, rendering parameters (e.g., colors) used to render the prediction information may be respectively related to the security degree of the prediction information. For example, the prediction information without driving risk may be rendered in green, the prediction information with a lower risk may be rendered in yellow, and the prediction information with a higher risk may be rendered in red.
Therefore, the prediction information related to different safety degrees can respectively realize different functions such as guidance, reminding, warning and the like for the driver.
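The safety-degree-to-rendering-parameter mapping described above can be sketched as a simple lookup. The RGB values and risk-level names below are illustrative assumptions matching the green/yellow/red scheme in the text.

```python
# Assumed color scheme: green = guidance (no risk),
# yellow = reminder (lower risk), red = warning (higher risk).
RISK_COLORS = {
    "safe": (0, 255, 0),
    "warning": (255, 255, 0),
    "danger": (255, 0, 0),
}

def marker_color(risk_level):
    """Rendering parameter (RGB color) for a space-occupying marker."""
    # Unknown levels fall back to the cautious yellow reminder color.
    return RISK_COLORS.get(risk_level, RISK_COLORS["warning"])
```

Other rendering parameters, such as opacity or blink rate, could be keyed to the safety degree in the same way.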
For an object that may cause a driving risk, for example a pedestrian, another vehicle, an obstacle, or a fixed facility ahead of the vehicle, the driving risk may be presented by combining the space-occupying marker corresponding to the vehicle's predicted position at the predicted time of the risk's occurrence with the risk-causing object or its image.
In this way, the driver is clearly informed of the driving risk the situation may cause, so that the driver can adopt a correct response strategy in real time according to the actual risk scene and avoid the various possible driving risks.
For a better understanding of the technical solutions of the present disclosure, examples of presenting relevant prediction information (or guidance information corresponding to it) while the vehicle is driving in different scenarios are described below with reference to Figs. 2A to 2D.
FIG. 2A illustrates a schematic diagram of presenting predictive information, according to one embodiment of the present disclosure.
As shown in fig. 2A, on a straight road R1 with good road conditions, for a traveling vehicle M1, three green directional arrows A1, A2, and A3 are used as space-occupying markers corresponding to the predicted positions, respectively presenting the positions the vehicle is predicted to reach after 1s, 2s, and 3s. As can be seen from fig. 2A, all three arrows lie in appropriate areas within the lane, indicating that in the current travel state the vehicle can safely travel forward along the straight road.
FIG. 2B illustrates a schematic diagram of presenting predictive information, according to one embodiment of the present disclosure.
As shown in fig. 2B, on a curved road R2 with good road conditions, for a traveling vehicle M2, three green directional arrows B1, B2, and B3 are used as space-occupying markers corresponding to the predicted positions, respectively presenting the positions the vehicle is predicted to reach after 1s, 2s, and 3s. As can be seen from fig. 2B, all three arrows lie in appropriate areas within the curved lane, indicating that in the current travel state the vehicle can safely travel forward along the curved road.
FIG. 2C illustrates a schematic diagram of presenting predictive information, according to one embodiment of the present disclosure.
As shown in fig. 2C, the current actual travel route of the vehicle is not parallel to the road R3 on which it is located; that is, the vehicle is not traveling in the direction in which road R3 extends. If the vehicle keeps traveling forward in its current direction, there is a risk of crossing the left lane edge line. Therefore, according to the predicted forward positions and the associated driving risk, the safe positions the vehicle will reach at 1 s and 2 s can be identified by two green arrows C1 and C2, respectively, while the position it will reach at 3 s, where there is a risk of crossing the line, can be identified by a yellow arrow C3. In this way, a yellow space-occupying marker with a reminding meaning is superimposed on the object (such as a lane line), or on the related image, where the driving risk may occur, reminding the driver to correct the driving direction in time and avoid crossing the line.
FIG. 2D illustrates a schematic diagram of presenting predictive information, according to one embodiment of the present disclosure.
As shown in fig. 2D, another traveling vehicle is ahead of the vehicle M4 on its current travel road R4 and the separation distance between them is short, so that, according to the predicted travel state and the predicted travel risk, the vehicle M4 is at risk of colliding with the vehicle ahead. Therefore, when presenting the prediction information to the driver, the safe position the vehicle M4 will reach at 1 s can be identified with a green arrow D1; the position it will reach at 2 s, where there is a possibility of collision with the preceding vehicle, can be identified with a red arrow D2; and the position it will reach at 3 s, occupied by the other vehicle M5, can be identified in conjunction with the vehicle identifier D3. In this way, the red space-occupying marker with a warning meaning, combined with the preceding vehicle that may cause the collision risk, warns the driver to decelerate or stop, thereby assisting the driver in driving safely.
FIG. 2E illustrates a schematic diagram of presenting predictive information, according to one embodiment of the disclosure.
As shown in fig. 2E, an obstacle O1 (e.g., temporary road works or a broken-down vehicle) exists on the current travel road R5 of the vehicle M6, so that the vehicle M6 cannot continue to travel safely. In this case, based on the predicted forward positions and the related travel risk, warning information such as "Obstacle ahead on this road!" or "This road is impassable!" may be displayed superimposed on the corresponding road image or at the position corresponding to the road scene, warning the driver that the road cannot be passed safely and prompting the driver to select another road (such as an adjacent lane) to continue driving, or to turn around and re-select a driving path.
Thus far, the application of the driving assistance system of the present disclosure in different scenarios has been schematically illustrated in conjunction with figs. 2A to 2E. It should be understood that the above examples merely illustrate scenarios to which the disclosure may be applicable and are not intended to limit the disclosure in any way.
It should also be understood that, in the above examples, the speed at which the vehicle is moving affects both the prediction of the travel risk and the spacing of the space-occupying markers presented at the predicted positions. For example, in the example shown in fig. 2C the vehicle travels at 60 km/h, while in the example shown in fig. 2D it travels at 40 km/h; accordingly, the distances between the displayed space-occupying markers corresponding to the predicted positions differ.
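The disclosure does not give formulas for these predicted positions, but the relationship between vehicle speed and marker spacing can be sketched with simple dead reckoning. The sketch below assumes constant speed and heading, which is a simplification; the disclosure also mentions steering and power control information as inputs.

```python
import math

def predicted_positions(x, y, heading_deg, speed_kmh, horizons_s=(1.0, 2.0, 3.0)):
    """Dead-reckon the positions a vehicle will occupy at each future horizon,
    assuming constant speed and heading (an illustrative simplification)."""
    v = speed_kmh / 3.6  # km/h -> m/s
    heading = math.radians(heading_deg)
    return [(x + v * t * math.cos(heading), y + v * t * math.sin(heading))
            for t in horizons_s]

# At 60 km/h the 1 s marker spacing is ~16.7 m; at 40 km/h it is ~11.1 m,
# which is why the arrows in fig. 2C sit farther apart than those in fig. 2D.
p60 = predicted_positions(0.0, 0.0, 90.0, 60.0)
p40 = predicted_positions(0.0, 0.0, 90.0, 40.0)
```

Under this assumption the markers are equally spaced along the heading, with the spacing proportional to the current speed.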
In summary, the present disclosure predicts the possible driving situation of the traveling vehicle at future moments by collecting vehicle travel information and/or road information, and shows the user the safe driving area and/or the possible driving risks by means of space-occupying markers at the predicted positions, thereby integrating guidance and warning functions and improving the safe-driving experience.
[ Driving assistance method ]
Fig. 3 shows a schematic flow diagram of a driving assistance method according to an embodiment of the present disclosure. Fig. 4 shows a schematic flow diagram of a driving assistance method according to another embodiment of the present disclosure. The method shown in fig. 3 or fig. 4 may be implemented by the driving assistance system shown in fig. 1. A basic implementation process of the driving assistance method is described below; for related details, refer to the description above, which will not be repeated.
As shown in fig. 3, in step S310, vehicle travel information of the vehicle is acquired. Wherein the vehicle travel information may be acquired by an in-vehicle sensor. The vehicle travel information includes at least one of: current position information, current driving direction information, driving speed information, steering control information and power control information.
In step S320, a predicted travel state after a set time period or a set distance of travel is determined based on the vehicle travel information.
In step S330, prediction information including the predicted travel state is output. The prediction information may be output in an image or animation manner, or may be output in a voice playing manner.
The predicted travel state includes a location to be reached by the vehicle after traveling for at least a set period of time or at least a set distance, and the predicted travel state may be presented using a space occupying marker corresponding to the location.
In this way, AR display is achieved, so that the driver can view the relevant prediction information more intuitively and conveniently, assisting the driver in driving safely. Moreover, the safe driving area ahead or the driving risk can be announced by voice, further facilitating safe driving.
As another example of the present disclosure, as shown in fig. 4, in step S410, vehicle travel information of a vehicle may be acquired.
In step S420, information on the road on which the vehicle is located may be acquired. The road information may be acquired by a vehicle-mounted camera, for example from the live-action video of the road ahead of the vehicle captured by the camera. The road information may include lane line information, road curvature information, obstacle information, stationary facility information, and information on pedestrians and other vehicles ahead of or near the vehicle on the road.
In step S430, a predicted travel state after traveling for a set period of time or a set distance may be determined based on the vehicle travel information.
In step S440, a travel risk may be predicted based on the predicted travel state and the road information. The object at which the driving risk occurs may include at least one of: lane edges, road edges, pedestrians, other vehicles, obstacles, and fixed facilities. The driving risk may include at least one of: lane departure, road departure, collision with a pedestrian or another vehicle, and collision with an obstacle or a fixed facility.
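A minimal illustration of step S440, assuming the road information has already been reduced to lane bounds and detected object positions. The field names and distance thresholds below are hypothetical and not taken from the disclosure; a real system would derive them from lane-line and object detections.

```python
def predict_travel_risks(predicted_positions, road):
    """Check each (horizon, position) pair against illustrative road
    information and collect any travel risks found."""
    risks = []
    for t, (x, y) in predicted_positions:
        left, right = road["lane_left_x"], road["lane_right_x"]
        if not (left < x < right):                       # outside lane bounds
            risks.append((t, "lane departure", None))
        for obj in road.get("objects", []):              # pedestrians, vehicles, obstacles
            ox, oy = obj["position"]
            if abs(ox - x) < 1.0 and abs(oy - y) < 2.0:  # coarse proximity check
                risks.append((t, "collision", obj["kind"]))
    return risks
```

For example, a predicted position that falls within two meters of a detected vehicle ahead would be reported as a collision risk at that horizon, matching the scenario of fig. 2D.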
The prediction information output at step S450 may include the travel risk.
As one example of the present disclosure, the driving risk may be presented by combining the space-occupying identifier, corresponding to the predicted position of the vehicle at the predicted risk occurrence time, with the risk occurrence object or its image.
As an example of the present disclosure, the driving assistance method may further include rendering the prediction information so that, in step S330 and/or step S450, the prediction information is displayed in an overlapping manner on the corresponding road image or at the position corresponding to the road real scene. The rendering parameters used for rendering the prediction information are related to the safety degree of the prediction information.
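One plausible reading of "rendering parameters related to the safety degree" is the green/yellow/red convention used in figs. 2A to 2D. The mapping below is a sketch under that assumption; the claims only require that the rendering parameters relate to the safety degree, not this particular palette.

```python
def marker_color(risk):
    """Map the safety degree of a predicted position to a rendering color
    (illustrative palette, mirroring figs. 2A-2D)."""
    if risk is None:
        return "green"       # safe: pure guidance
    if risk == "lane departure":
        return "yellow"      # reminder: correct the driving direction
    return "red"             # warning: collision risk, decelerate or stop
```

Other rendering parameters, such as marker opacity or blink rate, could be tied to the safety degree in the same way.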
For details of the driving assistance method, refer to the description above; they are not repeated here.
[ Driving assistance System ]
Fig. 5 shows a schematic block diagram of a driving assistance system according to one embodiment of the present disclosure. The driving assistance system may be configured to implement the driving assistance method described above, and the specific implementation of the driving assistance system is the same as or similar to the implementation of the system or the method described above, and may specifically refer to the related description above, which is not described herein again.
As shown in fig. 5, the driving assistance system 500 may include a driving information acquisition module 510, a driving state prediction module 520, and an output module 530.
The travel information acquisition module 510 may acquire vehicle travel information of the vehicle.
The driving state prediction module 520 may determine a predicted driving state after driving for a set period of time or a set distance based on the vehicle driving information.
The output module 530 may output prediction information including the predicted travel state.
As one example, the driving assistance system may further include a road information acquisition module and a travel risk prediction module (not shown in the drawings).
The road information acquisition module can acquire the road information of the vehicle;
the travel risk prediction module may predict a travel risk based on the predicted travel state and the road information, wherein the prediction information includes the travel risk.
As one example, the driving assistance system may further include a rendering module (not shown in the figure).
The rendering module may be configured to render the prediction information so as to display the prediction information in an overlapping manner on the corresponding road image or at the position corresponding to the road real scene. The rendering parameters used for rendering the prediction information are related to the safety degree of the prediction information.
The implementation of the functions of each module of the driving assistance system is the same as or similar to that of the system or the method, and the specific implementation may refer to the above related description, which is not repeated herein.
[ Prompting method ]
As an example of the present disclosure, the driving assistance scheme described above may also be implemented as a prompting method implemented on the interface side.
Specifically, travel information of an object traveling on a road may be acquired.
The object may be a pedestrian or a vehicle traveling on a road.
The travel information of a pedestrian can be acquired by a mobile device held by the pedestrian, for example by determining information about the road on which the pedestrian is located based on GPS positioning information of the mobile device.
The traveling information of the vehicle can be obtained through some sensors on the vehicle, a vehicle-mounted camera or other devices.
Based on the travel information of the object, a predicted travel state after traveling for a set time period or a set distance may be determined.
The set time period or set distance may be a period of time, or a length of the trip, that the object will travel from the present time or present position within a predetermined future time. For example, the set time period may be 1 s, 2 s, or the like; the set distance may be 5 meters, 10 meters, 15 meters, or the like.
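If both a set time period and a set distance are supported, one simple way to unify them is to convert the set distance into a time horizon at the object's current speed. This is an assumption for illustration only; the disclosure does not prescribe such a conversion.

```python
def horizon_seconds(set_distance_m, speed_kmh):
    """Convert a set travel distance into the time the object needs to cover
    it at its current speed (illustrative unification of the two settings)."""
    v = speed_kmh / 3.6  # km/h -> m/s
    if v <= 0:
        raise ValueError("object is not moving")
    return set_distance_m / v
```

For example, a set distance of 10 meters at 40 km/h corresponds to a horizon of 0.9 s, after which the prediction proceeds exactly as in the time-based case.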
Outputting prediction information, the prediction information including the predicted travel state.
The prediction information may be output visually in an image display mode or an AR mode, or may be output by playing audio.
The acquisition of the travel information, the determination of the predicted travel state, and the output of the prediction information may be the same as or similar to those described above; refer to the related description, which is not repeated here.
[ calculating device ]
FIG. 6 illustrates a schematic structural diagram of a computing device according to an embodiment of the invention.
Referring to fig. 6, computing device 600 includes memory 610 and processor 620.
The processor 620 may be a multi-core processor or may include a plurality of processors. In some embodiments, processor 620 may include a general-purpose host processor and one or more special-purpose coprocessors, such as a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), or the like. In some embodiments, processor 620 may be implemented using custom circuits, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA).
The memory 610 may include various types of storage units, such as system memory, Read Only Memory (ROM), and permanent storage. The ROM may store static data or instructions required by the processor 620 or other modules of the computer. The permanent storage device may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the permanent storage device employs a mass storage device (e.g., a magnetic or optical disk, or flash memory); in other embodiments, it may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. In addition, the memory 610 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, memory 610 may include a readable and/or writable removable storage device, such as a Compact Disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density disc, a flash memory card (e.g., SD card, mini SD card, Micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 610 has stored thereon executable code that, when executed by the processor 620, causes the processor 620 to perform the driving assistance methods described above.
The driving assistance scheme according to the invention has been described in detail above with reference to the drawings.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out the above-mentioned steps defined in the above-mentioned method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (19)
1. A driving assist method characterized by comprising:
acquiring vehicle running information of a vehicle;
determining a predicted travel state after traveling for a set period of time or a set distance based on the vehicle travel information; and
outputting prediction information including the predicted travel state.
2. The method of claim 1,
the predicted travel state includes a position to be reached by the vehicle after traveling for at least one set time period or at least one set distance, and
presenting the predicted travel state using a space occupying marker corresponding to the location.
3. The method of claim 1 or 2, further comprising:
acquiring road information of the vehicle; and
predicting a travel risk based on the predicted travel state and the road information,
wherein the prediction information includes the travel risk.
4. The method of claim 3,
the driving risk occurrence object of the driving risk includes at least one of: lane edges, road edges, pedestrians, other vehicles, obstacles, fixtures; and/or
The driving risk includes at least one of: lane departure, road departure, collision with a pedestrian or other vehicle, collision with an obstacle or fixed establishment.
5. The method of claim 4,
presenting the driving risk in combination with a space occupying identifier corresponding to a predicted position where the vehicle is located at the predicted driving risk occurrence time and the driving risk occurrence object or the image thereof.
6. The method of claim 1, wherein the step of presenting the prediction information comprises:
and displaying the prediction information in an overlapping way on the corresponding road image or at the position corresponding to the road real scene.
7. The method of claim 6, further comprising:
rendering the prediction information to display the prediction information in an overlapping manner on the corresponding road image or at a position corresponding to the road real scene,
and rendering parameters used for rendering the prediction information are respectively related to the safety degree of the prediction information.
8. The method of claim 1,
acquiring the vehicle running information through a vehicle-mounted sensor; and/or
acquiring the road information through a vehicle-mounted camera.
9. The method of claim 8, wherein the step of obtaining the road information comprises:
and acquiring the road information from the live-action video of the road in front of the running vehicle, which is acquired by the vehicle-mounted camera.
10. The method of claim 3,
the vehicle travel information includes at least one of: current position information, current driving direction information, driving speed information, steering control information and power control information; and/or
The road information includes lane line information, road curvature information, obstacle information, stationary facility information, and pedestrian and other vehicle information located in front of or near the vehicle on the road.
11. A method of prompting, comprising:
acquiring traveling information of an object traveling on a road;
determining a predicted travel state after traveling for a set time period or a set distance based on the travel information of the object;
outputting prediction information, the prediction information including the predicted travel state.
12. A driving assistance system characterized by comprising:
the driving information acquisition module is used for acquiring vehicle driving information of the vehicle;
a driving state prediction module for determining a predicted driving state after driving for a set time period or a set distance based on the vehicle driving information; and
an output module to output prediction information, the prediction information including the predicted travel state.
13. The system of claim 12, further comprising:
the road information acquisition module is used for acquiring the road information of the vehicle;
a travel risk prediction module for predicting a travel risk based on the predicted travel state and the road information,
wherein the prediction information includes the travel risk.
14. The system of claim 12, further comprising:
a rendering module for rendering the prediction information to display the prediction information in an overlapping manner on a corresponding road image or at a position corresponding to a road real scene,
and rendering parameters used for rendering the prediction information are respectively related to the safety degree of the prediction information.
15. A driving assistance system characterized by comprising:
one or more vehicle travel information sensors for sensing vehicle travel information of the vehicle;
a data processing device for determining a predicted travel state after traveling for a set period of time or a set distance based on the vehicle travel information; and
an output device for outputting prediction information to a user, the prediction information including the predicted travel state.
16. The system of claim 15,
the output device comprises a rendering and display device; and/or
The output device includes an audio output device.
17. The system of claim 15, further comprising:
a vehicle-mounted camera for shooting a real-time image of the surrounding environment of the vehicle,
wherein the data processing device analyzes the real-time image to acquire road information on which the vehicle is located, predicts a running risk based on the predicted running state and the road information, and
the prediction information includes the travel risk.
18. A computing device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any of claims 1-11.
19. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811565912.8A CN111348055A (en) | 2018-12-20 | 2018-12-20 | Driving assistance method, driving assistance system, computing device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811565912.8A CN111348055A (en) | 2018-12-20 | 2018-12-20 | Driving assistance method, driving assistance system, computing device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111348055A true CN111348055A (en) | 2020-06-30 |
Family
ID=71190011
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811565912.8A Pending CN111348055A (en) | 2018-12-20 | 2018-12-20 | Driving assistance method, driving assistance system, computing device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111348055A (en) |
Cited By (3)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN113147748A (en) * | 2021-03-26 | 2021-07-23 | 江铃汽车股份有限公司 | ADAS display method and system based on AR live-action navigation
CN113393669A (en) * | 2021-06-11 | 2021-09-14 | 阿波罗智联(北京)科技有限公司 | Control method, device, equipment, medium and program product for vehicle
EP4102481A1 (en) * | 2021-06-11 | 2022-12-14 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method and apparatus for controlling vehicle, device, medium, and program product
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08178679A (en) * | 1994-12-19 | 1996-07-12 | Honda Motor Co Ltd | On-vehicle display device |
JP2004110394A (en) * | 2002-09-18 | 2004-04-08 | Toyota Motor Corp | Obstacle detecting device for vehicle |
JP2005193845A (en) * | 2004-01-09 | 2005-07-21 | Nissan Motor Co Ltd | Vehicular monitoring device |
JP2007076496A (en) * | 2005-09-14 | 2007-03-29 | Fujitsu Ten Ltd | Parking support device |
JP2007272350A (en) * | 2006-03-30 | 2007-10-18 | Honda Motor Co Ltd | Driving support device for vehicle |
CN105473408A (en) * | 2013-08-20 | 2016-04-06 | 奥迪股份公司 | Device and method for controlling motor vehicle |
WO2016152000A1 (en) * | 2015-03-20 | 2016-09-29 | 株式会社デンソー | Safety confirmation assist apparatus, safety confirmation assist method |
WO2018080862A1 (en) * | 2016-10-31 | 2018-05-03 | Delphi Technologies, Inc. | Driver assistance system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11462022B2 (en) | Traffic signal analysis system | |
KR102093047B1 (en) | Traffic prediction based on map image for autonomous driving | |
JP6398957B2 (en) | Vehicle control device | |
JP7518893B2 (en) | Emergency Vehicle Detection | |
JP4513740B2 (en) | Route guidance system and route guidance method | |
JP4446204B2 (en) | Vehicle navigation apparatus and vehicle navigation program | |
JP4792866B2 (en) | Navigation system | |
JP6036371B2 (en) | Vehicle driving support system and driving support method | |
CN105799712B (en) | Vehicle and control method thereof | |
CN106097774A (en) | Track change assisting system | |
CN105774806A (en) | Vehicle travelling control device | |
CN104658290A (en) | Processing unit, system and method for suggesting crossroad driving | |
CN104422462A (en) | Vehicle navigation method and vehicle navigation device | |
JP2008305042A (en) | Car navigation device, road sign recognition method and program | |
JP6651796B2 (en) | Driving support device | |
CN113129624A (en) | Fastest lane determining algorithm under traffic jam condition | |
US9383219B2 (en) | Information display device and information display method | |
CN108715164A (en) | Driving ancillary equipment and method for vehicle | |
US20220153266A1 (en) | Vehicle adaptive cruise control system, method and computer readable medium for implementing the method | |
US11423668B2 (en) | Vehicle and control method thereof | |
JP2011028415A (en) | Driving support device | |
CN111348055A (en) | Driving assistance method, driving assistance system, computing device, and storage medium | |
CN114582153A (en) | Long solid line reminding method and system for ramp entrance and vehicle | |
JP2011075437A (en) | Device, method and program for displaying vehicle periphery | |
JP2021149319A (en) | Display control device, display control method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
20201123 | TA01 | Transfer of patent application right | Address after: Room 603, 6 / F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China. Applicant after: Zebra smart travel network (Hong Kong) Ltd. Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands. Applicant before: Alibaba Group Holding Ltd.
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200630