CN112061132A - Driving assistance method and driving assistance device - Google Patents

Driving assistance method and driving assistance device Download PDF

Info

Publication number
CN112061132A
CN112061132A (Application CN201910442000.XA / CN201910442000A)
Authority
CN
China
Prior art keywords
information
vehicle
external environment
environment information
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910442000.XA
Other languages
Chinese (zh)
Inventor
许侃
姚维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910442000.XA priority Critical patent/CN112061132A/en
Publication of CN112061132A publication Critical patent/CN112061132A/en
Pending legal-status Critical Current

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 — Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02 — related to ambient conditions
    • B60W40/04 — Traffic conditions
    • B60W40/08 — related to drivers or passengers

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a driving assistance method and a driving assistance apparatus. The driving assistance method includes: acquiring external environment information of a vehicle, wherein the external environment information comprises at least one of traffic environment information and natural environment information; acquiring vehicle interior information, the vehicle interior information including: at least one of vehicle internal environment information, driver state information and vehicle operation information; when the external environment information and the vehicle internal information meet a specific matching relationship, determining that a preset scene is met; and sending out corresponding prompt information according to the preset scene. The driving assistance method and the driving assistance device provided by the embodiment of the invention can monitor the internal environment and the external environment of the vehicle at the same time, give a prompt to a driver according to the interaction of the external environment information and the internal information of the vehicle, and realize driving assistance.

Description

Driving assistance method and driving assistance device
Technical Field
The present invention relates to the field of automatic driving technologies, and in particular, to a driving assistance method and a driving assistance apparatus.
Background
Providing the driver with the information required for driving by means of a computing device, and thereby assisting driving, has become a common technique in intelligent vehicles.
For example, when driving to an unfamiliar destination, a driver often refers to a map provided by navigation software. The navigation software may be installed on the in-vehicle computer or on the user's mobile phone. During navigation, the software can broadcast information such as road conditions and lane changes through its voice broadcast function.
However, the current driving assistance method has at least the following disadvantages:
First, the voice broadcast function provides only one-way information transfer. The way information is conveyed is relatively simple: it cannot interact with the user, let alone detect whether the user actually received the information or acted on it.
Second, while driving, a user can easily mis-operate the in-vehicle computer or mobile phone, for example muting it, and thereby miss navigation information.
Third, in some cases frequent voice broadcasts may disturb the user's driving.
Therefore, a driving-assistance approach that uses voice only for one-way information transfer shows more and more drawbacks.
Disclosure of Invention
In view of the fact that in the prior art the voice broadcast function provides only one-way information transfer and cannot adequately assist driving, embodiments of the present invention provide a driving assistance method and a driving assistance device to solve the problems in the prior art.
In order to solve the above problem, an embodiment of the present invention discloses a driving assistance method, including:
acquiring external environment information of a vehicle, wherein the external environment information comprises at least one of traffic environment information and natural environment information;
acquiring vehicle interior information, the vehicle interior information including: at least one of vehicle internal environment information, driver state information and vehicle operation information;
when the external environment information and the vehicle internal information meet a specific matching relationship, determining that a preset scene is met;
and sending out corresponding prompt information according to the preset scene.
In order to solve the above problem, an embodiment of the present invention discloses a driving assistance device including:
the external information acquisition module is used for acquiring external environment information, and the external environment information comprises at least one of traffic environment information and natural environment information;
an internal information acquisition module for acquiring vehicle internal information, the vehicle internal information including: at least one of vehicle internal environment information, driver state information and vehicle operation information;
the scene judging module is used for determining that a preset scene is met when the external environment information and the vehicle internal information meet a specific matching relationship;
and the prompt module is used for sending prompt information when judging that the external environment information and the vehicle internal information meet the preset matching relationship.
An embodiment of the present invention further discloses a terminal device, including:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform the above-described methods.
One embodiment of the invention also discloses one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause a terminal device to perform the above-described method.
As can be seen from the above, the embodiments of the present invention include the following advantages:
according to the driving assistance method provided by the embodiment of the invention, the internal environment and the external environment of the vehicle can be monitored simultaneously, and the driver is prompted according to the interaction of the internal environment and the external environment to assist driving.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of the core concept of the present invention.
Fig. 2 is a flowchart of a driving assistance method of the first embodiment of the invention.
Fig. 3 is a schematic diagram illustrating a manner in which some prompt messages are issued according to an embodiment of the present invention.
Fig. 4 is a flowchart illustrating the sub-steps of step S203 according to the second embodiment of the present invention.
Fig. 5A to 5D are schematic diagrams of various scenes and corresponding prompting manners.
Fig. 6 is a block diagram of a driving assistance apparatus according to a third embodiment of the invention.
Fig. 7 is a flowchart of a driving assistance method of a fourth embodiment of the invention.
Fig. 8 is a block diagram of a driving assistance apparatus according to a fifth embodiment of the invention.
Fig. 9 schematically shows a block diagram of a terminal device for performing the method according to the invention.
Fig. 10 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
One of the core ideas of the present invention is to provide a driving assistance method and apparatus. As shown in fig. 1, vehicle interior information and external environment information can be collected simultaneously by an external information detection apparatus 11 and an internal information detection apparatus 12; when the vehicle interior information and the external environment information satisfy a preset matching relationship, prompt information is issued to assist the driver in driving.
First embodiment
A first embodiment of the invention proposes a driving assistance method. The method is applied to a vehicle provided with information acquisition devices, such as cameras, capable of acquiring both information inside the vehicle and information outside the vehicle. The in-vehicle information may include, for example, in-vehicle environment information, driver state information and vehicle operation information, and the outside information may include, for example, traffic environment information and natural environment information. By detecting the environment inside and outside the vehicle, the driver can be reminded when the situation matches a preset scene, thereby assisting the driver.
Fig. 2 is a flowchart showing steps of a driving assistance method according to a first embodiment of the invention. As shown in fig. 2, the driving assistance method according to the first embodiment of the present invention may include the steps of:
s101, obtaining external environment information of a vehicle, wherein the external environment information comprises at least one of traffic environment information and natural environment information;
In this step, while the vehicle is running, the external environment in which it travels may be acquired through an external camera, other external sensors, the in-vehicle computer, or the like.
The external environment information may include, for example, traffic environment information and natural environment information. The natural environment information may be, for example, weather, light, road surface humidity, visibility, and other factors affecting driving. The natural environment information can be determined by real-time pictures shot by a camera outside the vehicle, or by weather information, light information, air humidity information, visibility information and the like acquired from a network by a vehicle-mounted computer.
The traffic environment information may be obtained by the vehicle's external cameras, sensors and the like, and may include, for example, road condition information, road information, parking space information and obstacle information. The road condition information is, for example, the road congestion situation currently displayed by the in-vehicle computer, and the road information is, for example, a lane-change instruction displayed by the navigation software on the in-vehicle computer. The parking space information is, for example, the number of a parking space painted on the ground, captured by an external camera or by the reversing camera. The obstacle information may include, for example, the position of the vehicle relative to a preceding vehicle, a following vehicle or a pedestrian; such position information may include a distance, and may be obtained from vehicle body sensors or calculated from images captured by an external camera.
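By way of a non-limiting illustration only, the external environment information described above could be organized in software as simple data containers. The field names, types and default values in the following Python sketch are assumptions made for this example rather than definitions from the present disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NaturalEnvironment:
    """Natural environment factors affecting driving (illustrative fields)."""
    weather: str = "clear"          # e.g. "clear", "rain", "snow"
    light_level: float = 1.0        # normalized ambient light, 0.0 (dark) .. 1.0 (bright)
    road_surface_wet: bool = False  # whether the road surface appears wet
    visibility_m: float = 10000.0   # estimated visibility in meters

@dataclass
class TrafficEnvironment:
    """Traffic environment factors (illustrative fields)."""
    congestion_level: int = 0                    # 0 = free flow .. 3 = heavy congestion
    lane_change_required: bool = False           # navigation indicates an upcoming lane change
    parking_space_id: Optional[str] = None       # parking-space number read from the ground
    distance_to_front_vehicle_m: Optional[float] = None
    distance_to_rear_vehicle_m: Optional[float] = None
    pedestrian_distances_m: List[float] = field(default_factory=list)
    traffic_light_state: Optional[str] = None    # e.g. "red_countdown", "green"
    red_countdown_s: Optional[int] = None        # remaining seconds of a red light, if known

@dataclass
class ExternalEnvironment:
    """External environment information = traffic + natural environment information."""
    traffic: TrafficEnvironment
    natural: NaturalEnvironment
```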
Before or after, or simultaneously with, the execution of step S101, step S102 may be executed as follows:
s102, obtaining vehicle interior information, wherein the vehicle interior information comprises: at least one of vehicle internal environment information, driver state information and vehicle operation information;
In this step, while the vehicle is traveling, information about the interior of the vehicle may be acquired by in-vehicle devices such as an interior camera, interior sensors and a thermo-hygrometer.
The vehicle interior information may include, for example, in-vehicle environment information, driver state information, vehicle operation information, and the like.
The in-vehicle environment information may include, for example, an in-vehicle temperature, an in-vehicle humidity, and the like, and may be obtained, for example, by an in-vehicle thermometer, a hygrometer, and an in-vehicle computer.
The driver state information may include, for example, the driver's behavior and the driver's line-of-sight information. Some behaviors, such as lowering the head or using a mobile phone, can be monitored through the in-vehicle camera; other behaviors, such as braking, accelerating, decelerating, turning the steering wheel or operating the turn signal, can be obtained by the on-board computer from the vehicle's control bus.
In some embodiments, detecting whether the driver's head is lowered may be accomplished as follows:
The driver's current sitting posture is captured in real time by the in-vehicle camera, and the angle between the driver's head and body in that posture is calculated. When this angle deviates from a specified angle, the driver's current behavior can be considered to be "head down".
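The head-down check described above can be sketched as a simple geometric test on body landmarks detected by the in-vehicle camera. In the non-limiting Python sketch below, the landmark format, the nominal upright angle and the tolerance are illustrative assumptions; the disclosure only specifies comparing the head-body angle against a specified angle:

```python
import math

def head_body_angle_deg(neck_xy, head_xy, torso_xy):
    """Angle (degrees) between the neck->head vector and the neck->torso vector,
    using 2D landmark coordinates from an in-cabin camera (assumed input format)."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])
    def angle_between(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        norm = math.hypot(*u) * math.hypot(*v)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle_between(vec(neck_xy, head_xy), vec(neck_xy, torso_xy))

def is_head_down(neck_xy, head_xy, torso_xy, nominal_deg=180.0, tolerance_deg=25.0):
    """Flag "head down" when the head-body angle deviates from an upright posture
    by more than a tolerance (both values are illustrative assumptions)."""
    angle = head_body_angle_deg(neck_xy, head_xy, torso_xy)
    return abs(angle - nominal_deg) > tolerance_deg

# Example: an upright posture vs. a bowed head
print(is_head_down(neck_xy=(0, 0), head_xy=(0, 1), torso_xy=(0, -1)))       # False (about 180 deg)
print(is_head_down(neck_xy=(0, 0), head_xy=(0.9, 0.4), torso_xy=(0, -1)))   # True  (about 114 deg)
```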
In some embodiments, detecting whether the driver is using a mobile phone may be implemented as follows:
The driver's current sitting posture is captured by the in-vehicle camera and the driver's line of sight is calculated from it; it is then determined whether an electronic device such as a mobile phone is present in front of the driver's line of sight. If so, it is determined that the driver is using the mobile phone.
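Likewise, the mobile-phone check can be sketched as testing whether a detected phone lies within a narrow cone around the estimated gaze direction. In the non-limiting sketch below, gaze estimation and phone detection are assumed to be supplied by upstream perception components, and the coordinates and cone angle are illustrative assumptions:

```python
import math

def angle_to_target_deg(gaze_origin, gaze_dir, target):
    """Angle between the driver's gaze direction and the direction to a target point (2D sketch)."""
    to_target = (target[0] - gaze_origin[0], target[1] - gaze_origin[1])
    dot = gaze_dir[0] * to_target[0] + gaze_dir[1] * to_target[1]
    norm = math.hypot(*gaze_dir) * math.hypot(*to_target)
    if norm == 0:
        return 180.0
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_playing_phone(gaze_origin, gaze_dir, phone_centers, cone_deg=15.0):
    """Return True if any detected phone lies inside a narrow cone around the gaze direction.
    phone_centers are the centers of phone detections from an assumed upstream detector."""
    return any(angle_to_target_deg(gaze_origin, gaze_dir, c) <= cone_deg
               for c in phone_centers)

# Example: gaze pointing down toward a phone held near the lap vs. gaze toward the road
print(is_playing_phone(gaze_origin=(0.0, 1.2), gaze_dir=(0.3, -1.0),
                       phone_centers=[(0.25, 0.4)]))   # True
print(is_playing_phone(gaze_origin=(0.0, 1.2), gaze_dir=(0.0, 1.0),
                       phone_centers=[(0.25, 0.4)]))   # False
```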
Table 1 shows some common vehicle interior information and exterior environment information, as follows.
TABLE 1 (provided as an image in the original publication)
After performing steps S101 and S102, step S103 may be performed as follows:
s103, when the external environment information and the vehicle internal information meet a specific matching relationship, determining that a preset scene is met;
In this step, a comprehensive judgment is made from the vehicle's internal and external information to determine whether the specific matching relationship is satisfied, i.e., whether the vehicle has entered a scene that requires a prompt.
The execution body, such as the in-vehicle computer, can make a comprehensive judgment from the external environment information and the vehicle interior information to confirm whether a preset scene is met. The preset scenes correspond, for example, to common driving problems or pain points, such as: failing to start promptly after a red light turns green, not realizing that a lane change is needed, not using the turn signal when changing lanes, not decelerating when passing pedestrians in rain or snow, and forgetting the parking space information after parking. When the execution body judges that the vehicle is in such a scene, the driving assistance function of the embodiment of the invention can issue a reminder, thereby avoiding these problems.
In an embodiment of the present invention, a plurality of scenes may be preset, and when both the internal information and the external information satisfy the preset matching relationship, it may be considered that a certain preset scene is satisfied, and prompt information may be given for these scenes.
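As one non-limiting way of realizing step S103, the preset scenes and the specific matching relationship can be represented as a table of rules, each pairing a condition on the external environment information with a condition on the vehicle interior information; a scene is matched only when both conditions hold. The rule structure, field names and thresholds in the following sketch are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SceneRule:
    name: str
    external_condition: Callable[[dict], bool]  # predicate over external environment info
    internal_condition: Callable[[dict], bool]  # predicate over vehicle interior info

# Illustrative preset scenes mirroring the examples described later
PRESET_SCENES = [
    SceneRule(
        "red_light_countdown_driver_distracted",
        lambda ext: ext.get("traffic_light_state") == "red_countdown",
        lambda intr: intr.get("driver_behavior") in ("head_down", "playing_phone"),
    ),
    SceneRule(
        "lane_change_needed_driver_inattentive",
        lambda ext: ext.get("lane_change_required", False),
        lambda intr: not intr.get("turn_signal_on", False)
                     and intr.get("driver_attention", 1.0) < 0.5,
    ),
    SceneRule(
        "puddle_and_pedestrian_no_deceleration",
        lambda ext: ext.get("puddle_ahead", False) and ext.get("pedestrian_nearby", False),
        lambda intr: not intr.get("decelerating", False),
    ),
    SceneRule(
        "parking_space_detected_while_parking",
        lambda ext: ext.get("parking_space_id") is not None,
        lambda intr: intr.get("parking_operation", False),
    ),
]

def match_scene(external: dict, internal: dict) -> Optional[str]:
    """Return the name of the first preset scene whose external and internal
    conditions are both satisfied (step S103), or None if no scene matches."""
    for rule in PRESET_SCENES:
        if rule.external_condition(external) and rule.internal_condition(internal):
            return rule.name
    return None

# Example: red-light countdown while the driver is looking at a phone
print(match_scene({"traffic_light_state": "red_countdown"},
                  {"driver_behavior": "playing_phone"}))
```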
After step S103 is performed, step S104 may be performed as follows:
and S104, sending out corresponding prompt information according to the preset scene.
Fig. 3 is a schematic diagram illustrating a manner in which some prompt messages are issued according to an embodiment of the present invention. As shown in fig. 3, the prompting information may include, for example, at least one of an audible prompting information, a visual prompting information, and a tactile prompting information.
An auditory prompt is a prompt aimed at the driver's hearing, such as a voice broadcast or a ring tone. A visual prompt is, for example, a message displayed on the screen of the in-vehicle computer for the driver to view, or a light signal that draws the driver's attention. A tactile prompt is, for example, a vibration; vibration units may be provided on both sides of the steering wheel to produce a vibration felt by the driver's hands.
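A non-limiting sketch of how step S104 might dispatch a prompt to one or more of these channels is given below; the channel functions are placeholders standing in for the head unit, cabin lights and steering-wheel vibration actuators, and their names are assumptions for this example:

```python
from typing import Iterable

def audible_prompt(text: str) -> None:
    # Placeholder for a text-to-speech broadcast through the car speakers.
    print(f"[voice] {text}")

def visual_prompt(text: str) -> None:
    # Placeholder for showing a message on the head-unit screen or via cabin lights.
    print(f"[screen] {text}")

def haptic_prompt(side: str = "both", intensity: float = 0.5) -> None:
    # Placeholder for steering-wheel vibration; "left"/"right"/"both" selects the side,
    # which can also hint at the direction of a required lane change.
    print(f"[vibration] side={side} intensity={intensity}")

def issue_prompt(channels: Iterable[str], text: str, vib_side: str = "both") -> None:
    """Send the same prompt over the requested subset of channels
    (at least one of audible, visual and haptic, as in the description)."""
    if "audible" in channels:
        audible_prompt(text)
    if "visual" in channels:
        visual_prompt(text)
    if "haptic" in channels:
        haptic_prompt(side=vib_side)

# Example: a scene that uses all three channels at once
issue_prompt({"audible", "visual", "haptic"},
             "The light is about to turn green, please get ready.")
```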
As can be seen from the above, the driving assistance method according to the first embodiment of the present invention has at least the following technical effects:
according to the driving assistance method provided by the embodiment of the invention, the internal environment and the external environment of the vehicle can be monitored simultaneously, and the driver is prompted according to the interaction of the internal environment and the external environment to assist driving. Compared with the prior art that the prompt is carried out by using voice singly, the method and the device adopt external information and internal information simultaneously, so that the judgment angle is more polyhedral, the judgment accuracy is higher, wrong judgment and missed judgment are avoided, and the problems that the prompt is too frequent and the judgment of a driver is interfered when only the internal information is collected for prompt are avoided.
Second embodiment
A second embodiment of the invention proposes a driving assistance method. Fig. 4 is a flowchart showing steps of a driving assistance method according to a second embodiment of the invention. As shown in fig. 4, the driving assistance method according to the embodiment of the invention includes the steps of:
s201, obtaining external environment information of a vehicle, wherein the external environment information comprises at least one of traffic environment information and natural environment information;
s202, obtaining vehicle interior information, wherein the vehicle interior information comprises: at least one of vehicle internal environment information, driver state information and vehicle operation information;
s203, when the external environment information and the vehicle internal information meet a specific matching relationship, determining that a preset scene is met;
and S204, sending out corresponding prompt information according to the preset scene.
The above steps S201 to S204 are the same as or similar to the steps S101 to S104 of the first embodiment, and only the difference therebetween will be described in this embodiment.
In an embodiment, the step S203 of determining that the external environment information and the vehicle interior information satisfy a specific matching relationship may include the following sub-steps:
s2031, judging whether the external environment information is matched with the vehicle interior information;
and S2032, when the judgment result is yes, determining a corresponding preset scene according to the matched external environment information and the matched vehicle interior information. The execution main body, for example, the vehicle-mounted computer, may store the content of at least one preset scene therein. When the external environment information and the vehicle internal information both meet specific requirements, the vehicle is considered to enter a preset scene, and a corresponding reminding mode needs to be executed according to the preset scene.
Table 2 shows, for one embodiment, a schematic table of multiple preset scenes and their corresponding prompt messages. The preset scenes are described below with reference to fig. 5A to 5D, respectively:
TABLE 2 (provided as an image in the original publication)
In a first preset scene, the external environment information includes traffic light information, and the vehicle interior information includes behavior information of the driver. As shown in fig. 5A, if the driver still has his head down or is looking at a mobile phone after the red light has started counting down, the slow reaction to the traffic light may cause congestion; this situation occurs frequently in everyday driving. To avoid this problem, the traffic light can be detected by an external detection device (e.g., an external camera) and the driver's behavior by an in-vehicle detection device. When the traffic environment information within the external environment information shows that the traffic light has entered a red-light countdown state, but the vehicle interior information shows that the driver is still head-down or looking at a mobile phone, i.e., the vehicle remains stopped, it is determined that the first preset scene has been entered.
In a first preset scenario, in step S204, the step of sending a corresponding prompt message according to the preset scenario may include:
and according to the first preset scene, sending out prompt information through at least one of visual prompt, auditory prompt and tactile prompt when the red light countdown is finished.
For example, a voice broadcast and a display on the in-vehicle computer screen can be issued at the same time, together with a vibration prompt. The voice broadcast may say, for example, "The light is about to turn green, please get ready"; the screen may display a countdown synchronized with the traffic light; and the vibration prompt may be a vibration of predetermined intensity and frequency on both sides of the steering wheel.
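A non-limiting sketch of handling the first preset scene, combining the countdown value with the driver's detected behavior, is given below; the three-second lead time and the behavior labels are illustrative assumptions:

```python
def handle_red_light_scene(red_countdown_s: int, driver_behavior: str,
                           prompt_at_s: int = 3) -> None:
    """First preset scene (sketch): the traffic light is counting down a red phase
    while the driver is head-down or looking at a phone. A combined prompt is issued
    near the end of the countdown; the 3-second lead time is an assumed value."""
    distracted = driver_behavior in ("head_down", "playing_phone")
    if distracted and red_countdown_s <= prompt_at_s:
        print("[voice]  The light is about to turn green, please get ready.")
        print(f"[screen] Green in {red_countdown_s} s")   # mirrors the light's countdown
        print("[vibration] steering wheel, both sides, preset intensity and frequency")

# Example: 2 seconds of red remaining while the driver is looking at a phone
handle_red_light_scene(red_countdown_s=2, driver_behavior="playing_phone")
```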
In a second preset scene, the external environment information includes lane change information displayed by the in-vehicle map, and the vehicle interior information includes the driver's attention information and turn signal operation information. As shown in fig. 5B, the vehicle's external camera detects that the vehicle is currently in lane A, while the in-vehicle camera or the map state of the in-vehicle computer shows that a change to lane C is needed at the intersection ahead, so the detected external environment information includes lane change information. At the same time, the driver's attention is not focused on driving: for example, the driver's line-of-sight information shows that he is not looking at the in-vehicle map, or the driver's posture shows that he is occupied with something else.
In some cases, a reminder may be triggered when it is further detected that the driver has no intention of changing lanes or has not turned on the turn signal. Several situations may indicate that the driver has no intention of changing lanes: first, although the navigation function is on, the navigation voice cannot be broadcast because the system is muted; second, the in-vehicle camera detects that the driver is not looking at the map and is unaware that a lane change is needed; third, the in-vehicle camera or the internal control information shows that the driver has not operated the turn signal, and it is also detected that the driver has no intention of changing lanes.
The situation in which the driver has not turned on the turn signal but does intend to change lanes can be judged as follows: the turn-signal state and the steering wheel control information are obtained from the vehicle control bus; when the turn signal is off while the steering wheel information shows that the driver is turning the wheel, it can be judged that the driver has forgotten to turn on the turn signal.
When the above condition is detected, it may be confirmed that the second preset scene is met.
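This forgotten-turn-signal check can be expressed directly from the two bus signals mentioned above, as in the following non-limiting sketch; the signal names and the steering-angle threshold are illustrative assumptions, not values from the disclosure:

```python
def forgot_turn_signal(turn_signal_on: bool, steering_wheel_angle_deg: float,
                       angle_threshold_deg: float = 10.0) -> bool:
    """True when the steering wheel is clearly being turned (lane-change intention)
    while neither turn signal is on, i.e. the driver likely forgot the signal.
    The 10-degree threshold is an illustrative assumption."""
    turning = abs(steering_wheel_angle_deg) > angle_threshold_deg
    return turning and not turn_signal_on

# Examples
print(forgot_turn_signal(turn_signal_on=False, steering_wheel_angle_deg=18.0))  # True
print(forgot_turn_signal(turn_signal_on=True,  steering_wheel_angle_deg=18.0))  # False
print(forgot_turn_signal(turn_signal_on=False, steering_wheel_angle_deg=2.0))   # False
```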
In a second preset scenario, in step S204, the step of sending out a corresponding prompt message according to the preset scenario may include:
and prompting the driver to change the lane by at least one of visual prompting, auditory prompting and tactile prompting according to the second preset scene. For example, the tactile prompt triggers vibration on the steering wheel, the vibration divides the left side and the right side, and the direction changing is triggered according to the requirement.
In a third preset scene, the external environment information includes road information, and the vehicle interior information includes acceleration and deceleration information. As shown in fig. 5C, when the external camera detects a puddle on the road surface ahead and a pedestrian at the roadside, and the internal detection device detects that the vehicle is not decelerating, it can be determined that the vehicle has entered the third preset scene.
A puddle on the road surface ahead can be detected from the difference in reflectance and/or color between a normal road surface and a puddle, which can be implemented by those skilled in the art and is not described further here. A pedestrian at the roadside can be detected using a person-recognition algorithm, which likewise can be implemented by those skilled in the art and is not described further here.
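A very rough, non-limiting sketch of the reflectance heuristic is shown below, operating on grayscale statistics of a road-surface region of interest; a production system would use trained detectors, and all thresholds here are illustrative assumptions:

```python
def looks_like_puddle(roi_pixels, road_mean_brightness: float,
                      brightness_margin: float = 40.0, min_fraction: float = 0.2) -> bool:
    """Heuristic sketch: a wet patch often reflects the sky and therefore appears
    markedly brighter (or darker at night) than the surrounding dry asphalt.
    roi_pixels is a flat list of grayscale values (0-255) for the region ahead of the car;
    the margin and fraction thresholds are illustrative assumptions."""
    if not roi_pixels:
        return False
    outliers = [p for p in roi_pixels
                if abs(p - road_mean_brightness) > brightness_margin]
    return len(outliers) / len(roi_pixels) >= min_fraction

# Example: a patch of pixels much brighter than the estimated dry-road brightness
patch = [90, 95, 100, 180, 185, 190, 200, 92, 98, 188]
print(looks_like_puddle(patch, road_mean_brightness=95.0))  # True: half the pixels stand out
```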
In a third preset scenario, in step S204, the step of sending out a corresponding prompt message according to the preset scenario may include:
and sending prompt information through at least one of visual prompt, auditory prompt and tactile prompt according to the third preset scene.
For example, the in-vehicle console can trigger a visual reminder together with a voice broadcast such as: "There is a puddle and pedestrians ahead; please slow down." This helps achieve the goal of courteous driving.
In a fourth preset scene, the external environment information includes parking space information, and the vehicle interior information includes a parking operation. As shown in fig. 5D, when the external camera detects a parking space marking on the ground or beside the space and can read the specific parking space information, and the vehicle interior information shows that a parking operation is being performed, it is determined that the fourth preset scene is met.
In a fourth preset scenario, in step S204, the step of sending out the corresponding prompt information according to the preset scenario may include:
and sending the detected parking space information through visual reminding according to the fourth preset scene. For example, the parking space information may be identified and extracted, and the parking space information may include a parking space number, a specific location, and the like, and then the notification is sent to the same mobile phone where the user logs in the system through a prompt message such as a mobile phone short message.
In summary, the second embodiment of the present invention provides a driving assistance method, which has at least the following advantages:
the driving assistance method according to the second embodiment of the present invention can monitor the internal environment and the external environment of the vehicle at the same time, and give a prompt to the driver according to the interaction between the internal environment and the external environment to assist driving. Compared with the prior art that the internal information is singly collected for prompting, the external information and the internal information are simultaneously adopted, so that the judgment angle is more multifaceted, the judgment accuracy is higher, wrong judgment and missed judgment are avoided, and the problems that the prompt is too frequent and the judgment of a driver is interfered when only the internal information is collected for prompting are also avoided.
In addition, the driving assistance method proposed by the present embodiment includes at least the following advantages:
in the optional embodiment of the invention, analysis and judgment can be carried out by combining various conditions, prompt or no prompt can be selected, and the information bombing to the driving user is reduced.
In optional embodiments of the invention, multi-modal feedback can be provided, so that the user receives information in richer ways and the interaction feels more natural.
Third embodiment
A third embodiment of the present invention proposes a driving assistance apparatus, which, as shown in fig. 6, may include the following modules:
an external information obtaining module 301, configured to obtain external environment information, where the external environment information includes traffic environment information and natural environment information;
an interior information obtaining module 302, configured to obtain vehicle interior information, where the vehicle interior information includes: at least one of vehicle internal environment information, driver state information and vehicle operation information;
the scene judging module 303 is configured to determine that a preset scene is met when the external environment information and the vehicle internal information satisfy a specific matching relationship;
and the prompt module 304 is configured to send out corresponding prompt information according to the preset scene.
In an optional embodiment, the scene determining module 303 may include:
a determination unit configured to determine whether the external environment information and the vehicle interior information match;
and the determining unit is used for determining the corresponding preset scene according to the matched external environment information and the vehicle interior information when the judgment is yes.
In an optional embodiment, the traffic environment information includes at least one of road condition information, road information, traffic light information, and obstacle information.
In an optional embodiment, the traffic information includes a road congestion condition.
In an optional embodiment, the road information includes at least one of map-displayed lane change information and parking space information.
In an optional embodiment, the obstacle information includes at least one of position information of the vehicle and a preceding vehicle, position information of the vehicle and a following vehicle, and position information of the vehicle and a pedestrian. In an embodiment, the location information may include a distance.
In an alternative embodiment, the natural environment information includes weather information and light information.
In an optional embodiment, the in-vehicle environment information includes at least one of an in-vehicle temperature and an in-vehicle humidity.
In an optional embodiment, the driver status information comprises at least one of behavior of the driver and sight line information of the driver.
In an optional embodiment, the vehicle operation information includes steering wheel operation information, turn signal operation information, brake information, acceleration and deceleration information, and parking operation information.
In an optional embodiment, the prompt information includes at least one of an audible prompt information, a visual prompt information, a tactile prompt information.
In an alternative embodiment, the external environment information includes traffic light information, and the vehicle interior information includes behavior information of a driver;
the scene determination module may be configured to: when the traffic light information is judged to be in a red-light countdown state and the behavior information of the driver indicates a stopped state, determine that a first preset scene is met;
the prompting module may be to: and according to the first preset scene, sending out prompt information through at least one of visual prompt, auditory prompt and tactile prompt when the red light countdown is finished.
In an alternative embodiment, the external environment information includes lane change information displayed by an in-vehicle map, and the vehicle interior information includes driver's attention information and turn lamp operation information;
the scene judging module is used for:
when the lane changing information displayed by the in-vehicle map is judged to be lane changing required, the attention of the driver is lower than a threshold value, and the steering lamp operation information is that the steering lamp is not operated, confirming that a second preset scene is met;
the prompt module is used for:
and sending prompt information through at least one of visual prompt, auditory prompt and tactile prompt according to the second preset scene.
In one embodiment, the external environment information includes road information, and the vehicle interior information includes acceleration and deceleration information;
the scene judging module is used for: when the road information indicates a waterlogged road with pedestrians passing, and the acceleration and deceleration information indicates that the vehicle is not decelerating, determining that a third preset scene is met;
the prompt module is used for:
and sending prompt information through at least one of visual prompt, auditory prompt and tactile prompt according to the third preset scene.
In one embodiment, the external environment information includes parking space information, and the vehicle interior information includes a parking operation;
the scene judging module is used for: when parking space information is detected around the vehicle and a parking operation is detected, confirming that a fourth preset scene is met;
the prompt module is used for:
and sending the detected parking space information through visual reminding according to the fourth preset scene.
In summary, the driving assistance device proposed in the present embodiment has at least the following advantages:
according to the driving assistance device provided by the embodiment of the invention, the internal environment and the external environment of the vehicle can be monitored simultaneously, and the driver is prompted according to the interaction of the internal environment and the external environment to assist driving.
In the optional embodiment of the invention, various conditions can be combined for analysis and judgment, the prompting type can be selected, and the information interference on drivers is reduced by setting different prompting intensities.
In optional embodiments of the invention, multi-modal feedback can be provided, so that the user receives information in richer ways and the interaction feels more natural.
Fig. 7 is a flowchart of a driving assistance method of a fourth embodiment of the invention. A fourth embodiment of the present invention proposes a driving assistance method that may include the steps of:
s401, obtaining external environment information of a vehicle, wherein the external environment information comprises at least one of traffic environment information and natural environment information;
s402, obtaining vehicle interior information, wherein the vehicle interior information comprises: at least one of vehicle internal environment information, driver state information and vehicle operation information;
s403, determining to accord with a preset scene according to the external environment information and the vehicle internal information;
and S404, sending out corresponding prompt information according to the preset scene.
The foregoing steps S401, S402 and S404 are the same as or similar to steps S101, S102 and S104 of the first embodiment, and are not repeated herein. In step S403, the execution subject, such as an in-vehicle computer, may perform comprehensive judgment according to the vehicle internal information and the external environment information, so as to comprehensively judge whether the vehicle driving enters a preset scene that needs to be prompted.
The preset scenes correspond, for example, to common driving problems or pain points, such as: failing to start promptly after a red light turns green, not realizing that a lane change is needed, not using the turn signal when changing lanes, not decelerating when passing pedestrians in rain or snow, and forgetting the parking space information after parking. When the execution body judges that the vehicle is in such a scene, the driving assistance function of the embodiment of the invention can issue a reminder, thereby avoiding these problems.
In an embodiment of the present invention, a plurality of scenes may be preset, and when the vehicle interior information corresponds to the first information content and the external environment information corresponds to the second information content, it may be considered that a certain preset scene is satisfied, and in a subsequent step, for example, in step S404, corresponding prompt information may be given for the preset scenes.
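In this embodiment the scene can thus be determined directly from the pairing of a specific external information content with a specific internal information content, which can be sketched, in a non-limiting way, as a lookup keyed by the two content labels; the labels below are assumptions for illustration:

```python
# Mapping of (external information content, internal information content) pairs
# to preset scenes; the keys are illustrative labels, not defined by the patent.
SCENE_TABLE = {
    ("red_light_countdown", "driver_distracted"):   "scene_1_red_light",
    ("lane_change_required", "no_turn_signal"):     "scene_2_lane_change",
    ("puddle_and_pedestrian", "not_decelerating"):  "scene_3_puddle",
    ("parking_space_visible", "parking_operation"): "scene_4_parking",
}

def determine_scene(external_content: str, internal_content: str):
    """Return the preset scene matched by the (external, internal) content pair,
    or None when the pair does not correspond to any preset scene (no prompt)."""
    return SCENE_TABLE.get((external_content, internal_content))

print(determine_scene("lane_change_required", "no_turn_signal"))  # scene_2_lane_change
print(determine_scene("lane_change_required", "decelerating"))    # None -> no prompt issued
```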
Fig. 8 is a block diagram of a driving assistance apparatus according to a fifth embodiment of the present invention, corresponding to the fourth embodiment. A fifth embodiment of the present invention proposes a driving assist device, which may include the following modules, as shown in fig. 8:
an external information obtaining module 501, configured to obtain external environment information, where the external environment information includes traffic environment information and natural environment information;
an internal information obtaining module 502 for obtaining vehicle internal information, which includes: at least one of vehicle internal environment information, driver state information and vehicle operation information;
a scene determining module 503, configured to determine, according to the external environment information and the vehicle internal information, that a preset scene is met;
and the prompt module 504 is configured to send out corresponding prompt information according to the preset scene.
For the apparatus embodiment, since it is basically similar to the method embodiment, it is described relatively simply, and for the relevant points, refer to the partial description of the method embodiment.
Fig. 9 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention. As shown in fig. 9, the terminal device may include an input device 90, a processor 91, an output device 92, a memory 93, and at least one communication bus 94. The communication bus 94 is used to enable communication connections between the elements. The memory 93 may comprise a high speed RAM memory, and may also include a non-volatile storage NVM, such as at least one disk memory, in which various programs may be stored in the memory 93 for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 91 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 91 is coupled to the input device 90 and the output device 92 through a wired or wireless connection.
Alternatively, the input device 90 may include a variety of input devices, such as at least one of a user-oriented user interface, a device-oriented device interface, a software-programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; an audio input device such as a microphone may receive voice data. The output device 92 may include a display, a sound, or other output device.
In this embodiment, the processor of the terminal device includes a module for executing the functions of the modules of the data processing apparatus in each device, and specific functions and technical effects may refer to the foregoing embodiments, which are not described herein again.
Fig. 10 is a schematic diagram of a hardware structure of a terminal device according to another embodiment of the present invention. FIG. 10 is a specific embodiment of the implementation of FIG. 9. As shown in fig. 10, the terminal device of the present embodiment includes a processor 101 and a memory 102.
The processor 101 executes the computer program code stored in the memory 102 to implement the driving assistance method of fig. 1 to 5 in the above-described embodiment.
The memory 102 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The memory 102 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the processor 101 is provided in the processing assembly 100. The terminal device may further include: a communication component 103, a power component 104, a multimedia component 105, an audio component 106, an input/output interface 107 and/or a sensor component 108. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 100 generally controls the overall operation of the terminal device. The processing component 100 may include one or more processors 101 to execute instructions to perform all or part of the steps of the methods of fig. 1-5 described above. Further, the processing component 100 can include one or more modules that facilitate interaction between the processing component 100 and other components. For example, the processing component 100 may include a multimedia module to facilitate interaction between the multimedia component 105 and the processing component 100.
The power supply component 104 provides power to the various components of the terminal device. The power components 104 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 105 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 106 is configured to output and/or input audio signals. For example, the audio component 106 may include a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a voice recognition mode. The received audio signal may further be stored in the memory 102 or transmitted via the communication component 103. In some embodiments, the audio component 106 also includes a speaker for outputting audio signals.
The input/output interface 107 provides an interface between the processing component 100 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor component 108 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor component 108 can detect the open/closed status of the terminal device, the relative positioning of the components, the presence or absence of user contact with the terminal device. The sensor assembly 108 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 108 may also include a camera or the like.
The communication component 103 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card therein, so that the terminal device can log on to a GPRS network and establish communication with the server via the internet.
From the above, the communication component 103, the audio component 106, the input/output interface 107 and the sensor component 108 involved in the embodiment of fig. 10 can be implemented as the input device in the embodiment of fig. 9.
An embodiment of the present invention provides a terminal device, including: one or more processors; and one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the terminal device to perform a method as described in one or more of the embodiments of the invention.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The driving assistance method and the driving assistance device provided by the present invention are described in detail above, and the principle and the implementation manner of the present invention are explained in the present document by applying specific examples, and the description of the above embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (20)

1. A driving assistance method characterized by comprising:
acquiring external environment information of a vehicle, wherein the external environment information comprises at least one of traffic environment information and natural environment information;
acquiring vehicle interior information, the vehicle interior information including: at least one of vehicle internal environment information, driver state information and vehicle operation information;
when the external environment information and the vehicle internal information meet a specific matching relationship, determining that a preset scene is met;
and sending out corresponding prompt information according to the preset scene.
2. The method according to claim 1, wherein the step of determining that a preset scene is met when the external environment information and the vehicle interior information satisfy a specific matching relationship comprises:
judging whether the external environment information is matched with the vehicle internal information;
and when the matching is judged, determining a corresponding preset scene according to the matched external environment information and the vehicle internal information.
3. The method of claim 1, wherein the traffic environment information comprises at least one of road condition information, road information, traffic light information, and obstacle information.
4. The method of claim 3, wherein the traffic information comprises road congestion conditions.
5. The method of claim 3, wherein the road information comprises at least one of map-displayed lane change information, parking space information.
6. The method of claim 3, wherein the obstacle information includes at least one of position information of a vehicle and a preceding vehicle, position information of a vehicle and a following vehicle, and position information of a vehicle and a pedestrian.
7. The method of claim 1, wherein the natural environment information comprises at least one of weather information and light information.
8. The method of claim 1, wherein the in-vehicle environmental information includes at least one of an in-vehicle temperature and an in-vehicle humidity.
9. The method of claim 1, wherein the driver status information includes at least one of driver behavior and driver gaze information.
10. The method of claim 1, wherein the vehicle operation information includes at least one of steering wheel operation information, turn signal operation information, brake information, acceleration and deceleration information, and parking operation information.
11. The method of claim 1, wherein the prompting information comprises at least one of an audible prompting information, a visual prompting information, and a tactile prompting information.
12. The method of claim 1, wherein the external environmental information includes traffic light information, and the vehicle interior information includes behavior information of a driver;
the step of determining that a preset scene is met when the external environment information and the vehicle interior information satisfy a specific matching relationship includes:
when the traffic light information is judged to be in a red light countdown state and the behavior information of the driver indicates a stopped state, determining that a first preset scene is met;
the step of sending out corresponding prompt information according to the preset scene comprises the following steps:
and according to the first preset scene, sending out prompt information through at least one of visual prompt, auditory prompt and tactile prompt when the red light countdown is finished.
13. The method according to claim 1, wherein the external environment information includes lane change information displayed by an in-vehicle map, and the vehicle interior information includes driver's attention information and turn lamp operation information;
the step of determining that a preset scene is met when the external environment information and the vehicle interior information satisfy a specific matching relationship includes:
when the lane change information displayed by the in-vehicle map indicates that a lane change is required, the attention of the driver is lower than a threshold, and the turn signal operation information indicates that the turn signal has not been operated, determining that a second preset scene is met;
the step of sending out corresponding prompt information according to the preset scene comprises the following steps:
and sending out prompt information through at least one of a visual prompt, an auditory prompt, and a tactile prompt according to the second preset scene.
14. The method according to claim 1, wherein the external environment information includes road information, and the vehicle interior information includes acceleration and deceleration information;
the step of determining that a preset scene is met when the external environment information and the vehicle interior information satisfy a specific matching relationship includes:
when the road information indicates a waterlogged road with a pedestrian passing, and the acceleration and deceleration information indicates that the vehicle is not decelerating, determining that a third preset scene is met;
the step of sending out corresponding prompt information according to the preset scene comprises the following steps:
and sending out prompt information through at least one of a visual prompt, an auditory prompt, and a tactile prompt according to the third preset scene.
15. The method according to claim 1, wherein the external environment information includes parking space information, and the vehicle interior information includes parking operation information;
the step of determining that a preset scene is met when the external environment information and the vehicle interior information satisfy a specific matching relationship includes:
when parking space information available for parking the vehicle is detected and a parking operation behavior is detected, determining that a fourth preset scene is met;
the step of sending out corresponding prompt information according to the preset scene comprises the following steps:
and presenting the detected parking space information through a visual prompt according to the fourth preset scene.
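As a non-authoritative illustration of claims 12 to 15, the four preset scenes could be written as matching predicates over hypothetical field names; the field names and the attention threshold below are assumptions, not claimed values:

# Hypothetical predicates encoding the four preset scenes of claims 12 to 15.
def scene_1_red_light(ext: dict, intr: dict) -> bool:
    # Claim 12: red light countdown while the driver is stopped; the prompt is issued when the countdown ends.
    return ext.get("traffic_light") == "red_countdown" and intr.get("driver_behavior") == "stopped"

def scene_2_lane_change(ext: dict, intr: dict) -> bool:
    # Claim 13: the map requires a lane change, attention is below a threshold, and the turn signal is not operated.
    return (ext.get("lane_change_required") is True
            and intr.get("driver_attention", 1.0) < 0.5          # threshold value is illustrative
            and not intr.get("turn_signal_operated", False))

def scene_3_waterlogged_road(ext: dict, intr: dict) -> bool:
    # Claim 14: waterlogged road with a pedestrian passing while the vehicle is not decelerating.
    return (ext.get("road") == "waterlogged"
            and ext.get("pedestrian_passing", False)
            and not intr.get("decelerating", False))

def scene_4_parking(ext: dict, intr: dict) -> bool:
    # Claim 15: a parking space is detected and a parking operation is detected; the space is then shown visually.
    return bool(ext.get("parking_space")) and bool(intr.get("parking_operation"))

Each predicate stands for the specific matching relationship of one preset scene; when it holds, the prompt named in the corresponding claim would be issued through the visual, auditory, and/or tactile channel (for the first preset scene, when the red light countdown ends).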
16. A driving assistance method characterized by comprising:
acquiring external environment information of a vehicle, wherein the external environment information comprises at least one of traffic environment information and natural environment information;
acquiring vehicle interior information, the vehicle interior information including: at least one of vehicle internal environment information, driver state information and vehicle operation information;
determining that a preset scene is met according to the external environment information and the vehicle interior information;
and sending out corresponding prompt information according to the preset scene.
17. A driving assistance apparatus characterized by comprising:
the external information acquisition module is used for acquiring external environment information, and the external environment information comprises at least one of traffic environment information and natural environment information;
an internal information acquisition module for acquiring vehicle internal information, the vehicle internal information including: at least one of vehicle internal environment information, driver state information and vehicle operation information;
the scene judging module is used for determining that a preset scene is met when the external environment information and the vehicle interior information satisfy a specific matching relationship;
and the prompt module is used for sending out prompt information when it is judged that the external environment information and the vehicle interior information satisfy the specific matching relationship.
18. A driving assistance apparatus characterized by comprising:
the external information acquisition module is used for acquiring external environment information, and the external environment information comprises at least one of traffic environment information and natural environment information;
an internal information acquisition module for acquiring vehicle internal information, the vehicle internal information including: at least one of vehicle internal environment information, driver state information and vehicle operation information;
the scene determining module is used for determining that a preset scene is met according to the external environment information and the vehicle interior information;
and the prompt module is used for sending out corresponding prompt information according to the preset scene.
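As a structural sketch only, the module decomposition of the apparatus in claims 17 and 18 might be arranged as below; the class, its methods, and the rule objects are hypothetical and are not taken from the claims:

# Hypothetical module layout mirroring the apparatus of claims 17 and 18.
class DrivingAssistanceDevice:
    def __init__(self, external_source, interior_source, rules):
        self.external_source = external_source   # external information acquisition module
        self.interior_source = interior_source   # internal information acquisition module
        self.rules = rules                       # rules evaluated by the scene judging/determining module

    def step(self):
        ext = self.external_source.read()        # acquire external environment information
        intr = self.interior_source.read()       # acquire vehicle interior information
        for rule in self.rules:                  # determine whether a preset scene is met
            if rule.matches(ext, intr):
                self.prompt(rule)                # prompt module issues the corresponding prompt
                break

    def prompt(self, rule):
        # Placeholder for the visual, auditory, or tactile output channel.
        print(f"[prompt] scene={rule.scene}: {rule.prompt}")

Here the rule objects play the role of the scene judging or scene determining module's matching logic, and the read() calls stand in for whatever sensors or map services actually supply the information.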
19. A terminal device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform the method recited in one or more of claims 1-16.
20. One or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause a terminal device to perform the method recited in one or more of claims 1-16.
CN201910442000.XA 2019-05-24 2019-05-24 Driving assistance method and driving assistance device Pending CN112061132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910442000.XA CN112061132A (en) 2019-05-24 2019-05-24 Driving assistance method and driving assistance device

Publications (1)

Publication Number Publication Date
CN112061132A (en) 2020-12-11

Family

ID=73658128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910442000.XA Pending CN112061132A (en) 2019-05-24 2019-05-24 Driving assistance method and driving assistance device

Country Status (1)

Country Link
CN (1) CN112061132A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9290174B1 (en) * 2014-10-23 2016-03-22 GM Global Technology Operations LLC Method and system for mitigating the effects of an impaired driver
CN105185140A (en) * 2015-09-30 2015-12-23 上海修源网络科技有限公司 Auxiliary driving method and system
CN107097793A (en) * 2016-02-23 2017-08-29 Lg电子株式会社 Driver assistance and the vehicle with the driver assistance
CN105741586A (en) * 2016-04-29 2016-07-06 刘学 Automatic determining method and automatic determining system for vehicle road condition
US20190012910A1 (en) * 2017-07-10 2019-01-10 Toyota Research Institute, Inc. Providing user assistance in a vehicle based on traffic behavior models
CN109263646A (en) * 2018-08-03 2019-01-25 昆明理工大学 A kind of safety driving system based on artificial intelligence

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114755035A (en) * 2022-06-15 2022-07-15 中汽信息科技(天津)有限公司 Intelligent driving multidimensional test method based on vehicle-mounted terminal

Similar Documents

Publication Publication Date Title
JP6881444B2 (en) Systems and methods for transmitting information to vehicles, vehicles, and non-transient computer-readable storage media
WO2022062659A1 (en) Intelligent driving control method and apparatus, vehicle, electronic device, and storage medium
US10669980B2 (en) Method, apparatus, and system for launching engine start-stop function in vehicles
US9319860B2 (en) Mobile terminal that determine whether the user is walking while watching the mobile terminal
CN108860141B (en) Parking method, parking device and storage medium
EP3648076B1 (en) Method and apparatus for interacting traffic information, and computer storage medium
CN110660259B (en) Parking prompting method and device, electronic equipment and readable medium
JP6613623B2 (en) On-vehicle device, operation mode control system, and operation mode control method
CN109017554B (en) Driving reminding method and device and computer readable storage medium
JP6826940B2 (en) Electronics, roadside units, operating methods and control programs and transportation systems
CN109624985B (en) Early warning method and device for preventing fatigue driving
CN104102007A (en) Head-mounted display and control method thereof
CN107704794B (en) Vehicle information processing method, device and system
JP2010175516A (en) Device and method for evaluating energy saving
CN112612281A (en) Parking control method and device for automobile and computer storage medium
US20190129412A1 (en) Parking-space-exit assist system
CN112061132A (en) Driving assistance method and driving assistance device
CN114207692B (en) Driving support device, driving support system, and driving support method
KR20210147410A (en) Method and apparatus for providing information using vehicle's camera
CN110114809B (en) Method and device for alerting a driver to start at a light signal device with a varying output function
JP2013057321A (en) Energy saving evaluation device and energy saving evaluation method
CN114245302A (en) Interactive method and device for automatic driving of vehicle, electronic equipment and storage medium
JP2008026241A (en) On-vehicle navigation system
CN111824170B (en) Method, system, device and electronic equipment for obtaining vehicle performance information
CN115734161A (en) Method and device for detecting parking state, spike device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201217

Address after: Room 603, 6/F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China

Applicant after: Zebra smart travel network (Hong Kong) Limited

Address before: Fourth floor, P.O. Box 847, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.