CN117445810A - Vehicle auxiliary driving method, device, medium and vehicle - Google Patents

Vehicle auxiliary driving method, device, medium and vehicle

Info

Publication number
CN117445810A
Authority
CN
China
Prior art keywords
vehicle
road section
information
section
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210854845.1A
Other languages
Chinese (zh)
Inventor
高宝岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Rockwell Technology Co Ltd
Original Assignee
Beijing Rockwell Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Rockwell Technology Co Ltd filed Critical Beijing Rockwell Technology Co Ltd
Priority to CN202210854845.1A priority Critical patent/CN117445810A/en
Publication of CN117445810A publication Critical patent/CN117445810A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0098 Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure relates to a vehicle assisted driving method, device, medium and vehicle. The vehicle assisted driving method includes: acquiring real-time information of a vehicle; judging, based on the real-time information, whether the vehicle is on a target road section, where a driver's visual blind area exists on the target road section; generating a display instruction when the vehicle is on the target road section; and displaying a panoramic image of the surroundings of the vehicle based on the display instruction. With this technical scheme, the panoramic image is displayed automatically when the vehicle is on the target road section, so that the driver can conveniently check the environment around the vehicle and identify actual or potential risks in time. This compensates for the driver's visual blind area, improves the accuracy of the driver's risk judgment, helps avoid traffic accidents, and thereby protects the safety of the driver and passengers and improves traffic safety.

Description

Vehicle auxiliary driving method, device, medium and vehicle
Technical Field
The disclosure relates to the technical field of vehicles, and in particular relates to a vehicle auxiliary driving method, a device, a medium and a vehicle.
Background
With the rapid development of the transportation industry, vehicles have gradually become a primary means of travel, and as the number of vehicles grows, traffic safety has become an increasing concern for users.
Among existing traffic accidents, a high proportion is caused by the road conditions of the section being driven. For example, when a vehicle passes through a garage, a mountain road, a bridge, or a steep slope, the driver's field of view is limited by the road section, which can lead to traffic accidents.
Disclosure of Invention
In order to solve the above technical problem, the present disclosure provides a vehicle assisted driving method, device, medium and vehicle, so that a driver can check in time whether there is a risk around the vehicle, thereby improving driving safety.
In a first aspect, the present disclosure provides a vehicle assisted driving method, including:
acquiring real-time information of a vehicle;
judging, based on the real-time information, whether the vehicle is on a target road section, wherein a driver's visual blind area exists on the target road section;
generating a display instruction when the vehicle is in the target road section;
and displaying a panoramic image of the surroundings of the vehicle based on the display instruction.
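The four claimed steps can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the pitch-angle rule, and the 8-degree threshold are all assumptions.

```python
def is_on_target_section(info):
    """Judge whether the vehicle is on a road section with a driver blind area.

    Only one illustrative rule is shown: a slope-crossing section is assumed
    when the absolute pitch angle exceeds an (assumed) 8-degree threshold.
    """
    return abs(info.get("pitch_deg", 0.0)) > 8.0

def assisted_driving_step(info):
    """Steps 2 and 3: judge the road section and generate a display instruction.

    Step 1 (sensor acquisition) and step 4 (rendering the panoramic image)
    live in the sensor and display layers and are omitted here.
    """
    if is_on_target_section(info):
        return {"cmd": "show_panorama"}
    return None
```

The display layer would then render the surround view whenever a `show_panorama` instruction is produced.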
Optionally, the real-time information includes at least one of environmental information, a pitch angle, a turn signal state, a steering angle of a steering wheel, and a head state of the driver, and the determining whether the vehicle is on the target road section based on the real-time information includes:
judging, based on the environmental information, whether the vehicle is on at least one of a turning road section, a garage-entering road section, a garage-exiting road section, a mountain-entering road section, a mountain-exiting road section, a tunnel-entering road section, a tunnel-exiting road section, a bridge-crossing road section, and a slope-crossing road section;
and/or, judging, based on the pitch angle, whether the vehicle is on a slope-crossing road section;
and/or, judging, based on at least one of the turn signal state, the steering angle of the steering wheel, and the head state of the driver, whether the vehicle is on a turning road section.
Optionally, the target road section includes a slope-crossing road section, and the determining whether the vehicle is on the target road section based on the real-time information includes:
determining, based on the real-time information, the current gradient of the road on which the vehicle is located;
and judging, based on the current gradient, whether the vehicle is on the slope-crossing road section.
Optionally, the judging whether the vehicle is on the slope-crossing road section includes:
judging whether the current gradient is greater than a preset gradient threshold;
when the current gradient is greater than the preset gradient threshold, judging whether the duration for which the current gradient remains greater than the preset gradient threshold is greater than a preset time threshold;
the vehicle assisted driving method further includes:
determining that the vehicle is on the slope-crossing road section when the current gradient is greater than the preset gradient threshold and the duration is greater than the preset time threshold.
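The gradient-plus-duration condition above can be sketched as a small stateful detector. The 5-degree and 2-second defaults are assumptions; the patent leaves both thresholds unspecified.

```python
class SlopeSectionDetector:
    """Hypothetical detector for the slope-crossing condition: the gradient
    must exceed a preset threshold and stay above it for a preset duration."""

    def __init__(self, grade_threshold_deg=5.0, hold_seconds=2.0):
        self.grade_threshold_deg = grade_threshold_deg
        self.hold_seconds = hold_seconds
        self._above_since = None  # time when the gradient first exceeded the threshold

    def update(self, grade_deg, now_s):
        """Feed one gradient sample; return True once the vehicle is judged
        to be on a slope-crossing road section."""
        if grade_deg > self.grade_threshold_deg:
            if self._above_since is None:
                self._above_since = now_s
            return (now_s - self._above_since) >= self.hold_seconds
        self._above_since = None  # gradient dropped: reset the duration timer
        return False
```

Requiring the duration as well as the magnitude filters out momentary pitch spikes, e.g. from speed bumps, which matches the two-threshold structure of the claim.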
Optionally, the determining, based on the real-time information, the current gradient of the road on which the vehicle is located includes:
performing image feature recognition based on the real-time information and determining identification features, the identification features including at least one of a lane line image, a roadside landscape distance, and road sign indication information;
determining, based on the identification features, the current gradient of the road on which the vehicle is located through a neural network or pattern recognition;
and/or, extracting the current pose of the vehicle based on the real-time information;
and determining the pitch angle of the vehicle based on the current pose so as to determine the current gradient of the road on which the vehicle is located, wherein the angle value of the current gradient is equal to the angle value of the pitch angle.
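The pose-based branch is trivial to express: per the description, the gradient angle is taken equal to the pitch angle. The percent-grade helper is an illustrative extra, not part of the patent.

```python
import math

def current_grade_from_pitch(pitch_deg):
    """Per the description, the gradient angle of the road is taken to be
    equal to the pitch angle extracted from the vehicle pose."""
    return pitch_deg

def grade_percent(grade_deg):
    """Illustrative extra: convert the gradient angle to the percent grade
    commonly shown on road signs."""
    return math.tan(math.radians(grade_deg)) * 100.0
```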
Optionally, the judging, based on at least one of the turn signal state, the steering angle of the steering wheel, and the head state of the driver, whether the vehicle is on a turning road section includes:
judging, based on at least one of the turn signal state, the steering angle of the steering wheel, and the head state of the driver, whether at least one of the following conditions is satisfied: the turn signal is on, the steering angle of the steering wheel is greater than a first angle threshold, and the driver's head is offset beyond a preset range;
the method further includes:
determining that the vehicle is on a turning road section when at least one of the conditions that the turn signal is on, the steering angle of the steering wheel is greater than the first angle threshold, and the driver's head is offset beyond the preset range is satisfied.
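The turning-section condition is a disjunction of three signals, which can be sketched in one function. The 30-degree and 20-degree defaults are assumptions; the patent only names a "first angle threshold" and a "preset range".

```python
def is_turning_section(turn_signal_on=False, steering_angle_deg=0.0,
                       head_offset_deg=0.0,
                       first_angle_threshold_deg=30.0, head_range_deg=20.0):
    """Hypothetical check for the turning-section condition: satisfying any
    one of the three signals is sufficient."""
    return bool(turn_signal_on
                or steering_angle_deg > first_angle_threshold_deg
                or abs(head_offset_deg) > head_range_deg)
```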
Optionally, the vehicle assisted driving method further includes:
generating an early warning instruction when the vehicle is on a target road section, the early warning instruction being related at least to the road section characteristics of the target road section;
and performing a safety warning prompt based on the early warning instruction.
Optionally, the real-time information includes environmental information, and the vehicle assisted driving method further includes:
acquiring environmental information;
determining obstacle information around the vehicle based on the environmental information, the obstacle information including an obstacle position and/or a distance between the obstacle and the vehicle;
generating an obstacle prompt instruction based on the obstacle information;
and prompting the obstacle position and/or distance based on the obstacle prompt instruction.
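The obstacle-prompting steps can be sketched as follows. The record format, the "warning"/"info" levels, and the 2 m escalation distance are all assumptions; the patent only specifies that the position and/or distance are prompted.

```python
def obstacle_prompts(obstacles, warn_distance_m=2.0):
    """Hypothetical prompt generation: each obstacle is a (position, distance)
    pair, and obstacles closer than an assumed warning distance are escalated."""
    prompts = []
    for position, distance_m in obstacles:
        level = "warning" if distance_m < warn_distance_m else "info"
        prompts.append({"position": position,
                        "distance_m": distance_m,
                        "level": level})
    return prompts
```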
Optionally, the real-time information further includes current gear information, and the vehicle assisted driving method further includes:
acquiring gear information;
generating an auxiliary display instruction based on the gear information;
and superimposing, based on the auxiliary display instruction, the image corresponding to the front camera or the rear camera associated with the gear information on the panoramic image.
Optionally, the superimposing the image corresponding to the front camera or the rear camera associated with the gear information on the panoramic image includes:
when the gear information indicates a forward gear, controlling the image corresponding to the front camera to be superimposed on the panoramic image;
and when the gear information indicates a reverse gear, controlling the image corresponding to the rear camera to be superimposed on the panoramic image.
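The gear-to-camera mapping above amounts to a small lookup. The gear labels are illustrative; the patent only distinguishes a forward gear from a reverse gear.

```python
def camera_overlay_for_gear(gear):
    """Map gear information to the camera whose image is superimposed on the
    panoramic image."""
    if gear in ("D", "forward"):
        return "front_camera"
    if gear in ("R", "reverse"):
        return "rear_camera"
    return None  # other gears: no auxiliary overlay is generated
```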
In a second aspect, an embodiment of the present disclosure further provides a vehicle assisted driving device, including:
an information acquisition module, configured to acquire real-time information of a vehicle;
a road section judging module, configured to judge, based on the real-time information, whether the vehicle is on a target road section, wherein a driver's visual blind area exists on the target road section;
an instruction generation module, configured to generate a display instruction when the vehicle is on the target road section;
and an image display module, configured to display a panoramic image of the surroundings of the vehicle based on the display instruction.
Optionally, the real-time information includes at least one of environmental information, a pitch angle, a turn signal state, a steering angle of the steering wheel, and a head state of the driver, and the road section judging module is configured to judge, based on the environmental information, whether the vehicle is on at least one of a turning road section, a garage-entering road section, a garage-exiting road section, a mountain-entering road section, a mountain-exiting road section, a tunnel-entering road section, a tunnel-exiting road section, a bridge-crossing road section, and a slope-crossing road section;
and/or, judge, based on the pitch angle, whether the vehicle is on a slope-crossing road section;
and/or, judge, based on at least one of the turn signal state, the steering angle of the steering wheel, and the head state of the driver, whether the vehicle is on a turning road section.
Optionally, the target road section includes a slope-crossing road section, and the road section judging module is configured to determine, based on the real-time information, the current gradient of the road on which the vehicle is located;
and judge, based on the current gradient, whether the vehicle is on the slope-crossing road section.
Optionally, the road section judging module is configured to judge whether the current gradient is greater than a preset gradient threshold;
when the current gradient is greater than the preset gradient threshold, judge whether the duration for which the current gradient remains greater than the preset gradient threshold is greater than a preset time threshold;
and determine that the vehicle is on the slope-crossing road section when the current gradient is greater than the preset gradient threshold and the duration is greater than the preset time threshold.
Optionally, the road section judging module is configured to perform image feature recognition based on the real-time information to determine identification features, the identification features including at least one of a lane line image, a roadside landscape distance, and road sign indication information;
determine, based on the identification features, the current gradient of the road on which the vehicle is located through a neural network or pattern recognition;
and/or, extract the current pose of the vehicle based on the real-time information;
and determine the pitch angle of the vehicle based on the current pose so as to determine the current gradient of the road on which the vehicle is located, wherein the angle value of the current gradient is equal to the angle value of the pitch angle.
Optionally, the road section judging module is configured to judge, based on at least one of the turn signal state, the steering angle of the steering wheel, and the head state of the driver, whether at least one of the following conditions is satisfied: the turn signal is on, the steering angle of the steering wheel is greater than a first angle threshold, and the driver's head is offset beyond a preset range;
and determine that the vehicle is on a turning road section when at least one of those conditions is satisfied.
Optionally, the vehicle assisted driving device further includes a prompt module;
the instruction generation module is configured to generate an early warning instruction when the vehicle is on a target road section, the early warning instruction being related at least to the road section characteristics of the target road section;
and the prompt module is configured to perform a safety warning prompt based on the early warning instruction.
Optionally, the real-time information includes environmental information, and the information acquisition module is configured to acquire the environmental information and determine obstacle information around the vehicle based on the environmental information, the obstacle information including an obstacle position and/or a distance between the obstacle and the vehicle;
the instruction generation module is configured to generate an obstacle prompt instruction based on the obstacle information;
and the prompt module is configured to prompt the obstacle position and/or distance based on the obstacle prompt instruction.
Optionally, the real-time information further includes current gear information; the information acquisition module is configured to acquire the gear information; the instruction generation module is configured to generate an auxiliary display instruction based on the gear information;
and the image display module is configured to superimpose, based on the auxiliary display instruction, the image corresponding to the front camera or the rear camera associated with the gear information on the panoramic image.
Optionally, the image display module is configured to control the image corresponding to the front camera to be superimposed on the panoramic image when the gear information indicates a forward gear;
and to control the image corresponding to the rear camera to be superimposed on the panoramic image when the gear information indicates a reverse gear.
In a third aspect, embodiments of the present disclosure further provide a computer-readable storage medium storing a program or instructions that cause a computer to perform the steps of any one of the vehicle assisted driving methods provided in the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a vehicle, including: a processor and a memory;
the processor is configured to perform the steps of any one of the vehicle assisted driving methods provided in the first aspect by invoking a program or instructions stored in the memory.
Compared with the prior art, the technical scheme provided by the disclosure has the following advantages:
the vehicle auxiliary driving method comprises the steps of obtaining real-time information of a vehicle; judging whether the vehicle is on a target road section or not based on the real-time information; wherein, a visual blind area of a driver exists under the target road section; generating a display instruction when the vehicle is in the target road section; based on the display instruction, a panoramic image of the surroundings of the vehicle is displayed. Therefore, whether the vehicle is located on a target road section or not can be judged through real-time information of the vehicle, when the vehicle is judged to be located on the target road section, the existence of a visual blind area of the driver is indicated, a display instruction is generated at the moment, and then the vehicle-mounted display device is controlled to display panoramic images around the vehicle based on the display instruction, so that the driver can acquire environmental information around the vehicle by looking at the panoramic images, and the driver can conveniently judge whether risks or potential risks exist around the vehicle or not.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a method for assisting driving of a vehicle according to an embodiment of the disclosure;
fig. 2 is a schematic structural diagram of a vehicle assisted driving device according to an embodiment of the disclosure;
fig. 3 is a schematic structural diagram of a vehicle according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure; however, the present disclosure may be practiced in ways other than those described herein. It will be apparent that the embodiments in the specification are only some, rather than all, of the embodiments of the disclosure.
The vehicle assisted driving method provided by the embodiments of the present disclosure can be applied during driving: on a target road section, the panoramic image around the vehicle is displayed automatically to compensate for the driver's visual blind area, so that the driver can accurately identify risks, avoid them in time, and drive more safely. Specifically, whether the vehicle is on a target road section is judged from the acquired real-time information of the vehicle. When the vehicle is judged to be on a target road section, a visual blind area exists for the driver; a display instruction is then generated, and the vehicle-mounted display device is controlled to display a panoramic image of the surroundings of the vehicle. By viewing the panoramic image, the driver obtains information about the environment around the vehicle and can judge whether there are actual or potential risks nearby. Displaying the panoramic image thus compensates for the visual blind area caused by the limitations of the road section, improves the accuracy of the driver's risk judgment, helps avoid traffic accidents, improves driving safety, and protects the lives of the driver and passengers.
The following describes the vehicle assisted driving method, device, medium and vehicle provided by embodiments of the present disclosure with reference to the accompanying drawings.
In some embodiments, fig. 1 is a schematic flow chart of a vehicle driving assistance method according to an embodiment of the disclosure.
As shown in fig. 1, the vehicle assisted driving method includes:
s101, acquiring real-time information of the vehicle.
The real-time information is information representing the real-time state of the vehicle during driving. The real-time state may include the motion state of the vehicle and the state of its surroundings, and may also include interior and exterior temperature, brightness, or other vehicle-related states, which are not limited herein.
Specifically, during driving, real-time information of the vehicle can be collected by vehicle sensors and transmitted to a processor; the processor receives the real-time information to complete its acquisition. The processor may actively request the real-time information or passively receive it, which is not limited herein.
Illustratively, the vehicle may be configured with an image sensor, which may include one or more surround-view cameras. Images captured by the cameras contain scene information around the vehicle, also referred to as environmental information, and the cameras transmit the captured images to the processor; correspondingly, the processor acquires the images. The vehicle may further be configured with a radar detection sensor, which acquires point cloud data to determine the distance between the vehicle and surrounding entities and transmits the distance information to the processor; correspondingly, the processor acquires the distance information. The vehicle may further be configured with a pose measurement sensor, through which the pose of the vehicle, including the pitch angle, can be acquired. The pose may include position information and attitude information: the position information may be longitude and latitude or three-dimensional coordinates, and the attitude information may include, for example, pitch angle and roll angle information. The pose measurement sensor transmits the acquired pitch angle to the processor; correspondingly, the processor acquires the pitch angle. In this manner, the processor can obtain the real-time information collected by the various sensors of the vehicle.
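The three sensor streams described above can be aggregated into one real-time information record, sketched below. The sensor interfaces and field names are assumptions for illustration, not the patent's API.

```python
def collect_real_time_info(camera, radar, pose_sensor):
    """Hypothetical aggregation of per-sensor readings into one record."""
    return {
        "environment_image": camera.capture(),   # surround-view image(s)
        "obstacle_distances": radar.ranges(),    # distances derived from point clouds
        "pitch_deg": pose_sensor.pitch(),        # pitch angle from the vehicle pose
    }
```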
In other embodiments, the real-time information of the vehicle may be obtained in other manners known to those skilled in the art, which are not described again here.
S102, judging, based on the real-time information, whether the vehicle is on a target road section, wherein a driver's visual blind area exists on the target road section.
Since the real-time information represents the real-time state of the vehicle, the road section on which the vehicle is driving can be determined by processing it, and it can then be judged whether the vehicle is on a target road section, that is, a road section on which the driver has a visual blind area. If the vehicle is on a target road section, a visual blind area exists: the driver cannot observe the scene within the blind area or tell whether a risk is present, which increases the risk of driving and threatens the safety of the driver and passengers.
In some embodiments, the real-time information includes at least one of environmental information, a pitch angle, a turn signal state, a steering angle of the steering wheel, and the driver's head state, and determining whether the vehicle is on the target road section based on the real-time information includes:
judging, based on the environmental information, whether the vehicle is on at least one of a turning road section, a garage-entering road section, a garage-exiting road section, a mountain-entering road section, a mountain-exiting road section, a tunnel-entering road section, a tunnel-exiting road section, a bridge-crossing road section, and a slope-crossing road section;
and/or, judging, based on the pitch angle, whether the vehicle is on a slope-crossing road section;
and/or, judging, based on at least one of the turn signal state, the steering angle of the steering wheel, and the driver's head state, whether the vehicle is on a turning road section.
Specifically, the real-time information of the vehicle may include one or more of the environmental information, the pitch angle, the turn signal state, the steering angle of the steering wheel, and the driver's head state. Accordingly, whether the vehicle is on the target road section may be determined from the environmental information alone, from the pitch angle alone, from at least one of the turn signal state, the steering angle of the steering wheel, and the driver's head state, or from any combination of these.
For example, when the real-time information is environmental information, the target road section may include at least one of a turning road section, a garage-entering road section, a garage-exiting road section, a mountain-entering road section, a mountain-exiting road section, a tunnel-entering road section, a tunnel-exiting road section, a bridge-crossing road section, and a slope-crossing road section. In this case, step S102 may specifically include: judging, based on the environmental information, whether the vehicle is on at least one of these road sections.
For example, when the real-time information is the pitch angle, the target road section may include a slope-crossing road section. In this case, step S102 may specifically include: judging, based on the pitch angle, whether the vehicle is on a slope-crossing road section.
For example, when the real-time information is at least one of the turn signal state, the steering angle of the steering wheel, and the driver's head state, the target road section may include a turning road section. In this case, step S102 may specifically include: judging, based on at least one of these signals, whether the vehicle is on a turning road section.
The three judgment processes above may be executed individually or in combination; when combined, the execution order is not limited. When only one is executed, the small data processing load improves the data processing rate and shortens the processing time, thereby improving the response rate of the vehicle; when they are combined, judging the target road section from multiple signals improves the accuracy of the judgment result, which avoids frequent triggering of the vehicle-mounted display device, saves power consumption, and extends the vehicle's endurance.
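The individual-or-combined decision described above can be sketched as follows; the signal names, section labels, and threshold values here are illustrative assumptions, not part of the disclosed method:

```python
def is_target_section(env_section=None, pitch_deg=None,
                      turn_signal_on=False, steering_deg=0.0,
                      head_offset_deg=0.0,
                      pitch_threshold=30.0, steering_threshold=90.0,
                      head_threshold=20.0):
    """Return True if any available real-time signal places the
    vehicle on a target road section (a blind-spot-prone section)."""
    # 1. Environment information: the perception stack labels the section.
    if env_section in {"turn", "garage_in", "garage_out", "mountain_in",
                       "mountain_out", "tunnel_in", "tunnel_out",
                       "bridge", "slope"}:
        return True
    # 2. Pitch angle: a large absolute pitch suggests a slope section.
    if pitch_deg is not None and abs(pitch_deg) > pitch_threshold:
        return True
    # 3. Steering cues: turn signal, steering-wheel angle, head offset.
    if (turn_signal_on or abs(steering_deg) > steering_threshold
            or abs(head_offset_deg) > head_threshold):
        return True
    return False
```

Each branch can run alone when response speed matters, or all three can run together when accuracy matters, mirroring the trade-off described above.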
Illustratively, the slope road section may include an uphill road section and a downhill road section.
In some embodiments, the target road section includes a slope road section. In this case, S102 may specifically include:
determining the current gradient of the road where the vehicle is located based on the real-time information;
judging, based on the current gradient, whether the vehicle is on the slope road section.
The current gradient of the road on which the vehicle is located is the angle between a reference plane of the vehicle and the horizontal plane. For example, the reference plane may be a plane parallel to the vehicle chassis, the plane in which the vehicle chassis lies, the plane in which the seats in the vehicle lie, or any other plane that can serve as a reference plane; this is not limited herein.
Based on the above, if the current gradient is large, the angle between the vehicle's reference plane and the horizontal plane is large, and the vehicle is judged to be driving on a steep slope, that is, on a slope road section; if the current gradient is small, that angle is small, and the vehicle is judged to be on a gentle road section rather than a slope road section. Therefore, the current gradient of the road section where the vehicle is located can first be determined from the real-time information, and whether the vehicle is on the slope road section can then be judged from the current gradient.
In this way, whether the vehicle is on the target road section is determined.
In the following, an exemplary description is given of how the current gradient of the road on which the vehicle is located is determined.
In some embodiments, determining the current grade of the road on which the vehicle is located based on the real-time information may include:
based on the real-time information, carrying out image feature recognition and determining identification features;
based on the identification feature, a current grade of the road on which the vehicle is located is determined.
The identification feature may be a feature associated with the gradient of the road on which the vehicle is located, and may include at least one of a lane line image, a roadside landscape distance, and road sign indication information; the current gradient of the road can be determined based on these features. For example, the identification feature may be determined from an image captured by the camera in combination with the features of that image.
Specifically, the environmental image captured by the camera contains real-time information about the vehicle's surroundings, such as lane lines, roadside landscape distances, and road sign indication information, from which the identification features are determined; the current gradient of the road on which the vehicle is located is then determined from these image features through pattern recognition or a neural network, that is, through a suitable image recognition or neural network algorithm.
It should be noted that neural networks and pattern recognition are well-known technical means to those skilled in the art and will not be described in detail herein.
In some embodiments, determining the current grade of the road on which the vehicle is located based on the real-time information may further include:
determining a current pose of the vehicle based on the real-time information;
based on the current pose, a pitch angle of the vehicle is determined to determine a current grade of a road on which the vehicle is located.
The current pose of the vehicle represents the position and the pose of the vehicle, the pose comprises a pitching pose, the pitching pose comprises a pitching angle, and the angle value of the pitching angle is equal to the angle value of the current gradient.
Therefore, the pitching angle of the vehicle can be extracted from the real-time information, so that the current pose of the vehicle is determined, and further, the magnitude of the current gradient of the road on which the vehicle is positioned is determined based on the current pose of the vehicle.
In some embodiments, the current gradient of the road where the vehicle is located may be determined by combining the identification feature corresponding to the image and the current pose of the vehicle, that is, the current gradient of the road where the vehicle is located may be determined by combining the two methods, and the execution sequence of the two methods is not limited.
It can be understood that when only one of these modes is used to determine the current gradient of the road on which the vehicle is located, the small data processing load improves the data processing rate and shortens the processing time, thereby improving the response rate of the vehicle; when the two modes are combined, the accuracy of the judgment result is improved, which avoids frequent triggering of the vehicle-mounted display device, saves power consumption, and extends the vehicle's endurance.
In connection with the current gradient of the road on which the vehicle is located, the following exemplarily describes how to determine whether the vehicle is on the slope road section.
In some implementations, determining whether the vehicle is on the slope road section may include:
judging whether the current gradient is larger than a preset gradient threshold value or not;
when the current gradient is greater than a preset gradient threshold value, judging whether the duration time of the current gradient greater than the preset gradient threshold value is greater than a preset time threshold value;
the vehicle assisted driving method further includes:
and when the current gradient is greater than the preset gradient threshold value and the duration is greater than the preset time threshold value, determining that the vehicle is on the uphill road section.
The preset gradient threshold is used to measure whether the current gradient is excessive, and the preset time threshold is used to measure whether the state of excessive gradient has lasted long enough. If the gradient is excessive and its duration is long enough, the vehicle can be judged to be on the slope road section; otherwise, that is, if either the current gradient or the duration fails to satisfy the above condition, it can be determined that the vehicle is not on the slope road section. For example, if the current gradient is greater than the preset gradient threshold but the duration has not reached the preset time threshold, it is determined that the vehicle is not on the slope road section.
For example, the preset gradient threshold may be 30 degrees and the preset time threshold may be 5 seconds; on this basis, when the current gradient of the vehicle is determined to be greater than 30 degrees and the duration is greater than 5 seconds, it may be determined that the vehicle is on the slope road section. In particular, the slope road section may include an uphill road section and a downhill road section. It can be understood that the current gradient here is expressed as an absolute value, covering both the gradient corresponding to an uphill road section and the gradient corresponding to a downhill road section.
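The gradient-plus-duration check above can be sketched as a small stateful detector; the 30 degree and 5 second values follow the example in the text, while the class and field names are hypothetical:

```python
class SlopeSectionDetector:
    """Judges the slope section from (grade, timestamp) samples: the
    absolute grade must stay above the gradient threshold for longer
    than the time threshold before the section is confirmed."""

    GRADE_THRESHOLD_DEG = 30.0   # preset gradient threshold (example value)
    TIME_THRESHOLD_S = 5.0       # preset time threshold (example value)

    def __init__(self):
        self._exceed_since = None  # time the grade first exceeded the threshold

    def update(self, grade_deg, now_s):
        """Feed one sample; return True once the vehicle is judged to be
        on a slope section. The absolute value covers both uphill and
        downhill sections, as in the text."""
        if abs(grade_deg) > self.GRADE_THRESHOLD_DEG:
            if self._exceed_since is None:
                self._exceed_since = now_s
            return (now_s - self._exceed_since) > self.TIME_THRESHOLD_S
        self._exceed_since = None  # grade dropped below threshold: reset
        return False
```

Resetting the timer whenever the grade drops back below the threshold is what prevents brief bumps from triggering the display, matching the misjudgment-avoidance goal described below.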
In this way, whether the vehicle is on the slope road section is judged through the preset gradient threshold and the preset time threshold, which improves the accuracy of the judgment and avoids misjudgment; further, the number of switching operations of the vehicle-mounted electronic device can be reduced and power consumption lowered, thereby extending the vehicle's endurance.
In other embodiments, the determination may also be made by distinguishing between an uphill section and a downhill section.
For example, when the current grade is greater than a first preset grade threshold and the duration is greater than a first preset time threshold, determining that the vehicle is on an uphill road segment; and when the current gradient is smaller than the second preset gradient threshold value and the duration is larger than the second preset time threshold value, determining that the vehicle is on a downhill road section. The absolute value of the first preset gradient threshold value and the absolute value of the second preset gradient threshold value can be equal or unequal; the first preset time threshold and the second preset time threshold may be equal or unequal, and are not limited herein.
For example, the first preset gradient threshold may be +30 degrees, the second preset gradient threshold may be -30 degrees, and the first preset time threshold and the second preset time threshold may each be 5 seconds. On this basis, when the current gradient of the road on which the vehicle is located is greater than +30 degrees and the duration is greater than 5 seconds, it is determined that the vehicle is on an uphill road section; when the current gradient is less than -30 degrees and the duration is greater than 5 seconds, it is determined that the vehicle is on a downhill road section.
In other embodiments, the specific values of the preset gradient threshold and the preset time threshold, as well as the first preset gradient threshold, the second preset gradient threshold, the first preset time threshold, and the second preset time threshold, may be set based on the requirements of the vehicle assisted driving method; these values are not limited herein.
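A minimal sketch of the signed-threshold variant, using the illustrative +30/-30 degree and 5 second values from the example above (the function and label names are assumptions):

```python
UP_GRADE_DEG = 30.0      # first preset gradient threshold (example)
DOWN_GRADE_DEG = -30.0   # second preset gradient threshold (example)
UP_TIME_S = DOWN_TIME_S = 5.0  # first/second preset time thresholds

def classify_slope(grade_deg, duration_s):
    """Classify one sustained grade reading as uphill, downhill, or
    neither, using signed thresholds so the two cases are separated."""
    if grade_deg > UP_GRADE_DEG and duration_s > UP_TIME_S:
        return "uphill"
    if grade_deg < DOWN_GRADE_DEG and duration_s > DOWN_TIME_S:
        return "downhill"
    return "none"
```

Keeping the two thresholds independent lets an implementation use asymmetric values, as the text allows.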
In the following, an exemplary description is given of how it is determined that the vehicle is in a turning road section.
In some embodiments, determining whether the vehicle is in a turn segment may include:
judging, based on at least one of the turn signal state, the steering angle of the steering wheel, and the driver's head state, whether at least one of the following conditions is satisfied: the turn signal is in an on state, the steering angle of the steering wheel is greater than a first angle threshold, and the driver's head is offset beyond a preset range;
The vehicle assisted driving method further includes:
and when at least one of the conditions that the turn signal is in an on state, that the steering angle of the steering wheel is greater than the first angle threshold, and that the driver's head is offset beyond the preset range is satisfied, determining that the vehicle is on a turning road section.
Specifically, the turn signal state of the vehicle can be acquired, and whether the vehicle is on a turning road section can be judged from that state; for example, when the left or right turn signal is found to be on, it can be determined that the vehicle is on, or is about to enter, a turning road section. Alternatively, the steering angle of the steering wheel can be acquired and used for the judgment; for example, when the steering angle is greater than the first angle threshold, the vehicle is determined to be on a turning road section. Alternatively, the driver's head offset can be acquired and used for the judgment; for example, when the driver's head is found to be offset beyond a preset range (offset toward a target area, or offset by more than a preset offset angle), it can be determined that the vehicle is on, or is about to enter, a turning road section. Thus, when at least one of the conditions that the turn signal is on, that the steering angle of the steering wheel is greater than the first angle threshold, and that the driver's head is offset beyond the preset range is satisfied, it may be determined that the vehicle is on a turning road section.
The first angle threshold and the offset preset range may be set according to requirements of the vehicle driving assistance method provided in the embodiment of the disclosure.
For example, the turn signal state is determined by detecting the turn signal through a turn signal sensor or through the turn signal circuit; the steering angle, steering direction, rotation speed, and other information of the steering wheel are acquired through a steering angle sensor; and the driver's head state is obtained through a sensing camera, which captures an image from which the head state is determined by an image recognition or neural network algorithm.
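The turning-section judgment above can be sketched as a single predicate over the three cues; the concrete threshold values are illustrative assumptions, since the text leaves them to the implementer:

```python
FIRST_ANGLE_THRESHOLD_DEG = 90.0  # first angle threshold (assumed value)
HEAD_OFFSET_RANGE_DEG = 20.0      # preset head-offset range (assumed value)

def in_turning_section(turn_signal_on, steering_angle_deg, head_offset_deg):
    """Return True when at least one of the three cues is satisfied:
    turn signal on, steering-wheel angle above the first angle
    threshold, or driver head offset beyond the preset range."""
    return (turn_signal_on
            or abs(steering_angle_deg) > FIRST_ANGLE_THRESHOLD_DEG
            or abs(head_offset_deg) > HEAD_OFFSET_RANGE_DEG)
```

The OR combination means any single sensor can trigger the panoramic display; a stricter implementation could require two or more cues to agree.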
S103, generating a display instruction when the vehicle is in the target road section.
The display instruction is an instruction for controlling the vehicle-mounted display device to display panoramic images around the vehicle.
As described above, when the vehicle is on the target road section, the driver's line of sight is obstructed, so a visual blind area exists and there is a safety risk; a display instruction is therefore generated to control the vehicle-mounted display device to display the panoramic image, compensating for the driver's visual blind area.
It can be understood that when it is determined that the vehicle is not on the target link, the display instruction is not generated; accordingly, the vehicle may continue traveling while maintaining the current state.
And S104, displaying the panoramic image around the vehicle based on the display instruction.
The panoramic image around the vehicle includes a surround view of the vehicle, that is, a vehicle look-around image; it may also include images corresponding to other orientations such as the vehicle roof or the ground, which is not limited herein.
Based on the display instruction, the processor instructs the vehicle-mounted display device to display the panoramic image around the vehicle. The driver can then observe the scene around the vehicle through the panoramic image, thereby avoiding the visual blind area caused by the occlusion.
The vehicle driving assisting method provided by the embodiment of the disclosure comprises the steps of obtaining real-time information of a vehicle; judging whether the vehicle is on a target road section or not based on the real-time information; wherein, a visual blind area of a driver exists under the target road section; generating a display instruction when the vehicle is in the target road section; based on the display instruction, a panoramic image of the surroundings of the vehicle is displayed. Therefore, whether the vehicle is located on a target road section or not can be judged through real-time information of the vehicle, when the vehicle is judged to be located on the target road section, the existence of a visual blind area of the driver is indicated, a display instruction is generated at the moment, and then the vehicle-mounted display device is controlled to display panoramic images around the vehicle based on the display instruction, so that the driver can acquire environmental information around the vehicle by looking at the panoramic images, and the driver can conveniently judge whether risks or potential risks exist around the vehicle or not.
In some embodiments, the vehicle assisted driving method may further include:
generating an early warning instruction when the vehicle is in a target road section; the early warning instruction is at least related to the road section characteristics of the target road section;
based on the early warning instruction, safety early warning prompt is carried out.
The early warning instruction is used for indicating an early warning device of the vehicle to perform safety early warning so as to draw attention of a driver, thereby being beneficial to the driver to concentrate attention and improving driving safety.
The road section features are the features that distinguish the target road section from other road sections, such as uphill and downhill characteristics; combining the road section features improves the precision of the warning prompt, so that the driver can flexibly adopt a driving mode suited to each specific target road section.
Specifically, when the vehicle is in different target road sections, different early warning instructions can be generated so as to utilize the early warning instructions aiming at the target road sections to carry out targeted early warning reminding.
For example, when the vehicle is on a bridge-crossing section, a bridge-crossing warning instruction may be generated to instruct the vehicle's warning device to play a corresponding voice prompt, for example "the vehicle is on a bridge-crossing section", and/or the vehicle-mounted electronic device may display "bridge-crossing section, please pay attention"; when the vehicle is on an uphill section, an uphill warning instruction may be generated to instruct the warning device to play a corresponding voice prompt, for example "the vehicle is on an uphill section", and/or the vehicle-mounted electronic device may display "uphill section, please pay attention".
In other embodiments, when the target road section is another road section, the corresponding early warning instruction may be generated based on the road section characteristics, and the corresponding early warning device may be instructed to perform early warning prompt.
In other embodiments, the vehicle warning device may use other warning modes for the safety warning, for example, the vehicle loudspeaker may sound a "beep" to alert the driver. Several different modes may also be combined, for example playing the voice prompt while the loudspeaker sounds a "beep" to heighten the driver's vigilance, or combining voice with on-screen display, or other combinations, which are not limited herein.
In other embodiments, when the real-time information includes distance information between the vehicle and surrounding objects, the safety warning prompt further includes: highlighting, in the panoramic image, the detected obstacle, the area occupied by the obstacle, and the relative distance between the obstacle and the vehicle. Further, alarm information can be emitted through the vehicle loudspeaker at a frequency that depends on the distance level between the obstacle and the vehicle, for example, the smaller the distance, the higher the frequency; when the obstacle is very close to the vehicle, a voice prompt can be issued asking the driver to stop, so as to avoid a traffic accident and further ensure the safety of the driver, the passengers, and the vehicle.
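The distance-tiered alarm described above can be sketched as a mapping from distance to beep frequency plus a stop prompt; the tier boundaries and frequencies are illustrative assumptions:

```python
def alarm_for_distance(distance_m):
    """Map obstacle distance to an (alarm_hz, stop_prompt) pair:
    the smaller the distance, the higher the beep frequency, and
    below a stop distance a voice prompt asks the driver to stop."""
    if distance_m < 0.5:
        return 8.0, True    # very close: fast beeps plus stop prompt
    if distance_m < 1.5:
        return 4.0, False   # close: faster beeps
    if distance_m < 3.0:
        return 2.0, False   # nearby: slow beeps
    return 0.0, False       # far enough: no alarm
```

A production system would likely make the tiers configurable per road section type rather than hard-coded.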
In some embodiments, the real-time information includes environmental information, and the vehicle assisted driving method further includes:
acquiring environmental information;
determining obstacle information around the vehicle based on the environmental information; the obstacle information includes an obstacle position and/or a distance between the obstacle and the vehicle;
generating an obstacle prompting instruction based on the obstacle information;
based on the obstacle presenting instruction, the obstacle position and/or distance is presented.
Wherein the obstacle presenting instructions may be instructions generated based on obstacle information for presenting the driver with obstacle information surrounding the vehicle, including the position and/or distance of the obstacle.
Specifically, based on the acquired environmental information, the obstacle position and/or the distance between the obstacle and the vehicle, that is, the obstacle information, can be obtained; an obstacle prompting instruction is then generated based on the obstacle information, and the vehicle's voice prompting device issues a voice prompt based on that instruction. Illustratively, when an obstacle is detected 2 meters in front of the vehicle, an obstacle prompting instruction is generated and the vehicle issues a voice prompt such as "there is an obstacle 2 meters in front of the vehicle"; when an obstacle is detected 2 meters behind the vehicle, the corresponding instruction is generated and the vehicle issues a prompt such as "there is an obstacle 2 meters behind the vehicle". In this way, when obstacles exist around the vehicle, the driver can be accurately informed of their position and distance, improving driving safety.
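Generating the voice prompt text from the obstacle information might look like the following sketch; the position labels and wording are assumptions modeled on the examples above:

```python
def obstacle_prompt(position, distance_m):
    """Build the voice-prompt text for one detected obstacle from
    its position relative to the vehicle and its distance."""
    side = {"front": "in front of", "rear": "behind"}.get(position, "near")
    # :g drops a trailing .0 so "2.0 meters" reads as "2 meters"
    return f"There is an obstacle {distance_m:g} meters {side} the vehicle"
```

The same obstacle information could also drive the on-screen highlighting described earlier, so one detection feeds both the voice and the display channel.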
In some embodiments, the real-time information further includes current gear information, and the vehicle assisted driving method further includes:
acquiring gear information;
generating an auxiliary display instruction based on the gear information;
and based on the auxiliary display instruction, superposing and displaying the images corresponding to the front camera or the rear camera associated with the gear information in the panoramic image.
The auxiliary display instruction is used for indicating the vehicle-mounted display equipment to display the image shot by the front camera or display the image shot by the rear camera.
Specifically, when the panoramic image is displayed on the vehicle-mounted display device, the auxiliary display instruction instructs the device to highlight the image captured by the camera corresponding to the vehicle's direction of travel; that is, the image corresponding to the front camera or the rear camera associated with the gear information is superimposed on the panoramic image and highlighted, so that the driver can clearly observe local details of any obstacle in the area in front of or behind the vehicle.
In some embodiments, superimposing and displaying images corresponding to a front camera or a rear camera associated with gear information in a panoramic image includes:
when the gear information is a forward gear, controlling to display images corresponding to the front camera in a superimposed manner in the panoramic image;
And when the gear information is a backward gear, controlling to display images corresponding to the rear camera in a superimposed manner in the panoramic image.
Specifically, when the gear of the vehicle is in the forward gear, the auxiliary display instruction instructs the vehicle-mounted display device to highlight the image shot by the front camera, at this time, the panoramic image and the image shot by the front camera are displayed in the vehicle-mounted display device in a superimposed manner, and the image shot by the front camera is highlighted, so that the driver can fully acquire the view scene in front of the vehicle; when the gear of the vehicle is in the reverse gear, the auxiliary display instruction instructs the vehicle-mounted display device to highlight the image shot by the rear camera, the panoramic image and the image shot by the rear camera are displayed in the vehicle-mounted display device in a superimposed mode, and the image shot by the rear camera is highlighted, so that a driver can fully acquire the view field scene behind the vehicle.
It should be noted that the difference between the "images corresponding to the front camera and the rear camera" and the "panoramic image around the vehicle" described above is as follows: the panoramic image around the vehicle is obtained through cameras arranged at different positions on the vehicle, that is, local images corresponding to different directions around the vehicle are acquired and then stitched together to form the surrounding panoramic image, which is equivalent to a global image obtained by proportionally reducing and stitching the images corresponding to the original data; the images corresponding to the front camera and the rear camera are obtained by de-distorting the original images acquired by the corresponding cameras, and correspond to locally enlarged portions of that global image. When the vehicle is moving forward, the panoramic image is displayed on the vehicle-mounted display device and the image of the area in front of the vehicle is also highlighted, that is, the corresponding locally enlarged image of the area in front of the vehicle is displayed; when the vehicle is reversing, the panoramic image is displayed and the image of the area behind the vehicle is also highlighted, that is, the corresponding locally enlarged image of the area behind the vehicle is displayed.
With this arrangement, while conveniently observing the surrounding panoramic image, the driver can also observe the image in front of or behind the vehicle more clearly, making it possible to observe environmental details and to notice and avoid smaller objects, thereby further improving driving safety.
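The gear-to-camera mapping described in this section can be sketched as follows; the gear labels are illustrative assumptions:

```python
def overlay_camera_for_gear(gear):
    """Choose which camera image to superimpose on the panoramic
    view: forward gears select the front camera, the reverse gear
    selects the rear camera, and other gears show no overlay."""
    if gear in {"D", "L"}:   # forward gears
        return "front_camera"
    if gear == "R":          # reverse gear
        return "rear_camera"
    return None              # park/neutral: panoramic view only
```

Returning None for park and neutral keeps the display uncluttered when no direction of travel is implied.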
Based on the same inventive concept, the embodiments of the present disclosure further provide a vehicle driving assisting device, which is configured to execute the steps of any one of the vehicle driving assisting methods provided in the foregoing embodiments, and have the same or corresponding beneficial effects, which are not described herein.
In some embodiments, fig. 2 is a schematic structural diagram of a driving assisting device for a vehicle according to an embodiment of the disclosure.
As shown in fig. 2, the vehicle driving support apparatus includes:
an information acquisition module 21 for acquiring real-time information of the vehicle; the road section judging module 22 is used for judging whether the vehicle is in a target road section or not based on the real-time information; wherein, a visual blind area of a driver exists under the target road section; an instruction generation module 23 for generating a display instruction when the vehicle is in the target section; the image display module 24 is used for displaying panoramic images around the vehicle based on the display instruction.
In some embodiments, the real-time information includes at least one of environmental information, pitch angle, turn light status, steering angle of the steering wheel, and head status of the driver, and the road segment determining module 22 is configured to determine whether the vehicle is at least one of a turn road segment, a garage entry road segment, a garage exit road segment, a mountain entry road segment, a mountain exit road segment, a tunnel entry road segment, a tunnel exit road segment, a bridge road segment, and a slope passing road segment based on the environmental information; and/or, based on the pitching angle, judging whether the vehicle is on the uphill road section; and/or determining whether the vehicle is in a turn section based on at least one of a turn signal state, a steering wheel steering angle, and a driver head state.
In some embodiments, the target road segment includes an uphill road segment, and the road segment determination module 22 is configured to determine a current grade of a road on which the vehicle is located based on the real-time information; based on the current gradient, whether the vehicle is on the over-slope road section is judged.
In some embodiments, the road segment determination module 22 is configured to determine whether the current grade is greater than a preset grade threshold; when the current gradient is greater than a preset gradient threshold value, judging whether the duration time of the current gradient greater than the preset gradient threshold value is greater than a preset time threshold value; and when the current gradient is greater than the preset gradient threshold value and the duration is greater than the preset time threshold value, determining that the vehicle is on the uphill road section.
In some embodiments, the road segment judging module 22 is configured to perform image feature recognition based on the real-time information, and determine the identification feature; determining the current gradient of the road where the vehicle is located through neural network or pattern recognition based on the identification characteristics; and/or extracting a current pose of the vehicle based on the real-time information; based on the current pose, a pitch angle of the vehicle is determined to determine a current grade of a road on which the vehicle is located.
In some embodiments, the road segment determination module 22 is configured to determine whether at least one of a turn-on state of the turn-lamp, a steering angle of the steering wheel greater than a first angle threshold, and a driver head offset that is outside a preset range is satisfied based on at least one of a turn-lamp state, a steering angle of the steering wheel, and a driver head state; and when at least one condition that the turn light is in an on state, the steering angle of the steering wheel is larger than a first angle threshold value and the head of the driver is deviated and the deviation exceeds a preset range is met, determining that the vehicle is in a steering road section.
In some embodiments, the vehicle driving assisting device further includes a prompt module, and the instruction generating module 23 is configured to generate an early warning instruction when the vehicle is in the target road section; the early warning instruction is at least related to the road section characteristics of the target road section; the prompt module is used for carrying out safety early warning prompt based on the early warning instruction.
In some embodiments, the real-time information includes environmental information, and the information acquisition module 21 is configured to acquire the environmental information, and determine obstacle information around the vehicle based on the environmental information; the obstacle information includes an obstacle position and/or a distance between the obstacle and the vehicle; the instruction generation module is used for generating an obstacle prompting instruction based on the obstacle information; the prompting module is used for prompting the position and/or distance of the obstacle based on the obstacle prompting instruction.
In some embodiments, the real-time information further includes current gear information. The information acquisition module 21 is configured to acquire the gear information, and the instruction generation module 23 is configured to generate an auxiliary display instruction based on it; the image display module 24 is configured to superimpose, on the panoramic image, the image from the front camera or rear camera associated with the gear information, based on the auxiliary display instruction.
In some embodiments, the image display module 24 is configured to superimpose the front-camera image on the panoramic image when the gear information indicates a forward gear, and to superimpose the rear-camera image on the panoramic image when the gear information indicates a reverse gear.
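The gear-to-camera mapping can be sketched as follows; the gear labels are illustrative, since the disclosure only distinguishes a forward gear from a reverse (backward) gear:

```python
def camera_for_gear(gear):
    """Select which camera feed to superimpose on the panoramic image.

    Gear names ("D", "L", "R", "P", "N") are assumptions for this
    sketch; the patent speaks only of forward and backward gears.
    """
    if gear in ("D", "L"):      # forward gears -> front camera
        return "front_camera"
    if gear == "R":             # reverse gear -> rear camera
        return "rear_camera"
    return None                 # park/neutral: panoramic view only

print(camera_for_gear("R"))
```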
The presently disclosed embodiments also provide a computer-readable storage medium storing a program or instructions that cause a computer to perform the steps of any one of the vehicle driving assisting methods provided in the above embodiments.
Illustratively, the program or instructions cause the computer to perform a vehicle driving assisting method using an on-board display screen, the method comprising:
acquiring real-time information of a vehicle;
judging, based on the real-time information, whether the vehicle is in a target road section, wherein the driver has a visual blind area on the target road section;
generating a display instruction when the vehicle is in the target road section;
based on the display instruction, a panoramic image of the surroundings of the vehicle is displayed.
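Taken together, the steps above can be sketched as one loop iteration; the judging predicate and the display interface are stand-ins for the road section judging and image display modules, not part of the disclosure:

```python
def assisted_driving_step(real_time_info, is_target_section, display):
    """One iteration of the disclosed method: judge the road section
    from real-time vehicle information and, if it is a target section
    (one with a driver blind area), show the panoramic surround view.

    is_target_section and display are assumed interfaces for this sketch.
    """
    if is_target_section(real_time_info):
        display.show_panoramic()
        return True
    return False

class FakeDisplay:
    """Test double standing in for the on-board display screen."""
    def __init__(self):
        self.shown = False
    def show_panoramic(self):
        self.shown = True

d = FakeDisplay()
# Example predicate: treat a pitch above 8 degrees as a slope section.
print(assisted_driving_step({"pitch_deg": 9.0},
                            lambda info: info["pitch_deg"] > 8.0, d))
```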
On the basis of the foregoing implementations, an embodiment of the present disclosure further provides a vehicle, including a processor and a memory. The processor is configured to implement any one of the vehicle driving assisting methods provided in the foregoing embodiments by calling a program or instructions stored in the memory, thereby achieving the corresponding beneficial effects.
In some embodiments, fig. 3 is a schematic structural diagram of a vehicle according to an embodiment of the disclosure. As shown in fig. 3, the vehicle includes one or more processors 31 and a memory 32.
The processor 31 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory 32 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 31 may execute these program instructions to implement the vehicle driving assisting method provided by embodiments of the present disclosure and/or other desired functions. Various contents such as an input signal, a signal component, and a noise component may also be stored in the computer-readable storage medium.
The vehicle provided in the above embodiment may perform the steps of the vehicle driving assisting method according to any of the above embodiments, with the same or corresponding beneficial effects, which are not repeated here.
In some embodiments, on the basis of fig. 3, the vehicle further includes on-board sensors, such as lidar and cameras, which transmit the collected real-time information around the vehicle to the processor.
For example, the on-board sensors include one or more surround-view cameras, which capture the current road section conditions and real-time information about obstacles around the vehicle, and transmit this real-time information to the vehicle processor over a wireless connection, whereby the real-time information of the vehicle is acquired.
For example, the on-board sensors may further include a radar detection sensor, by which obstacles around the vehicle and the distance between each obstacle and the vehicle are acquired and transmitted to the processor. In other embodiments, obstacle information may be obtained by other technical means known to those skilled in the art, which is not limited herein.
In other embodiments, the vehicle may include other structural and functional components known to those skilled in the art, which are not described in detail here.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, vehicle driving assist method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, vehicle driving assist method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. A vehicle driving assist method, comprising:
acquiring real-time information of a vehicle;
judging, based on the real-time information, whether the vehicle is in a target road section, wherein the driver has a visual blind area on the target road section;
generating a display instruction when the vehicle is in the target road section;
and displaying the panoramic image around the vehicle based on the display instruction.
2. The vehicle driving assist method according to claim 1, wherein the real-time information includes at least one of environmental information, a pitch angle, a turn signal state, a steering angle of a steering wheel, and a driver head state, and judging whether the vehicle is in a target road section based on the real-time information includes:
judging, based on the environmental information, whether the vehicle is in at least one of a steering road section, a garage-entering road section, a garage-exiting road section, a mountain-entering road section, a mountain-exiting road section, a tunnel-entering road section, a tunnel-exiting road section, a bridge-passing road section, and a slope-passing road section;
and/or judging, based on the pitch angle, whether the vehicle is in a slope-passing road section;
and/or judging, based on at least one of the turn signal state, the steering angle of the steering wheel, and the driver head state, whether the vehicle is in a steering road section.
3. The vehicle driving assist method according to claim 1, wherein the target road section includes a slope-passing road section, and judging whether the vehicle is in the target road section based on the real-time information includes:
determining the current gradient of the road on which the vehicle is located based on the real-time information;
and judging, based on the current gradient, whether the vehicle is in the slope-passing road section.
4. The vehicle driving assist method according to claim 3, wherein judging whether the vehicle is in the slope-passing road section includes:
judging whether the current gradient is greater than a preset gradient threshold;
when the current gradient is greater than the preset gradient threshold, judging whether the duration for which the current gradient remains greater than the preset gradient threshold is greater than a preset time threshold;
the vehicle driving assist method further includes:
determining that the vehicle is in the slope-passing road section when the current gradient is greater than the preset gradient threshold and the duration is greater than the preset time threshold.
5. The vehicle driving assist method according to claim 3, wherein determining the current gradient of the road on which the vehicle is located based on the real-time information includes:
performing image feature recognition based on the real-time information to determine identification features, the identification features including at least one of a lane line image, a roadside landscape distance, and road sign indication information;
determining the current gradient of the road on which the vehicle is located from the identification features by means of a neural network or pattern recognition;
and/or extracting the current pose of the vehicle based on the real-time information;
and determining the pitch angle of the vehicle based on the current pose, so as to determine the current gradient of the road on which the vehicle is located, wherein the angle value of the current gradient is equal to the angle value of the pitch angle.
6. The vehicle driving assist method according to claim 2, wherein judging whether the vehicle is in a steering road section based on at least one of the turn signal state, the steering angle of the steering wheel, and the driver head state includes:
judging, based on at least one of the turn signal state, the steering angle of the steering wheel, and the driver head state, whether at least one of the following conditions is satisfied: the turn signal is on, the steering angle of the steering wheel is greater than a first angle threshold, or the driver's head is offset beyond a preset range;
the method further includes:
determining that the vehicle is in a steering road section when at least one of the conditions is satisfied.
7. The vehicle driving assist method according to any one of claims 1 to 6, characterized by further comprising:
generating an early warning instruction when the vehicle is in the target road section, the early warning instruction being related at least to the road section characteristics of the target road section;
and carrying out safety early warning prompt based on the early warning instruction.
8. The vehicle driving assist method according to any one of claims 1 to 6, characterized by further comprising:
acquiring environmental information;
determining obstacle information around the vehicle based on the environmental information; the obstacle information includes an obstacle position and/or a distance between the obstacle and the vehicle;
generating an obstacle prompting instruction based on the obstacle information;
and prompting the obstacle position and/or distance based on the obstacle prompting instruction.
9. The vehicle driving assist method according to any one of claims 1 to 6, characterized by further comprising:
acquiring gear information;
generating an auxiliary display instruction based on the gear information;
and superimposing, on the panoramic image, the image from the front camera or the rear camera associated with the gear information, based on the auxiliary display instruction.
10. The vehicle driving assist method according to claim 9, wherein superimposing, on the panoramic image, the image from the front camera or the rear camera associated with the gear information includes:
superimposing the front-camera image on the panoramic image when the gear information indicates a forward gear;
and superimposing the rear-camera image on the panoramic image when the gear information indicates a reverse gear.
11. A vehicle driving assist apparatus, comprising:
the information acquisition module is used for acquiring real-time information of the vehicle;
the road section judging module is configured to judge, based on the real-time information, whether the vehicle is in a target road section, wherein the driver has a visual blind area on the target road section;
the instruction generation module is used for generating a display instruction when the vehicle is in the target road section;
and the image display module is used for displaying panoramic images around the vehicle based on the display instruction.
12. A computer-readable storage medium storing a program or instructions that cause a computer to execute the steps of the vehicle driving support method according to any one of claims 1 to 10.
13. A vehicle, characterized by comprising: a processor and a memory;
the processor is configured to execute the steps of the vehicle driving support method according to any one of claims 1 to 10 by calling a program or instructions stored in the memory.
CN202210854845.1A 2022-07-18 2022-07-18 Vehicle auxiliary driving method, device, medium and vehicle Pending CN117445810A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210854845.1A CN117445810A (en) 2022-07-18 2022-07-18 Vehicle auxiliary driving method, device, medium and vehicle


Publications (1)

Publication Number Publication Date
CN117445810A true CN117445810A (en) 2024-01-26

Family

ID=89578688




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination