CN117152993A - AR device control method, vehicle-mounted AR device and computer-readable storage medium - Google Patents


Info

Publication number
CN117152993A
Authority
CN
China
Prior art keywords
vehicle
current
running
driving
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311091168.3A
Other languages
Chinese (zh)
Inventor
陈曰清
刘俊秀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN202311091168.3A priority Critical patent/CN117152993A/en
Publication of CN117152993A publication Critical patent/CN117152993A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems
    • G08G 1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems
    • G08G 1/165 Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems
    • G08G 1/167 Driving aids for lane monitoring, lane changing, e.g. blind spot detection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses an AR device control method, a vehicle-mounted AR device, and a computer-readable storage medium, relating to the technical field of near-eye display devices. The AR device control method includes the following steps: acquiring driving environment information of the host vehicle, and determining current motion state information of at least one traffic object in a preset area around the host vehicle according to the driving environment information; acquiring current driving state information of the host vehicle, and predicting a first collision risk coefficient between the host vehicle and the traffic object within a preset future time period according to the current driving state information and the current motion state information; and outputting preset collision warning prompt information when the first collision risk coefficient is determined to be greater than a first preset coefficient threshold. The application extends the driving-safety warning function of vehicle-mounted AR devices and solves the technical problem that, owing to the usage limitations of vehicle-mounted AR devices, accurate collision warnings cannot be provided to a driver through such devices.

Description

AR device control method, vehicle-mounted AR device and computer-readable storage medium
Technical Field
The application relates to the technical field of near-eye display devices, and in particular to an AR device control method, a vehicle-mounted AR device, and a computer-readable storage medium.
Background
In recent years, with the development of science and technology and people's increasingly diverse demands, AR (Augmented Reality) devices such as AR glasses and AR helmets have encountered unprecedented development opportunities. Application scenarios for AR glasses are growing, such as AR teaching and AR exhibit browsing; however, owing to the usage limitations of AR devices, vehicle-mounted AR devices are rarely applied in driving scenarios, because driving safety cannot be effectively guaranteed when a driver wears a currently available vehicle-mounted AR device while driving.
In a driving scenario, surrounding vehicles and objects must be avoided. To improve driving safety, a driver should be assisted in avoiding surrounding vehicles and objects by being given avoidance judgments in advance. Existing vehicle-mounted AR glasses cannot accurately detect the possibility that the host vehicle will collide, nor provide a driving warning to the driver, so there are serious hidden driving-safety hazards when driving while wearing a vehicle-mounted AR device.
Disclosure of Invention
The main purpose of the application is to provide an AR device control method, a vehicle-mounted AR device, and a computer-readable storage medium, aiming to solve the technical problem that, owing to the usage limitations of vehicle-mounted AR devices, accurate collision warnings cannot be provided to a driver through a vehicle-mounted AR device.
To achieve the above object, the present application provides an AR device control method, including:
acquiring driving environment information of the host vehicle, and determining current motion state information of at least one traffic object in a preset area around the host vehicle according to the driving environment information, wherein the current motion state information includes the current object position, current motion direction, current motion speed, and current motion acceleration of the traffic object;
acquiring current driving state information of the host vehicle, and predicting a first collision risk coefficient between the host vehicle and the traffic object within a preset future time period according to the current driving state information and the current motion state information;
and outputting preset collision warning prompt information when the first collision risk coefficient is determined to be greater than a first preset coefficient threshold.
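The three claimed steps (sense the surroundings, predict a collision risk coefficient, warn past a threshold) can be sketched as follows. This is an illustrative toy model only: the patent does not disclose a concrete risk formula, so the closest-approach heuristic, the 20 m distance scale, and all names below are assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class MotionState:
    x: float   # current position (m), in a host-relative frame
    y: float
    vx: float  # current velocity components (m/s)
    vy: float

def collision_risk(host: MotionState, obj: MotionState, horizon: float = 3.0) -> float:
    """Toy risk coefficient in [0, 1]: a closer predicted approach within
    the preset future time period (horizon) yields a higher risk."""
    # Relative position/velocity of the traffic object w.r.t. the host vehicle.
    rx, ry = obj.x - host.x, obj.y - host.y
    rvx, rvy = obj.vx - host.vx, obj.vy - host.vy
    # Time of closest approach under constant velocity, clamped to [0, horizon].
    v2 = rvx * rvx + rvy * rvy
    t = 0.0 if v2 == 0 else max(0.0, min(horizon, -(rx * rvx + ry * rvy) / v2))
    dx, dy = rx + rvx * t, ry + rvy * t
    dist = (dx * dx + dy * dy) ** 0.5
    return max(0.0, 1.0 - dist / 20.0)  # 20 m: assumed "safe" distance scale

def maybe_warn(risk: float, threshold: float = 0.8) -> bool:
    """Step 3: output the preset collision warning when risk > threshold."""
    return risk > threshold
```

A head-on closing scenario (host moving +y, object ahead moving -y) drives the risk toward 1.0 and triggers the warning, while a distant stationary object does not.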
Optionally, the step of predicting the first collision risk coefficient between the host vehicle and the traffic object within the preset future time period according to the current driving state information and the current motion state information includes:
predicting a driving track point sequence of the host vehicle based on the current driving state information, wherein the driving track point sequence includes driving positions of the host vehicle at a plurality of time nodes within the preset future time period;
predicting a motion track point sequence of the traffic object based on the current motion state information, wherein the motion track point sequence includes motion positions of the traffic object at a plurality of time nodes within the preset future time period, and the time nodes of the motion positions correspond one-to-one with the time nodes of the driving positions;
and observing the driving track point sequence and the motion track point sequence to determine the first collision risk coefficient between the host vehicle and the traffic object within the preset future time period.
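The track-point prediction described above can be sketched with constant-acceleration kinematics, p(t) = p0 + v*t + 0.5*a*t^2. The function name, the 3 s horizon, and the 0.5 s step are illustrative assumptions; generating the host's and the object's sequences with the same horizon and step yields the one-to-one time-node correspondence the claim requires.

```python
from typing import List, Tuple

def predict_track(pos: Tuple[float, float],
                  vel: Tuple[float, float],
                  acc: Tuple[float, float],
                  horizon: float = 3.0,
                  step: float = 0.5) -> List[Tuple[float, float]]:
    """Positions at time nodes t = step, 2*step, ..., horizon, assuming
    constant acceleration: p(t) = p0 + v*t + 0.5*a*t**2."""
    points = []
    t = step
    while t <= horizon + 1e-9:  # tolerate floating-point drift at the last node
        points.append((pos[0] + vel[0] * t + 0.5 * acc[0] * t * t,
                       pos[1] + vel[1] * t + 0.5 * acc[1] * t * t))
        t += step
    return points
```

Calling `predict_track` once with the host vehicle's driving state and once with each traffic object's motion state produces index-aligned sequences that can then be compared node by node.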
Optionally, the step of observing the driving track point sequence and the motion track point sequence to determine the first collision risk coefficient between the host vehicle and the traffic object within the preset future time period includes:
generating a world coordinate system with the current position of the host vehicle as the coordinate origin and the current driving direction of the host vehicle as the longitudinal axis;
selecting, from the driving track point sequence and the motion track point sequence respectively, the driving position and the motion position at the same time node as an observed driving position and an observed motion position;
constructing a first bounding box region corresponding to the host vehicle in the world coordinate system according to the observed driving position, and constructing a second bounding box region corresponding to the traffic object in the world coordinate system according to the observed motion position;
and determining the first collision risk coefficient between the host vehicle and the traffic object within the preset future time period according to the first bounding box region and the second bounding box region.
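A minimal sketch of the bounding-box construction: the claimed world frame puts the host vehicle at the origin with its driving direction along the longitudinal (+y) axis, and a box is placed around each observed track point. An axis-aligned box (AABB, cf. FIG. 5) is used here for simplicity; the embodiments also illustrate oriented boxes (OBB, cf. FIG. 6). The footprint dimensions and all names are assumptions.

```python
from typing import Tuple

# (min_x, min_y, max_x, max_y) in the host-centered world frame,
# where +y is the host vehicle's current driving direction.
Box = Tuple[float, float, float, float]

def footprint_box(center: Tuple[float, float], length: float, width: float) -> Box:
    """Axis-aligned bounding box around an object's footprint at one
    observed track point: length runs along +y, width along +x."""
    cx, cy = center
    return (cx - width / 2.0, cy - length / 2.0,
            cx + width / 2.0, cy + length / 2.0)
```

The first bounding box region would come from an observed driving position plus the host vehicle's dimensions, the second from an observed motion position plus the traffic object's dimensions (e.g. vehicle length and width from its motion state information).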
Optionally, the step of determining the first collision risk coefficient between the host vehicle and the traffic object within the preset future time period according to the first bounding box region and the second bounding box region includes:
identifying whether an overlap region exists between the first bounding box region and the second bounding box region;
if so, determining a predicted collision area between the host vehicle and the traffic object according to the size of the overlap region, and determining a predicted collision time between the host vehicle and the traffic object according to the time node corresponding to the overlap region;
and determining the first collision risk coefficient between the host vehicle and the traffic object within the preset future time period according to the size of the predicted collision area and the urgency of the predicted collision time.
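The overlap test and the risk combination described above might look like the following sketch, with axis-aligned boxes represented as `(min_x, min_y, max_x, max_y)` tuples. The patent does not specify how collision area and time urgency are weighted, so the 50/50 blend, the area scale, and the horizon below are assumptions.

```python
from typing import Tuple

Box = Tuple[float, float, float, float]  # (min_x, min_y, max_x, max_y)

def overlap_area(a: Box, b: Box) -> float:
    """Area of the overlap region between two axis-aligned boxes (0.0 if disjoint)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0.0 and h > 0.0 else 0.0

def risk_from_overlap(area: float, t_collision: float,
                      area_scale: float = 8.0, horizon: float = 3.0) -> float:
    """Combine the two claimed factors: a larger predicted collision area and
    a more urgent (earlier) predicted collision time both raise the risk."""
    area_term = min(1.0, area / area_scale)              # saturates at area_scale m^2
    urgency_term = max(0.0, 1.0 - t_collision / horizon)  # earlier -> closer to 1
    return 0.5 * area_term + 0.5 * urgency_term
```

In this toy model, a full-footprint overlap predicted at the current instant yields risk 1.0, while no overlap at the end of the horizon yields 0.0.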
Optionally, the current driving state information includes the current vehicle position, current driving direction, current driving speed, and current driving acceleration of the host vehicle, and after the step of determining the first collision risk coefficient between the host vehicle and the traffic object within the preset future time period, the method further includes:
when the first collision risk coefficient is determined to be greater than the first preset coefficient threshold, determining a plurality of candidate driving directions other than the current driving direction, and taking the current vehicle position, a candidate driving direction, the current driving speed, and the current driving acceleration as driving state information to be changed;
predicting a second collision risk coefficient between the host vehicle and the traffic object within the preset future time period according to the driving state information to be changed and the current motion state information;
taking the candidate driving direction whose corresponding second collision risk coefficient is smaller than a second preset coefficient threshold as a safe driving direction, wherein the second preset coefficient threshold is smaller than the first preset coefficient threshold;
and outputting driving guidance information corresponding to the safe driving direction.
Optionally, the step of outputting the driving guidance information corresponding to the safe driving direction includes:
determining lane information of the road section where the host vehicle is located;
determining, according to the lane information, whether the current driving lane of the host vehicle is a direction-changeable lane satisfying a target direction-change condition, wherein the target direction-change condition is a condition for changing the current driving direction to the safe driving direction;
if not, outputting braking prompt information for controlling the host vehicle to decelerate and brake as the driving guidance information corresponding to the safe driving direction;
if so, outputting steering prompt information for steering from the current driving direction to the safe driving direction as the driving guidance information corresponding to the safe driving direction.
Optionally, the step of outputting the driving guidance information corresponding to the safe driving direction includes:
acquiring a planned driving route of the host vehicle;
determining a real-time distance between the host vehicle and the intersection ahead based on the current vehicle position and the planned driving route, and judging whether the real-time distance is smaller than a preset distance threshold;
if the real-time distance is smaller than the preset distance threshold, identifying the signal light state of the intersection ahead according to the driving environment information;
determining whether the signal light state is a passable state;
if not, outputting braking prompt information for controlling the host vehicle to decelerate and brake as the driving guidance information corresponding to the safe driving direction;
if so, outputting steering prompt information for steering from the current driving direction to the safe driving direction as the driving guidance information corresponding to the safe driving direction.
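The intersection branch above reduces to a small decision rule. This sketch is illustrative only: the 50 m threshold and the string return values standing in for the rendered braking/steering prompts are assumptions, and the claim leaves the far-from-intersection case implicit (steering is assumed here).

```python
def intersection_guidance(distance_to_intersection: float,
                          light_passable: bool,
                          distance_threshold: float = 50.0) -> str:
    """Return which driving-guidance prompt to render for the safe
    driving direction when approaching an intersection ahead."""
    if distance_to_intersection < distance_threshold and not light_passable:
        # Close to a non-passable (e.g. red) light: advise decelerating and braking.
        return "brake"
    # Otherwise advise steering from the current to the safe driving direction.
    return "steer"
```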
Optionally, before the step of outputting the steering prompt information for steering from the current driving direction to the safe driving direction as the driving guidance information corresponding to the safe driving direction, the method further includes:
determining lane information of the road section where the host vehicle is located;
determining, according to the lane information, the direction corresponding to the current driving lane of the host vehicle, and determining whether that direction matches the safe driving direction;
if not, outputting braking prompt information for controlling the host vehicle to decelerate and brake as the driving guidance information corresponding to the safe driving direction;
if so, executing the step of outputting the steering prompt information for steering from the current driving direction to the safe driving direction as the driving guidance information corresponding to the safe driving direction.
The present application also provides a vehicle-mounted AR device, which is a physical device and includes a memory, a processor, and a program implementing the AR device control method that is stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the AR device control method described above.
The present application also provides a computer-readable storage medium having stored thereon a program implementing the AR device control method, the program being executed by a processor to implement the steps of the AR device control method described above.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of an AR device control method as described above.
According to the technical scheme of the application, the current motion state information of at least one traffic object in a preset area around the host vehicle is determined according to the driving environment information, where the current motion state information includes the current object position, current motion direction, current motion speed, and current motion acceleration of the traffic object. The current driving state information of the host vehicle is acquired, and the first collision risk coefficient between the host vehicle and the traffic object within the preset future time period is predicted according to the current driving state information and the current motion state information. When the first collision risk coefficient is determined to be greater than the first preset coefficient threshold, preset collision warning prompt information is output, so that the driver is reminded of surrounding vehicles and objects in time and given an avoidance judgment in advance through prompts rendered by the AR device, improving driving safety and effectively solving the technical problem that, owing to usage limitations, accurate collision warnings cannot be provided to the driver through the vehicle-mounted AR device.
According to the application, the motion state information of other vehicles and objects (such as pedestrians, animals, and obstacles) around the host vehicle during driving is dynamically identified through the AR device, the possibility of a collision is detected in advance, and a warning picture is rendered into the AR device, optionally supplemented by an audio warning from the AR device. The driver can thus assess the condition of vehicles on the road in advance and take evasive action in time. Intelligent assistance is provided to the driver through the AR device, the driver's safe-driving capability is improved, the driving-safety warning function of the vehicle-mounted AR device is effectively extended, and the technical problem that accurate collision warnings cannot be provided to the driver through the vehicle-mounted AR device owing to its usage limitations is effectively solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the technical solutions of the embodiments of the present application or of the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. It will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flowchart of a first embodiment of an AR device control method according to the present application;
FIG. 2 is a flowchart of a second embodiment of an AR device control method according to the present application;
FIG. 3 is a schematic view of a scenario in which collision risk prediction is performed in an embodiment of the present application;
FIG. 4 is a schematic diagram of a user interface for collision warning prompt by a vehicle-mounted AR device in an embodiment of the present application;
FIG. 5 is a schematic diagram of a model construction of an AABB bounding box in an embodiment of the present application;
FIG. 6 is a schematic diagram of a model configuration of an OBB bounding box in an embodiment of the present application;
FIG. 7 is a schematic diagram of a scenario in which overlapping regions of bounding boxes are analyzed according to an embodiment of the present application;
FIG. 8 is a schematic view of a scene of overlapping region analysis of bounding boxes according to another embodiment of the present application;
FIG. 9 is a schematic block diagram of an AR device control apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of the hardware operating environment of the vehicle-mounted AR device in an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
In order to make the above objects, features, and advantages of the present application more comprehensible, the embodiments are described in detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, but not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
Example 1
At present, application scenarios for AR glasses are growing, such as AR teaching and AR exhibit browsing; however, owing to the usage limitations of AR devices, vehicle-mounted AR devices are rarely applied in driving scenarios, because driving safety cannot be effectively guaranteed when a driver wears a currently available vehicle-mounted AR device while driving.
In a driving scenario, surrounding vehicles and objects must be avoided. To improve driving safety, a driver should be assisted in avoiding surrounding vehicles and objects by being given avoidance judgments in advance. Existing vehicle-mounted AR glasses cannot accurately detect the possibility that the host vehicle will collide, nor provide a driving warning to the driver, so there are serious hidden driving-safety hazards when driving while wearing a vehicle-mounted AR device.
Based on this, an embodiment of the present application provides an AR device control method, please refer to fig. 1, fig. 1 is a flowchart of a first embodiment of the AR device control method of the present application, which includes:
step S10, acquiring running environment information of the vehicle, and determining current motion state information of at least one traffic object of the vehicle in a peripheral preset area according to the running environment information;
In this embodiment, the driving environment information around the host vehicle may be sensed by various on-board sensors, which may include a camera, a front millimeter-wave radar, an ultrasonic radar, and the like. The driving environment information represents the environment within a preset area around the host vehicle (a region within a preset distance of the host vehicle).
The traffic object may be a vehicle, a person, an animal, or another obstacle in the preset area around the host vehicle. It should be noted that the current motion state information refers to parameters of a traffic object and includes the current object position, current motion direction, current motion speed, and current motion acceleration of the traffic object. The current motion state information may further include the type of the traffic object; if the type is a vehicle, it may also include the vehicle model, vehicle length, vehicle width, and the like. The current motion state information of at least one traffic object in the preset area around the host vehicle can be acquired dynamically through the camera and radar mounted on the host vehicle.
Step S20: acquiring current driving state information of the host vehicle, and predicting a first collision risk coefficient between the host vehicle and the traffic object within a preset future time period according to the current driving state information and the current motion state information;
In this embodiment, the current driving state information refers to parameters of the host vehicle and may include the current vehicle position, current driving direction, current driving speed, and current driving acceleration. It may also include the vehicle model, length, width, and the like of the host vehicle. The current vehicle position may be obtained through a GPS (Global Positioning System) positioning function.
It is understood that the first collision risk coefficient represents the degree of risk that the host vehicle will collide with the traffic object within the preset future time period; a larger first collision risk coefficient indicates a higher degree of risk.
In this embodiment, the host vehicle is equipped with an on-board camera and an on-board radar. The on-board camera photographs or films the traffic objects around the host vehicle to determine which traffic objects are present and their bearing and distance relative to the host vehicle. The on-board radar, combined with the bearings collected by the on-board camera, collects data such as the motion direction, speed, and acceleration of the traffic objects (that is, the current motion state information of the traffic objects). In addition, the host vehicle is equipped with on-board sensors that collect data such as the driving direction, speed, and acceleration of the host vehicle (that is, the current driving state information of the host vehicle) in real time. The vehicle-mounted AR device obtains the current driving state information and the current motion state information in real time for further processing, predicts the motion trends and tracks of the host vehicle and the traffic objects within the preset future time period, and from these predictions obtains the first collision risk coefficient between the host vehicle and the traffic objects within the preset future time period. The type of a traffic object can be determined by recognizing the image data acquired by the on-board camera.
Step S30: outputting preset collision warning prompt information when the first collision risk coefficient is determined to be greater than a first preset coefficient threshold.
In this embodiment, the first preset coefficient threshold is not specifically limited, so long as it allows accurate detection of whether the host vehicle and a traffic object will collide within the preset future time period. For example, the first preset coefficient threshold may be 0.8.
It is easy to understand that the collision warning prompt information is used to prompt the user that there is a risk of collision between the host vehicle and a traffic object within the preset future time period. The vehicle-mounted AR device can render the image content corresponding to the collision warning prompt information into its display interface, and can play the corresponding audio content through an earphone built into the vehicle-mounted AR device.
In this embodiment, as shown in fig. 3, the traffic objects around the host vehicle B include vehicle A and vehicle C. After the current driving state information of the host vehicle B and the current motion state information of vehicle C are acquired, the future driving tracks of the host vehicle B and vehicle C can be drawn by calculating their displacements over a future period of time. The first collision risk coefficient between the host vehicle B and vehicle C within the preset future time period is predicted to be low, i.e., smaller than the first preset coefficient threshold, so it can be essentially determined that the host vehicle B and vehicle C will not collide. Likewise, after the current driving state information of the host vehicle B and the current motion state information of vehicle A are acquired, the future driving tracks of the host vehicle B and vehicle A can be drawn. The first collision risk coefficient between the host vehicle B and vehicle A within the preset future time period is predicted to be high, i.e., greater than the first preset coefficient threshold, so it can be determined that the host vehicle B and vehicle A will very probably collide. At this point, the vehicle-mounted AR device outputs the preset collision warning prompt information, providing an accurate collision warning and guiding the driver to change from the current driving direction to a direction that is safe in the current driving environment, thereby avoiding the impending collision and improving driving safety.
The collision warning prompt information may include a real-time traffic image and the current motion state information of the traffic object predicted to collide with the host vehicle. As shown in fig. 4, a schematic diagram of a user interface for collision warning prompts on the vehicle-mounted AR device in an embodiment of the present application, the vehicle-mounted AR device displays the traffic image, and the vehicle surrounded by a rectangular frame in the figure is the dangerous traffic object predicted to collide with the host vehicle; it is marked in the traffic image to alert the user, for example with a red mark or a flashing prompt. Because surrounding vehicles and objects must be avoided while the host vehicle is driving, the driver is assisted in avoiding them by being given avoidance judgments in advance, improving driving safety. The vehicle-mounted AR device predicts the future driving tracks of the traffic objects around the host vehicle, thereby accurately predicting the future positions of the host vehicle and the traffic objects and dynamically detecting the possibility that the host vehicle will collide; that is, it accurately predicts the dangerous traffic objects likely to collide with the host vehicle, renders a warning picture in advance, supplements it with an audio warning, and provides the driver with a driving warning as early as possible. The vehicle-mounted AR device thus gains a collision warning function, breaking through the usage limitations of vehicle-mounted AR devices and providing accurate collision warnings to the driver.
In view of the above requirements, the technologies used in the embodiments of the present application mainly include a vehicle-track collision warning algorithm, rendering of a warning picture into the vehicle-mounted AR device (for example, AR glasses), and the audio warning technology of the vehicle-mounted AR device.
According to the technical scheme, the current motion state information of at least one traffic object in a peripheral preset area of a host vehicle is determined according to the driving environment information, wherein the current motion state information comprises the current object position, the current motion direction, the current motion speed and the current motion acceleration of the traffic object, the current driving state information of the host vehicle is obtained, the first collision risk coefficient of the host vehicle and the traffic object in a future preset time period is predicted according to the current driving state information and the current motion state information, and when the first collision risk coefficient is determined to be greater than a first preset coefficient threshold value, preset collision early warning prompt information is output, so that a driver can be timely reminded of surrounding vehicles and objects, the driving safety of the vehicle is improved, the driver can provide avoidance judgment in advance, the driver is assisted in avoiding surrounding vehicles and objects in a way of rendering prompt by AR equipment, and the technical problem that collision cannot be accurately provided for the driver through the vehicle-mounted AR equipment due to the use limitation is effectively solved.
According to the application, the movement state information of other vehicles and objects (such as pedestrians, animals, obstacles and the like) around the host vehicle during driving is dynamically identified through the AR device, the possibility of collision is detected in advance, and an early warning picture is rendered into the AR device, optionally supplemented by the sound early warning of the AR device. The driver can therefore evaluate the road conditions in advance and avoid traffic objects with collision risk in time, so that intelligent assistance is provided to the driver through the AR device, the safe driving capability of the driver is improved, the driving safety early warning function of the vehicle-mounted AR device is effectively expanded, and the technical problem that collision early warning cannot be accurately provided to the driver through the vehicle-mounted AR device due to its use limitation is effectively solved.
In one possible implementation manner, referring to fig. 2, the step of predicting a first collision risk coefficient of the host vehicle and the traffic object within a future preset time period according to the current running state information and the current movement state information includes:
step S21, predicting and obtaining a running track point sequence of the vehicle based on the current running state information, wherein the running track point sequence comprises running positions of a plurality of time nodes of the vehicle in a future preset time period;
Step S22, predicting and obtaining a motion trail point sequence of the traffic object based on the current motion state information;
the motion track point sequence comprises motion positions of a plurality of time nodes of the traffic object in a preset time length in the future, and the time nodes of the motion positions are in one-to-one correspondence with the time nodes of the driving positions.
For example, the preset duration may be 20 seconds, and prediction of 20 time nodes is performed within 20 seconds in the future, that is, prediction is performed every 1 second, so that the running positions corresponding to 20 time nodes of the vehicle within 20 seconds in the future are predicted, and the running positions corresponding to 20 time nodes form a running track point sequence.
And simultaneously predicting the motion positions of the traffic object corresponding to 20 time nodes in 20 seconds, wherein the motion positions corresponding to the 20 time nodes form a motion track point sequence.
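The node-by-node prediction of steps S21 and S22 can be sketched as follows. This is an illustrative sketch only: the constant-heading, constant-acceleration motion model and the coordinate convention (y axis along the driving direction) are assumptions, not the patent's exact algorithm:

```python
import math

def predict_track(pos, heading_deg, speed, accel, horizon_s=20, step_s=1.0):
    """Predict a sequence of track points (one per time node) under an
    assumed constant-heading, constant-acceleration motion model."""
    heading = math.radians(heading_deg)
    ux, uy = math.sin(heading), math.cos(heading)  # unit vector of motion
    points = []
    for k in range(1, int(horizon_s / step_s) + 1):
        t = k * step_s
        s = speed * t + 0.5 * accel * t * t  # displacement along the heading
        points.append((pos[0] + ux * s, pos[1] + uy * s))
    return points

# Host vehicle at the origin, heading along +y at 10 m/s, no acceleration:
host_track = predict_track((0.0, 0.0), 0.0, 10.0, 0.0)
# 20 track points, one per second; after 20 s the vehicle is 200 m ahead.
```

The same function can be applied to each traffic object with its own current motion state, yielding the motion track point sequence whose time nodes correspond one-to-one with those of the driving track.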
Step S23, observing the running track point sequence and the movement track point sequence, and determining a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future.
In the embodiment, the positions of a plurality of time nodes of the vehicle and the traffic object in the future preset time are dynamically predicted, so that the probability of missed judgment caused by too few predicted time points is reduced, the accuracy of collision risk prediction is improved, and the first collision risk coefficient of the vehicle and the traffic object in the future preset time is accurately determined.
The step of observing the running track point sequence and the movement track point sequence to determine a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future includes:
step A10, a world coordinate system is generated by taking the current position of the vehicle as a coordinate origin and the current running direction of the vehicle as a longitudinal axis;
in this embodiment, when collision prediction is performed, a world coordinate system is first generated. As shown in fig. 4, the vehicle located at the origin of coordinates is the host vehicle; that is, the center of the host vehicle at its current position is taken as the origin of coordinates, and the current running direction of the host vehicle is taken as the vertical axis, so as to construct the world coordinate system shown in fig. 4, which also shows the coordinate positions of the other traffic objects in this coordinate system.
Step A20, respectively selecting the running position and the movement position of the same time node from the running track point sequence and the movement track point sequence as an observed running position and an observed movement position;
step A30, constructing a first bounding box area corresponding to the vehicle in the world coordinate system according to the observed driving position, and constructing a second bounding box area corresponding to the traffic object in the world coordinate system according to the observed movement position;
And step A40, determining a first collision risk coefficient of the vehicle and the traffic object in a future preset time period according to the first bounding box area and the second bounding box area.
In this embodiment, as shown in fig. 5, an AABB (Axis-Aligned Bounding Box) algorithm is adopted to approximate the host vehicle and the traffic object each by a bounding box, and the area covered by the bounding box is the bounding box area. The bounding box exactly encloses the host vehicle, the four corners of the host vehicle fall on the four sides of the bounding box, and the four sides of the bounding box are all parallel to the coordinate axes of the world coordinate system. It will be appreciated by those skilled in the art that an AABB bounding box is simply a bounding box aligned with the coordinate axes: it never rotates, which makes it very simple to use, but when the enclosed object rotates, the box must grow to continue enclosing it. That is, after the object rotates, the AABB bounding box remains parallel to the axes but changes in size; the area of the AABB bounding box therefore varies as the object rotates.
As shown in fig. 6, an OBB (Oriented Bounding Box) corresponds to the AABB bounding box, except that when it is constructed, the bounding box exactly encloses the vehicle with all four sides parallel to the long or short side of the vehicle. The OBB bounding box rotates along with the orientation of the object, so its area does not change. The OBB bounding box is therefore more accurate than the AABB bounding box, but the amount of computation for detecting a collision is somewhat larger.
In the present embodiment, the first bounding box region refers to the region covered by the bounding box constructed for the host vehicle, and the second bounding box region refers to the region covered by the bounding box constructed for the traffic object.
Of course, the bounding box may be a circular bounding box, an elliptical bounding box, a capsule-shaped bounding box, or the like, in addition to the rectangular bounding box shown in fig. 5 and 6, which is not particularly limited in this embodiment.
It should be noted that, for the construction of a vehicle bounding box, a database of vehicle length and width parameters of all vehicle types is first built; vehicle images or videos acquired by the vehicle-mounted cameras are then recognized to obtain the vehicle type; the database is queried to obtain the length and width of that vehicle type; and finally the bounding box of the vehicle is constructed by combining the position and running direction of the vehicle. The bounding boxes constructed for different vehicle types or different running directions therefore differ in size. For the bounding box of a pedestrian, a preset fixed value may be used, for example 0.5 m × 0.5 m, which is not limited in this embodiment.
When the first bounding box area and the second bounding box area of a certain future time node overlap, a collision between the host vehicle and the traffic object is predicted at that time node. Therefore, whether a collision will occur between the host vehicle and the traffic object within the future preset time period can be predicted based on whether an overlapping area is produced between the first bounding box area and the second bounding box area, and the first collision risk coefficient of the host vehicle and the traffic object within the future preset time period can then be determined.
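The node-by-node overlap check can be sketched as follows. The box half-sizes, the strict-inequality convention, and the simplified AABB construction (boxes centered on the track points) are assumptions made for illustration:

```python
def aabb_overlap(a, b):
    """a, b: (min_x, min_y, max_x, max_y). Two axis-aligned boxes overlap
    iff their projections overlap on both the x axis and the y axis."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def box_around(center, half_w, half_l):
    """AABB of half-width half_w and half-length half_l around a track point."""
    x, y = center
    return (x - half_w, y - half_l, x + half_w, y + half_l)

def first_overlap_node(host_track, object_track,
                       host_size=(1.0, 2.5), obj_size=(1.0, 2.5)):
    """Walk the paired time nodes of the two track point sequences and return
    the index of the first node whose bounding boxes overlap, or None."""
    for i, (hp, op) in enumerate(zip(host_track, object_track)):
        if aabb_overlap(box_around(hp, *host_size), box_around(op, *obj_size)):
            return i
    return None
```

For example, a host vehicle advancing 10 m per node toward a stationary object at y = 54 m first overlaps it at node index 4 (the fifth second), at which point a collision would be predicted.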
As an example, the step of determining, according to the first bounding box area and the second bounding box area, a first collision risk coefficient of the host vehicle and the traffic object within a preset time period in the future includes:
step B10, identifying whether an overlapping area is generated between the first bounding box area and the second bounding box area;
if there is an overlap of the first bounding box region with the second bounding box region, then their projections on both the x-axis and the y-axis must overlap, as shown in fig. 7. If the first bounding box region does not overlap the second bounding box region, then their projections may overlap on one of the x-axis (horizontal axis) and the y-axis (vertical axis) (as shown in fig. 8), or their projections may overlap on neither axis.
Step B20, if yes, determining the predicted collision area of the vehicle and the traffic object according to the area size of the overlapped area;
in this embodiment, the predicted collision area is used to characterize the area of the region where the host vehicle is predicted to collide with the traffic object. It can be understood that the larger the area of the overlapping region, the larger the predicted collision area between the host vehicle and the traffic object, so the predicted collision area can be determined according to the area of the overlapping region.
For example, as shown in fig. 7, the product of the overlap of the two bounding box regions' projections on the horizontal axis and the overlap of their projections on the vertical axis of the world coordinate system may be calculated as the predicted collision area.
And step B30, determining a first collision risk coefficient of the vehicle and the traffic object within a preset time period in the future according to the size of the predicted collision area.
It can be understood that the larger the predicted collision area is, the larger the first collision risk coefficient of the vehicle and the traffic object in the future preset time period is. Therefore, according to the size of the predicted collision area, a first collision risk coefficient of the vehicle and the traffic object within a preset time period in the future can be determined.
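The area computation of steps B10–B30 might be sketched as follows. The normalisation of the area into a coefficient in [0, 1] is an assumed choice, since the embodiment does not fix the exact mapping from area to risk coefficient:

```python
def overlap_area(a, b):
    """a, b: (min_x, min_y, max_x, max_y) in the world coordinate system.
    The predicted collision area is the product of the overlaps of the two
    boxes' projections on the horizontal and vertical axes (0 if disjoint)."""
    dx = min(a[2], b[2]) - max(a[0], b[0])  # overlap of x-axis projections
    dy = min(a[3], b[3]) - max(a[1], b[1])  # overlap of y-axis projections
    return dx * dy if dx > 0 and dy > 0 else 0.0

def area_risk_coefficient(area, host_box_area):
    """Map the predicted collision area to a first collision risk coefficient
    in [0, 1] by normalising against the host box area (an assumption)."""
    return min(area / host_box_area, 1.0)
```

A larger overlap thus yields a larger coefficient, matching the monotonic relationship stated above.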
As another example, the step of determining the first collision risk coefficient of the host vehicle and the traffic object within the future preset time period according to the first bounding box area and the second bounding box area includes:
step C10, identifying whether an overlapping area is generated between the first bounding box area and the second bounding box area;
step C20, if yes, determining the predicted collision time of the vehicle and the traffic object according to the time node corresponding to the overlapped area;
In this embodiment, the predicted collision time is used to characterize the time at which a collision between the host vehicle and the traffic object is predicted. When an overlapping area is predicted to exist between the first bounding box area and the second bounding box area at a certain future time node, a collision between the host vehicle and the traffic object is predicted at that time node, and the time node corresponding to the overlapping area is taken as the predicted collision time.
And step C30, determining a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the urgency of the predicted collision time.
It can be understood that the closer the predicted collision time is to the current time, the greater the first collision risk coefficient of the vehicle and the traffic object within the preset time length in the future is. Therefore, according to the urgency of the predicted collision time, the first collision risk coefficient of the vehicle and the traffic object within the preset time period in the future can be determined.
As a further example, the step of determining the first collision risk coefficient of the host vehicle and the traffic object within the future preset time period according to the first bounding box area and the second bounding box area includes:
step D10, identifying whether an overlapping area is generated between the first bounding box area and the second bounding box area;
Step D20, if yes, determining the predicted collision area of the vehicle and the traffic object according to the area size of the overlapping area, and determining the predicted collision time of the vehicle and the traffic object according to the time node corresponding to the overlapping area;
and step D30, determining a first collision risk coefficient of the vehicle and the traffic object within a preset time period in the future according to the size of the predicted collision area and the urgency of the predicted collision time.
In this way, the predicted collision area of the host vehicle and the traffic object is determined according to the area size of the overlapping region, the predicted collision time is determined according to the time node corresponding to the overlapping region, and the two factors of the size of the predicted collision area and the urgency of the predicted collision time are then considered together, so that the first collision risk coefficient of the host vehicle and the traffic object within the future preset time period is determined more accurately.
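The combination of the two factors in steps D10–D30 might be sketched as follows. The linear weighting and the weight values are assumptions, since the embodiment does not specify how the factors are combined:

```python
def combined_risk(area, host_box_area, t_collision, horizon_s=20.0,
                  w_area=0.5, w_time=0.5):
    """First collision risk coefficient from two factors: the normalised
    predicted collision area and the urgency of the predicted collision
    time. Weighted linear combination is an assumed design choice."""
    area_factor = min(area / host_box_area, 1.0)
    urgency = 1.0 - min(t_collision / horizon_s, 1.0)  # sooner -> closer to 1
    return w_area * area_factor + w_time * urgency
```

An imminent, large-area overlap yields a coefficient near 1, while a small overlap far in the future yields a coefficient near 0, consistent with the monotonic relationships stated above.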
In one possible implementation manner, the current running state information includes a current vehicle position, a current running direction, a current running speed and a current running acceleration of the host vehicle, and after the step of determining the first collision risk coefficient of the host vehicle and the traffic object within a future preset time period, the method further includes:
Step E10, under the condition that the first collision risk coefficient is determined to be larger than a first preset coefficient threshold value, determining a plurality of to-be-selected running directions except the current running direction, and taking the current vehicle position, the to-be-selected running direction, the current running speed and the current running acceleration as to-be-changed running state information;
step E20, predicting a second collision risk coefficient of the vehicle and the traffic object in a future preset time period according to the running state information to be changed and the current motion state information;
step E30, taking the candidate driving direction whose second collision risk coefficient is smaller than a second preset coefficient threshold value as a safe driving direction, wherein the second preset coefficient threshold value is smaller than the first preset coefficient threshold value;
to facilitate understanding, an example is given in which the possible traveling directions of the host vehicle are divided into six: straight ahead, straight behind, left, right, front-left and front-right. The current running direction of the host vehicle is straight ahead. When it is predicted from the current running state information that keeping the current running direction will produce a collision with a nearby traffic object, that is, the first collision risk coefficient is predicted to be larger than the first preset coefficient threshold value, the four candidate running directions left, right, front-left and front-right are evaluated (to avoid driving against the lane direction, the direction straight behind is excluded from the candidates). By analyzing the current running speed and current running acceleration of the host vehicle, it is predicted for each candidate whether the host vehicle would collide with nearby traffic objects within the future preset time period, that is, whether the second collision risk coefficient reaches the second preset coefficient threshold value, where the second preset coefficient threshold value is smaller than the first. The candidate running directions whose second collision risk coefficient is smaller than the second preset coefficient threshold value are taken as safe running directions, the optimal safe running direction selectable by the host vehicle is determined according to the collision prediction calculation, and, further combined with the navigation map, the optimal running direction is displayed in the vehicle-mounted AR device.
After the step E30, a step E40 is executed to output the driving guidance information corresponding to the safe driving direction.
Further, the driving guidance information includes steering prompt information and braking prompt information, and step E40, the step of outputting the driving guidance information corresponding to the safe driving direction, includes:
step F10, obtaining navigation information, and generating and outputting steering prompt information or braking prompt information by combining the safe driving direction;
and step F20, if no safe driving direction exists, outputting braking prompt information.
In this embodiment, for all traffic objects with potential collision risk (i.e., dangerous traffic objects), the safe running directions of the host vehicle are evaluated so that the user can avoid the dangerous traffic objects accordingly. The result of the evaluation may be one or more safe running directions, or none. It must therefore be further determined whether at least one safe running direction exists in which the host vehicle can avoid all dangerous traffic objects. If not, the user is reminded by voice to brake the vehicle. If so, the navigation information provided by the navigation map is further combined to judge whether driving in that direction would violate traffic rules, such as running a red light or driving against traffic; if it would, the user is reminded by voice to brake the vehicle, and if not, the safe running direction is output.
In this way, under the condition that the first collision risk coefficient is determined to be larger than the first preset coefficient threshold value, a plurality of candidate driving directions other than the current driving direction are determined, and the current vehicle position, the candidate driving direction, the current driving speed and the current driving acceleration are taken as the driving state information to be changed. A second collision risk coefficient of the host vehicle and the traffic object within the future preset time period is then predicted according to the driving state information to be changed and the current motion state information, and the candidate driving directions whose second collision risk coefficient is smaller than the second preset coefficient threshold value are taken as safe driving directions, where the second preset coefficient threshold value is smaller than the first preset coefficient threshold value. Driving guidance information corresponding to the safe driving direction is output, so that the optimal driving direction is predicted for the user based on the motion state information of the traffic objects, the collision threat to the user is reduced to the greatest extent, the use limitation of the vehicle-mounted AR device is broken through, and a safe driving direction prompt is provided to the user by the vehicle-mounted AR device.
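Steps E10–E30 can be sketched as a filter over the candidate directions. The direction names, the second threshold value, and the hypothetical risk model used in the usage example are all assumptions:

```python
# The candidate set excludes the rearward direction, to avoid driving
# against the lane direction (as in the example above).
CANDIDATES = ("left", "right", "front-left", "front-right")

def safe_directions(predict_second_risk, threshold_2=0.3):
    """Keep the candidate directions whose predicted second collision risk
    coefficient is below the second preset coefficient threshold
    (the threshold value 0.3 is an assumption)."""
    return [d for d in CANDIDATES if predict_second_risk(d) < threshold_2]

# Hypothetical second-risk model: only the two leftward options are safe.
risks = {"left": 0.2, "right": 0.8, "front-left": 0.1, "front-right": 0.6}
safe = safe_directions(risks.get)  # -> ["left", "front-left"]
```

In a full implementation, `predict_second_risk` would re-run the trajectory and bounding box collision prediction with the to-be-changed driving state information substituted for the current driving direction.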
Example two
In another embodiment of the present application, the same or similar content as that of the first embodiment may be referred to the description above, and will not be repeated. On the basis, the step of outputting the driving guide information corresponding to the safe driving direction comprises the following steps:
Step G10, determining lane information of a road section where the vehicle is located;
in the present embodiment, the lane information includes the number of lanes of the current road section (for example, single lane or multiple lanes) and, when the current road section has multiple lanes, information on whether lane changing is possible between the lanes. It is known to those skilled in the art that if the lane line between two lanes is a solid line, lane changing is not allowed.
It is easy to understand that the lane information of the road section where the host vehicle is located can be determined according to the driving environment information. For example, a vehicle-mounted camera is mounted on the host vehicle, and the driving environment around the host vehicle can be photographed or recorded by the vehicle-mounted camera to obtain the lane information of the road section where the host vehicle is located. Alternatively, the current vehicle position can be located through GPS to determine the road section where the host vehicle is currently located, and the lane information of that road section can be determined from the navigation map through networking.
G20, determining whether a current driving lane of the vehicle is a direction-changeable lane with a target direction-changing condition according to the lane information, wherein the target direction-changing condition is a direction-changing condition for turning the current driving direction to a safe driving direction;
the current driving lane refers to a lane where the vehicle is driving currently. It is possible to indicate whether the own vehicle is currently lane-changeable to the left or right lane according to the lane information of the current driving lane.
Step G30, if not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
if the current driving lane of the vehicle is not the direction-changeable lane with the target direction-changing condition, the braking prompt information for controlling the speed reduction and braking of the vehicle is output as driving guide information corresponding to the safe driving direction. For example, the safe driving direction is to the right front, but the current driving lane is not allowed to change to the right, and at this time, the vehicle-mounted AR device outputs a brake prompt message for controlling the deceleration braking of the vehicle, and prompts the driver.
And G40, if so, turning the current running direction to the steering prompt information of the safe running direction, and outputting the steering prompt information as the running guide information corresponding to the safe running direction.
If the current driving lane of the host vehicle is a direction-changeable lane satisfying the target direction-changing condition, steering prompt information for turning the current driving direction to the safe driving direction is output as the driving guidance information corresponding to the safe driving direction. For example, if the safe driving direction is front-left and the current driving lane allows a leftward lane change, the vehicle-mounted AR device outputs steering prompt information for driving toward the front-left to prompt the driver.
In this way, whether the current driving lane of the host vehicle is a direction-changeable lane satisfying the target direction-changing condition is determined according to the lane information of the road section where the host vehicle is located; if not, braking prompt information is output, and if so, steering prompt information for steering in the safe driving direction is output. Combined with this judgment of the actual traffic conditions, the output driving-assistance prompt is more reasonable and accurate, the robustness of the vehicle-mounted AR device with the driving-assistance function is improved, the use limitation of the vehicle-mounted AR device is broken through, and intelligent driving assistance can be accurately provided to the driver through the vehicle-mounted AR device.
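The decision of steps G20–G40 can be sketched as follows. The `lane_info` dictionary schema and the direction names are assumptions introduced for illustration:

```python
def guidance_for_lane(safe_dir, lane_info):
    """Choose between a steering prompt and a braking prompt, depending on
    whether the current lane permits changing toward safe_dir.
    lane_info keys like 'can_change_left' are an assumed schema."""
    if safe_dir is None:
        return "brake"              # no safe direction exists at all
    if "left" in safe_dir:
        side = "left"
    elif "right" in safe_dir:
        side = "right"
    else:
        return "steer"              # straight ahead needs no lane change
    # A solid lane line means the change is not permitted -> brake instead.
    return "steer" if lane_info.get("can_change_" + side, False) else "brake"
```

For example, with a safe driving direction of front-right but a solid line on the right, `guidance_for_lane("front-right", {"can_change_right": False})` yields the braking prompt, matching the example in the paragraph above.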
In addition, in still another embodiment of the present application, the same or similar contents as those of the first embodiment may be referred to the description above, and will not be repeated. On the basis, the step of outputting the driving guide information corresponding to the safe driving direction comprises the following steps:
step H10, acquiring a pre-driving route of the vehicle;
the pre-driving route is a driving navigation route from a starting point to a destination of the vehicle. It will be appreciated that the current vehicle position is often the starting point position of the pre-travel route.
In this embodiment, the driver may locate the current vehicle position through the GPS navigation function of the vehicle-mounted terminal or another electronic device (for example, a mobile phone) and obtain the pre-driving route after inputting the destination; the vehicle-mounted terminal or other electronic device is then communicatively connected with the vehicle-mounted AR device through Bluetooth, WiFi (Wireless Fidelity) or a similar connection, and sends the information of the pre-driving route of the vehicle to the vehicle-mounted AR device, so that the current pre-driving route information is obtained. Alternatively, the current position of the vehicle can be located through the vehicle navigator carried by the vehicle-mounted AR device, and the pre-driving route can be obtained directly after the driver inputs the destination. It will be appreciated that the pre-driving route should be updated in real time: as the host vehicle travels, the current vehicle position changes continuously, and since the starting point of the pre-driving route corresponds to the current vehicle position, the pre-driving route is updated accordingly as the starting point changes.
Step H20, determining a real-time distance between the vehicle and the intersection ahead based on the current vehicle position and the pre-running route, and judging whether the real-time distance is smaller than a preset distance threshold value;
step H30, if the real-time distance is smaller than a preset distance threshold value, identifying the signal lamp state of the intersection in front according to the driving environment information;
it should be noted that, the preset distance threshold is not limited specifically, and a person skilled in the art can set the preset distance threshold according to actual situations to accurately indicate that the vehicle is about to pass through the front intersection, and the vehicle-mounted camera or the camera of the vehicle-mounted AR device is sufficient to capture the signal lamp image of the front intersection, so as to identify the signal lamp state of the front intersection according to the signal lamp image (for example, whether the signal lamp image is a red light or a green light) in the driving environment information.
Step H40, determining whether the signal lamp state is a traffic state;
as will be appreciated by those skilled in the art, the signal lamp states include a traffic (passable) state and a non-traffic state. In existing traffic regulations, the signal lamp generally includes a red light, a yellow light and a green light, where the green light represents the traffic state and the red light represents the non-traffic state; in this embodiment, the yellow light is also classified as the non-traffic state.
Step H50, if not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
if the signal lamp state is not the traffic state, the vehicle-mounted AR equipment outputs braking prompt information for controlling the speed reduction and braking of the vehicle to prompt a driver.
And step H60, if yes, turning the current running direction to the steering prompt information of the safe running direction, and outputting the steering prompt information as the running guide information corresponding to the safe running direction.
If the signal lamp state is the traffic state, the vehicle-mounted AR equipment outputs steering prompt information for steering to the safe driving direction, and prompts a driver.
In this way, the signal lamp state of the intersection ahead is identified according to the driving environment information; if the signal lamp state is not the traffic state, braking prompt information for controlling the deceleration and braking of the host vehicle is output, and if it is the traffic state, steering prompt information for steering to the safe driving direction is output to prompt the driver. The influence of the complex driving environment of the vehicle is thereby fully considered, illegal driving such as running a red light is avoided, driving safety is improved, and the intelligence of the vehicle-mounted AR device with the driving-assistance function is improved.
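Steps H20–H60 can be sketched as a gate on the steering prompt. The 50 m distance threshold is an assumed value (the embodiment leaves it to the implementer), and yellow is treated as non-passable as stated above:

```python
def intersection_guidance(distance_m, light_state, dist_threshold=50.0):
    """Gate the steering prompt on the signal lamp state when the real-time
    distance to the intersection ahead falls below the preset threshold.
    The 50 m default is an assumption; yellow counts as non-passable."""
    near_intersection = distance_m < dist_threshold
    if near_intersection and light_state in ("red", "yellow"):
        return "brake"   # non-traffic state -> deceleration/braking prompt
    return "steer"       # traffic state, or intersection still far away
```

Far from the intersection the signal lamp is not consulted at all, which matches the step order above: the distance check in step H20 precedes the light-state recognition in step H30.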
In one possible implementation manner, before the step of outputting the steering prompt information for steering the current driving direction to the safe driving direction as the driving guide information corresponding to the safe driving direction, the method further includes:
step I10, determining lane information of a road section where the vehicle is located;
in the present embodiment, the lane information includes the direction of each lane of the current road section, for example, whether it is a straight lane or a left-turn lane, and of course, may also include the number of lanes of the current road section, for example, whether it is a single lane or a multiple lane.
Step I20, determining the direction corresponding to the current driving lane of the vehicle according to the lane information, and determining whether the direction is matched with the safe driving direction;
the direction corresponding to the current driving lane is determined according to the lane information; for example, as the host vehicle approaches the intersection ahead, the current driving lane may be a straight lane, a left-turn lane, a right-turn lane or the like. It is easy to understand that a straight lane points in the straight-ahead direction, a left-turn lane points in the left-turn direction, and a right-turn lane points in the right-turn direction. In addition, if the current driving lane is a U-turn lane, since a U-turn generally begins with a left turn, the direction corresponding to the U-turn lane may also be regarded as the left-turn direction in this embodiment.
Step I30, if not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
if the direction corresponding to the current driving lane is not matched with the safe driving direction, the vehicle-mounted AR equipment outputs braking prompt information for controlling the deceleration braking of the vehicle.
Step I40, if yes, executing the step of outputting the steering prompt information for steering the current driving direction to the safe driving direction as driving guide information corresponding to the safe driving direction.
If the direction corresponding to the current driving lane is matched with the safe driving direction, the vehicle-mounted AR equipment outputs steering prompt information for steering to the safe driving direction.
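The lane-matching decision of steps I10 to I40 can be sketched as a lookup from lane type to the direction it permits, with a U-turn lane mapped to the left-turn direction as described above. This is an illustrative Python sketch, not part of the disclosure; the lane labels and prompt strings are assumed names.

```python
# Mapping from lane type to the driving direction it permits. Treating a
# U-turn lane as a left turn follows the embodiment above; the labels
# themselves are illustrative assumptions.
LANE_DIRECTION = {
    "straight": "straight",
    "left_turn": "left",
    "right_turn": "right",
    "u_turn": "left",  # a U-turn generally begins with a left turn
}

def guide_for_lane(current_lane, safe_direction):
    """Return the driving guide prompt: a steering prompt when the
    current lane's direction matches the safe driving direction,
    otherwise a braking prompt."""
    if LANE_DIRECTION.get(current_lane) == safe_direction:
        return "steer_to_safe_direction"
    return "brake"
```

For example, a vehicle in a U-turn lane with a left-turn safe direction would receive the steering prompt, while one in a straight lane would be prompted to brake.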
According to the vehicle-mounted AR device, braking prompt information for controlling deceleration and braking of the vehicle is output when the direction corresponding to the current driving lane is determined not to match the safe driving direction, and steering prompt information for steering toward the safe driving direction is output when the two are determined to match. The user wearing the vehicle-mounted AR device is thus guided in accordance with both the traffic rules and the direction of the current driving lane, the output driving-assistance prompts are more reasonable and accurate, and the safety of the driving guidance provided by the vehicle-mounted AR device is improved.
Example III
An embodiment of the present invention further provides an AR device control apparatus, referring to fig. 9, where the AR device control apparatus includes:
an acquisition module 10, configured to acquire driving environment information of the vehicle, and to determine, according to the driving environment information, current motion state information of at least one traffic object in a preset area around the vehicle, wherein the current motion state information includes the current object position, current motion direction, current motion speed, and current motion acceleration of the traffic object;
the prediction module 20 is configured to obtain current running state information of the vehicle, and predict a first collision risk coefficient of the vehicle and the traffic object within a preset time period in the future according to the current running state information and the current motion state information;
the output module 30 is configured to output preset collision early warning prompt information when the first collision risk coefficient is determined to be greater than a first preset coefficient threshold.
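The cooperation of the three modules described above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure: the `MotionState` fields, the `predict_risk` callback, and the threshold value 0.7 are assumptions standing in for the current motion state information, the prediction module, and the first preset coefficient threshold.

```python
from dataclasses import dataclass

@dataclass
class MotionState:
    """Current motion state of one traffic object (field names are
    illustrative, not taken from the disclosure)."""
    position: tuple       # current object position (x, y), metres
    direction: float      # current motion direction, radians
    speed: float          # current motion speed, m/s
    acceleration: float   # current motion acceleration, m/s^2

def control_pipeline(driving_state, object_states, predict_risk,
                     risk_threshold=0.7):
    """Top-level flow of the three modules: for each traffic object in
    the surrounding preset area, predict a first collision risk
    coefficient and collect a collision warning whenever it exceeds the
    first preset coefficient threshold."""
    warnings = []
    for obj in object_states:
        risk = predict_risk(driving_state, obj)
        if risk > risk_threshold:
            warnings.append((obj, risk))
    return warnings
```

A concrete `predict_risk` would be supplied by the prediction module 20; the sketch only shows how the acquisition and output modules frame it.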
Optionally, the prediction module 20 is further configured to:
predicting and obtaining a running track point sequence of the vehicle based on the current running state information, wherein the running track point sequence comprises running positions of a plurality of time nodes of the vehicle in a future preset time period;
Predicting a motion trail point sequence of the traffic object based on the current motion state information, wherein the motion trail point sequence comprises motion positions of a plurality of time nodes of the traffic object in a preset time length in the future, and the time nodes of the motion positions are in one-to-one correspondence with the time nodes of the driving positions;
and observing the running track point sequence and the movement track point sequence, and determining a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future.
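As one possible reading of this prediction step, both the running track point sequence and the motion track point sequence can be generated by extrapolating a position over equally spaced time nodes. The constant-heading, constant-acceleration motion model below is an assumption for illustration; the disclosure does not fix a particular motion model.

```python
import math

def predict_track_points(position, direction, speed, acceleration,
                         horizon_s=3.0, dt=0.5):
    """Predict positions at successive time nodes within a future
    preset time length, assuming straight-line motion with constant
    acceleration (an illustrative simplification).

    Returns a list of (time_node, (x, y)) pairs; using the same
    horizon_s and dt for the vehicle and each traffic object gives the
    one-to-one correspondence of time nodes required above."""
    points = []
    x0, y0 = position
    t = dt
    while t <= horizon_s + 1e-9:
        # distance travelled along the heading after time t
        s = speed * t + 0.5 * acceleration * t * t
        points.append((t, (x0 + s * math.cos(direction),
                           y0 + s * math.sin(direction))))
        t += dt
    return points
```

For instance, a vehicle at the origin heading along the x-axis at 10 m/s with zero acceleration advances 5 m per 0.5 s time node.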
Optionally, the prediction module 20 is further configured to:
generating a world coordinate system by taking the current position of the vehicle as a coordinate origin and the current running direction of the vehicle as a longitudinal axis;
respectively selecting the running position and the movement position of the same time node from the running track point sequence and the movement track point sequence as an observed running position and an observed movement position;
constructing a first bounding box area corresponding to the vehicle in the world coordinate system according to the observed driving position, and constructing a second bounding box area corresponding to the traffic object in the world coordinate system according to the observed movement position;
And determining a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the first bounding box area and the second bounding box area.
Optionally, the prediction module 20 is further configured to:
identifying whether an overlap region is generated between the first bounding box region and the second bounding box region;
if so, determining the predicted collision area of the vehicle and the traffic object according to the area size of the overlapping area, and determining the predicted collision time of the vehicle and the traffic object according to the time node corresponding to the overlapping area;
and determining a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the size of the predicted collision area and the urgency of the predicted collision time.
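The bounding-box observation described above can be sketched with axis-aligned boxes in the world coordinate system: an overlap at some time node yields a predicted collision area, and the earliest overlapping node yields the predicted collision time. The particular weighting below, which combines area and urgency into one coefficient, is an assumed example; the disclosure only requires that a larger predicted collision area and a more imminent predicted collision time produce a larger first collision risk coefficient.

```python
def aabb(center, half_len, half_wid):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) around a
    predicted position; the box half-sizes are illustrative assumptions."""
    x, y = center
    return (x - half_len, y - half_wid, x + half_len, y + half_wid)

def overlap_area(a, b):
    """Area of the intersection of two boxes, 0.0 if they do not overlap."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def collision_risk(vehicle_boxes, object_boxes, horizon_s):
    """First collision risk from the earliest overlapping time node.
    vehicle_boxes / object_boxes are lists of (time_node, box) pairs
    with matching time nodes."""
    for t, vb in vehicle_boxes:
        ob = dict(object_boxes).get(t)
        if ob is None:
            continue
        area = overlap_area(vb, ob)
        if area > 0:
            urgency = 1.0 - t / horizon_s       # earlier -> more urgent
            severity = min(area / 4.0, 1.0)     # assumed area scale
            return 0.5 * urgency + 0.5 * severity
    return 0.0
```

With boxes overlapping by 6 m² halfway through a 2 s horizon, this weighting yields a coefficient of 0.75; non-overlapping sequences yield 0.0.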
Optionally, the prediction module 20 is further configured to:
under the condition that the first collision risk coefficient is determined to be larger than a first preset coefficient threshold value, determining a plurality of to-be-selected running directions except the current running direction, and taking the current vehicle position, the to-be-selected running direction, the current running speed and the current running acceleration as running state information to be changed;
Predicting a second collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the running state information to be changed and the current motion state information;
taking the selected running direction corresponding to the second collision risk coefficient smaller than a second preset coefficient threshold value as a safe running direction, wherein the second preset coefficient threshold value is smaller than the first preset coefficient threshold value;
the output module 30 is further configured to:
and outputting the driving guide information corresponding to the safe driving direction.
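The search for a safe driving direction described above can be sketched as re-running the risk predictor with each to-be-selected driving direction substituted into the running state. The candidate labels and the second preset coefficient threshold of 0.3 are illustrative assumptions; the disclosure only requires the second threshold to be smaller than the first.

```python
def find_safe_direction(current_state, object_state, predict_risk,
                        candidates=("left", "right", "straight"),
                        second_threshold=0.3):
    """Substitute each candidate direction into the running state,
    predict a second collision risk coefficient, and return the first
    candidate whose risk falls below the stricter second threshold."""
    for direction in candidates:
        # keep position, speed and acceleration; change only the direction
        trial_state = dict(current_state, direction=direction)
        if predict_risk(trial_state, object_state) < second_threshold:
            return direction
    return None  # no safe direction found -> fall back to a braking prompt
```

The `None` return corresponds to the case where every candidate remains risky, in which the output module would reasonably fall back to the braking prompt.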
Optionally, the output module 30 is further configured to:
determining lane information of a road section where the vehicle is located;
determining whether a current driving lane of the vehicle is a direction-changeable lane with a target direction-changing condition according to the lane information, wherein the target direction-changing condition is a direction-changing condition for changing the current driving direction to a safe driving direction;
if not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
if yes, generating steering prompt information for steering the current driving direction to the safe driving direction, and outputting the steering prompt information as driving guide information corresponding to the safe driving direction.
Optionally, the output module 30 is further configured to:
acquiring a pre-driving route of the vehicle;
determining a real-time distance between the vehicle and the intersection ahead based on the current vehicle position and the pre-driving route, and judging whether the real-time distance is smaller than a preset distance threshold value;
if the real-time distance is smaller than a preset distance threshold value, identifying the signal lamp state of the intersection in front according to the driving environment information;
determining whether the signal lamp state is a traffic state;
if not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
if yes, generating steering prompt information for steering the current driving direction to the safe driving direction, and outputting the steering prompt information as driving guide information corresponding to the safe driving direction.
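The intersection branch described above can be sketched as follows. The 50 m distance threshold and the "green" label standing in for the passing state are assumptions for illustration; the intersection position would in practice come from the pre-driving route.

```python
import math

def intersection_guide(vehicle_pos, intersection_pos, signal_state,
                       safe_direction, distance_threshold=50.0):
    """Only when the real-time distance to the intersection ahead falls
    below the preset threshold is the signal lamp state consulted: a
    non-passing state yields a braking prompt, a passing state a
    steering prompt toward the safe driving direction."""
    dx = intersection_pos[0] - vehicle_pos[0]
    dy = intersection_pos[1] - vehicle_pos[1]
    if math.hypot(dx, dy) >= distance_threshold:
        return None  # not yet near the intersection: no prompt
    if signal_state != "green":  # "green" stands in for the passing state
        return "brake"
    return f"steer_{safe_direction}"
```

A vehicle 100 m from the intersection receives no prompt; within 50 m it is told to brake on red and to steer on green.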
Optionally, the output module 30 is further configured to:
determining lane information of a road section where the vehicle is located;
determining the direction corresponding to the current driving lane of the vehicle according to the lane information, and determining whether the direction is matched with the safe driving direction;
if not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
If yes, executing the step of outputting the steering prompt information for steering the current driving direction to the safe driving direction as driving guide information corresponding to the safe driving direction.
By adopting the AR device control method of the first or second embodiment, the AR device control apparatus provided by this embodiment of the invention can solve the technical problem that, owing to the usage limitations of vehicle-mounted AR devices, collision early warning cannot be accurately provided to the driver through the vehicle-mounted AR device. Compared with the prior art, the beneficial effects of the AR device control apparatus provided by this embodiment are the same as those of the AR device control method provided by the foregoing embodiments, and its other technical features are the same as those disclosed in the foregoing method embodiments, so they are not described in detail here.
Example IV
The embodiment of the invention provides a vehicle-mounted AR device, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the AR device control method in the first embodiment.
Referring now to fig. 10, a schematic structural diagram of an in-vehicle AR device suitable for implementing embodiments of the present disclosure is shown. The in-vehicle AR device in embodiments of the present disclosure may be a pair of augmented reality (AR) glasses or an AR helmet. The in-vehicle AR device shown in fig. 10 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the in-vehicle AR device may include a processing means 1001 (e.g., a central processing unit or a graphics processing unit) which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage means 1003 into a random access memory (RAM) 1004. The RAM 1004 also stores various programs and data required for the operation of the in-vehicle AR device. The processing means 1001, the ROM 1002, and the RAM 1004 are connected to each other by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus 1005.
In general, the following systems may be connected to the I/O interface 1006: input devices 1007 including, for example, a touch screen, touchpad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, and the like; an output device 1008 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage device 1003 including, for example, a magnetic tape, a hard disk, and the like; and communication means 1009. The communication means 1009 may allow the in-vehicle AR device to communicate wirelessly or wired with other devices to exchange data. While an in-vehicle AR device having various systems is shown in the figures, it should be understood that not all of the illustrated systems are required to be implemented or provided. More or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 1009, installed from the storage device 1003, or installed from the ROM 1002. When the computer program is executed by the processing means 1001, the above-described functions defined in the method of the embodiments of the present disclosure are performed.
The vehicle-mounted AR equipment provided by the invention can solve the technical problem that collision early warning cannot be accurately provided for a driver through the vehicle-mounted AR equipment due to the use limitation of the vehicle-mounted AR equipment by adopting the AR equipment control method in the embodiment. Compared with the prior art, the beneficial effects of the vehicle-mounted AR device provided by the embodiment of the invention are the same as those of the AR device control method provided by the embodiment, and other technical features of the vehicle-mounted AR device are the same as those disclosed by the method of the previous embodiment, so that details are omitted.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present invention and does not limit it; any person skilled in the art can readily conceive of variations or substitutions, all of which fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Example five
An embodiment of the present invention provides a computer-readable storage medium having computer-readable program instructions stored thereon for performing the AR device control method in the above-described embodiment.
The computer-readable storage medium according to the embodiments of the present invention may be, for example, a USB flash drive, but is not limited thereto; it may be any electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by, or in connection with, an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber-optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The above-mentioned computer-readable storage medium may be contained in an in-vehicle AR device; or may exist alone without being assembled into the in-vehicle AR device.
The above-described computer-readable storage medium carries one or more programs that, when executed by the in-vehicle AR device, cause the in-vehicle AR device to: acquiring running environment information of the vehicle, and determining current motion state information of at least one traffic object of the vehicle in a peripheral preset area according to the running environment information, wherein the current motion state information comprises the current object position, the current motion direction, the current motion speed and the current motion acceleration of the traffic object; acquiring current running state information of the vehicle, and predicting a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the current running state information and the current movement state information; and outputting preset collision early warning prompt information under the condition that the first collision risk coefficient is determined to be larger than a first preset coefficient threshold value.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a module does not, in some cases, constitute a limitation on the module itself.
The computer readable storage medium provided by the application stores the computer readable program instructions for executing the AR equipment control method, so that the technical problem that collision early warning cannot be accurately provided for a driver through the vehicle-mounted AR equipment due to the use limitation of the vehicle-mounted AR equipment can be solved. Compared with the prior art, the beneficial effects of the computer readable storage medium provided by the embodiment of the present application are the same as those of the AR device control method provided by the first or second embodiment, and are not described herein.
Example six
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program realizes the steps of the AR device control method when being executed by a processor.
The computer program product provided by the application can solve the technical problem that collision early warning cannot be accurately provided for a driver through the vehicle-mounted AR equipment due to the use limitation of the vehicle-mounted AR equipment. Compared with the prior art, the beneficial effects of the computer program product provided by the embodiment of the present application are the same as those of the AR device control method provided by the first or second embodiment, and are not described herein.
The foregoing description covers only the preferred embodiments of the present application and does not limit its scope; any equivalent structure or equivalent process transformation made using the contents of this specification, or any direct or indirect application in other related technical fields, likewise falls within the scope of patent protection of the present application.

Claims (10)

1. An AR device control method, comprising:
acquiring driving environment information of the vehicle, and determining, according to the driving environment information, current motion state information of at least one traffic object in a preset area around the vehicle, wherein the current motion state information comprises the current object position, the current motion direction, the current motion speed, and the current motion acceleration of the traffic object;
acquiring current running state information of the vehicle, and predicting a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the current running state information and the current movement state information;
and outputting preset collision early warning prompt information under the condition that the first collision risk coefficient is determined to be larger than a first preset coefficient threshold value.
2. The AR apparatus control method according to claim 1, wherein the step of predicting a first collision risk coefficient of the host vehicle and the traffic object within a predetermined time period in the future based on the current driving state information and the current movement state information includes:
predicting and obtaining a running track point sequence of the vehicle based on the current running state information, wherein the running track point sequence comprises running positions of a plurality of time nodes of the vehicle in a future preset time period;
Predicting a motion trail point sequence of the traffic object based on the current motion state information, wherein the motion trail point sequence comprises motion positions of a plurality of time nodes of the traffic object in a preset time length in the future, and the time nodes of the motion positions are in one-to-one correspondence with the time nodes of the driving positions;
and observing the running track point sequence and the movement track point sequence, and determining a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future.
3. The AR apparatus control method according to claim 2, wherein the step of observing the sequence of travel track points and the sequence of motion track points to determine a first collision risk coefficient of the host vehicle and the traffic object within a predetermined time period in the future comprises:
generating a world coordinate system by taking the current position of the vehicle as a coordinate origin and the current running direction of the vehicle as a longitudinal axis;
respectively selecting the running position and the movement position of the same time node from the running track point sequence and the movement track point sequence as an observed running position and an observed movement position;
constructing a first bounding box area corresponding to the vehicle in the world coordinate system according to the observed driving position, and constructing a second bounding box area corresponding to the traffic object in the world coordinate system according to the observed movement position;
And determining a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the first bounding box area and the second bounding box area.
4. The AR apparatus control method according to claim 3, wherein the step of determining a first collision risk coefficient of the host vehicle and the traffic object within a predetermined time period in the future according to the first bounding box region and the second bounding box region comprises:
identifying whether an overlap region is generated between the first bounding box region and the second bounding box region;
if so, determining the predicted collision area of the vehicle and the traffic object according to the area size of the overlapping area, and determining the predicted collision time of the vehicle and the traffic object according to the time node corresponding to the overlapping area;
and determining a first collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the size of the predicted collision area and the urgency of the predicted collision time.
5. The AR apparatus control method according to claim 4, wherein the current traveling state information includes a current vehicle position, a current traveling direction, a current traveling speed, and a current traveling acceleration of the host vehicle, and after the step of determining a first collision risk coefficient of the host vehicle and the traffic object within a predetermined period of time in the future, the method further comprises:
Under the condition that the first collision risk coefficient is determined to be larger than a first preset coefficient threshold value, determining a plurality of to-be-selected running directions except the current running direction, and taking the current vehicle position, the to-be-selected running direction, the current running speed and the current running acceleration as running state information to be changed;
predicting a second collision risk coefficient of the vehicle and the traffic object within a preset time length in the future according to the running state information to be changed and the current motion state information;
taking the selected running direction corresponding to the second collision risk coefficient smaller than a second preset coefficient threshold value as a safe running direction, wherein the second preset coefficient threshold value is smaller than the first preset coefficient threshold value;
and outputting the driving guide information corresponding to the safe driving direction.
6. The AR apparatus control method according to claim 5, wherein the step of outputting the driving guidance information corresponding to the safe driving direction includes:
determining lane information of a road section where the vehicle is located;
determining whether a current driving lane of the vehicle is a direction-changeable lane with a target direction-changing condition according to the lane information, wherein the target direction-changing condition is a direction-changing condition for changing the current driving direction to a safe driving direction;
If not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
if yes, generating steering prompt information for steering the current driving direction to the safe driving direction, and outputting the steering prompt information as driving guide information corresponding to the safe driving direction.
7. The AR apparatus control method according to claim 5, wherein the step of outputting the driving guidance information corresponding to the safe driving direction includes:
acquiring a pre-driving route of the vehicle;
determining a real-time distance between the vehicle and the intersection ahead based on the current vehicle position and the pre-driving route, and judging whether the real-time distance is smaller than a preset distance threshold value;
if the real-time distance is smaller than a preset distance threshold value, identifying the signal lamp state of the intersection in front according to the driving environment information;
determining whether the signal lamp state is a traffic state;
if not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
if yes, generating steering prompt information for steering the current driving direction to the safe driving direction, and outputting the steering prompt information as driving guide information corresponding to the safe driving direction.
8. The AR apparatus control method according to claim 7, wherein before the step of outputting the steering hint information for steering the current traveling direction to the safe traveling direction as the traveling guidance information corresponding to the safe traveling direction, the method further comprises:
determining lane information of a road section where the vehicle is located;
determining the direction corresponding to the current driving lane of the vehicle according to the lane information, and determining whether the direction is matched with the safe driving direction;
if not, outputting braking prompt information for controlling the deceleration braking of the vehicle as driving guide information corresponding to the safe driving direction;
if yes, executing the step of outputting the steering prompt information for steering the current driving direction to the safe driving direction as driving guide information corresponding to the safe driving direction.
9. An in-vehicle AR device, characterized in that the in-vehicle AR device comprises:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the AR device control method of any one of claims 1 to 8.
10. A computer-readable storage medium, on which a program for implementing an AR device control method is stored, the program for implementing the AR device control method being executed by a processor to implement the steps of the AR device control method according to any one of claims 1 to 8.
CN202311091168.3A 2023-08-28 2023-08-28 AR device control method, vehicle-mounted AR device and computer-readable storage medium Pending CN117152993A (en)

Priority application: CN202311091168.3A, filed 2023-08-28. Publication: CN117152993A, published 2023-12-01. Family ID: 88905515.

Similar Documents

Publication Publication Date Title
US20230311749A1 (en) Communication between autonomous vehicle and external observers
JP7082246B2 (en) Operating autonomous vehicles according to road user reaction modeling with shielding
US11117597B2 (en) Pedestrian interaction system for low speed scenes for autonomous vehicles
US11493920B2 (en) Autonomous vehicle integrated user alert and environmental labeling
US10457294B1 (en) Neural network based safety monitoring system for autonomous vehicles
US10943485B2 (en) Perception assistant for autonomous driving vehicles (ADVs)
US11535155B2 (en) Superimposed-image display device and computer program
CN106462727B (en) Vehicle, lane ending detection system and method
US10852736B2 (en) Method to track and to alert autonomous driving vehicles (ADVS) of emergency vehicles
CN111775945B (en) Method and device for detecting closest in-path objects for autopilot
US20150334269A1 (en) Processing apparatus, processing system, and processing method
CN111919211A (en) Turn path visualization for improved spatial and situational awareness in turn maneuvers
US10860868B2 (en) Lane post-processing in an autonomous driving vehicle
CN113205088B (en) Obstacle image presentation method, electronic device, and computer-readable medium
US11556127B2 (en) Static obstacle map based perception system
CN113548043B (en) Collision warning system and method for a safety operator of an autonomous vehicle
CN116734882B (en) Vehicle path planning method, device, electronic equipment and computer readable medium
CN113228128B (en) Driving assistance method and driving assistance device
CN117152993A (en) AR device control method, vehicle-mounted AR device and computer-readable storage medium
CN114475614A (en) Method, device, medium and equipment for screening dangerous targets
CN114511834A (en) Method and device for determining prompt information, electronic equipment and storage medium
KR20220046553A (en) Autonomous Vehicle Interaction System
US20230316900A1 (en) Reproduction system, reproduction method, and storage medium
US20200319651A1 (en) Autonomous vehicle control system testing
CN115100612A (en) Driving behavior analysis method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination