CN114999225B - Information display method of road object and vehicle - Google Patents


Info

Publication number
CN114999225B
Authority
CN
China
Prior art keywords
road object
road
state information
vehicle
abnormal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210524784.2A
Other languages
Chinese (zh)
Other versions
CN114999225A
Inventor
桑圣昭
王智斌
吴风炎
张希
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd
Priority to CN202210524784.2A
Publication of CN114999225A
Application granted
Publication of CN114999225B

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/16 - Anti-collision systems
    • G08G 1/167 - Driving aids for lane monitoring, lane changing, e.g. blind spot detection

Abstract

The embodiment of the application discloses an information display method for a road object and a vehicle, belonging to the technical field of vehicle driving assistance. In the embodiment of the application, after an abnormal road object affecting the safe running of the vehicle is screened out from a plurality of road objects and the eye position of the driver of the vehicle is acquired, the shielding state information on the line-of-sight path from the driver to the abnormal road object is determined according to the driver's eye position and the state information of the plurality of road objects. The shielding state information on the line-of-sight path from the driver to the abnormal road object reflects the extent to which the abnormal road object is shielded; on this basis, the information of the abnormal road object is displayed more accurately, the prompting effect is better, and the safety of vehicle driving can be improved.

Description

Information display method of road object and vehicle
Technical Field
The present disclosure relates to the field of vehicle driving assistance, and in particular, to a method for displaying information of a road object and a vehicle.
Background
With the development of vehicle driving assistance technology, a driver facing complex traffic conditions can be reminded to control the vehicle in various ways, thereby improving driving safety.
In the related art, a vehicle may detect, during traveling, a road object that has a risk of collision with the vehicle, where the road object may be another vehicle, a pedestrian, an obstacle, or the like. After such a road object is detected, the driver is alerted by playing a voice prompt or displaying a prompt message on a vehicle-mounted display. However, in complex traffic situations this manner of prompting is not fine-grained or accurate enough, the prompting effect is poor, and inaccurate prompts may even increase the probability of traffic accidents.
Disclosure of Invention
The embodiment of the application provides an information display method for a road object and a vehicle, which can display the information of an abnormal road object based on the extent to which the abnormal road object is shielded, so as to better assist the driver in driving the vehicle. The technical solutions are as follows:
in one aspect, there is provided an information display method of a road object, applied to a vehicle, the method including:
acquiring state information of a plurality of road objects within an information acquisition range of the vehicle;
determining an abnormal road object from the plurality of road objects based on the state information of the plurality of road objects, the abnormal road object being a road object affecting safe running of the vehicle;
Acquiring a human eye position of a driver of the vehicle;
determining occlusion state information on a line-of-sight path of the driver to the abnormal road object based on the human eye position of the driver and the state information of the plurality of road objects;
and displaying the information of the abnormal road object based on the shielding state information on the sight line path.
Optionally, the state information of the plurality of road objects includes a shape size and a position of each road object of the plurality of road objects;
the determining, based on the human eye position of the driver and the state information of the plurality of road objects, shielding state information on a line-of-sight path of the driver to the abnormal road object includes:
determining a plurality of key corner points of the abnormal road object based on the shape size and the position of the abnormal road object;
determining the sight line path based on the human eye position of the driver and a plurality of key corner points of the abnormal road object;
and determining shielding state information on the sight line path based on the shape size and the position of other road objects except the abnormal road object in the plurality of road objects.
Optionally, the determining, based on the shape size and the position of the other road objects except the abnormal road object in the plurality of road objects, occlusion state information on the sight line path includes:
Detecting whether a target road object located on the sight line path exists in the other road objects based on the shape size and the position of the other road objects;
and if the target road object does not exist, generating first shielding state information, wherein the first shielding state information is used for indicating that the abnormal road object is not shielded on the sight line path.
Optionally, the method further comprises:
if the target road object is present, second occlusion state information is generated based on an area of an overlapping region between a first projection region of the abnormal road object and a second projection region of the target road object in a display region of a head-up display HUD, the second occlusion state information including an area of the overlapping region.
Optionally, the displaying the information of the abnormal road object based on the shielding state information on the sight line path includes:
and when the shielding state information on the sight line path is the second shielding state information, if the area of the overlapped area is not smaller than a reference area threshold value, displaying the image model of the abnormal road object in the first projection area.
Optionally, the method further comprises:
and when the shielding state information on the sight line path is the first shielding state information, or when the shielding state information on the sight line path is the second shielding state information and the area of the overlapped area is smaller than a reference area threshold value, marking and displaying the first projection area in the display area of the HUD.
Optionally, the determining the sight line path based on the eye position of the driver and the plurality of key corner points of the abnormal road object includes:
determining intersection points of connecting lines between the human eye positions of the driver and each key corner point and the display area of the HUD based on the human eye position coordinates of the driver, the position coordinates of the plurality of key corner points and the position coordinates of corner points of the display area of the HUD under a reference coordinate system to obtain a plurality of projection points;
determining the area of a quadrangle surrounded by any four projection points in the plurality of projection points;
taking the quadrangle with the largest area as a first projection area of the abnormal road object in the display area of the HUD;
the line-of-sight path is determined based on the four corner points of the first projection area and the human eye position of the driver.
Optionally, the acquiring the state information of the plurality of road objects within the information acquisition range of the vehicle includes:
receiving state information broadcast by other vehicles in a broadcast signal receiving range of the vehicle;
acquiring state information of a road object positioned in an information acquisition range through radar and/or image acquisition equipment on the vehicle;
and acquiring the state information of the plurality of road objects based on the received state information broadcast by other vehicles and the acquired state information of the road objects.
Optionally, the state information of the plurality of road objects includes a position and a speed of each road object;
the determining an abnormal road object from the plurality of road objects based on the state information of the plurality of road objects includes:
determining a distance between each road object and the vehicle based on the location of each road object;
determining a collision time of each road object with the vehicle based on a distance between the respective road object and the vehicle and a speed of the respective road object;
and taking the road object with the shortest collision time with the vehicle as the abnormal road object.
In another aspect, there is provided an information display apparatus of a road object, for use in a vehicle, the apparatus comprising:
A first acquisition module for acquiring state information of a plurality of road objects within an information acquisition range of the vehicle;
a first determination module configured to determine an abnormal road object from the plurality of road objects, the abnormal road object being a road object affecting safe running of the vehicle, based on state information of the plurality of road objects;
a second acquisition module for acquiring a human eye position of a driver of the vehicle;
a second determining module configured to determine occlusion state information on a line-of-sight path of the driver to the abnormal road object based on a human eye position of the driver and state information of the plurality of road objects;
and the display module is used for displaying the information of the abnormal road object based on the shielding state information on the sight line path.
Optionally, the state information of the plurality of road objects includes a shape size and a position of each road object of the plurality of road objects;
the second determining module is mainly used for:
determining a plurality of key corner points of the abnormal road object based on the shape size and the position of the abnormal road object;
determining the sight line path based on the human eye position of the driver and a plurality of key corner points of the abnormal road object;
And determining shielding state information on the sight line path based on the shape size and the position of other road objects except the abnormal road object in the plurality of road objects.
Optionally, the second determining module is mainly configured to:
detecting whether a target road object located on the sight line path exists in the other road objects based on the shape size and the position of the other road objects;
and if the target road object does not exist, generating first shielding state information, wherein the first shielding state information is used for indicating that the abnormal road object is not shielded on the sight line path.
Optionally, the second determining module is mainly configured to:
if the target road object is present, second occlusion state information is generated based on an area of an overlapping region between a first projection region of the abnormal road object and a second projection region of the target road object in a display region of a head-up display HUD, the second occlusion state information including an area of the overlapping region.
Optionally, the display module is mainly used for:
and when the shielding state information on the sight line path is the second shielding state information, if the area of the overlapped area is not smaller than a reference area threshold value, displaying the image model of the abnormal road object in the first projection area.
Optionally, the display module is mainly used for:
and when the shielding state information on the sight line path is the first shielding state information, or when the shielding state information on the sight line path is the second shielding state information and the area of the overlapped area is smaller than a reference area threshold value, marking and displaying the first projection area in the display area of the HUD.
Optionally, the second determining module is mainly configured to:
determining intersection points of connecting lines between the human eye positions of the driver and each key corner point and the display area of the HUD based on the human eye position coordinates of the driver, the position coordinates of the plurality of key corner points and the position coordinates of corner points of the display area of the HUD under a reference coordinate system to obtain a plurality of projection points;
determining the area of a quadrangle surrounded by any four projection points in the plurality of projection points;
taking the quadrangle with the largest area as a first projection area of the abnormal road object in the display area of the HUD;
the line-of-sight path is determined based on the four corner points of the first projection area and the human eye position of the driver.
Optionally, the first obtaining module is mainly configured to:
Receiving state information broadcast by other vehicles in a broadcast signal receiving range of the vehicle;
acquiring state information of a road object positioned in an information acquisition range through radar and/or image acquisition equipment on the vehicle;
and acquiring the state information of the plurality of road objects based on the received state information broadcast by other vehicles and the acquired state information of the road objects.
Optionally, the state information of the plurality of road objects includes a position and a speed of each road object;
the first determining module is mainly used for:
determining a distance between each road object and the vehicle based on the location of each road object;
determining a collision time of each road object with the vehicle based on a distance between the respective road object and the vehicle and a speed of the respective road object;
and taking the road object with the shortest collision time with the vehicle as the abnormal road object.
In another aspect, there is provided an information display apparatus of a road object, for use in a vehicle, the apparatus comprising: a processor for:
acquiring state information of a plurality of road objects within an information acquisition range of the vehicle;
Determining an abnormal road object from the plurality of road objects based on the state information of the plurality of road objects, the abnormal road object being a road object affecting safe running of the vehicle;
acquiring a human eye position of a driver of the vehicle;
determining occlusion state information on a line-of-sight path of the driver to the abnormal road object based on the human eye position of the driver and the state information of the plurality of road objects;
and displaying the information of the abnormal road object based on the shielding state information on the sight line path.
Optionally, the state information of the plurality of road objects includes a shape size and a position of each road object of the plurality of road objects;
the processor is mainly used for:
determining a plurality of key corner points of the abnormal road object based on the shape size and the position of the abnormal road object;
determining the sight line path based on the human eye position of the driver and a plurality of key corner points of the abnormal road object;
and determining shielding state information on the sight line path based on the shape size and the position of other road objects except the abnormal road object in the plurality of road objects.
Optionally, the processor is mainly configured to:
detecting whether a target road object located on the sight line path exists in the other road objects based on the shape size and the position of the other road objects;
and if the target road object does not exist, generating first shielding state information, wherein the first shielding state information is used for indicating that the abnormal road object is not shielded on the sight line path.
Optionally, the processor is mainly configured to:
if the target road object is present, second occlusion state information is generated based on an area of an overlapping region between a first projection region of the abnormal road object and a second projection region of the target road object in a display region of a head-up display HUD, the second occlusion state information including an area of the overlapping region.
Optionally, the processor is mainly configured to:
and when the shielding state information on the sight line path is the second shielding state information, if the area of the overlapped area is not smaller than a reference area threshold value, displaying the image model of the abnormal road object in the first projection area.
Optionally, the processor is mainly configured to:
And when the shielding state information on the sight line path is the first shielding state information, or when the shielding state information on the sight line path is the second shielding state information and the area of the overlapped area is smaller than a reference area threshold value, marking and displaying the first projection area in the display area of the HUD.
Optionally, the processor is mainly configured to:
determining intersection points of connecting lines between the human eye positions of the driver and each key corner point and the display area of the HUD based on the human eye position coordinates of the driver, the position coordinates of the plurality of key corner points and the position coordinates of corner points of the display area of the HUD under a reference coordinate system to obtain a plurality of projection points;
determining the area of a quadrangle surrounded by any four projection points in the plurality of projection points;
taking the quadrangle with the largest area as a first projection area of the abnormal road object in the display area of the HUD;
the line-of-sight path is determined based on the four corner points of the first projection area and the human eye position of the driver.
Optionally, the processor is mainly configured to:
receiving state information broadcast by other vehicles in a broadcast signal receiving range of the vehicle;
Acquiring state information of a road object positioned in an information acquisition range through radar and/or image acquisition equipment on the vehicle;
and acquiring the state information of the plurality of road objects based on the received state information broadcast by other vehicles and the acquired state information of the road objects.
Optionally, the state information of the plurality of road objects includes a position and a speed of each road object;
the processor is mainly used for:
determining a distance between each road object and the vehicle based on the location of each road object;
determining a collision time of each road object with the vehicle based on a distance between the respective road object and the vehicle and a speed of the respective road object;
and taking the road object with the shortest collision time with the vehicle as the abnormal road object.
In another aspect, a vehicle is provided that includes a control device and a HUD;
the control device is used for: acquiring state information of a plurality of road objects within an information acquisition range of the vehicle; determining an abnormal road object from the plurality of road objects based on the state information of the plurality of road objects, the abnormal road object being a road object affecting safe running of the vehicle; acquiring a human eye position of a driver of the vehicle; determining occlusion state information on a line-of-sight path of the driver to the abnormal road object based on the human eye position of the driver and the state information of the plurality of road objects; transmitting the abnormal road object information to the HUD based on the shielding state information on the sight line path;
The HUD is used for receiving and displaying information of the abnormal road object.
In another aspect, a computer-readable storage medium is provided, in which a computer program is stored, which when executed by a computer, implements the steps of the information display method of a road object described above.
In another aspect, a computer program product is provided comprising instructions which, when run on a computer, cause the computer to perform the steps of the method for displaying information of a road object as described above.
The beneficial effects of the technical solutions provided by the embodiments of the present application include at least the following:
In the embodiment of the application, after an abnormal road object affecting the safe running of the vehicle is screened out from a plurality of road objects and the eye position of the driver of the vehicle is acquired, the shielding state information on the line-of-sight path from the driver to the abnormal road object is determined according to the driver's eye position and the state information of the plurality of road objects. The shielding state information on the line-of-sight path from the driver to the abnormal road object reflects the extent to which the abnormal road object is shielded; on this basis, the information of the abnormal road object is displayed more accurately, the prompting effect is better, and the safety of vehicle driving can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a system architecture diagram related to an information display method of a road object according to an embodiment of the present application;
fig. 2 is a flowchart of a method for displaying information of a road object according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a reference frame provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a connection line between a human eye position of a driver and a key corner point of a cuboid corresponding to an abnormal road object according to an embodiment of the present application;
FIG. 5 is a schematic view of a first projection area according to an embodiment of the present application;
fig. 6 is a schematic diagram of an image model for displaying an abnormal road object in a display area of a HUD according to an embodiment of the present application;
fig. 7 is a schematic diagram of displaying a first projection area in a display area of a HUD according to an embodiment of the present application;
Fig. 8 is a schematic diagram showing a first projection area in a display area of another HUD according to an embodiment of the present application;
fig. 9 is a schematic diagram of an information display device of a road object according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a control device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, a description is given of a system architecture related to the embodiments of the present application.
Fig. 1 is a system architecture diagram related to a method for displaying information of a road object according to an embodiment of the present application. As shown in fig. 1, the system includes a control device 101, an information acquisition device 102, and a HUD (Head Up Display) 103 on the vehicle. Wherein the control device 101 may communicate with the information acquisition device 102 and the HUD103 via a wireless network.
Wherein the control device 101 may collect status information of a plurality of road objects located within an information collection range of the vehicle by the information collection device 102 on the vehicle. Wherein the state information of the plurality of road objects comprises information such as shape size, position, speed and the like of each of the plurality of road objects.
The information acquisition device 102 includes radar and/or an off-vehicle camera. The radar can collect the state information of road objects that are not occluded within the information collection range of the vehicle, and the off-vehicle camera can collect the state information of road objects within its own field of view. After the radar and/or the off-vehicle camera acquire the state information of a road object, they may transmit the acquired state information to the control device 101. The control device 101 may receive the state information of road objects collected by the radar and/or the off-vehicle camera.
In addition, the control device 101 may also receive status information broadcast by other vehicles within the broadcast signal reception range of the vehicle.
Optionally, the system may further include a road side device 104, where the road side device 104 may collect status information of a plurality of road objects within the range of its own information collection, and broadcast the collected status information of the road objects. Based on this, the control apparatus 101 can also receive the status information broadcast by the roadside apparatus 104 within the broadcast signal reception range of the vehicle.
When the control device 101 acquires the state information of the plurality of road objects, the control device 101 may determine an abnormal road object, which is a road object affecting the safe running of the vehicle, from the plurality of road objects based on the state information of the plurality of road objects.
In addition, the information acquisition device 102 also includes an in-vehicle camera. The in-vehicle camera is used to collect a face image of the driver and transmit the collected face image of the driver to the control device 101. The in-vehicle camera may be a monocular camera, a binocular camera or a depth camera, which is not limited in this embodiment of the present application.
After acquiring the face image of the driver, the control apparatus may determine the human eye position of the driver based on the acquired face image of the driver. After determining the human eye position of the driver, the control apparatus 101 may determine the shielding state information on the line-of-sight path of the driver to the abnormal road object based on the human eye position of the driver and the state information of the plurality of road objects obtained as described above. After determining the shielding state information on the line-of-sight path from the driver to the abnormal road object, the control apparatus 101 may determine information of the abnormal road object to be displayed in the display area of the HUD based on the shielding state information on the line-of-sight path from the driver to the abnormal road object, and then send the information of the abnormal road object to the HUD103.
The HUD103 is configured to receive information of an abnormal road object transmitted from the control device 101, and project the received information of the abnormal road object into a display area of the HUD on a front windshield of the vehicle for display. The HUD103 may be suspended at an upper edge of a front windshield of the vehicle, and its projection direction is opposite to a display area of the HUD103 on the front windshield of the vehicle, and of course, the HUD may be installed at other positions in the vehicle.
The control device 101 may be a CPU (Central Processing Unit ) of the vehicle or a domain controller of the vehicle, which is not limited in the embodiment of the present application.
Next, an information display method of a road object provided in an embodiment of the present application will be described.
Fig. 2 is a flowchart of a method for displaying information of a road object according to an embodiment of the present application. The method may be applied to the control apparatus of any of the vehicles described above; for the purpose of distinguishing, the vehicle that performs the following steps is referred to as the current vehicle. As shown in fig. 2, the method includes the following steps:
step 201: status information of a plurality of road objects within an information acquisition range of a current vehicle is acquired.
In the embodiment of the application, the control device may collect the state information of the road object located in the information collection range through the radar and/or the image collection device on the current vehicle, and may also receive the state information of other vehicles in the broadcast signal receiving range of the current vehicle. Then, the control device may acquire the state information of the plurality of road objects based on the received state information of the other vehicle broadcast and the acquired state information of the road objects. The plurality of road objects include pedestrians, vehicles, obstacles, etc. The state information of the plurality of road objects includes information of a shape size, a position, a speed, and the like of each of the plurality of road objects. The shape and size of each road object can be length, width and height information corresponding to a peripheral cuboid surrounding the road object.
For example, the radar on the current vehicle may acquire state information of a road object that is not blocked within an information acquisition range of the current vehicle, for example, a position and a speed of the road object, etc., and transmit the acquired state information of the road object to the control device. The image capturing device on the current vehicle may be an off-vehicle camera of the current vehicle, and the off-vehicle camera may capture an image of a road object in a field of view of the current vehicle, and extract status information of the road object from the captured image, for example, extract a position, a shape, a size, and the like of the road object, and then send the captured status information of the road object to the control device. Of course, the camera outside the vehicle may directly transmit the acquired image including the state information of the road object to the control device, and the control device extracts the state information of the road object from the image.
Wherein, the radar may not be able to collect the state information of the blocked road object, and the camera outside the vehicle may not be able to collect the state information of the road object outside the self visual field. Based on this, the control device may also acquire status information of other road objects through the internet of vehicles technology.
In one possible implementation, for a vehicle with a communication function among a plurality of vehicles traveling on a road, such a vehicle may broadcast its own status information. In this case, the control device located on the current vehicle may receive the status information broadcast by the other vehicles within the broadcast signal reception range of the current vehicle.
Alternatively, in another possible implementation, the control device may also receive status information of the road object broadcast by the road side device. It should be noted that, the road side device installed in the road may collect the status information of the road object in the self information acquisition range, and broadcast the collected status information of the road object. Based on this, the control device can receive the status information broadcast by the roadside device within the broadcast signal reception range of the current vehicle.
It should be noted that the above-mentioned ways of obtaining the status information of the various road objects may be used in any combination or alone, which is not limited in the embodiments of the present application.
Optionally, as is apparent from the above description, the information acquisition range of the current vehicle includes the broadcast signal reception range of the current vehicle and the information acquisition range of the radar and/or image acquisition device of the current vehicle, and these two ranges may overlap. In this case, the state information of a certain road object received within the broadcast signal reception range of the current vehicle may describe the same road object as state information acquired by the radar and/or the image acquisition device. Based on this, the control device may de-duplicate the acquired state information of the road objects to obtain the state information of the plurality of road objects.
Wherein the control device may calculate a distance between each two of the plurality of road objects based on the positions included in the acquired state information of the plurality of road objects, and then determine whether the state information of the same road object exists in the state information of the plurality of road objects based on the distance between each two road objects.
For example, taking any two road objects of the plurality of road objects as an example, the control device may compare the distance between the two road objects with a reference distance threshold. If the distance between the two road objects is greater than the reference distance threshold, the two road objects are different road objects, and the control device may store the state information of both. If the distance is not greater than the reference distance threshold, the two pieces of state information describe the same road object; in this case the control device may keep a single copy of the information of the same type in the two pieces of state information and retain the information of different types, so as to obtain the state information of that road object. In the same way, the control device may compare the distance between every two road objects of the plurality of road objects with the reference distance threshold, thereby de-duplicating the acquired state information and obtaining the state information of a plurality of different road objects. The reference distance threshold may be preset, for example 1 m or 2 m, which is not limited in the embodiments of the present application.
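The pairwise-distance de-duplication described above can be illustrated with a short sketch. The following Python snippet is a minimal illustration under assumed data structures; the record field names and the 1 m threshold value are chosen for the example and are not part of the embodiment:

```python
import math

REF_DIST_THRESHOLD = 1.0  # assumed reference distance threshold, in metres

def deduplicate(objects):
    """Merge state records that are closer than the reference distance threshold.

    Each record is a dict such as {"pos": (x, y), "speed": 10.0, "size": (4.5, 1.8, 1.5)}.
    Records within the threshold are treated as the same road object: fields present
    in both are kept once, fields present in only one record are merged in.
    """
    merged = []
    for obj in objects:
        for kept in merged:
            if math.dist(obj["pos"], kept["pos"]) <= REF_DIST_THRESHOLD:
                # Same physical object reported twice: merge the differing fields.
                for key, value in obj.items():
                    kept.setdefault(key, value)
                break
        else:
            merged.append(dict(obj))
    return merged

if __name__ == "__main__":
    reports = [
        {"pos": (10.0, 2.0), "speed": 8.0},              # from radar
        {"pos": (10.3, 2.1), "size": (4.5, 1.8, 1.5)},   # same car, from a V2X broadcast
        {"pos": (30.0, -1.0), "speed": 0.0},             # a different object
    ]
    print(deduplicate(reports))  # two merged records remain
```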
Step 202: an abnormal road object is determined from the plurality of road objects based on the state information of the plurality of road objects, the abnormal road object being a road object affecting safe running of the current vehicle.
After acquiring the state information of the plurality of road objects, the control apparatus may determine, from the plurality of road objects, a road object having the greatest degree of influence on safe running of the current vehicle, based on the state information of the plurality of road objects, and take the road object as an abnormal road object. The greatest degree of influence on the safe running of the current vehicle may mean that the risk of collision with the current vehicle is greatest.
As is apparent from the foregoing description, the state information of each road object includes the position and the speed of the corresponding road object, based on which the control apparatus determines the distance between each road object and the current vehicle based on the position of each road object, and determines the collision time of the corresponding road object and the current vehicle based on the distance between each road object and the current vehicle and the speed of the corresponding road object; and the road object with the shortest collision time with the current vehicle is regarded as the abnormal road object.
For example, the control device may determine the distance between each road object and the current vehicle based on the position of the corresponding road object and the position of the current vehicle. The control device may also determine a relative speed between each road object and the current vehicle based on the speed of the respective road object and the speed of the current vehicle. Then, the ratio between the distance between each road object and the current vehicle and the relative speed is taken as the collision time of the corresponding road object and the current vehicle.
Illustratively, taking any one of the plurality of road objects as an example, assume that the speed of the road object is V1 and the speed of the current vehicle is V2, and that the included angle between the velocity direction of the road object and the velocity direction of the current vehicle is θ. The speed component of the road object along the velocity direction of the current vehicle is then V1·cosθ, so the relative speed ΔV between the road object and the current vehicle can be determined by the following formula (1):
ΔV = V1·cosθ - V2 (1)
After determining the relative speed between the road object and the current vehicle, and assuming that the distance between the road object and the current vehicle is ΔS, the control apparatus may determine the collision time Δt of the road object with the current vehicle by the following formula (2):
Δt = ΔS / ΔV (2)
According to the same method, the control device may determine the collision time between each road object and the current vehicle. After determining these collision times, the control device may compare them to find the shortest collision time; the road object corresponding to the shortest collision time is the road object with the greatest collision risk with the current vehicle, and at this time this road object may be used as the abnormal road object.
Alternatively, in some possible implementations, the control device may also determine a relative distance between each road object and the current vehicle, where the road object with the smallest relative distance is the abnormal road object, which is not limited by the embodiment of the present application.
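As a concrete illustration of formulas (1) and (2), the sketch below computes a collision time for each road object and selects the one with the shortest collision time as the abnormal road object. It is a simplified example: the sign convention (a collision time is only defined when the gap between the object and the current vehicle is shrinking) is an assumption, since the text leaves it implicit.

```python
import math

def time_to_collision(distance, obj_speed, theta, ego_speed):
    """Formula (1): dV = V1*cos(theta) - V2; formula (2): t = dS / dV.

    theta is the angle (radians) between the road object's velocity direction and the
    current vehicle's velocity direction; distance is the gap dS between them.
    Collision is only possible while the gap is closing (dV < 0 for an object ahead),
    so the magnitude of dV is used as the closing speed; otherwise return infinity.
    """
    delta_v = obj_speed * math.cos(theta) - ego_speed   # formula (1)
    if delta_v >= 0:
        return math.inf            # gap is not shrinking: no collision time
    return distance / -delta_v     # formula (2), using the magnitude of the closing speed

def pick_abnormal_object(objects, ego_speed):
    """Return the road object with the shortest collision time (the abnormal road object)."""
    return min(objects, key=lambda o: time_to_collision(o["distance"], o["speed"],
                                                        o["theta"], ego_speed))

candidates = [
    {"id": "car_ahead",  "distance": 30.0, "speed": 5.0,  "theta": 0.0},
    {"id": "oncoming",   "distance": 80.0, "speed": 15.0, "theta": math.pi},
    {"id": "pedestrian", "distance": 25.0, "speed": 1.2,  "theta": math.pi / 2},
]
print(pick_abnormal_object(candidates, ego_speed=12.0)["id"])
```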
Step 203: the human eye position of the driver of the current vehicle is obtained.
In the embodiment of the application, after the control device determines the abnormal road object, the control device may collect the face image of the driver through the in-vehicle camera installed on the front windshield of the current vehicle, and after the face image of the driver is collected, the control device may obtain the eye position of the driver based on the collected face image of the driver. Wherein, the human eye position of the driver is the midpoint position of the connecting line of the two eye positions of the driver. The in-vehicle camera can be a depth camera, a binocular camera or a monocular camera.
The control device may send an acquisition instruction to the in-vehicle camera. When the in-vehicle camera receives the acquisition instruction, it may acquire a face image of the driver and send the acquired face image to the control device. After receiving the face image of the driver, the control device may determine the positions of the driver's two eyes through a face recognition network, and then determine the human eye position of the driver based on the determined positions of the two eyes.
The control device may input the face image of the driver into a face recognition network, and the face recognition network may recognize the face image and output image position coordinates of both eyes of the driver under an image coordinate system of the face image. Thereafter, the control apparatus may determine the monocular image position coordinates of the driver in the image coordinate system of the face image based on the image position coordinates of both eyes of the driver.
For example, assume that the image position coordinates of the driver's two eyes are Pl: (xl, yl) and Pr: (xr, yr), where Pl is the left-eye position and Pr is the right-eye position. Based on this, the control device can determine from the image position coordinates of the driver's eyes that the image position coordinate of the midpoint of the line connecting the driver's eyes in the image coordinate system is ((xl + xr)/2, (yl + yr)/2). This midpoint coordinate is taken as the monocular image position coordinate of the driver in the image coordinate system.
After determining the monocular image position coordinate of the driver under the image coordinate system of the face image, if the face image is a depth image acquired by the depth camera, the control device may determine the monocular position coordinate of the driver under the camera coordinate system based on the monocular image position coordinate of the driver and the depth value corresponding to the monocular image position coordinate, and use the monocular position coordinate under the camera coordinate system as the human eye position coordinate of the driver under the camera coordinate system. The origin of the camera coordinate system is the optical center of the in-vehicle camera; the Z axis coincides with the optical axis, is perpendicular to the imaging plane of the face image, and takes the image capturing direction as its positive direction; the X axis is parallel to the width direction of the face image with the positive direction to the right; and the Y axis is parallel to the height direction of the face image with the positive direction upward. For convenience of explanation, the position of any point in the camera coordinate system is denoted (X, Y, Z), and the human eye position coordinate under the camera coordinate system can be denoted (Xp, Yp, Zp).
Optionally, when the in-vehicle camera is a binocular camera, the collected facial image has two frames, and accordingly, the control device may identify, through the face recognition network, the positions of the eyes based on at least one frame of the two frame facial images, and then determine the coordinates of the single-eye position in the image coordinate system based on the positions of the eyes. And then, calculating depth information based on the two frame face images, further determining monocular position coordinates under a camera coordinate system based on the depth information and the monocular position coordinates under an image coordinate system, and taking the monocular position coordinates under the camera coordinate system as human eye position coordinates of a driver under the camera coordinate system.
Alternatively, when the in-vehicle camera is a monocular camera, after determining the positions of the eyes of the driver in the face image through the face recognition network, the distance between the eyes of the driver from the lens of the camera may be determined based on the pixel distance between the eyes and the preset actual distance between the eyes. Then, monocular position coordinates are calculated based on the position coordinates of the eyes in the face image, monocular position coordinates in a camera coordinate system are calculated by using the monocular position coordinates, the proportional relation between the pixel distance between the eyes and the actual distance, and the distance between the eyes and the lens of the camera, and the monocular position coordinates in the camera coordinate system are used as human eye position coordinates of the driver in the camera coordinate system.
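For the depth-camera case, the eye-position estimation of step 203 can be sketched as follows using a standard pinhole back-projection. The camera intrinsics and pixel values are assumed example values, not parameters given in the embodiment:

```python
def eye_position_camera_frame(left_px, right_px, depth, fx, fy, cx, cy):
    """Return the driver's eye position (midpoint of both eyes) in the camera frame.

    left_px, right_px: pixel coordinates (u, v) of the two eyes in the face image.
    depth: depth value (metres) at the midpoint pixel, from the depth camera.
    fx, fy, cx, cy: pinhole intrinsics of the in-vehicle camera (assumed known).
    """
    # Midpoint of the two eye detections in image coordinates.
    u = (left_px[0] + right_px[0]) / 2.0
    v = (left_px[1] + right_px[1]) / 2.0
    # Back-project the midpoint pixel into the camera coordinate system.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: eyes detected at (620, 340) and (700, 338), depth 0.75 m, assumed intrinsics.
print(eye_position_camera_frame((620, 340), (700, 338), 0.75,
                                fx=900.0, fy=900.0, cx=640.0, cy=360.0))
```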
Step 204: based on the human eye position of the driver and the state information of the plurality of road objects, shielding state information on the line-of-sight path of the driver to the abnormal road object is determined.
As is apparent from the foregoing description, the status information of the plurality of road objects includes the shape size and the position of each of the plurality of road objects. On the basis of this, the control apparatus may determine a plurality of key corner points of the abnormal road object based on the shape size and the position of the abnormal road object. Then, a line-of-sight path of the driver is determined based on the eye position of the driver and the plurality of key corner points of the abnormal road object. After determining the sight line path of the driver, the shielding state information on the sight line path is determined based on the shape size and the position of other road objects than the abnormal road object among the plurality of road objects.
As can be seen from the foregoing description, the shape and size information of each road object obtained by the control device is the length, width and height information of the cuboid corresponding to the corresponding road object. Based on this, the control apparatus may take the position of the abnormal road object as the position of the center point of the rectangular parallelepiped corresponding to the abnormal road object.
If the state information of the abnormal road object is obtained by receiving the state information broadcast by other vehicles or the state information broadcast by the road side device, the position in the state information of the abnormal road object may be a position coordinate in the world coordinate system. In this case, the control apparatus may convert the position coordinates of the center point of the rectangular parallelepiped corresponding to the abnormal road object under the world coordinate system into the reference coordinate system based on the coordinate conversion relationship between the world coordinate system and the reference coordinate system.
If the state information of the abnormal road object is acquired through a radar or an image acquisition device, the position included in the state information of the abnormal road object may be a position coordinate under a target coordinate system, wherein the target coordinate system may be a vehicle coordinate system of a current vehicle. In this case, the control apparatus may convert the position coordinates of the center point of the rectangular parallelepiped corresponding to the abnormal road object under the target coordinate system into the reference coordinate system based on the coordinate conversion relationship between the target coordinate system and the reference coordinate system.
The origin of coordinates of the reference coordinate system may be the center point of the steering wheel of the current vehicle, the Z axis points to the front of the current vehicle, the X axis points to the right of the current vehicle, and the Y axis points to the top of the current vehicle. For convenience of explanation, the position of any point in the reference coordinate system is denoted by (X', Y', Z').
After determining the position coordinates of the center point of the cuboid corresponding to the abnormal road object under the reference coordinate system, the control device may determine the position coordinates of the 8 key corner points of the cuboid under the reference coordinate system based on the position coordinates of the center point of the cuboid corresponding to the abnormal road object under the reference coordinate system and the length, width and height information of the cuboid.
Illustratively, as shown in fig. 3, the position of the center point O of the cuboid corresponding to the abnormal road object in the reference coordinate system is denoted (X0', Y0', Z0'), and the length, width and height of this cuboid are d, e and f, respectively. Based on this, taking any key corner point of the cuboid as an example, for example the key corner point A in fig. 3, the control device may determine its position coordinate (X1', Y1', Z1') in the reference coordinate system by offsetting the center point by half of the corresponding dimension along each axis. Taking the width e along the X' axis, the height f along the Y' axis and the length d along the Z' axis, the calculation may refer to the following formulas (3), (4) and (5):
X1' = X0' + e/2 (3)
Y1' = Y0' + f/2 (4)
Z1' = Z0' + d/2 (5)
where the sign of each half-dimension offset depends on which corner point is taken.
According to the relative position between each key corner point in the plurality of key corner points and the center point of the cuboid, the control device can determine the position coordinate of each key corner point in 8 key corner points of the cuboid corresponding to the abnormal road object under the reference coordinate system.
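A minimal sketch of this corner-point computation is given below. It assumes the cuboid's width e lies along the X' axis, height f along the Y' axis and length d along the Z' axis of the reference coordinate system, matching the convention used for formulas (3) to (5):

```python
from itertools import product

def cuboid_corners(center, d, e, f):
    """Return the 8 key corner points of the bounding cuboid of a road object.

    center: (X0', Y0', Z0') of the cuboid center in the reference coordinate system.
    d, e, f: length (along Z'), width (along X') and height (along Y'), an assumed mapping.
    Each corner is the center offset by +/- half of the corresponding dimension.
    """
    x0, y0, z0 = center
    return [
        (x0 + sx * e / 2.0, y0 + sy * f / 2.0, z0 + sz * d / 2.0)
        for sx, sy, sz in product((-1, 1), repeat=3)
    ]

corners = cuboid_corners((1.5, 0.2, 20.0), d=4.6, e=1.8, f=1.5)
assert len(corners) == 8
```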
After determining the position coordinates of the plurality of key corner points in the reference coordinate system, the control device may further determine the human eye position coordinates of the driver in the reference coordinate system.
Wherein the control device stores in advance the position of the steering wheel of the current vehicle relative to the in-vehicle camera, in which case the control device may determine the coordinate conversion relationship between the camera coordinate system and the reference coordinate system based on the relative position between the steering wheel of the current vehicle and the in-vehicle camera. Then, the control device may convert the human eye position coordinates of the driver in the camera coordinate system into the reference coordinate system according to the coordinate conversion relationship between the camera coordinate system and the reference coordinate system.
It should be noted that, the relative positions between each corner of the display area of the HUD and the steering wheel of the current vehicle are also stored in the control device in advance, based on this, the control device may determine the position coordinates of each corner of the display area of the HUD under the reference coordinate system based on the relative positions between each corner of the display area of the HUD and the steering wheel of the current vehicle.
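The conversion of the eye position from the camera coordinate system into the reference coordinate system is a rigid transform fixed by the stored relative position between the in-vehicle camera and the steering wheel center. The rotation and translation values in the sketch below are placeholders assumed for illustration:

```python
import numpy as np

# Assumed pose of the in-vehicle camera relative to the reference coordinate system
# (origin at the steering wheel center): rotation matrix R and translation vector t.
# The numeric values are placeholders for illustration only.
R_cam_to_ref = np.eye(3)
t_cam_to_ref = np.array([0.0, 0.35, 0.20])

def camera_to_reference(point_cam):
    """Convert a point (Xp, Yp, Zp) in the camera coordinate system
    to (X', Y', Z') in the reference coordinate system."""
    return R_cam_to_ref @ np.asarray(point_cam, dtype=float) + t_cam_to_ref

# Example: the driver's eye position expressed in the camera frame.
eye_ref = camera_to_reference((0.05, -0.10, 0.75))
```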
After determining the eye position of the driver, the plurality of key corner points of the abnormal road object, and the position coordinates of each corner point of the display area of the HUD under the reference coordinate system, the control device may determine the intersection point of the connection line between the eye position of the driver and each key corner point and the display area of the HUD based on the eye position coordinates of the driver under the reference coordinate system, the position coordinates of the plurality of key corner points, and the position coordinates of the corner points of the display area of the HUD, to obtain a plurality of projection points. After obtaining the plurality of projection points, the control device may determine an area of a quadrangle surrounded by any four projection points of the plurality of projection points, and further use the quadrangle with the largest area as a first projection area of the abnormal road object in the display area of the HUD. Then, a line-of-sight path is determined based on the four corner points of the first projection region and the human eye position of the driver.
The control device may, for example, determine the range of position coordinates of the individual corner points within the display area of the HUD in the reference coordinate system, which may be a curved surface or a plane, based on the position coordinates of the corner points of the display area of the HUD in the reference coordinate system. And then, the control equipment determines a connecting line between the position coordinates of the human eyes under the reference coordinate system and the position coordinates of each key corner point, determines whether a point with the position coordinates within the position coordinate range of the display area exists on the connecting line under the reference coordinate system, and takes the point as a projection point of the corresponding key corner point in the display area if the point exists. Then, the control device determines the area of the quadrangle surrounded by every four projection points, and further takes the quadrangle with the largest surrounded area as a projection area of the abnormal road object in the display area, namely a first projection area.
For example, as shown in fig. 4, the eye position of the driver in the reference coordinate system is denoted as point P, and 8 key corner points of the rectangular parallelepiped corresponding to the abnormal road object are denoted as A, B, C, D, E, F, G, H. The control device may determine a line AP, BP, CP, DP, EP, FP, GP, HP between the point P and 8 key corner points of the rectangular parallelepiped corresponding to the abnormal road object. The intersection point of AP and the display area of HUD is denoted as A ', the intersection point of BP and the display area of HUD is denoted as B', the intersection point of CP and the display area of HUD is denoted as C ', the intersection point of DP and the display area of HUD is denoted as D', the intersection point of EP and the display area of HUD is denoted as E ', the intersection point of FP and the display area of HUD is denoted as F', the intersection point of GP and the display area of HUD is denoted as G ', and the intersection point of HP and the display area of HUD is denoted as H'. In this way, the control device can obtain a plurality of projection points a ', B', C ', D', E ', F', G ', H'. Wherein the positions of the plurality of projection points a ', B', C ', D', E ', F', G ', H' on the display area of the HUD are shown in fig. 5. The area of the quadrangle enclosed by A ', B', E ', F' is the largest, and based on this, the control device can take the area enclosed by A ', B', E ', F' as the first projection area of the abnormal road object in the HUD display area.
After obtaining the first projection area of the abnormal road object in the display area of the HUD, the control device may use four connecting lines respectively formed by the human eye position of the driver and four corner points of the first projection area as a sight path of the driver to the abnormal road object.
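The projection-point and first-projection-area computation can be sketched as follows. This is a simplified, numpy-based illustration that treats the display area of the HUD as a plane (the text notes it may also be curved); it is not the patented implementation.

```python
import itertools
import numpy as np

def line_plane_intersection(eye, corner, plane_point, plane_normal):
    """Intersection of the line through the eye and a corner point with the HUD plane."""
    eye, corner = np.asarray(eye, float), np.asarray(corner, float)
    d = corner - eye
    denom = float(np.dot(plane_normal, d))
    if abs(denom) < 1e-9:
        return None                               # line parallel to the display plane
    t = float(np.dot(plane_normal, np.asarray(plane_point, float) - eye)) / denom
    return eye + t * d

def to_plane_2d(points, plane_point, plane_normal):
    """Express points lying on the plane in a 2D (u, v) basis spanning that plane."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    u = np.cross(n, [0.0, 1.0, 0.0])              # assumes the plane is not horizontal
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    origin = np.asarray(plane_point, float)
    return [(float(np.dot(p - origin, u)), float(np.dot(p - origin, v))) for p in points]

def quad_area(pts):
    """Shoelace area of four coplanar 2D points, ordered around their centroid."""
    cx, cy = sum(p[0] for p in pts) / 4.0, sum(p[1] for p in pts) / 4.0
    ordered = sorted(pts, key=lambda p: np.arctan2(p[1] - cy, p[0] - cx))
    area = 0.0
    for i in range(4):
        x1, y1 = ordered[i]
        x2, y2 = ordered[(i + 1) % 4]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def first_projection_region(eye, corners, plane_point, plane_normal):
    """Project the 8 key corner points through the eye onto the HUD plane and return
    the four projection points that enclose the quadrilateral of largest area."""
    pts3d = [line_plane_intersection(eye, c, plane_point, plane_normal) for c in corners]
    pts3d = [p for p in pts3d if p is not None]
    pts2d = to_plane_2d(pts3d, plane_point, plane_normal)
    best = max(itertools.combinations(range(len(pts2d)), 4),
               key=lambda idx: quad_area([pts2d[i] for i in idx]))
    return [pts3d[i] for i in best]
```

The driver's sight path is then given by the four lines from the eye position to the four corner points returned by this sketch.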
After determining the line-of-sight path of the driver to the abnormal road object, the control apparatus may determine the shielding state information on the line-of-sight path based on the shape size and the position of the other road objects than the abnormal road object among the plurality of road objects.
Wherein the control device may detect whether or not there is a target road object located on the line-of-sight path among the other road objects based on the shape size and the position of the other road objects.
For example, taking any one of the other road objects as an example, the control device may determine a distance between the road object and the current vehicle based on a position of the road object, and determine that the road object is not on a driver's line-of-sight path to the abnormal road object if the distance between the road object and the current vehicle is greater than the distance between the abnormal road object and the current vehicle.
If the distance between the road object and the current vehicle is not greater than the distance between the abnormal road object and the current vehicle, the control device may determine, with reference to the method described above, the position coordinates of the center point of the rectangular parallelepiped corresponding to the road object in the reference coordinate system based on the shape size and the position of the road object. The control device may then determine a first connection line between the driver's eye position and this center point, based on the position coordinates of the center point and the eye position coordinates in the reference coordinate system. The control device may further determine the position coordinates of the center point of the first projection area in the reference coordinate system, and then determine a second connection line between the driver's eye position and the center point of the first projection area.
After determining the first connection line and the second connection line, the control device may determine the included angle between them and compare it with a reference angle threshold. If the included angle is greater than the reference angle threshold, the road object is not on the driver's line-of-sight path to the abnormal road object; if the included angle is not greater than the reference angle threshold, the road object is on the line-of-sight path. The reference angle threshold may be preset, for example to 40 degrees or 50 degrees, which is not limited in the embodiments of the present application.
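As a rough illustration of this on-path test, the fragment below combines the distance filter and the angle comparison described above. The 45-degree default, the representation of center points as vectors, and the function signature are assumptions made for the example only.

```python
import numpy as np

def is_on_sight_path(eye, object_center, object_distance,
                     abnormal_distance, region_center, angle_threshold_deg=45.0):
    """Return True if a candidate road object may lie on the driver's
    line-of-sight path to the abnormal road object."""
    # An object farther from the current vehicle than the abnormal object
    # cannot occlude it, so it is rejected immediately.
    if object_distance > abnormal_distance:
        return False
    first_line = object_center - eye     # eye -> center of the candidate object
    second_line = region_center - eye    # eye -> center of the first projection area
    cos_angle = np.dot(first_line, second_line) / (
        np.linalg.norm(first_line) * np.linalg.norm(second_line))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= angle_threshold_deg
```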
According to the method described above, the control device can determine whether each of the other road objects is on the driver's line-of-sight path to the abnormal road object.
If none of the other road objects is on the driver's line-of-sight path to the abnormal road object, it is determined that no target road object exists on the line-of-sight path, that is, the abnormal road object is not occluded. In this case, the control device may generate first shielding state information, which is used to indicate that the abnormal road object is not occluded on the line-of-sight path.
Alternatively, if a target road object exists on the driver's line-of-sight path to the abnormal road object, the abnormal road object is occluded by another road object. In this case, the control device may generate second shielding state information based on the area of the overlapping region between the first projection area of the abnormal road object and the second projection area of the target road object in the display area of the HUD, the second shielding state information including the area of the overlapping region.
For example, the control device may determine the second projection area of the target road object in the display area of the HUD with reference to the foregoing method for determining the first projection area, which is not described in detail again in the embodiments of the present application. After determining the second projection area, the control device may calculate the area of the overlapping region between the first projection area of the abnormal road object and the second projection area of the target road object in the display area of the HUD, and generate second shielding state information including the area of the overlapping region.
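One possible way to obtain the overlapping-region area carried in the second shielding state information is sketched below. The shapely dependency and the representation of each projection area as a list of (u, v) corner coordinates on the HUD plane are assumptions; the application does not prescribe a particular polygon-clipping method.

```python
from shapely.geometry import Polygon

def second_shielding_state(first_area_corners, second_area_corners):
    """Return the second shielding state information as a dict holding the
    area of the overlap between the two projection areas on the HUD plane."""
    first = Polygon(first_area_corners)
    second = Polygon(second_area_corners)
    overlap_area = first.intersection(second).area  # 0.0 when the regions do not overlap
    return {"overlap_area": overlap_area}
```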
Step 205: information of the abnormal road object is displayed based on the shielding state information on the line-of-sight path.
In the embodiment of the present application, after the first shielding state information or the second shielding state information is generated, the control device may control the HUD to display the information of the abnormal road object based on the first shielding state information or the second shielding state information.
When the shielding state information on the line-of-sight path is the second shielding state information, the control device may compare the area of the overlapping region in the second shielding state information with a reference area threshold. If the area of the overlapping region is not less than the reference area threshold, the abnormal road object is occluded; in this case, an image model of the abnormal road object may be displayed in the first projection area of the abnormal road object in the display area of the HUD. The reference area threshold may be preset, for example to 50% of the area of the first projection area of the abnormal road object, which is not limited in the embodiments of the present application.
It should be noted that the shape and size corresponding to the road object may represent the type of the road object, based on which the control device may determine the type of the abnormal road object according to the shape and size of the abnormal road object.
After determining the type of the abnormal road object, the control device may search the pre-stored mapping relationship between road object types and image models for the type identical to that of the abnormal road object, take the corresponding image model as the image model of the abnormal road object, and control the HUD to display this image model in the first projection area.
Fig. 6 shows a schematic diagram of an image model of an abnormal road object displayed in the display area of the HUD. As shown in fig. 6, when the control device controls the HUD to display the image model 01 of the abnormal road object in the first projection area, the size of the image model 01 may be determined according to the size of the first projection area, and the image model 01 may then be rendered so as to fill the first projection area.
Of course, in some possible implementations, the status information of the road object may also include the type of the respective road object, in which case the control device may acquire the corresponding image model directly based on the type of the road object.
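The retrieval of the image model can be illustrated with a small lookup. In the sketch below, the mapping contents, the classify_by_shape helper, and the fallback behaviour are assumptions for the example only; they combine the two cases mentioned above (type carried in the state information, or type inferred from the shape size).

```python
# Hypothetical mapping from road object type to a pre-stored image model.
IMAGE_MODEL_BY_TYPE = {
    "pedestrian": "models/pedestrian.obj",
    "car": "models/car.obj",
    "truck": "models/truck.obj",
}

def image_model_for(road_object_state, classify_by_shape):
    """Prefer a type carried in the state information; otherwise infer the
    type from the object's shape size via the supplied classifier."""
    obj_type = road_object_state.get("type") or classify_by_shape(road_object_state["shape_size"])
    return IMAGE_MODEL_BY_TYPE.get(obj_type)
```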
Alternatively, if the area of the overlapping region included in the second shielding state information is smaller than the reference area threshold, the abnormal road object is regarded as not occluded; in this case, the control device may mark and display the first projection area 02 in the display area of the HUD. Likewise, when the shielding state information is the first shielding state information, the control device may mark and display the first projection area 03 in the display area of the HUD.
For example, as shown in fig. 7 and 8, the control device may mark the first projection area of the abnormal road object in the display area of the HUD and control the HUD to display it. Fig. 7 shows the first projection area displayed when the area of the overlapping region included in the second shielding state information is smaller than the reference area threshold, and fig. 8 shows the first projection area displayed when the shielding state information is the first shielding state information.
Alternatively, as shown in fig. 6 to 8, the HUD may also display a warning icon 04 when displaying the information of the abnormal road object, where the warning icon 04 is used to alert the driver to the information of the abnormal road object displayed in the display area of the HUD.
Optionally, in some possible cases, three different display modes may be adopted for the three cases, namely the first shielding state information, the second shielding state information with an overlapping-region area not less than the reference area threshold, and the second shielding state information with an overlapping-region area smaller than the reference area threshold, so that the driver can determine the occlusion condition from the way the information is displayed.
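A compact way to express this three-way choice is shown below. The enum names and the 50% default ratio mirror the examples in the text; the actual rendering calls and any additional display styles are outside the scope of this sketch, and nothing here fixes the embodiment to a particular software structure.

```python
from enum import Enum, auto

class DisplayMode(Enum):
    IMAGE_MODEL = auto()          # heavily occluded: fill the first projection area with an image model
    MARKED_REGION = auto()        # second shielding state, small overlap: mark the first projection area
    MARKED_REGION_CLEAR = auto()  # first shielding state: mark the first projection area in a third style

def choose_display_mode(shielding_state, first_area, threshold_ratio=0.5):
    """shielding_state is None for the first shielding state information, or a
    dict {'overlap_area': ...} for the second shielding state information."""
    if shielding_state is None:
        return DisplayMode.MARKED_REGION_CLEAR
    if shielding_state["overlap_area"] >= threshold_ratio * first_area:
        return DisplayMode.IMAGE_MODEL
    return DisplayMode.MARKED_REGION
```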
In the embodiment of the application, after screening out an abnormal road object affecting the safe running of the current vehicle from a plurality of road objects and acquiring the eye position of the driver of the current vehicle, the control device determines the shielding state information on the driver's line-of-sight path to the abnormal road object according to the eye position of the driver and the state information of the plurality of road objects. The shielding state information on the sight line path from the driver to the abnormal road object can reflect the condition that the abnormal road object is shielded, and on the basis, the information of the abnormal road object is displayed more accurately based on the shielding state information, the prompting effect is better, and the safety of vehicle driving can be improved.
In addition, in the embodiment of the present application, the state information of the plurality of road objects may be acquired by receiving the state information broadcast by other vehicles within the broadcast signal receiving range of the current vehicle, or by receiving the state information of road objects broadcast by road side devices. In this way, compared with acquiring the state information of road objects only through radar or an image acquisition device, the state information of certain occluded road objects can also be acquired, thereby providing a data basis for subsequently displaying the information of an occluded abnormal road object in the display area of the HUD.
In addition, in the embodiment of the application, according to the difference of the shielding state information, different information can be displayed in the projection area of the abnormal road object in the display area of the HUD, that is, different shielding conditions can be reflected through different information display modes, so that a driver can determine the condition that the abnormal road object is shielded according to the information of the abnormal road object displayed in the display area of the HUD, further control the vehicle, and improve the driving safety of the vehicle.
Next, an information display device of a road object provided in an embodiment of the present application will be described.
Referring to fig. 9, an embodiment of the present application provides an information display apparatus 900 for a road object, where the apparatus 900 includes:
a first acquisition module 901, configured to acquire status information of a plurality of road objects within an information acquisition range of a vehicle;
a first determining module 902 configured to determine an abnormal road object from a plurality of road objects based on state information of the plurality of road objects, the abnormal road object being a road object affecting safe running of the vehicle;
a second acquisition module 903 for acquiring a human eye position of a driver of the vehicle;
a second determining module 904, configured to determine, based on a human eye position of the driver and status information of a plurality of road objects, shielding status information on a line-of-sight path from the driver to the abnormal road object;
And a display module 905 for displaying information of the abnormal road object based on the shielding state information on the sight line path.
Optionally, the status information of the plurality of road objects includes a shape size and a position of each of the plurality of road objects;
the second determining module 904 is mainly configured to:
determining a plurality of key corner points of the abnormal road object based on the shape size and the position of the abnormal road object;
determining a sight line path based on the eye position of the driver and a plurality of key corner points of the abnormal road object;
the occlusion state information on the line-of-sight path is determined based on the shape, size, and position of other road objects among the plurality of road objects than the abnormal road object.
Optionally, the second determining module 904 is mainly configured to:
detecting whether a target road object located on a sight line path exists in other road objects based on the shape size and the position of the other road objects;
if the target road object does not exist, first shielding state information is generated, wherein the first shielding state information is used for indicating that the abnormal road object is not shielded on the sight line path.
Optionally, the second determining module 904 is mainly configured to:
if the target road object is present, second shielding state information is generated based on an area of an overlapping region between a first projection region of the abnormal road object and a second projection region of the target road object in a display region of the head-up display HUD, the second shielding state information including the area of the overlapping region.
Optionally, the display module 905 is mainly used for:
when the shielding state information on the sight line path is the second shielding state information, if the area of the overlapping area is not less than the reference area threshold value, displaying the image model of the abnormal road object in the first projection area.
Optionally, the display module 905 is mainly used for:
and when the shielding state information on the sight line path is the first shielding state information, or when the shielding state information on the sight line path is the second shielding state information and the area of the overlapped area is smaller than the reference area threshold value, marking and displaying the first projection area in the display area of the HUD.
Optionally, the second determining module 904 is mainly configured to:
determining intersection points of a connecting line between the human eye position of the driver and each key corner point and the display area of the HUD based on the human eye position coordinates of the driver, the position coordinates of the plurality of key corner points and the position coordinates of corner points of the display area of the HUD under the reference coordinate system, and obtaining a plurality of projection points;
determining the area of a quadrangle surrounded by any four projection points in the plurality of projection points;
taking the quadrangle with the largest area as a first projection area of the abnormal road object in the display area of the HUD;
and determining a line-of-sight path based on the four corner points of the first projection region and the human eye position of the driver.
Alternatively, the first acquisition module 901 is mainly configured to:
receiving state information broadcast by other vehicles within a broadcast signal receiving range of the vehicle;
acquiring state information of a road object positioned in an information acquisition range through radar and/or image acquisition equipment on a vehicle;
and acquiring the state information of a plurality of road objects based on the received state information broadcast by other vehicles and the acquired state information of the road objects.
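A minimal sketch of combining these two sources is given below. The object-identifier keys and the rule of letting locally sensed measurements take precedence are assumptions; the application only states that the broadcast state information and the collected state information are combined.

```python
def merge_state_information(broadcast_objects, sensed_objects):
    """Both arguments map a road-object identifier to its state information
    (e.g. shape size, position, speed). The merged mapping also covers objects
    that are occluded for on-board sensors but announced over V2X broadcasts."""
    merged = dict(broadcast_objects)
    merged.update(sensed_objects)  # prefer on-board measurements when both sources report an object
    return merged
```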
Optionally, the status information of the plurality of road objects includes a position and a speed of each road object;
the first determining module 902 is mainly used for:
determining a distance between each road object and the vehicle based on the location of each road object;
determining a collision time of each road object with the vehicle based on a distance between the respective road object and the vehicle and a speed of the respective road object;
the road object having the shortest collision time with the vehicle is taken as the abnormal road object.
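The selection of the abnormal road object can be illustrated as follows. The simple distance/speed ratio, the interpretation of "speed" as a closing speed toward the vehicle, and the field names are assumptions based on the description above; how a receding object is handled is not specified there.

```python
import math

def select_abnormal_road_object(road_objects, ego_position):
    """road_objects: iterable of dicts with 'id', 'position' (x, y) and 'speed'
    (assumed closing speed toward the current vehicle, in m/s). Returns the id
    of the road object with the shortest collision time, or None."""
    best_id, best_ttc = None, math.inf
    for obj in road_objects:
        dx = obj["position"][0] - ego_position[0]
        dy = obj["position"][1] - ego_position[1]
        distance = math.hypot(dx, dy)
        if obj["speed"] <= 0:
            continue  # not approaching; no finite collision time
        ttc = distance / obj["speed"]
        if ttc < best_ttc:
            best_id, best_ttc = obj["id"], ttc
    return best_id
```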
In summary, in the embodiment of the present application, after an abnormal road object that affects the safe driving of the vehicle is screened out from a plurality of road objects and the eye position of the driver of the vehicle is obtained, the shielding state information on the line-of-sight path of the driver to the abnormal road object is determined according to the eye position of the driver and the state information of the plurality of road objects. The shielding state information on the sight line path from the driver to the abnormal road object can reflect the condition that the abnormal road object is shielded, and on the basis, the information of the abnormal road object is displayed more accurately based on the shielding state information, the prompting effect is better, and the safety of vehicle driving can be improved.
It should be noted that when the information display device for a road object provided in the foregoing embodiment displays information of an abnormal road object, the division into the above functional modules is only used as an example. In practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the information display device for a road object provided in the above embodiment and the embodiments of the information display method for a road object belong to the same concept; the detailed implementation process is described in the method embodiments and is not repeated here.
Fig. 10 is a block diagram of a control apparatus 1000 according to an exemplary embodiment. The control device in the above-described embodiments can be implemented by the control apparatus 1000.
In general, the control apparatus 1000 includes: a processor 1001 and a memory 1002.
The processor 1001 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 1001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1001 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1001 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. Memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1002 is configured to store at least one instruction for execution by processor 1001 to implement the information display method for road objects provided by the method embodiments herein.
In some embodiments, the control device 1000 may further optionally include: a peripheral interface 1003, and at least one peripheral. The processor 1001, the memory 1002, and the peripheral interface 1003 may be connected by a bus or signal line. The various peripheral devices may be connected to the peripheral device interface 1003 via a bus, signal wire, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, a positioning component 1005, and a power supply 1006.
The peripheral interface 1003 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1001 and the memory 1002. In some embodiments, the processor 1001, the memory 1002 and the peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002 and the peripheral interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1004 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1004 may communicate with other control devices via at least one wireless communication protocol, including but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The positioning component 1005 is used to determine the current geographic location of the control device 1000 to enable navigation or LBS (Location Based Service). The positioning component 1005 may be a positioning component based on the GPS (Global Positioning System), the BeiDou system or the Galileo system.
The power supply 1006 is used to supply power to the various components in the control device 1000. The power supply 1006 may be an alternating current or direct current supply, or a disposable or rechargeable battery. When the power supply 1006 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery, i.e. a battery charged through a wired line, or a wireless rechargeable battery, i.e. a battery charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the control device 1000 further includes one or more sensors.
That is, the embodiments of the present application provide a control apparatus including a processor and a memory for storing processor-executable instructions, wherein the processor is configured to perform the information display method for a road object shown in fig. 2. In addition, the embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the information display method for a road object shown in fig. 2.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 does not limit the control device 1000, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The embodiments of the present application further provide a computer program product containing instructions which, when run on a computer, cause the computer to execute the information display method for a road object provided by the embodiment shown in fig. 2.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data) and signals related to the embodiments of the present application are all authorized by the user or are fully authorized by the parties, and the collection, use and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions.
The foregoing description is not intended to limit the embodiments of the present application, and any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the embodiments of the present application are intended to be included within the scope of the embodiments of the present application.

Claims (7)

1. An information display method of a road object, characterized by being applied to a vehicle, the method comprising:
acquiring state information of a plurality of road objects within an information acquisition range of the vehicle;
determining an abnormal road object from the plurality of road objects based on the state information of the plurality of road objects, the abnormal road object being a road object affecting safe running of the vehicle; the state information of the plurality of road objects includes a shape size and a position of each of the plurality of road objects;
Acquiring a human eye position of a driver of the vehicle;
determining a plurality of key corner points of the abnormal road object based on the shape size and the position of the abnormal road object;
determining intersection points of connecting lines between the human eye position of the driver and each key corner point and the display area of the HUD based on the human eye position coordinates of the driver, the position coordinates of the plurality of key corner points and the position coordinates of corner points of the display area of the HUD under a reference coordinate system to obtain a plurality of projection points;
determining the area of a quadrangle surrounded by any four projection points in the plurality of projection points;
taking the quadrangle with the largest area as a first projection area of the abnormal road object in the display area of the HUD;
determining a sight line path based on the four corner points of the first projection area and the human eye position of the driver;
detecting whether a target road object located on the sight line path exists in other road objects based on the shape size and the position of the other road objects;
if the target road object does not exist, generating first shielding state information, wherein the first shielding state information is used for indicating that the abnormal road object is not shielded on the sight line path;
And displaying the information of the abnormal road object based on the shielding state information on the sight line path.
2. The method according to claim 1, wherein the method further comprises:
if the target road object is present, second shielding state information is generated based on an area of an overlapping region between a first projection region of the abnormal road object and a second projection region of the target road object in a display region of a head-up display HUD, the second shielding state information including the area of the overlapping region.
3. The method of claim 2, wherein the displaying information of the abnormal road object based on the occlusion status information on the line-of-sight path comprises:
and when the shielding state information on the sight line path is the second shielding state information, if the area of the overlapping region is not smaller than a reference area threshold value, displaying the image model of the abnormal road object in the first projection area.
4. A method according to claim 3, characterized in that the method further comprises:
and when the shielding state information on the sight line path is the first shielding state information, or when the shielding state information on the sight line path is the second shielding state information and the area of the overlapping region is smaller than the reference area threshold value, marking and displaying the first projection area in the display area of the HUD.
5. The method according to any one of claims 1 to 4, wherein acquiring status information of a plurality of road objects within an information acquisition range of the vehicle includes:
receiving state information broadcast by other vehicles in a broadcast signal receiving range of the vehicle;
acquiring state information of a road object positioned in an information acquisition range through radar and/or image acquisition equipment on the vehicle;
and acquiring the state information of the plurality of road objects based on the received state information broadcast by other vehicles and the acquired state information of the road objects.
6. The method of any of claims 1-4, wherein the status information of the plurality of road objects includes a position and a speed of each road object;
the determining an abnormal road object from the plurality of road objects based on the state information of the plurality of road objects includes:
determining a distance between each road object and the vehicle based on the location of each road object;
determining a collision time of each road object with the vehicle based on a distance between the respective road object and the vehicle and a speed of the respective road object;
and taking the road object with the shortest collision time with the vehicle as the abnormal road object.
7. A vehicle, characterized in that the vehicle comprises a control device and a HUD;
the control device is used for: acquiring state information of a plurality of road objects within an information acquisition range of the vehicle; determining an abnormal road object from the plurality of road objects based on the state information of the plurality of road objects, the abnormal road object being a road object affecting safe running of the vehicle; the state information of the plurality of road objects includes a shape size and a position of each of the plurality of road objects; acquiring a human eye position of a driver of the vehicle; determining a plurality of key corner points of the abnormal road object based on the shape size and the position of the abnormal road object; determining intersection points of connecting lines between the human eye position of the driver and each key corner point and the display area of the HUD based on the human eye position coordinates of the driver, the position coordinates of the plurality of key corner points and the position coordinates of corner points of the display area of the HUD under a reference coordinate system to obtain a plurality of projection points; determining the area of a quadrangle surrounded by any four projection points in the plurality of projection points; taking the quadrangle with the largest area as a first projection area of the abnormal road object in the display area of the HUD; determining a sight line path based on the four corner points of the first projection area and the human eye position of the driver; detecting whether a target road object located on the sight line path exists in other road objects based on the shape size and the position of the other road objects; if the target road object does not exist, generating first shielding state information, wherein the first shielding state information is used for indicating that the abnormal road object is not shielded on the sight line path; transmitting information of the abnormal road object to the HUD based on the shielding state information on the sight line path;
The HUD is used for receiving and displaying information of the abnormal road object.
CN202210524784.2A 2022-05-13 2022-05-13 Information display method of road object and vehicle Active CN114999225B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210524784.2A CN114999225B (en) 2022-05-13 2022-05-13 Information display method of road object and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210524784.2A CN114999225B (en) 2022-05-13 2022-05-13 Information display method of road object and vehicle

Publications (2)

Publication Number Publication Date
CN114999225A CN114999225A (en) 2022-09-02
CN114999225B true CN114999225B (en) 2024-03-08

Family

ID=83027102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210524784.2A Active CN114999225B (en) 2022-05-13 2022-05-13 Information display method of road object and vehicle

Country Status (1)

Country Link
CN (1) CN114999225B (en)

Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007249364A (en) * 2006-03-14 2007-09-27 Denso Corp Safe driving support system and device
DE102007024395A1 (en) * 2007-05-25 2008-11-27 Robert Bosch Gmbh Presentation method for supporting a vehicle driver
JP2014150304A (en) * 2013-01-31 2014-08-21 Nippon Seiki Co Ltd Display device and display method therefor
CN104260669A (en) * 2014-09-17 2015-01-07 北京理工大学 Intelligent vehicle head-up display
CN105216715A (en) * 2015-10-13 2016-01-06 湖南七迪视觉科技有限公司 A kind of motorist vision assists enhancing system
CN105378815A (en) * 2013-06-10 2016-03-02 罗伯特·博世有限公司 Method and device for signalling traffic object that is at least partially visually concealed to driver of vehicle
JP2016052837A (en) * 2014-09-04 2016-04-14 日本精機株式会社 Head-up display device
WO2016056199A1 (en) * 2014-10-07 2016-04-14 株式会社デンソー Head-up display device, and display method for head-up display
CN106952489A (en) * 2015-11-11 2017-07-14 丰田自动车株式会社 Drive assistance device
CN107139930A (en) * 2017-04-27 2017-09-08 广州方铭交互信息科技有限公司 A kind of automobile lane-change DAS (Driver Assistant System) and method based on HUD new line Display Techniques
CN107272194A (en) * 2017-07-12 2017-10-20 湖南海翼电子商务股份有限公司 Head-up display device and its method
CN107428299A (en) * 2015-04-03 2017-12-01 株式会社电装 Information presentation device
CN108322719A (en) * 2018-02-12 2018-07-24 京东方科技集团股份有限公司 Head-up-display system and new line display methods, mobile devices
CN108345112A (en) * 2018-03-01 2018-07-31 苏州途驰安电子科技有限公司 A kind of vehicle-mounted head-up display
JP2018156172A (en) * 2017-03-15 2018-10-04 株式会社Subaru Display system in vehicle and method for controlling display system in vehicle
TWM569292U (en) * 2018-01-11 2018-11-01 英屬開曼群島商麥迪創科技股份有限公司 Warning system adapted to a vehicle
JP2019010919A (en) * 2017-06-29 2019-01-24 アイシン・エィ・ダブリュ株式会社 Travel support device and computer program
TWM574242U (en) * 2018-11-02 2019-02-11 群億電子科技股份有限公司 Head-up display with light-shielding function
CN109507799A (en) * 2017-09-15 2019-03-22 比亚迪股份有限公司 Vehicle-mounted HUD display methods and system
JP2019151205A (en) * 2018-03-02 2019-09-12 株式会社豊田中央研究所 On-vehicle display device, method for controlling the same, and computer program
CN110288163A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Method, apparatus, equipment and the storage medium of information processing
CN111267870A (en) * 2020-01-17 2020-06-12 北京梧桐车联科技有限责任公司 Information display method and device and computer storage medium
CN111703371A (en) * 2020-06-16 2020-09-25 北京百度网讯科技有限公司 Traffic information display method and device, electronic equipment and storage medium
JP2021009133A (en) * 2019-07-02 2021-01-28 株式会社デンソー Display control device and display control program
CN113597617A (en) * 2021-06-22 2021-11-02 华为技术有限公司 Display method, display device, display equipment and vehicle
KR20210141829A (en) * 2020-05-13 2021-11-23 현대자동차주식회사 System and method for detecting vehicle wheel alignment state
CN113859122A (en) * 2021-09-30 2021-12-31 重庆长安新能源汽车科技有限公司 Height self-adaptive adjustment AR-HUD display method and system and vehicle
CN113888900A (en) * 2021-09-10 2022-01-04 海信集团控股股份有限公司 Vehicle early warning method and device
CN114030475A (en) * 2021-12-22 2022-02-11 清华大学苏州汽车研究院(吴江) Vehicle driving assisting method and device, vehicle and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5870993B2 (en) * 2013-12-27 2016-03-01 トヨタ自動車株式会社 Vehicle information display device and vehicle information display method


Also Published As

Publication number Publication date
CN114999225A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN108204822B (en) ADAS-based vehicle AR navigation system and method
KR20210061319A (en) Electronic apparatus and control method thereof
CN110979318B (en) Lane information acquisition method and device, automatic driving vehicle and storage medium
CN110979332B (en) Control method and device of intelligent automobile and storage medium
CN109532845B (en) Control method and device of intelligent automobile and storage medium
CN109492566B (en) Lane position information acquisition method, device and storage medium
CN109151204B (en) Navigation method and device based on mobile terminal and mobile terminal
JP2014181927A (en) Information provision device, and information provision program
CN210139859U (en) Automobile collision early warning system, navigation and automobile
CN109581358B (en) Obstacle recognition method, obstacle recognition device and storage medium
CN110956847B (en) Parking space identification method and device and storage medium
US10803332B2 (en) Traffic sign detection method, apparatus, system and medium
JP2019008709A (en) Vehicle, information processing system, information processing device, and data structure
CN111273765A (en) Display control device for vehicle, display control method for vehicle, and storage medium
KR20130053137A (en) Mobile terminal and menthod for controlling of the same
US20200318989A1 (en) Route guidance apparatus and method
KR20160065724A (en) Electronic apparatus, control method of electronic apparatus, computer program and computer readable recording medium
CN111516690B (en) Control method and device of intelligent automobile and storage medium
CN115056649A (en) Augmented reality head-up display system, implementation method, equipment and storage medium
JP2019109707A (en) Display control device, display control method and vehicle
JP2005005978A (en) Surrounding condition recognition system
US11189162B2 (en) Information processing system, program, and information processing method
CN114332821A (en) Decision information acquisition method, device, terminal and storage medium
CN110962596B (en) Vehicle speed control method and device of automobile and storage medium
CN110329247B (en) Parking prompting method and device for automobile and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant