CN116620168A - Obstacle early warning method and device, electronic equipment and storage medium - Google Patents

Obstacle early warning method and device, electronic equipment and storage medium

Info

Publication number
CN116620168A
Authority
CN
China
Prior art keywords
obstacle
display unit
determining
driving
view
Prior art date
Legal status
Granted
Application number
CN202310596908.2A
Other languages
Chinese (zh)
Other versions
CN116620168B (en)
Inventor
张涛
李志勇
孙孝文
Current Assignee
Jiangsu Zejing Automobile Electronic Co., Ltd.
Original Assignee
Jiangsu Zejing Automobile Electronic Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Jiangsu Zejing Automobile Electronic Co., Ltd.
Priority to CN202310596908.2A
Publication of CN116620168A
Application granted
Publication of CN116620168B
Legal status: Active

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/24 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view in front of the vehicle
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q9/00 - Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • B60Q9/008 - Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for anti-collision purposes
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/25 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the sides of the vehicle
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01V - GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V9/00 - Prospecting or detecting by methods not provided for in groups G01V1/00 - G01V8/00
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B60R2300/202 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used displaying a blind spot scene on the vehicle part responsible for the blind spot
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B60R2300/205 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used using a head-up display
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8093 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Geophysics (AREA)
  • Instrument Panels (AREA)

Abstract

The application discloses an obstacle early warning method and device, an electronic device, and a storage medium, relating to the technical field of head-up display. The method comprises the following steps: receiving detection data of an obstacle sent by a vehicle, the detection data comprising at least obstacle position data and obstacle attribute information; determining a target display unit from a plurality of display units based on the obstacle position data, the target display unit being used to display early warning information corresponding to the obstacle; and determining the early warning information of the obstacle based on the obstacle attribute information and displaying it in the target display unit. The technical solution provided by the application solves the prior-art problems that side warnings are poorly integrated into the display and flicker frequently: by displaying side warnings on the display units corresponding to the side A-pillars, interference with the driver is reduced and the user's driving experience is improved.

Description

Obstacle early warning method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of head-up display technologies, and in particular, to a method and apparatus for early warning of an obstacle, an electronic device, and a storage medium.
Background
In recent years, augmented reality head-up display (Augmented Reality Head-Up Display, AR-HUD) technology has been increasingly applied to automobiles. The main advantage of an AR-HUD is that real road condition information in front of the driver (such as roads, obstacles, cruise targets, and road signs) is annotated with AR marks and displayed on a display plane, so that the driver can intuitively judge real-time road conditions and perform driving operations more quickly.
However, the display coverage of an AR-HUD spans only the area in front of the vehicle; road conditions on the left and right sides fall outside the display coverage. At present, side warnings for left and right road conditions can only be handled by drawing guide marks, which causes two problems: 1. if the left and right road conditions are placed on the front HUD display interface, frequent traffic approaching from the rear sides makes the side warnings flicker frequently, disturbing the driver; 2. front-side warnings within the vehicle's A-pillar blind zone cannot be annotated on the front HUD display interface, so the driver cannot understand the specific position of the obstacle. Therefore, how to display the road conditions on the left and right sides of the vehicle in the AR-HUD has become an urgent problem to be solved.
Disclosure of Invention
The application provides an obstacle early warning method and device, an electronic device, and a storage medium, which solve the prior-art problems that side warnings are poorly integrated into the display and flicker frequently: by displaying side warnings on the display units corresponding to the side A-pillars, interference with the driver is reduced and the user's driving experience is improved.
In a first aspect, the present application provides an obstacle early warning method, applied to a head-up display configured on a vehicle, where the head-up display includes a plurality of display units divided according to a view direction of a driving eyepoint, and each display unit corresponds to a view direction; the method comprises the following steps:
receiving detection data of an obstacle sent by the vehicle, wherein the detection data at least comprises obstacle position data and obstacle attribute information;
determining a target display unit from the plurality of display units based on the obstacle position data, wherein the target display unit is used for displaying early warning information corresponding to the obstacle;
and determining early warning information of the obstacle based on the obstacle attribute information, and displaying the early warning information in the target display unit.
The embodiment of the application provides an obstacle early warning method in which, on the basis of the existing AR-HUD, display units are added on the left and right A-pillars to enhance the display capability of the AR-HUD, and the projection coverage areas of the front, left, and right display units are determined in advance. The HUD receives the obstacle position data and obstacle attribute information detected by the ADAS, determines the target display unit for displaying the obstacle according to the position data, determines the early warning elements for displaying the obstacle according to the attribute information, and finally displays the early warning elements in the target display unit. The application thus solves the prior-art problems that side warnings are poorly integrated into the display and flicker frequently; displaying side warnings on the display units corresponding to the side A-pillars reduces interference with the driver and improves the user's driving experience.
Further, the view orientations include a front view, a left view, and a right view; the determining a target display unit from the plurality of display units based on the obstacle position data includes: acquiring a front view angle range corresponding to the front view, a left view angle range corresponding to the left view and a right view angle range corresponding to the right view; determining a target field of view range corresponding to the obstacle from the front field of view range, the left field of view range, and the right field of view range based on the obstacle position data; and determining the corresponding target display unit based on the target view angle range.
Further, the obstacle position data includes a longitudinal distance, which is a vertical distance between the obstacle and the driving eye point in a direction directly in front of the vehicle head, and a horizontal distance, which is a vertical distance between the obstacle and the driving eye point in a direction perpendicular to the direction directly in front of the vehicle head; the determining a target view angle range corresponding to the obstacle from the front view angle range, the left view angle range, and the right view angle range based on the obstacle position data includes: determining a first horizontal distance range based on the longitudinal distance and the front view angle range, a second horizontal distance range based on the longitudinal distance and the left view angle range, and a third horizontal distance range based on the longitudinal distance and the right view angle range; determining a target horizontal distance range to which the horizontal distance belongs from the first horizontal distance range, the second horizontal distance range and the third horizontal distance range; and determining a corresponding target field angle range based on the target horizontal distance range.
Further, a left view angle range corresponding to the left view field is determined by: determining a visual field far-end point and a visual field near-end point in a display unit corresponding to the left visual field, and determining a first distance between the visual field far-end point and the visual field near-end point; determining a second distance between the driving eyepoint and the far end point of the field of view, and determining a third distance between the driving eyepoint and the near end point of the field of view; and carrying out numerical calculation based on the first distance, the second distance and the third distance to obtain the left side view angle range.
Further, the driving eyepoint includes a first driving eyepoint for viewing the front view, a second driving eyepoint for viewing the left view, and a third driving eyepoint for viewing the right view; determining the position data of the second driving eyepoint by: determining a first longitudinal translation distance and a first horizontal translation distance corresponding to the left rotation of the first driving eyepoint by a preset angle; determining position data of the second driving eyepoint based on the position data of the first driving eyepoint, the first longitudinal translation distance, and the first horizontal translation distance; determining the position data of the third driving eyepoint by: determining a second longitudinal translation distance and a second horizontal translation distance corresponding to the right rotation of the first driving eyepoint by the preset angle; position data of the third driving eyepoint is determined based on the position data of the first driving eyepoint, the second longitudinal translation distance, and the second horizontal translation distance.
Further, before the determining the target display unit from the plurality of display units based on the obstacle position data, the method further includes: converting the obstacle position data into coordinate data in a vehicle body coordinate system with the first driving eyepoint as an origin; correspondingly, the determining the target display unit from the plurality of display units based on the obstacle position data includes: and determining a target display unit from the plurality of display units based on the coordinate data.
Further, the plurality of display units include a front display unit corresponding to the front view, a left display unit corresponding to the left view, and a right display unit corresponding to the right view; the front display unit is arranged on a front windshield of the vehicle, the left display unit is arranged on a left A column of the vehicle, and the right display unit is arranged on a right A column of the vehicle.
In a second aspect, the present application provides an obstacle early warning device, configured on a head-up display on a vehicle, the head-up display including a plurality of display units divided according to a view direction of a driving eyepoint, each display unit corresponding to a view direction; the device comprises:
the detection data receiving module is used for receiving detection data of the obstacle sent by the vehicle, and the detection data at least comprises obstacle position data and obstacle attribute information;
the display unit determining module is used for determining a target display unit from the plurality of display units based on the obstacle position data, and the target display unit is used for displaying early warning information corresponding to the obstacle;
and the early warning information display module is used for determining early warning information of the obstacle based on the attribute information of the obstacle and displaying the early warning information in the target display unit.
In a third aspect, the present application provides an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the obstacle warning method according to any embodiment of the application.
In a fourth aspect, the present application provides a computer readable storage medium storing computer instructions for causing a processor to implement the obstacle early warning method according to any embodiment of the present application when executed.
It should be noted that the above-mentioned computer instructions may be stored in whole or in part on a computer-readable storage medium. The computer-readable storage medium may be packaged together with the processor of the obstacle early warning device, or may be packaged separately from it; this is not limited in the present application.
For the description of the second, third, and fourth aspects of the present application, reference may be made to the detailed description of the first aspect; moreover, for the advantages of the second, third, and fourth aspects, reference may be made to the analysis of the advantages of the first aspect, which is not repeated here.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
It can be understood that, before the technical solutions disclosed in the embodiments of the present application are used, the user should be informed of the type, scope of use, and usage scenarios of any personal information involved, and the user's authorization should be obtained in an appropriate manner in accordance with relevant laws and regulations.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a first schematic flow chart of an obstacle early warning method according to an embodiment of the present application;
Fig. 2A is a schematic network topology diagram of a HUD according to an embodiment of the present application;
Fig. 2B is a schematic diagram of the installation structure of a plurality of display units according to an embodiment of the present application;
Fig. 2C is a schematic top view of the projection coverage corresponding to a plurality of display units according to an embodiment of the present application;
Fig. 3 is a second schematic flow chart of an obstacle early warning method according to an embodiment of the present application;
Fig. 4A is a schematic top view of the projection coverage corresponding to a left display unit according to an embodiment of the present application;
Fig. 4B is a schematic top view of the projection coverage corresponding to a front display unit according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an obstacle early warning device according to an embodiment of the present application;
Fig. 6 is a block diagram of an electronic device for implementing an obstacle early warning method according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," "target," and "original," etc. in the description and claims of the present application and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be capable of executing sequences other than those illustrated or otherwise described. Furthermore, the terms "comprises," "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic flow chart of an obstacle early warning method according to an embodiment of the present application, where the embodiment is applicable to AR display of obstacles in front of a vehicle and on left and right sides on a head-up display. The obstacle early warning method provided by the embodiment of the application can be implemented by the obstacle early warning device provided by the embodiment of the application, and the device can be implemented in a software and/or hardware mode and is integrated in an electronic device for executing the method. Preferably, the electronic device in the embodiment of the present application may be a head-up display HUD configured on a vehicle.
Referring to fig. 1, the method of the present embodiment includes, but is not limited to, the following steps:
s110, receiving detection data of the obstacle sent by the vehicle.
The vehicle of the present embodiment further includes a driving assistance device, which may be equipped with an advanced driving assistance system (Advanced Driving Assistance System, ADAS) or another driving assistance system. An obstacle is an object in front of, or to the left or right of, the driven vehicle that may obstruct its travel, including vehicles, roadblocks, pedestrians, and the like. The detection data are data relating the current vehicle to the obstacle; they comprise at least obstacle position data and obstacle attribute information, and may further include travel information of the obstacle relative to the vehicle (such as a speed difference).
In one embodiment, the driving assistance device detects in real time, through its data acquisition equipment, the attribute information of obstacles in front of and on the left and right sides of the vehicle (such as obstacle type and outline dimensions) and the position information of the obstacles relative to the vehicle. The driving assistance device determines whether a collision between the vehicle and an obstacle is possible based on the attribute information and the travel information. If a collision is possible, the driving assistance device assembles the detection data of the obstacle (such as the obstacle position data and obstacle attribute information) and sends collision warning information to the HUD through the controller area network (Controller Area Network, CAN) bus.
In another embodiment, after acquiring the attribute information of obstacles in front of and on the left and right sides of the vehicle and their position information relative to the vehicle, the driving assistance device transmits the obstacle position data and obstacle attribute information to the HUD through the CAN bus, so that the HUD performs AR annotation of the obstacles.
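The patent does not fix a concrete message layout for the detection data carried on the CAN bus; purely as an illustration, the following minimal sketch shows how a HUD-side receiver might represent the fields named above (all type and field names are assumptions, not part of the disclosure):

```python
from dataclasses import dataclass
from enum import Enum

class ObstacleType(Enum):
    VEHICLE = 1
    ROADBLOCK = 2
    PEDESTRIAN = 3

@dataclass
class ObstacleDetection:
    """Detection data sent by the driving assistance device to the HUD."""
    longitudinal_m: float    # distance along the straight-ahead axis of the vehicle head
    horizontal_m: float      # lateral distance, perpendicular to the straight-ahead axis
    obstacle_type: ObstacleType
    width_m: float           # outline width (obstacle attribute information)
    height_m: float          # outline height (obstacle attribute information)
    speed_delta_mps: float   # travel info: speed of the obstacle relative to the vehicle
```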
S120, determining a target display unit from a plurality of display units based on the obstacle position data.
The HUD comprises a plurality of display units which are obtained by dividing according to the visual field direction of the driving eyepoint, and each display unit corresponds to one visual field direction. The target display unit is used for displaying the early warning information corresponding to the obstacle.
The view orientation in the embodiments of the present application may include a front view, a left view, and a right view. The plurality of display units may include a front display unit corresponding to a front view, a left display unit corresponding to a left view, and a right display unit corresponding to a right view; as shown in fig. 2A, which is a network topology schematic diagram of the HUD, the HUD includes a left display unit, a front display unit, and a right display unit, and the ADAS module is connected with the HUD through a CAN bus.
The front display unit is disposed on the front windshield of the vehicle, the left display unit on the left A-pillar, and the right display unit on the right A-pillar. The A-pillar is the structural member between the front windshield and the front door of an automobile. Fig. 2B shows a schematic installation diagram of the display units, in which reference numeral 1 is the left A-pillar of the vehicle, 2 the right A-pillar, 3 the front display unit, 4 the front windshield, 5 the left display unit, and 6 the right display unit. Optionally, the display medium of units 5 and 6 may be, without limitation, a flexible screen attached to a transparent or opaque A-pillar, or a projection; unit 3 is used for the AR-HUD virtual image display, with a typical (but not limiting) specification of a projection distance of about 8 meters and a size of 50 inches.
The driving eyepoint includes a first driving eyepoint for viewing a front view, a second driving eyepoint for viewing a left view, and a third driving eyepoint for viewing a right view. As shown in fig. 2C, a top view of the projection coverage corresponding to the plurality of display units is shown, where reference numeral 7 is a left a-pillar projection coverage area, reference numeral 8 is an AR virtual image plane of the front display unit, reference numeral 9 is a projection coverage area of the front display unit, reference numeral 10 is a right a-pillar projection coverage area, reference numeral 11 is a driving eye point (i.e., a second driving eye point) of the left a-pillar, reference numeral 12 is a driving eye point (i.e., a first driving eye point) of the front display unit, reference numeral 13 is a driving eye point (i.e., a third driving eye point) of the right a-pillar, and reference numeral 14 is an ADAS module. As can be seen from fig. 2C, the projected coverage areas on the left and right sides (i.e., reference numerals 7 and 10) are increased compared to the existing AR-HUD coverage area (i.e., reference numeral 9).
Regarding the driving eyepoints 11, 12, and 13: the second driving eyepoint 11 of the left A-pillar and the third driving eyepoint 13 of the right A-pillar are each rotated by a specific angle relative to the first driving eyepoint 12 of the front display unit, the rotation angle matching a comfortable observation position for the human eye. Optionally, the first driving eyepoint is designed mainly on the basis of HUD optical theory; it is essentially in line with the center of the steering wheel, at a height of about 1.4 m.
The second driving eyepoint 11 and the first driving eyepoint 12 may differ spatially only at the centimeter level and may be treated as the same eyepoint. If higher accuracy is sought, a positional offset compensation between the second driving eyepoint 11 and the first driving eyepoint 12 can be added in the subsequent calculations involving the second driving eyepoint 11.
Specifically, the position data of the second driving eyepoint is determined as follows: when the driver's eyepoint, at the first driving eyepoint position (denoted point O), observes the left blind zone, the eyepoint shifts slightly toward the left A-pillar; the first longitudinal translation distance (denoted -a) and the first horizontal translation distance (denoted b) corresponding to rotating the first driving eyepoint leftward by a preset angle are determined, and the position data of the second driving eyepoint is obtained from the position data of the first driving eyepoint (assumed to be (0, 0)) and the two translation distances, that is, (-a, b). The position data of the third driving eyepoint is determined analogously: when the driver's eyepoint, at the first driving eyepoint (point O), observes the right blind zone, the eyepoint shifts slightly toward the right A-pillar; the second longitudinal translation distance (denoted -a) and the second horizontal translation distance (denoted -b) corresponding to rotating the first driving eyepoint rightward by the preset angle are determined, giving the position data of the third driving eyepoint, that is, (-a, -b).
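To make the sign conventions concrete, here is a minimal sketch of the three eyepoints in the body frame, assuming the lateral axis is positive toward the left and treating the translation distances a and b as measured calibration constants (the numeric values below are illustrative only, not from the patent):

```python
# Body-frame coordinates with the first driving eyepoint as origin:
# x runs along the straight-ahead axis of the vehicle head, y is lateral
# (positive toward the left, matching the (-a, b) / (-a, -b) convention above).
FIRST_EYEPOINT = (0.0, 0.0)

a = 0.05  # longitudinal translation distance in meters (assumed value)
b = 0.08  # horizontal translation distance in meters (assumed value)

def second_eyepoint(origin=FIRST_EYEPOINT):
    """Eyepoint for viewing the left field: shifted back by a, left by b."""
    return (origin[0] - a, origin[1] + b)

def third_eyepoint(origin=FIRST_EYEPOINT):
    """Eyepoint for viewing the right field: shifted back by a, right by b."""
    return (origin[0] - a, origin[1] - b)
```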
In the embodiment of the application, the angle between the reference line and the line connecting the obstacle to the first driving eyepoint is determined from the obstacle position data, the reference line being the extension of the first driving eyepoint in the straight-ahead direction of the vehicle head. As can be seen from fig. 2C, each display unit corresponds to a projection coverage area spanning a certain angle; the display unit whose projection coverage area contains the obstacle is determined from the magnitude of this angle, giving the target display unit.
Preferably, the application takes the position of the first driving eyepoint as the reference point for obstacle early warning. Before the target display unit is determined from the plurality of display units based on the obstacle position data, the method further includes: converting the obstacle position data into coordinate data in a vehicle body coordinate system with the first driving eyepoint as the origin. Accordingly, determining the target display unit from the plurality of display units based on the obstacle position data in step S120 includes: determining the target display unit from the plurality of display units based on the coordinate data.
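As a minimal sketch of this conversion, assuming the ADAS reports obstacle positions in its own frame with axes parallel to the body frame, and that the offset from the ADAS origin to the first driving eyepoint is a measured calibration constant (the values below are placeholders):

```python
# Measured offset from the ADAS origin to the first driving eyepoint,
# expressed in the ADAS frame (placeholder calibration values).
ADAS_TO_EYEPOINT = (1.2, 0.4)  # meters (assumed)

def to_body_frame(obstacle_adas_xy, offset=ADAS_TO_EYEPOINT):
    """Convert an obstacle position from the ADAS frame to the vehicle body
    coordinate system whose origin is the first driving eyepoint."""
    ox, oy = offset
    x, y = obstacle_adas_xy
    return (x - ox, y - oy)
```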
S130, determining early warning information of the obstacle based on the attribute information of the obstacle, and displaying the early warning information in the target display unit.
The obstacle attribute information may include the obstacle type (e.g., vehicle, roadblock, pedestrian) and the outline dimensions of the obstacle (e.g., width, height), and the early warning information may include at least one of an early warning identification pattern and a marking line. The display unit is part of an augmented reality head-up display: through optical projection, the front display unit projects important driving information onto the front windshield, the left display unit onto the left A-pillar, and the right display unit onto the right A-pillar, so that the driver can see the information without lowering or turning the head.
In the embodiment of the application, the HUD determines the identification pattern and/or marking line used to warn of the obstacle according to the obstacle type and dimensions, and then controls the target display unit to display them.
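A sketch of this selection step; the icon names and the sizing rule are purely illustrative assumptions, since the patent does not prescribe concrete warning graphics:

```python
def warning_elements(obstacle_type: str, width_m: float, height_m: float):
    """Pick an identification pattern and a marking-box size for an obstacle.
    Icon names and the box rule are illustrative assumptions."""
    icons = {
        "vehicle": "icon_vehicle",
        "roadblock": "icon_roadblock",
        "pedestrian": "icon_pedestrian",
    }
    icon = icons.get(obstacle_type, "icon_generic")
    # Scale the AR marking box with the obstacle's outline dimensions.
    box_size = (width_m, height_m)
    return icon, box_size
```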
According to the technical solution provided by this embodiment, the detection data of the obstacle sent by the vehicle are received, the target display unit is determined from the plurality of display units based on the obstacle position data, the early warning information of the obstacle is determined based on the obstacle attribute information, and the early warning information is displayed in the target display unit. On the basis of the existing AR-HUD, the application adds display units on the left and right A-pillars, enhancing the display capability of the AR-HUD, and determines in advance the projection coverage areas of the front, left, and right display units. The HUD receives the obstacle position data and obstacle attribute information detected by the ADAS, determines the target display unit for the warning display according to the position data, determines the warning elements according to the attribute information, and finally displays the warning elements in the target display unit. This solves the prior-art problems that side warnings are poorly integrated into the display and flicker frequently; displaying side warnings on the display units corresponding to the side A-pillars reduces interference with the driver and improves the user's driving experience.
The obstacle early warning method provided by the embodiment of the present application is further described below; fig. 3 is a second schematic flow chart of the obstacle early warning method provided by an embodiment of the present application. This embodiment refines the foregoing embodiment, specifically by explaining the determination process of the target display unit in detail.
Referring to fig. 3, the method of the present embodiment includes, but is not limited to, the following steps:
s210, acquiring a front view angle range corresponding to the front view, a left view angle range corresponding to the left view and a right view angle range corresponding to the right view.
In the embodiment of the application, the front view angle range corresponding to the front field, that is, the projection coverage area of the front display unit, can be obtained from the optical design of the front display unit. The front view angle range is symmetric about the reference line, distributed evenly on both sides of it. The reference line is the extension of the first driving eyepoint of the front display unit in the straight-ahead direction of the vehicle head.
Fig. 4A is a schematic top view of the projection coverage corresponding to the left display unit; the left side of the figure shows the reference coordinate system, where the X-axis is the extension of the first driving eyepoint in the straight-ahead direction of the vehicle head (i.e., the reference line) and the Y-axis is the extension of the first driving eyepoint perpendicular to that direction. In the figure, reference sign J is the far endpoint of the field of view, K the near endpoint, and P the second driving eyepoint; the left side view angle is ∠EPF, or equivalently ∠JPK.
The left view angle range corresponding to the left field is determined as follows: determine the far endpoint of the field of view (point J in the figure) and the near endpoint (point K) in the display unit corresponding to the left field, and measure the first distance between them, i.e., segment KJ in the figure; measure the second distance between the driving eyepoint and the far endpoint, i.e., segment PJ, and the third distance between the driving eyepoint and the near endpoint, i.e., segment PK; the left view angle range is then obtained by numerical calculation from the first, second, and third distances. The value of ∠JPK is computed from the lengths of the three sides of triangle PJK by the law of cosines: ∠JPK = arccos((|PJ|² + |PK|² - |KJ|²) / (2·|PJ|·|PK|)). Optionally, segment KJ may be taken as the length of the A-pillar, which can be measured with a tape measure; the corresponding far endpoint of the field of view is the end of the A-pillar farther from the driving eyepoint, and the near endpoint is the end nearer to it.
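A direct numerical check of this law-of-cosines step (the three measured distances below are assumed example values, not figures from the patent):

```python
import math

def view_angle(pj: float, pk: float, kj: float) -> float:
    """Angle JPK at the eyepoint P of triangle PJK, by the law of cosines.
    pj, pk: distances from the eyepoint to the far/near field endpoints;
    kj: distance between the endpoints (e.g., the A-pillar length)."""
    return math.acos((pj**2 + pk**2 - kj**2) / (2 * pj * pk))

# Assumed example measurements in meters:
angle = view_angle(pj=1.35, pk=0.75, kj=0.70)
print(math.degrees(angle))  # left side view angle, about 20.6 degrees
```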
The implementation manner of the right view angle range corresponding to the right view field is identical to the implementation manner of the left view angle range, and is not repeated here.
S220, determining a first horizontal distance range based on the longitudinal distance and the front view angle range, determining a second horizontal distance range based on the longitudinal distance and the left view angle range, and determining a third horizontal distance range based on the longitudinal distance and the right view angle range.
The obstacle position data includes a longitudinal distance, which is a vertical distance between the obstacle and the driving eye point in a direction right in front of the vehicle head, and a horizontal distance, which is a vertical distance between the obstacle and the driving eye point in a direction perpendicular to the direction right in front of the vehicle head.
Fig. 4B is a schematic top view of the projection coverage corresponding to the front display unit; the left side of the figure shows the reference coordinate system, reference sign O is the first driving eyepoint, OC is the reference line, and the front view angle is ∠AOB. Reference sign Q is an obstacle, whose coordinates are denoted (x, y). If the obstacle lies within the projection coverage of the front display unit, the longitudinal distance x must be greater than 0, and the first horizontal distance range is -x·tan(∠AOB/2) < y < x·tan(∠AOB/2).
In fig. 4A, reference sign Q is an obstacle. The angle ∠GPF between the second driving eyepoint P and the far end of the A-pillar is measured, as is the angle ∠GPE between the second driving eyepoint P and the near end of the A-pillar. If the obstacle lies within the projection coverage of the left display unit, its longitudinal distance x ranges over (0, +∞), and the second horizontal distance range for obstacle Q is x·tan(∠GPF) < y < x·tan(∠GPE).
The implementation manner of the third horizontal distance range corresponding to the right side view field is identical to the implementation manner of the second horizontal distance range corresponding to the left side view field, and will not be described in detail herein.
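Putting steps S220 and S230 together, here is a minimal sketch of the range test under the geometry above; it assumes the right side mirrors the left, treats the three driving eyepoints as coincident (the patent notes they may differ only at the centimeter level), and uses placeholder angles:

```python
import math

def target_unit(x: float, y: float,
                front_half: float,  # half of angle AOB, radians (assumed)
                left_far: float,    # angle GPF, radians (assumed)
                left_near: float):  # angle GPE, radians (assumed)
    """Classify an obstacle at body-frame (x, y) into a display unit.
    y > 0 is the left side; the right side mirrors the left angles."""
    if x <= 0:
        return None  # behind the eyepoint: outside every coverage area
    if -x * math.tan(front_half) < y < x * math.tan(front_half):
        return "front"   # first horizontal distance range
    if x * math.tan(left_far) < y < x * math.tan(left_near):
        return "left"    # second horizontal distance range
    if -x * math.tan(left_near) < y < -x * math.tan(left_far):
        return "right"   # third horizontal distance range (mirrored)
    return None

# Placeholder angles: 10 degree front half-angle, pillar spanning 12-35 degrees.
print(target_unit(8.0, 3.0, math.radians(10), math.radians(12), math.radians(35)))
# -> 'left'
```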
S230, determining a target horizontal distance range to which the horizontal distance belongs from the first horizontal distance range, the second horizontal distance range and the third horizontal distance range.
In the embodiment of the application, the horizontal distance is acquired from the obstacle position data, and compared with the first horizontal distance range, the second horizontal distance range and the third horizontal distance range, the target horizontal distance range to which the horizontal distance belongs is determined.
S240, determining a corresponding target field angle range based on the target horizontal distance range.
In the embodiment of the application, at a given longitudinal distance, the horizontal distance ranges of the projection coverage areas of the display units correspond one-to-one with their view angle ranges. After the target horizontal distance range to which the obstacle's horizontal distance belongs is determined in step S230, the corresponding target view angle range is determined.
S250, determining a corresponding target display unit based on the target view angle range.
In the embodiment of the present application, after the target view angle range is determined in step S240, the display unit corresponding to that range is taken as the target display unit.
Since all three display units of the head-up display present real road conditions, three 3D scenes are preset in software, and the three real projection coverage scenes are matched by writing the three real eyepoint parameters into virtual cameras. Specifically: measure the positional relation (x, y, z) between the first driving eyepoint and the ADAS origin, configure 3D scene 1 in software, and set (x, y, z) as the position coordinates of virtual camera 1; measure the positional relation (w, e, r) between the second driving eyepoint of the left A-pillar and the ADAS origin, construct 3D scene 2, and set (w, e, r) as the position coordinates of virtual camera 2; measure the positional relation (v, n, m) between the third driving eyepoint of the right A-pillar and the ADAS origin, construct 3D scene 3, and set (v, n, m) as the position coordinates of virtual camera 3.
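A sketch of this configuration step; the scene names and coordinate values are placeholders for the measured eyepoint-to-ADAS-origin offsets (x, y, z), (w, e, r), and (v, n, m), which the patent leaves unspecified:

```python
# One 3D scene and virtual camera per display unit; each camera position is
# the measured offset of the corresponding real driving eyepoint from the
# ADAS origin (placeholder values for illustration).
SCENES = {
    "front": {"camera_pos": (1.20, 0.00, 1.40)},   # (x, y, z), meters (assumed)
    "left":  {"camera_pos": (1.15, 0.08, 1.40)},   # (w, e, r), meters (assumed)
    "right": {"camera_pos": (1.15, -0.08, 1.40)},  # (v, n, m), meters (assumed)
}

def camera_for(unit: str):
    """Return the virtual-camera position for the 3D scene of a display unit."""
    return SCENES[unit]["camera_pos"]
```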
According to the technical solution provided by this embodiment, the front view angle range corresponding to the front field, the left view angle range corresponding to the left field, and the right view angle range corresponding to the right field are obtained; the first horizontal distance range is determined from the obstacle's longitudinal distance and the front view angle range, the second from the longitudinal distance and the left view angle range, and the third from the longitudinal distance and the right view angle range; the target horizontal distance range to which the obstacle's horizontal distance belongs is determined from these three ranges; the corresponding target view angle range is determined from the target horizontal distance range; and the corresponding target display unit is determined from the target view angle range. By determining from the obstacle position data which display unit should display the obstacle, the prior-art problems that side warnings are poorly integrated into the display and flicker frequently are solved; displaying side warnings on the display units corresponding to the side A-pillars reduces interference with the driver and improves the user's driving experience.
Fig. 5 is a schematic structural diagram of an obstacle early warning device provided by an embodiment of the present application, configured in a head-up display on a vehicle, the head-up display including a plurality of display units divided according to the view orientation of the driving eyepoint, each display unit corresponding to one view orientation. As shown in fig. 5, the device 500 may include:
a detection data receiving module 510, configured to receive detection data of an obstacle sent by the vehicle, where the detection data includes at least obstacle position data and obstacle attribute information;
the display unit determining module 520 is configured to determine a target display unit from the plurality of display units based on the obstacle position data, where the target display unit is configured to display early warning information corresponding to the obstacle;
and the early warning information display module 530 is configured to determine early warning information of the obstacle based on the attribute information of the obstacle, and display the early warning information in the target display unit.
Optionally, the view orientation includes a front view, a left view, and a right view;
further, the display unit determining module 520 may be specifically configured to: acquiring a front view angle range corresponding to the front view, a left view angle range corresponding to the left view and a right view angle range corresponding to the right view; determining a target field of view range corresponding to the obstacle from the front field of view range, the left field of view range, and the right field of view range based on the obstacle position data; and determining the corresponding target display unit based on the target view angle range.
Optionally, the obstacle position data includes a longitudinal distance and a horizontal distance, the longitudinal distance is a vertical distance between the obstacle and the driving eye point in a direction right in front of the vehicle head, and the horizontal distance is a vertical distance between the obstacle and the driving eye point in a direction vertical to the direction right in front of the vehicle head;
further, the display unit determining module 520 may be specifically configured to: determining a first horizontal distance range based on the longitudinal distance and the front view angle range, a second horizontal distance range based on the longitudinal distance and the left view angle range, and a third horizontal distance range based on the longitudinal distance and the right view angle range; determining a target horizontal distance range to which the horizontal distance belongs from the first horizontal distance range, the second horizontal distance range and the third horizontal distance range; and determining a corresponding target field angle range based on the target horizontal distance range.
Optionally, the left view angle range corresponding to the left view field is determined by: determining a visual field far-end point and a visual field near-end point in a display unit corresponding to the left visual field, and determining a first distance between the visual field far-end point and the visual field near-end point; determining a second distance between the driving eyepoint and the far end point of the field of view, and determining a third distance between the driving eyepoint and the near end point of the field of view; and carrying out numerical calculation based on the first distance, the second distance and the third distance to obtain the left side view angle range.
Optionally, the driving eyepoint includes a first driving eyepoint for viewing the front view, a second driving eyepoint for viewing the left view, and a third driving eyepoint for viewing the right view;
optionally, the position data of the second driving eye point is determined by: determining a first longitudinal translation distance and a first horizontal translation distance corresponding to the left rotation of the first driving eyepoint by a preset angle; determining position data of the second driving eyepoint based on the position data of the first driving eyepoint, the first longitudinal translation distance, and the first horizontal translation distance;
optionally, the position data of the third driving eye point is determined by: determining a second longitudinal translation distance and a second horizontal translation distance corresponding to the right rotation of the first driving eyepoint by the preset angle; position data of the third driving eyepoint is determined based on the position data of the first driving eyepoint, the second longitudinal translation distance, and the second horizontal translation distance.
Further, the obstacle early warning device may further include: a coordinate conversion module;
the coordinate conversion module is used for converting the obstacle position data into coordinate data in a vehicle body coordinate system taking the first driving eye point as an origin before the target display unit is determined from the plurality of display units based on the obstacle position data; accordingly, the display unit determining module 520 is configured to determine a target display unit from the plurality of display units based on the coordinate data.
Optionally, the plurality of display units include a front display unit corresponding to the front view, a left display unit corresponding to the left view, and a right display unit corresponding to the right view; the front display unit is arranged on a front windshield of the vehicle, the left display unit is arranged on a left A column of the vehicle, and the right display unit is arranged on a right A column of the vehicle.
The obstacle early warning device provided by the embodiment is applicable to the obstacle early warning method provided by any embodiment, and has corresponding functions and beneficial effects.
Fig. 6 is a block diagram of an electronic device for implementing an obstacle early warning method according to an embodiment of the application. The electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the obstacle warning method.
In some embodiments, the obstacle warning method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the obstacle warning method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the obstacle pre-warning method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described here above can be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, implement the functions/acts specified in the flowchart and/or block diagram blocks. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of the present application, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component through which a user can interact with an implementation of the systems and techniques described here, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be noted that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the application. For example, the steps recited in the present application may be reordered, added to, or deleted using the various forms of flow shown above; they may be performed in parallel, sequentially, or in a different order, without limitation herein, so long as the desired results of the technical solution of the present application can be achieved.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (10)

1. An obstacle early warning method, characterized by being applied to a head-up display arranged on a vehicle, wherein the head-up display comprises a plurality of display units divided according to the view directions of a driving eyepoint, each display unit corresponding to one view direction; the method comprises the following steps:
receiving detection data of an obstacle sent by the vehicle, wherein the detection data at least comprises obstacle position data and obstacle attribute information;
determining a target display unit from the plurality of display units based on the obstacle position data, wherein the target display unit is used for displaying early warning information corresponding to the obstacle;
and determining early warning information of the obstacle based on the obstacle attribute information, and displaying the early warning information in the target display unit.
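For illustration only, a minimal Python sketch of the claimed three-step flow; all identifiers (Detection, select_unit, build_warning, show) are hypothetical, since the claim does not prescribe any implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Detection data sent by the vehicle: position plus attribute information."""
    x: float          # longitudinal distance to the obstacle, metres
    y: float          # signed horizontal (lateral) distance, metres
    attributes: dict  # obstacle attribute information, e.g. {"type": "pedestrian"}

def warn(detection, display_units, select_unit, build_warning):
    """The three claimed steps: receive data, pick the target unit, display."""
    unit = select_unit(display_units, detection.x, detection.y)  # step 2
    message = build_warning(detection.attributes)                # step 3
    if unit is not None:
        unit.show(message)
```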
2. The obstacle early warning method according to claim 1, wherein the view directions include a front view, a left view and a right view; the determining a target display unit from the plurality of display units based on the obstacle position data includes:
acquiring a front view angle range corresponding to the front view, a left view angle range corresponding to the left view and a right view angle range corresponding to the right view;
determining a target view angle range corresponding to the obstacle from the front view angle range, the left view angle range and the right view angle range based on the obstacle position data;
and determining the corresponding target display unit based on the target view angle range.
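One way the range lookup of claim 2 could be realised is to compare the obstacle's bearing against the three angle intervals; the interval bounds and names below are assumed for illustration, not taken from the patent:

```python
import math

# Assumed view angle ranges in degrees, measured from the straight-ahead axis
# (negative = left of the vehicle head). The patent does not fix these values.
VIEW_ANGLE_RANGES = {
    "left":  (-60.0, -15.0),
    "front": (-15.0,  15.0),
    "right": ( 15.0,  60.0),
}

def target_display_unit(x, y, units):
    """Pick the display unit whose view angle range contains the obstacle bearing.

    x: longitudinal distance ahead of the driving eyepoint
    y: signed horizontal distance (negative = left)
    units: e.g. {"front": front_unit, "left": left_unit, "right": right_unit}
    """
    bearing = math.degrees(math.atan2(y, x))
    for name, (lo, hi) in VIEW_ANGLE_RANGES.items():
        if lo <= bearing < hi:
            return units[name]
    return None  # obstacle lies outside every displayed field of view
```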
3. The obstacle early warning method according to claim 2, wherein the obstacle position data includes a longitudinal distance and a horizontal distance, the longitudinal distance being the perpendicular distance of the obstacle from the driving eyepoint along the direction directly in front of the vehicle head, and the horizontal distance being the perpendicular distance of the obstacle from the driving eyepoint in the direction perpendicular to the direction directly in front of the vehicle head; the determining a target view angle range corresponding to the obstacle from the front view angle range, the left view angle range and the right view angle range based on the obstacle position data includes:
determining a first horizontal distance range based on the longitudinal distance and the front view angle range, a second horizontal distance range based on the longitudinal distance and the left view angle range, and a third horizontal distance range based on the longitudinal distance and the right view angle range;
determining a target horizontal distance range to which the horizontal distance belongs from the first horizontal distance range, the second horizontal distance range and the third horizontal distance range;
and determining the corresponding target view angle range based on the target horizontal distance range.
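Claim 3 performs the same selection in distance space: at the obstacle's longitudinal distance, each view angle range sweeps out a horizontal distance range (longitudinal distance × tan of the bounding angles), and the obstacle's horizontal distance is tested against the three ranges. A sketch under the same assumed angle values:

```python
import math

def horizontal_range(longitudinal, angle_range_deg):
    """Horizontal distance range swept by a view angle range at a given depth."""
    lo, hi = angle_range_deg
    return (longitudinal * math.tan(math.radians(lo)),
            longitudinal * math.tan(math.radians(hi)))

def target_view_angle_range(longitudinal, horizontal, angle_ranges):
    """Return the view angle range whose horizontal range contains the obstacle.

    angle_ranges: e.g. {"left": (-60, -15), "front": (-15, 15), "right": (15, 60)}
    """
    for name, rng in angle_ranges.items():
        lo, hi = horizontal_range(longitudinal, rng)
        if lo <= horizontal < hi:
            return name, rng
    return None
```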
4. The obstacle early warning method according to claim 2, wherein the left view angle range corresponding to the left view is determined by:
determining a field-of-view far-end point and a field-of-view near-end point in the display unit corresponding to the left view, and determining a first distance between the field-of-view far-end point and the field-of-view near-end point;
determining a second distance between the driving eyepoint and the field-of-view far-end point, and determining a third distance between the driving eyepoint and the field-of-view near-end point;
and performing a numerical calculation based on the first distance, the second distance and the third distance to obtain the left view angle range.
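Claim 4 does not spell out the numerical calculation; one natural reading is the law of cosines on the triangle formed by the driving eyepoint and the two field-of-view endpoints, sketched below under that assumption:

```python
import math

def left_view_angle_deg(d1, d2, d3):
    """Angle subtended at the driving eyepoint by the left display unit.

    d1: first distance  (far endpoint <-> near endpoint of the field of view)
    d2: second distance (eyepoint -> far endpoint)
    d3: third distance  (eyepoint -> near endpoint)
    Law of cosines: d1**2 = d2**2 + d3**2 - 2*d2*d3*cos(theta).
    """
    cos_theta = (d2**2 + d3**2 - d1**2) / (2.0 * d2 * d3)
    cos_theta = max(-1.0, min(1.0, cos_theta))  # guard against rounding error
    return math.degrees(math.acos(cos_theta))
```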
5. The obstacle early warning method according to claim 2, wherein the driving eyepoint includes a first driving eyepoint for viewing the front view, a second driving eyepoint for viewing the left view, and a third driving eyepoint for viewing the right view;
determining the position data of the second driving eyepoint by:
determining a first longitudinal translation distance and a first horizontal translation distance corresponding to the left rotation of the first driving eyepoint by a preset angle;
determining position data of the second driving eyepoint based on the position data of the first driving eyepoint, the first longitudinal translation distance, and the first horizontal translation distance;
determining the position data of the third driving eyepoint by:
determining a second longitudinal translation distance and a second horizontal translation distance corresponding to the right rotation of the first driving eyepoint by the preset angle;
determining position data of the third driving eyepoint based on the position data of the first driving eyepoint, the second longitudinal translation distance, and the second horizontal translation distance.
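Claim 5 leaves the rotation geometry open. A sketch assuming the eyepoint sits a distance r ahead of the head's rotation axis, so that turning the head by the preset angle translates the eyepoint by the two claimed distances (r and the angle values below are hypothetical):

```python
import math

def rotated_eyepoint(x, y, r, angle_deg):
    """Translate the first driving eyepoint for a head turn of angle_deg.

    (x, y): first driving eyepoint in a frame with x forward and y to the left.
    r: assumed distance from the head's rotation axis to the eyepoint.
    Positive angle_deg = turn to the left, negative = turn to the right.
    """
    a = math.radians(angle_deg)
    dx = r * (math.cos(a) - 1.0)  # longitudinal translation distance
    dy = r * math.sin(a)          # horizontal translation distance
    return x + dx, y + dy

# Hypothetical preset angle of 45 degrees and axis offset of 0.1 m:
second_eyepoint = rotated_eyepoint(0.0, 0.0, r=0.1, angle_deg=45.0)   # left view
third_eyepoint  = rotated_eyepoint(0.0, 0.0, r=0.1, angle_deg=-45.0)  # right view
```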
6. The obstacle early warning method according to claim 5, further comprising, before the determining a target display unit from the plurality of display units based on the obstacle position data:
converting the obstacle position data into coordinate data in a vehicle body coordinate system with the first driving eyepoint as an origin;
correspondingly, the determining the target display unit from the plurality of display units based on the obstacle position data includes:
and determining a target display unit from the plurality of display units based on the coordinate data.
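If the sensor frame and the body frame share axis directions, the conversion of claim 6 reduces to a translation; a rotation term would be needed otherwise. A minimal sketch with hypothetical frame conventions:

```python
def to_eyepoint_frame(obstacle_xy, sensor_origin_xy):
    """Re-express obstacle coordinates in the vehicle body frame whose origin
    is the first driving eyepoint.

    sensor_origin_xy: position of the sensor's origin expressed in the
    eyepoint frame (an assumed extrinsic calibration value).
    """
    x, y = obstacle_xy
    ox, oy = sensor_origin_xy
    return x + ox, y + oy
```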
7. The obstacle early warning method according to claim 2, wherein the plurality of display units includes a front display unit corresponding to the front view, a left display unit corresponding to the left view, and a right display unit corresponding to the right view; the front display unit is arranged on a front windshield of the vehicle, the left display unit is arranged on a left A-pillar of the vehicle, and the right display unit is arranged on a right A-pillar of the vehicle.
8. An obstacle early warning device, characterized by being applied to a head-up display arranged on a vehicle, wherein the head-up display comprises a plurality of display units divided according to the view directions of a driving eyepoint, each display unit corresponding to one view direction; the device comprises:
the detection data receiving module is used for receiving detection data of the obstacle sent by the vehicle, and the detection data at least comprises obstacle position data and obstacle attribute information;
the display unit determining module is used for determining a target display unit from the plurality of display units based on the obstacle position data, and the target display unit is used for displaying early warning information corresponding to the obstacle;
and the early warning information display module is used for determining early warning information of the obstacle based on the obstacle attribute information and displaying the early warning information in the target display unit.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the obstacle early warning method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the obstacle early warning method of any one of claims 1 to 7.
CN202310596908.2A 2023-05-24 2023-05-24 Barrier early warning method and device, electronic equipment and storage medium Active CN116620168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310596908.2A CN116620168B (en) 2023-05-24 2023-05-24 Barrier early warning method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116620168A true CN116620168A (en) 2023-08-22
CN116620168B CN116620168B (en) 2023-12-12

Family

ID=87602096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310596908.2A Active CN116620168B (en) 2023-05-24 2023-05-24 Barrier early warning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116620168B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021859A (en) * 2016-11-03 2018-05-11 通用汽车环球科技运作有限责任公司 Method and apparatus for alerting object
US20200018952A1 (en) * 2018-07-12 2020-01-16 Toyota Research Institute, Inc. Vehicle systems and methods for redirecting a driver's gaze towards an object of interest
CN111267870A (en) * 2020-01-17 2020-06-12 北京梧桐车联科技有限责任公司 Information display method and device and computer storage medium
CN212604869U (en) * 2020-06-18 2021-02-26 上海博泰悦臻电子设备制造有限公司 Image display device and automobile
CN111731187A (en) * 2020-06-19 2020-10-02 杭州视为科技有限公司 Automobile A-pillar blind area image display system and method
CN113968186A (en) * 2020-07-22 2022-01-25 华为技术有限公司 Display method, device and system
CN114373335A (en) * 2021-12-22 2022-04-19 江苏泽景汽车电子股份有限公司 Vehicle collision early warning method and device, electronic equipment and storage medium
CN114290990A (en) * 2021-12-24 2022-04-08 浙江吉利控股集团有限公司 Obstacle early warning system and method for vehicle A-column blind area and signal processing device
CN115675289A (en) * 2022-12-30 2023-02-03 深圳曦华科技有限公司 Image display method and device based on driver visual field state in driving scene

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117163302A (en) * 2023-10-31 2023-12-05 安胜(天津)飞行模拟系统有限公司 Aircraft instrument display method, device, equipment and storage medium
CN117163302B (en) * 2023-10-31 2024-01-23 安胜(天津)飞行模拟系统有限公司 Aircraft instrument display method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN116620168B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
JP2021152906A (en) Method, device, appliance and storage medium for predicting vehicle locus
CN113370911B (en) Pose adjustment method, device, equipment and medium of vehicle-mounted sensor
EP4089659A1 (en) Map updating method, apparatus and device
EP3961582A2 (en) Method and apparatus for controlling vehicle and electronic device
US11953605B2 (en) Method, device, equipment, and storage medium for determining sensor solution
JP7441878B2 (en) Methods, equipment, storage media and program products for outputting alarm information
CN116620168B (en) Barrier early warning method and device, electronic equipment and storage medium
CN113706704B (en) Method and equipment for planning route based on high-precision map and automatic driving vehicle
CN113868356A (en) Rendering method, rendering apparatus, storage medium, and computer program
JP2021099877A (en) Method, device, apparatus and storage medium for reminding travel on exclusive driveway
CN111079079A (en) Data correction method and device, electronic equipment and computer readable storage medium
CN115857169A (en) Collision early warning information display method, head-up display device, carrier and medium
CN111881245A (en) Visibility dynamic map generation method and device, computer equipment and storage medium
CN113126120B (en) Data labeling method, device, equipment, storage medium and computer program product
CN116080399B (en) Display method, display system and storage medium
CN116572837A (en) Information display control method and device, electronic equipment and storage medium
CN114170846B (en) Vehicle lane change early warning method, device, equipment and storage medium
CN114852068A (en) Pedestrian collision avoidance method, device, equipment and storage medium
CN117008775B (en) Display method, display device, electronic equipment and storage medium
CN114565681B (en) Camera calibration method, device, equipment, medium and product
CN115908838B (en) Vehicle presence detection method, device, equipment and medium based on radar fusion
CN115097628B (en) Driving information display method, device and system
CN118072280A (en) Method and device for detecting change of traffic light, electronic equipment and automatic driving vehicle
CN116299199A (en) Radar distance display method and device, electronic equipment and storage medium
CN116468607A (en) Vehicle bottom blind area filling method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant