CN111915915A - Driving scene reconstruction method, device, system, vehicle, equipment and storage medium - Google Patents


Info

Publication number
CN111915915A
Authority
CN
China
Prior art keywords
information
early warning
vehicle
driving scene
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010685596.9A
Other languages
Chinese (zh)
Inventor
丁磊
朱兰芹
何磊
胡健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Autopilot Technology Co Ltd
Original Assignee
Human Horizons Shanghai Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Human Horizons Shanghai Autopilot Technology Co Ltd filed Critical Human Horizons Shanghai Autopilot Technology Co Ltd
Priority to CN202010685596.9A priority Critical patent/CN111915915A/en
Publication of CN111915915A publication Critical patent/CN111915915A/en

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 - Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725 - Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0097 - Predicting future conditions
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B60W50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096766 - Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096766 - Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
    • G08G1/096783 - Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission where the origin of the information is a roadside individual element
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 - Systems involving transmission of navigation instructions to the vehicle
    • G08G1/0969 - Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B60W50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 - Display means

Abstract

The application provides a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium. The driving scene reconstruction method includes the following steps: acquiring target information and early-warning information from sensor information, Internet of Vehicles information, and map information; integrating the target information and the early-warning information from the sensor information, the Internet of Vehicles information, and the map information; and reconstructing a driving scene of the own vehicle based on the integrated target information, the early-warning information, and the state of the own vehicle, with the early-warning information superimposed on the targets to which the target information corresponds. Because the method superimposes warning information on the reconstructed targets, the driver can quickly identify sources of danger, correct the state of the own vehicle in time, and avoid danger, improving driving safety.

Description

Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium.
Background
An automatic driving automobile, also called a driverless automobile, computer-driven automobile, or wheeled mobile robot, is an intelligent automobile that achieves unmanned driving through a computer system. As the level of automatic driving rises, the driver's role gradually shifts to that of a monitor. A driving scene reconstruction system can present the vehicle's surroundings to the driver, so that the driver can clearly grasp the situation around the vehicle while remaining relaxed. Moreover, in a non-automatic driving mode, the sensors and other components used for automatic driving still operate normally, so scene reconstruction remains available and provides driving assistance to the driver.
To improve driving safety, automatic driving offers various early-warning functions. However, an early warning alone does not tell the driver where the danger comes from, which degrades the automatic driving experience and leaves a hidden safety risk.
Disclosure of Invention
The embodiments of the present application provide a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium to solve the problems in the related art. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a driving scenario reconstruction method, including:
acquiring target information and early-warning information from sensor information, Internet of Vehicles information, and map information;
integrating the target information and the early-warning information from the sensor information, the Internet of Vehicles information, and the map information;
and reconstructing a driving scene of the own vehicle based on the integrated target information, the early-warning information, and the state of the own vehicle, and superimposing the early-warning information on the targets to which the target information corresponds.
In a second aspect, an embodiment of the present application provides a driving scenario reconstruction apparatus, including:
an acquisition module, configured to acquire target information and early-warning information from the sensor information, the Internet of Vehicles information, and the map information;
an integration module, configured to integrate the target information and the early-warning information from the sensor information, the Internet of Vehicles information, and the map information;
and a reconstruction module, configured to reconstruct a driving scene of the own vehicle based on the integrated target information, the early-warning information, and the state of the own vehicle, and to superimpose the early-warning information on the targets to which the target information corresponds.
In a third aspect, an embodiment of the present application provides a driving scene reconstruction system including the driving scene reconstruction apparatus described above. The system further includes:
a sensor connected to the driving scene reconstruction apparatus and configured to collect sensor information and output it to the apparatus;
an Internet of Vehicles device connected to the driving scene reconstruction apparatus and configured to output Internet of Vehicles information to the apparatus;
a map device connected to the driving scene reconstruction apparatus and configured to output map information to the apparatus;
and a display device connected to the driving scene reconstruction apparatus and configured to receive data of the reconstructed driving scene from the apparatus and display the reconstructed driving scene.
In a fourth aspect, an embodiment of the present application provides a vehicle that includes the driving scene reconstruction apparatus described above, or that includes the driving scene reconstruction system described above.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the driving scenario reconstruction method.
In a sixth aspect, the present application provides a computer-readable storage medium, in which computer instructions are stored, and when executed by a processor, the computer instructions implement the method according to any one of the above.
The advantages or beneficial effects of the above technical solution include at least the following:
According to the driving scene reconstruction method, the early-warning information is superimposed on the corresponding targets around the vehicle, so that when the display device shows the reconstructed driving scene of the own vehicle, the early-warning information appears on the corresponding targets. This visualizes the early-warning information, helps the driver identify sources of danger, allows the state of the own vehicle to be corrected in time, avoids danger, and improves driving safety.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
FIG. 1 is a block diagram schematic of a driving scenario reconstruction system in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram of the processing performed by the controller of FIG. 1;
FIG. 3 illustrates the types of information processed by the controller;
FIG. 4 is a schematic flow chart illustrating a driving scenario reconstruction method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of road traffic condition information in a driving scene reconstruction method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of target information in a driving scene reconstruction method according to an embodiment of the present application;
fig. 7 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to an embodiment of the present application;
FIG. 8 is a schematic flow chart illustrating a driving scenario reconstruction method according to an embodiment of the present application;
FIG. 9 is a schematic view of an information processing procedure according to the driving scenario reconstruction method shown in FIG. 8;
FIG. 10 is a schematic view of a driving scenario reconstructed by a driving scenario reconstruction method according to an embodiment of the present application;
fig. 11 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
fig. 12 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
fig. 13 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
fig. 14 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
fig. 15 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
fig. 16 is a block diagram of a driving scene reconstructing apparatus according to an embodiment of the present application;
fig. 17 is a block diagram illustrating an acquisition module of a driving scene reconstruction apparatus according to an embodiment of the present disclosure;
fig. 18 is a block diagram illustrating an integrated module of a driving scene reconstructing apparatus according to an embodiment of the present application;
FIG. 19 is a block diagram of a driving scenario reconstruction system according to an embodiment of the present application;
fig. 20 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Advanced Driver Assistance System (ADAS) sensors collect environmental data inside and outside the vehicle through various on-board sensors and perform technical processing such as identification, detection, and tracking of static and dynamic objects, so that the driver can perceive possible dangers as early as possible and take corresponding measures, improving driving safety.
Sensors include, but are not limited to, one or more image acquisition devices (e.g., cameras), inertial measurement units, radar, and the like. The image acquisition device may be used to collect target information, road identification information, and lane line information of the environment surrounding the autonomous vehicle. The inertial measurement unit may sense position and orientation changes of the autonomous vehicle based on inertial acceleration. Radar may use radio signals to sense targets and road markings within the local environment of the autonomous vehicle; in addition to sensing a target, the radar unit may also sense the target's speed and/or heading. The image acquisition device may include one or more devices for capturing images of the environment surrounding the autonomous vehicle, for example a still camera and/or a video camera, possibly including an infrared camera. The camera may be mechanically movable, for example by mounting it on a rotating and/or tilting platform.
The sensors may also include, for example: sonar sensors, infrared sensors, steering sensors, throttle sensors, brake sensors, and audio sensors (e.g., microphones), among others. The audio sensor may be configured to collect sound from an environment surrounding the autonomous vehicle. The steering sensor may be configured to sense a steering angle of a steering wheel, wheels of a vehicle, or a combination thereof. The throttle sensor and the brake sensor sense a throttle position and a brake position of the vehicle, respectively. In some cases, the throttle sensor and the brake sensor may be integrated into an integrated throttle/brake sensor.
V2X (Vehicle to Everything, also called the Internet of Vehicles) is the exchange of information between a vehicle and the outside world. V2X communication is a key technology for realizing environment perception, information interaction, and cooperative control in the Internet of Vehicles. Various communication technologies are adopted to interconnect vehicle and vehicle (V2V for short), vehicle and road infrastructure (V2I for short), and vehicle and person (V2P for short); information is extracted, shared, and effectively utilized on an information network platform, so that vehicles can be effectively managed and controlled and comprehensive services provided. In this way a series of road traffic condition information is obtained, such as real-time road conditions, road identification information, lane line information, and target information, which improves driving safety, reduces congestion, increases traffic efficiency, and supports in-vehicle infotainment.
In the automatic driving scene reconstruction process of the embodiments of the present application, not only the sensor information from the ADAS sensors but also the Internet of Vehicles information from V2X can be acquired. The resulting surrounding-environment information is rich and reflects the real driving environment, which improves automatic driving safety and the automatic driving experience.
FIG. 1 is a schematic block diagram of a driving scene reconstruction system of an exemplary embodiment. As shown in FIG. 1, after the sensors, the V2X device, and the map device collect information around the vehicle, the controller may receive raw data from the sensors; road information (also referred to as lane line information), traffic sign information (also referred to as road identification information), and information about traffic participants such as surrounding vehicles and/or pedestrians (also referred to as target information) from V2X; and road information, traffic sign information, vehicle location information, and navigation information (also referred to as navigation path planning information) from the map device. The controller extracts road traffic condition information from the received sensor information, Internet of Vehicles information, and map information, integrates it, and transmits the integrated road traffic condition information, such as lane line information, traffic sign information, target information (including the type, orientation, position, early warning, and the like of each target), and the motion track of the own vehicle, to the instrument controller. The instrument controller processes the received information into driving scene data and transmits that data to the instrument display device, which displays the reconstructed driving scene.
In one embodiment, the road traffic condition information may include road identification information, lane line information, road traffic abnormal condition information, congestion condition information, road traffic scene information, navigation path planning information, target information, and the like. The target includes traffic participants such as vehicles around the own vehicle and pedestrians.
Fig. 2 shows the processing of the controller of fig. 1, and fig. 3 shows the types of information processed by the controller. As shown in fig. 2, the sensors may illustratively include radar, an image acquisition device, and the like. The radar may use radio signals to sense target information and road identification information in the environment surrounding the autonomous vehicle, producing point cloud data. An image acquisition device such as a camera may be used to capture road signs, lane lines, targets, and the like in the surrounding environment, producing a video stream. As shown in fig. 2, the controller may illustratively include a classification processing module and an information fusion module (also referred to as an integration module). The classification processing module may include a target information identification module, a traffic information identification module (also called a road traffic condition information identification module), and a function early-warning module. After the controller receives the point cloud from the radar, the video stream from the camera, and the information from V2X and the map device, the target information identification module identifies target information from the received information and transmits it to the information fusion module; the traffic information identification module identifies road traffic condition information from the received information and transmits it to the information fusion module; and the function early-warning module identifies early-warning information from the received information and transmits it to the information fusion module.
The information fusion module integrates the received target information, road traffic condition information and early warning information respectively and outputs the integrated target information, road traffic condition information and early warning information.
As shown in fig. 3, the target information identification module may, for example, identify the type, coordinates, and orientation of a target. The traffic information identification module may identify the position and current state of a traffic light, the value and position of a speed limit sign, the type and coordinates of a lane line, the road traffic scene, road traffic abnormal conditions, an optimized navigation path plan, and the like. The function early-warning module may cover the automatic driving level, forward collision warning, emergency braking warning, intersection collision warning, blind zone warning, lane change warning, speed limit warning, lane keeping warning, emergency lane keeping, rearward collision warning, rear cross traffic collision warning, door opening warning, left turn assist, red light running warning, reverse overtaking warning, out-of-control vehicle warning, abnormal vehicle warning, vulnerable road user warning, and the like.
The information fusion module combines the early warning information with the target information, combines the automatic driving grade, the lane keeping state and the lane line information, combines the navigation path planning with the self-vehicle movement, and outputs the integrated target information, the road traffic condition information and the early warning information.
As shown in fig. 3, the target information may include, for example:
target state: no, yes, early warning level 1, early warning level 2, early warning level 3 and the like;
target type: for example, cars, SUVs, coaches, buses, vans, motorcycles, adults, children, and the like;
target orientation: e.g., forward, backward, left, right, etc.;
coordinates are as follows: abscissa, ordinate, etc.
As shown in fig. 3, the lane line information may include, for example:
State: absent, present, warning, adaptive cruise (ACC), automatic driving level L2, L3, L4, etc.;
Type: solid line, dashed line, double yellow line, road edge, etc.;
Parameters: a0, a1, a2, a3 (lane line equation: y = a0 + a1*x + a2*x^2 + a3*x^3).
Those skilled in the art will appreciate that a0, a1, a2, and a3 are the polynomial coefficients of the lane line on the left side of the lane: a0 is the lateral distance from the vehicle center to the lane line, a positive value meaning the lane line is on the left; a1 is the heading angle of the vehicle relative to the lane line, a positive value meaning the lane line rotates counterclockwise; a2 is the lane line curvature, a positive value meaning the lane line bends to the left; and a3 is the rate of change of that curvature, a positive value again meaning the lane line bends to the left. The controller obtains the lane line information from the sensor information or the V2X information and thereby obtains the values of a0, a1, a2, and a3. When the lane line is drawn on the display device, it is drawn according to the lane line equation, so the reconstructed driving scene shows a lane line consistent with the actual one.
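Purely as an illustration (not part of the claimed method), the cubic lane-line model above can be evaluated as follows; the function name is hypothetical:

```python
def lane_line_offset(a0, a1, a2, a3, x):
    """Lateral offset y of the lane line at longitudinal distance x,
    per the cubic lane-line model y = a0 + a1*x + a2*x^2 + a3*x^3.

    a0: lateral distance from the vehicle center to the lane line
        (positive means the line is on the left)
    a1: heading angle of the vehicle relative to the lane line
    a2: lane-line curvature (positive means curving left)
    a3: rate of change of curvature (positive means curving left)
    """
    return a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3

# At x = 0 the offset reduces to a0, the lateral distance from the
# vehicle center to the lane line.
```

Sampling this polynomial over a range of x values yields the points from which the display device can draw a lane line consistent with the actual one.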
As shown in fig. 3, the traffic information may include, for example:
Number of lanes: 1, 2, 3, 4, 5, 6, 7, 8, etc.;
Current lane: left 1, left 2, right 1, right 2, etc.;
Path planning direction: none, straight, left turn, right turn, front left, front right, U-turn, etc.;
Speed limit information state: none, present, warning;
Speed limit value: 5, 10, 15, ..., 130;
Traffic light state: none, red, yellow, green, countdown, etc.;
Traffic light coordinates: abscissa, ordinate;
Road traffic scene: ramp, crossroad, road merge, road fork, T-junction, etc.
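For illustration only, the information items enumerated above could be carried in simple records such as the following; the class and field names are hypothetical and not part of the disclosure:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class TargetInfo:
    state: str        # "none", "present", "warning level 1".."warning level 3"
    type: str         # e.g. "car", "SUV", "bus", "motorcycle", "adult", "child"
    orientation: str  # e.g. "forward", "backward", "left", "right"
    x: float          # abscissa in the own-vehicle frame
    y: float          # ordinate in the own-vehicle frame


@dataclass
class LaneLineInfo:
    state: str   # "absent", "present", "warning", "ACC", "L2", "L3", "L4"
    type: str    # "solid", "dashed", "double yellow", "road edge"
    coeffs: Tuple[float, float, float, float]  # (a0, a1, a2, a3)


@dataclass
class TrafficInfo:
    lane_count: int
    current_lane: str       # e.g. "left 1", "right 2"
    planned_direction: str  # e.g. "none", "straight", "left turn", "U-turn"
    speed_limit: int        # e.g. 5..130
    light_state: str        # "none", "red", "yellow", "green", "countdown"
    light_xy: Tuple[float, float]
    scene: str              # e.g. "ramp", "crossroad", "road merge"
```

The integration step can then attach an early-warning state to a `TargetInfo` record so that the display layer knows which reconstructed target to highlight.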
Based on the sensor information, the Internet of Vehicles information, and the map information, the present application provides a driving scene reconstruction method grounded in actual traffic conditions.
Fig. 4 is a flowchart illustrating a driving scene reconstruction method according to an embodiment of the present application. As shown in fig. 4, the driving scene reconstruction method may include:
S101, acquiring road traffic condition information from the sensor information, the Internet of Vehicles information, and the map information;
S102, integrating the road traffic condition information from the sensor information, the Internet of Vehicles information, and the map information;
S103, reconstructing a driving scene of the vehicle based on the integrated road traffic condition information.
The driving scene reconstruction method integrates sensor information from the ADAS sensors, Internet of Vehicles information from V2X, and map information, enriching the sources of information about the autonomous vehicle's surroundings. Combined with the road traffic condition information, the reconstructed driving scene is closer to the real driving environment, better meets users' actual needs in both the automatic driving state and the ordinary driving state, provides the driver with more effective assistance, and improves driving safety.
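The three steps S101 to S103 can be sketched as follows; this is a minimal illustration, and the dictionary keys and structure are assumptions rather than part of the disclosure:

```python
def reconstruct_driving_scene(sensor_info, v2x_info, map_info, ego_state):
    """Minimal sketch of S101-S103: acquire, integrate, reconstruct."""
    sources = (sensor_info, v2x_info, map_info)

    # S101: acquire target and early-warning information from each source
    targets = [t for src in sources for t in src.get("targets", [])]
    warnings = [w for src in sources for w in src.get("warnings", [])]

    # S102: integrate - attach each early warning to its corresponding target
    by_id = {t["id"]: t for t in targets}
    for w in warnings:
        target = by_id.get(w["target_id"])
        if target is not None:
            target.setdefault("warnings", []).append(w["type"])

    # S103: reconstruct the driving scene around the own vehicle, with the
    # warning information superimposed on the corresponding targets
    return {"ego": ego_state, "targets": targets}
```

In practice the integration step would also deduplicate targets reported by several sources and merge lane line, road identification, and navigation information; this sketch shows only the superposition of warnings onto targets.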
Fig. 5 is a schematic diagram of road traffic condition information in a driving scene reconstruction method according to an embodiment of the application. As shown in fig. 5, the road traffic condition information may include road identification information, lane line information, road traffic scene information, road traffic abnormal condition information, navigation path planning information, and the like.
Illustratively, as shown in fig. 5, the road identification information may include traffic light information, speed limit sign information, and the like. The traffic light information may include the state and position of the traffic light. The speed limit sign information may include the value and position of the speed limit sign.
The lane line information may include at least one of information of a lane in which the own vehicle is located, information of a lane in which the surrounding vehicle is located, color information of a lane line, type information of the lane line, and position information of the lane line.
In one embodiment, as shown in fig. 5, the device for V2X communication may include at least one of an On Board Unit (OBU) and a Road Side Unit (RSU); that is, the vehicle networking information originates from at least one of the OBU and the RSU. Through V2X communication, at least one of road identification information, lane line information, road traffic abnormal condition information, and congestion condition information can be obtained. The road traffic abnormal condition information may include at least one of road construction information, abnormal vehicle information, and emergency vehicle information. Therefore, through the vehicle networking information, the optimal driving route of the own vehicle can be planned according to the road traffic abnormal condition information and the congestion condition information, so that the own vehicle can reach its destination efficiently.
Illustratively, the map information may be from a Beidou navigation System or a GPS navigation System. The map information may include road identification information, lane line information, road traffic scene information, navigation path planning information, and the like. The road traffic scene information may include road information such as intersection information, road merging information, road branching information, and ramp information. The navigation path planning information includes travel path information from an origin to a destination.
In order to obtain the optimal road traffic condition information, for example, as shown in fig. 5, in S102, the integration of the road traffic condition information in the sensor information, the internet of vehicles information, and the map information may include one of the following:
integrating the sensor information, the Internet of vehicles information and the road identification information in the map information;
integrating the sensor information, the Internet of vehicles information and the lane line information in the map information;
integrating road traffic scene information in the map information;
screening and integrating road traffic abnormal condition information in the Internet of vehicles information;
and combining the congestion condition information in the Internet of vehicles information with the navigation path planning information in the map information to obtain optimized navigation path planning information.
Those skilled in the art will appreciate that the accuracy of the information collected in the sensor information, the vehicle networking information, and the map information may differ. For example, the position of a speed limit sign or a traffic light in the point cloud data of the radar is more accurate than its position in the video stream of the image acquisition device, whereas the point cloud data of the radar does not contain the speed limit value, the traffic light state, and the like.
When the sensor information, the internet of vehicles information, and the road traffic condition information in the map information are integrated, the information with the best accuracy can be selected as the integrated information for the repeated information acquired from the sensor information, the internet of vehicles information, and the map information. For only one item of road identification information obtained from the sensor information, the internet of vehicles information, and the map information, the information may be directly adopted as the integrated information.
The accuracy of each of the sensor information, the vehicle networking information, and the map information may be known to those skilled in the art, so in the information integration process the source of each item of information may be set directly, or the information may be screened, selected, and fused by a model to obtain the integrated information.
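A minimal sketch of this selection rule follows; the accuracy rankings, field names, and values are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: for each road-information item, pick the source with the
# best known accuracy; an item reported by only one source is adopted directly.
# The accuracy rankings below are illustrative assumptions, not patent values.

ACCURACY = {  # higher rank = more accurate, per field and source
    "traffic_light_position": {"radar": 3, "camera": 2, "v2x": 1},
    "traffic_light_state":    {"camera": 2, "v2x": 3},  # radar cannot see state
    "speed_limit_value":      {"camera": 2, "v2x": 2, "map": 3},
}

def select_best(field, readings):
    """readings: {source_name: value}; return the value from the best-ranked source."""
    ranked = ACCURACY.get(field, {})
    best_source = max(readings, key=lambda src: ranked.get(src, 0))
    return readings[best_source]

# Radar and camera both report the light's position; radar wins on accuracy.
pos = select_best("traffic_light_position", {"radar": (10.2, 3.1), "camera": (10.5, 3.0)})
# Only V2X reports the light's state, so it is adopted directly.
state = select_best("traffic_light_state", {"v2x": "red"})
```

When only one source reports a field, `max` over the single reading returns that reading, matching the "directly adopted" rule in the text.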
In one embodiment, reconstructing the driving scene of the host vehicle based on the integrated road traffic condition information in S103 may include:
reconstructing a driving scene of the self-vehicle based on at least one of the integrated road identification information, lane line information, road traffic scene information, road traffic abnormal condition information and optimized navigation path planning information.
After the driving scene of the vehicle is reconstructed, the obtained driving scene can display traffic lights, states of the traffic lights, speed limit signs and values, the lane where the vehicle is located, the lanes where surrounding vehicles are located, colors and types of lane lines and the like, and the driving path of the vehicle is the optimal driving path.
In one embodiment, the driving scenario reconstruction method may further include:
acquiring target information in sensor information and Internet of vehicles information;
integrating the sensor information and the target information in the Internet of vehicles information;
and reconstructing the driving scene of the vehicle based on the integrated road traffic condition information and the integrated target information.
Fig. 6 is a schematic diagram of target information in a driving scene reconstruction method according to an embodiment of the present application. In one embodiment, as shown in fig. 6, the target information obtained from the point cloud data of the radar may include information such as the size of the target, the position of the target, and the orientation of the target. The object information obtained from the video stream of the image capturing device may comprise information such as the size of the object, the type of the object, the position of the object and the orientation of the object.
For repeated target information derived from the sensor information and the vehicle networking information, the information with the best accuracy may be selected as the integrated information. For an item of target information obtained from only one of the sensor information and the vehicle networking information, that information may be directly adopted as the integrated information.
The accuracy of the target information in the sensor information and the vehicle networking information may be known to those skilled in the art, so the source of the information may be set directly in the information integration process, or the information may be screened, matched, selected, and fused by a model. The target information in the sensor information and the vehicle networking information is integrated to obtain the integrated target information.
In one embodiment, the driving scenario reconstruction method may further include: and receiving the data of the reconstructed driving scene, and displaying the reconstructed driving scene. Thus, the reconstructed driving scene can be presented.
Fig. 7 is a schematic view of a driving scene reconstructed by the driving scene reconstruction method according to an embodiment of the application. Illustratively, the reconstructed driving scene is shown in fig. 7. As can be seen from fig. 7, the driving scene includes the own vehicle 11, the lane in which the own vehicle 11 is located, the left lane of the own vehicle 11, the right lane of the own vehicle 11, the related targets on the left lane, the related targets on the right lane, and the lane lines (color, type). The driving scene displays not only the relevant targets but also their orientations, such as the orientation of the vehicle 12 and the orientation of the vehicle 13, which can be read clearly from the scene.
In one embodiment, only the most relevant objects to the own vehicle may be included in the driving scene, for example, the driving scene may include the type, direction and position of 3 objects in front of the own vehicle, 1 object behind the own vehicle, 1 object on each of the left and right sides of the own vehicle, 1 object behind the left side of the own vehicle, 1 object in front of the left side of the own vehicle, 1 object behind the right side of the own vehicle, and 1 object in front of the right side of the own vehicle.
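One way to sketch such a "most relevant targets" filter is shown below; the zone boundaries, per-zone counts, and the distance metric are all assumptions for illustration, not values from the patent:

```python
# Illustrative filter keeping only the most relevant targets per zone around
# the own vehicle (nearest 3 in front, nearest 1 in each other zone). The zone
# boundaries, counts, and distance metric are assumptions for the sketch.

import math

ZONE_LIMITS = {"front": 3, "rear": 1, "left": 1, "right": 1,
               "front_left": 1, "rear_left": 1, "front_right": 1, "rear_right": 1}

def zone_of(dx, dy):
    """Coarse zone in ego coordinates (x forward, y to the left)."""
    if abs(dy) < 1.5:                       # roughly the same lane
        return "front" if dx >= 0 else "rear"
    side = "left" if dy > 0 else "right"
    if abs(dx) < 3.0:                       # roughly alongside
        return side
    return ("front_" if dx > 0 else "rear_") + side

def most_relevant(targets):
    """targets: dicts with 'dx', 'dy'; keep the nearest per zone up to its limit."""
    kept_per_zone = {}
    for target in sorted(targets, key=lambda t: math.hypot(t["dx"], t["dy"])):
        zone = zone_of(target["dx"], target["dy"])
        kept_per_zone.setdefault(zone, [])
        if len(kept_per_zone[zone]) < ZONE_LIMITS[zone]:
            kept_per_zone[zone].append(target)
    return [t for zone_targets in kept_per_zone.values() for t in zone_targets]
```

Sorting by distance first guarantees that when a zone's limit is reached, the retained targets are the nearest ones in that zone.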
Fig. 8 is a flowchart illustrating a driving scenario reconstruction method according to an embodiment of the present application. For example, as shown in fig. 8, the driving scenario reconstruction method may include:
s201: acquiring target information and early warning information in sensor information, Internet of vehicles information and map information;
s202: integrating target information and early warning information in the sensor information, the Internet of vehicles information and the map information;
s203: and reconstructing a driving scene of the vehicle based on the integrated target information, early warning information and the state of the vehicle, and superposing the early warning information on a target corresponding to the target information.
According to the driving scene reconstruction method, the warning information is superimposed on the corresponding targets around the vehicle, so that when the display device shows the reconstructed driving scene of the own vehicle, the warning information superimposed on the corresponding targets can be seen. This visualizes the warning information, helps identify the source of danger, and allows the state of the own vehicle to be corrected in time, so that danger is avoided and driving safety is improved.
The early warning information may be derived from sensor information and vehicle networking information. For example, the warning information in the sensor information may include: Blind Spot Warning (BSW), Lane Change Warning (LCW), Forward Collision Warning (FCW), Automatic Emergency Braking (AEB), Intersection Collision Warning (ICW), Lane Departure Warning (LDW), Speed Limit Warning (SLW), Intelligent Speed Assistance (ISA), Door Open Warning (DOW), Rear Collision Warning (RCW), Lane Keep Assist (LKA), Emergency Lane Keep Assist (ELKA), and the like.
For example, the warning information in the internet of vehicles information may include: Blind Spot Warning (BSW)/Lane Change Warning (LCW), Forward Collision Warning (FCW), Automatic Emergency Braking (AEB), Intersection Collision Warning (ICW), Lane Departure Warning (LDW), Speed Limit Warning (SLW), Intelligent Speed Assistance (ISA), Abnormal Vehicle Warning (AVW), Control Loss Warning (CLW), Do Not Pass Warning (DNPW), Emergency Vehicle Warning (EVW), Hazardous Location Warning (HLW), Traffic Jam Warning (TJW), Left Turn Assist (LTA), Red Light Violation Warning (RLVW), Vulnerable Road User Collision Warning (VRUCW), Traffic Light Optimal Speed Advisory (TLOSA), Emergency Vehicle signal Priority (EVP), Rear Cross Traffic Alert (RCTA), and the like.
As will be understood by those skilled in the art, the warning condition of each item of warning information in the sensor information and in the internet of vehicles information may be known, so the individual warning conditions are not described in detail herein.
For example, the target information may include information on the type, size, position, orientation, movement trajectory, speed, acceleration, and the like of the target around the host vehicle. The type of object may include, for example, a vehicle, a pedestrian, etc.
In addition, road sign information may also be incorporated in the process of superimposing the warning information on the target. For example, the road marking information may include traffic light information, speed limit sign information, and the like. The traffic light information may include the position, state, etc. of the traffic light, and the speed limit sign information may include the position, speed limit value, etc. of the speed limit sign.
Fig. 9 is a schematic view of an information processing procedure according to the driving scenario reconstruction method shown in fig. 8. A part of the information is shown in fig. 9.
In one embodiment, in S201, acquiring the warning information in the sensor information may include:
acquiring early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning, door opening early warning, backward collision early warning, lane keeping assistance and emergency lane keeping assistance in the sensor information.
In one embodiment, in S201, acquiring the warning information in the internet of vehicles information may include:
acquiring early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning, abnormal vehicle reminding, vehicle out-of-control early warning, reverse overtaking early warning, emergency vehicle reminding, road dangerous condition early warning, left turn assisting, red light running early warning, front congestion reminding and weak traffic participant early warning from a vehicle-mounted unit and/or a road side unit.
In one embodiment, in S201, acquiring the target information in the sensor information, as shown in fig. 9, may include:
receiving point cloud data from a radar, and analyzing the point cloud data to obtain target information;
and receiving a video stream from the image acquisition equipment, and analyzing the video stream to obtain target information.
The target information obtained from the point cloud data may include the size, position, orientation, and the like of the target around the host vehicle. The object information obtained from the video stream may include the size, type, location, orientation, etc. of the objects around the host vehicle.
In one embodiment, in S201, acquiring target information in the internet of vehicles information, as shown in fig. 9, may include: target information from the on-board unit and/or the roadside unit is acquired, which may include information about the type, position, orientation, trajectory of motion (also referred to as a travel path), speed, acceleration, etc. of the target around the host vehicle.
In one embodiment, in S202, integrating the target information in the sensor information and the vehicle networking information may include:
screening, matching, and selecting the target information in the sensor information and the vehicle networking information, and outputting the integrated target size, type, position, orientation (also referred to as direction), motion trajectory, speed, acceleration, and the like. For repeated target information derived from the sensor information and the vehicle networking information, the information with the best accuracy may be selected as the integrated information. For example, the position information of a target can be obtained from the point cloud data of the radar, the video stream of the image acquisition device, and the vehicle networking information; the position acquired from the point cloud data of the radar has the highest accuracy, so it is used as the integrated position information of the target. The orientation information of the target acquired from the vehicle networking information has higher accuracy, so it is used as the integrated orientation information of the target. For an item of target information obtained from only one of the sensor information and the vehicle networking information, that information can be directly adopted as the integrated target information. For example, the motion trajectory, speed, and acceleration of a target may be obtainable only from the vehicle networking information, in which case they are directly used as the integrated motion trajectory, speed, and acceleration of the target.
The accuracy of the target information in the sensor information and the vehicle networking information is known to those skilled in the art, so the source of the information can be set directly in the information integration process, or the information can be screened, matched, selected, and fused by a model. The target size, type, position, orientation, and other information in the sensor information and the vehicle networking information are integrated to obtain the integrated target information.
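The per-field selection described above might be sketched as follows; the preference table and attribute names are illustrative assumptions, not the patent's actual fusion model:

```python
# Hypothetical per-field fusion of one target's attributes: radar preferred for
# position and size, V2X for orientation, the camera for type; attributes that
# only one source reports (e.g. speed from V2X) are adopted directly. The
# preference table and attribute names are assumptions for illustration.

PREFERRED = {                    # source priority per attribute, best first
    "position":    ["radar", "camera", "v2x"],
    "orientation": ["v2x", "camera", "radar"],
    "type":        ["camera", "v2x"],
    "size":        ["radar", "camera"],
}

def fuse_target(readings):
    """readings: {source: {attribute: value}}; returns one fused attribute dict."""
    fused = {}
    attributes = {a for reading in readings.values() for a in reading}
    for attr in attributes:
        order = PREFERRED.get(attr, list(readings))   # fall back to any source
        for src in order:
            if attr in readings.get(src, {}):
                fused[attr] = readings[src][attr]
                break
    return fused

fused = fuse_target({
    "radar":  {"position": (12.0, 0.4), "size": (4.6, 1.8)},
    "camera": {"position": (12.3, 0.5), "type": "car"},
    "v2x":    {"orientation": 87.0, "speed": 13.9},
})
```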
In one embodiment, in S202, integrating the early warning information in the sensor information and the vehicle networking information may include:
integrating early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking, intersection collision early warning, lane departure early warning and speed limit early warning in the sensor information and the internet of vehicles information;
taking early warning condition information of at least one of door opening early warning and backward collision early warning in the sensor information as integrated early warning condition information;
and taking the early warning condition information of at least one of road dangerous condition early warning, red light running early warning and front congestion reminding in the Internet of vehicles information as the integrated early warning condition information.
Those skilled in the art will appreciate that both the sensor information and the vehicle networking information include the following warning information: blind area early warning, lane change early warning, forward collision early warning, emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning, and the like; that is, these items are warning information shared by the two sources. For the shared warning information, the corresponding warning condition information in the sensor information and the vehicle networking information needs to be integrated.
For example, the early warning condition information of the blind area early warning and the lane change early warning in the sensor information and the early warning condition information of the blind area early warning and the lane change early warning in the vehicle networking information are integrated to obtain the early warning condition information of the integrated blind area early warning and the lane change early warning; integrating the early warning condition information of the forward collision early warning in the sensor information and the early warning condition information of the forward collision early warning in the Internet of vehicles information to obtain the integrated early warning condition information of the forward collision early warning; early warning condition information of emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning and the like in the sensor information and early warning condition information of emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning and the like in the vehicle networking information are respectively and correspondingly integrated to respectively obtain the early warning condition information of the integrated emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning and the like.
For the common early warning information of the sensor information and the vehicle networking information, the early warning information with the optimal precision can be selected as the integrated early warning information. For example, there is a forward collision warning in both the sensor information and the internet of vehicles information, but the accuracy of the forward collision warning in the sensor information and the internet of vehicles information is different. For example, when the weather conditions are poor, such as rainy days, foggy days and the like, the forward collision warning in the sensor information is prone to misjudgment, so that when the weather conditions are poor, the warning condition information of the forward collision warning in the vehicle networking information can be used as the warning condition information of the integrated forward collision warning. For example, when the weather condition is good, the close range detectability of the sensor information is good, and the internet of vehicles information has a certain delay, so that when the weather condition is good, the early warning condition information of the forward collision early warning in the sensor information can be used as the early warning condition information of the integrated forward collision early warning. For those skilled in the art, the accuracy of the early warning information in the sensor information and the vehicle networking information may be known, and the source of the early warning condition information may be directly set in the process of integrating the early warning condition information of the common early warning information.
In one implementation, the common early warning information of the sensor information and the vehicle networking information can be screened, selected and fused through the data model, and the integrated early warning condition information is output. For example, a data model may be provided, the data model receives data of common warning information from the sensor information and the V2X information, and the data model calculates optimal warning condition information as the integrated warning condition information based on the speed and acceleration of the vehicle, the speed and acceleration of the target detected by the sensor, the distance between the vehicle and the target, and the like.
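A minimal sketch of such a data model for one warning follows, assuming a simple time-to-collision rule for the forward collision warning condition; the 2.7 s threshold is an illustrative assumption, not a value from the text:

```python
# Minimal sketch of a forward-collision warning condition computed from ego
# speed, target speed, and gap, via time-to-collision. The 2.7 s threshold is
# an illustrative assumption, not a value from the text.

def fcw_condition(ego_speed, target_speed, gap, ttc_threshold=2.7):
    """True when closing on the target with time-to-collision below threshold.

    ego_speed, target_speed in m/s; gap in metres (ego to target, > 0).
    """
    closing_speed = ego_speed - target_speed
    if closing_speed <= 0:      # not closing in: no collision course
        return False
    return gap / closing_speed < ttc_threshold
```

A real model would also weigh acceleration and the relative trajectory, as the text notes, but the structure (vehicle state plus target state in, warning condition out) is the same.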
For the early warning information that the sensor information exists but the vehicle networking information does not exist, for example, door opening early warning, backward collision early warning and the like, the early warning condition information of the early warning information in the sensor can be directly used as the integrated early warning condition information. For example, the early warning condition information of the door opening early warning in the sensor information is used as the early warning condition information of the integrated door opening early warning, and the early warning condition information of the backward collision early warning in the sensor information is used as the early warning condition information of the integrated backward collision early warning.
For the early warning information that the vehicle networking information exists but the sensor information does not exist, such as road dangerous condition early warning, red light running early warning, front congestion reminding and the like, the early warning condition information of the early warning information in the vehicle networking information can be directly used as the integrated early warning condition information. For example, the early warning condition information of the road danger condition early warning in the internet of vehicles information is used as the early warning condition information of the integrated road danger condition early warning; taking early warning condition information of the early warning of red light running in the Internet of vehicles information as integrated early warning condition information of the early warning of red light running; and taking the early warning condition information of the front congestion reminding in the Internet of vehicles information as the integrated early warning condition information of the front congestion reminding.
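The three integration rules just described (shared warnings, sensor-only warnings, V2X-only warnings) can be sketched together; the warning identifiers, values, and preference mechanism are illustrative assumptions:

```python
# Illustrative merge following the three rules in the text: warnings reported by
# both sources are integrated by choosing a preferred source, while sensor-only
# warnings (e.g. DOW, RCW) and V2X-only warnings (e.g. HLW, RLVW, TJW) are
# adopted directly. Warning identifiers and values here are assumptions.

def merge_warnings(sensor, v2x, prefer_v2x=frozenset()):
    """sensor / v2x: {warning_id: condition_info}; returns the integrated dict."""
    merged = {}
    for wid in set(sensor) | set(v2x):
        if wid in sensor and wid in v2x:      # shared warning: pick preferred source
            merged[wid] = v2x[wid] if wid in prefer_v2x else sensor[wid]
        elif wid in sensor:                   # sensor-only warning
            merged[wid] = sensor[wid]
        else:                                 # V2X-only warning
            merged[wid] = v2x[wid]
    return merged

merged = merge_warnings(
    sensor={"FCW": "sensor-fcw", "DOW": "sensor-dow"},
    v2x={"FCW": "v2x-fcw", "RLVW": "v2x-rlvw"},
    prefer_v2x={"FCW"},   # e.g. poor weather degrades the camera-based FCW
)
```

The `prefer_v2x` set stands in for the weather-dependent choice described above; in good weather it would simply be left empty so that the sensor-side FCW wins.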
In one embodiment, in S203, reconstructing a driving scene of the host vehicle based on the integrated target information, warning information and a state of the host vehicle, and superimposing the warning information on a target corresponding to the target information may include:
and reconstructing a driving scene of the self-vehicle based on the integrated target information, early warning information and the state of the self-vehicle, and superposing an early warning prompt corresponding to the early warning condition information on the corresponding target under the condition that the target information meets the early warning condition information.
Those skilled in the art can understand that reconstructing the driving scene of the own vehicle from the integrated target information, the warning information, and the state of the own vehicle can be done with conventional technologies in the field, so the reconstruction process is not described in detail herein.
For example, the state of the own vehicle may include: the speed, acceleration, position, traveling path information, whether the door is opened, whether the front and rear covers of the vehicle are opened, and the like.
In S203, for example, a driving scene of the host vehicle is reconstructed based on the integrated target information, the warning information, and the state of the host vehicle, and whether the vehicle around the host vehicle satisfies warning condition information such as a forward collision warning is determined by combining the state of the host vehicle (for example, speed, acceleration, and travel path information) and the information of the vehicle around the host vehicle (for example, speed, acceleration, and motion trajectory), and if yes, a warning prompt corresponding to the warning condition information such as the forward collision warning is superimposed on the corresponding vehicle.
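A sketch of this superposition step follows, using a trivial path-overlap test as a stand-in for the real warning condition checks; the condition and the colour are illustrative:

```python
# Sketch of the superposition in S203: when a target satisfies a warning
# condition, the corresponding prompt (here a display colour) is attached to
# that target in the reconstructed scene. The overlap test and the colour are
# illustrative stand-ins for the real warning condition checks.

def paths_overlap(ego_path, target_path):
    """Trivial 'paths cross' check on sequences of (x, y) waypoints."""
    return bool(set(ego_path) & set(target_path))

def reconstruct(ego, targets):
    scene = {"ego": ego, "targets": []}
    for target in targets:
        entry = dict(target, prompt=None)
        if paths_overlap(ego["path"], target["path"]):  # warning condition met
            entry["prompt"] = "red"                     # prompt superimposed
        scene["targets"].append(entry)
    return scene

scene = reconstruct(
    ego={"path": [(0, 0), (1, 0), (2, 0)]},
    targets=[{"id": 1, "path": [(2, 0), (3, 0)]},   # crosses the ego path
             {"id": 2, "path": [(5, 5), (6, 5)]}],  # no overlap, no prompt
)
```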
In one embodiment, the early warning prompts may be classified into multiple levels according to the degree of risk, for example, the early warning prompts may be classified into a first-level early warning prompt, a second-level early warning prompt and a third-level early warning prompt according to the degree of risk of low, medium and high.
Illustratively, the warning prompt may be at least one of a color and flickering; the color may be, for example, yellow or red, and the flickering may have different frequencies. The state of the own vehicle and the information of the vehicles around it are combined: for example, the travel path information of the own vehicle and the motion trajectory of a first vehicle around it are combined to judge whether the travel path of the own vehicle overlaps the motion trajectory of the first vehicle, and if there is overlap, the first vehicle and the own vehicle are considered to be at risk of collision. When the reconstructed driving scene is displayed, the first vehicle may be displayed in the color of the warning prompt, for example red.
In one embodiment, the position, speed, and acceleration of the own vehicle may be combined with the position, speed, and acceleration of a first vehicle around it to calculate the location of the collision point and the time to collision. When the difference between the current time and the collision time is in a first range, the driver still has time to take measures to correct the vehicle state and avoid the collision; at this time a first-level warning prompt can be given and superimposed on the first vehicle, for example displaying the first vehicle in yellow without flickering. When the difference is in a second range, the driver must take measures immediately to correct the vehicle state; at this time a second-level warning prompt can be given and superimposed on the first vehicle, for example displaying the first vehicle in red without flickering. When the difference is in a third range, the own vehicle has entered an emergency state and emergency measures such as emergency braking need to be taken.
In one embodiment, the position, speed, and acceleration of the own vehicle may be combined with the position, speed, and acceleration of the first vehicle around it to calculate the location of the collision point and the distance between the current position of the own vehicle and that collision point. When the distance difference is in a first range, the driver still has time to take measures to correct the vehicle state and avoid the collision; at this time a first-level warning prompt can be given and superimposed on the first vehicle, for example displaying the first vehicle in yellow without flickering. When the distance difference is in a second range, the driver must take measures immediately to correct the vehicle state; at this time a second-level warning prompt can be given and superimposed on the first vehicle, for example displaying the first vehicle in red without flickering. When the distance difference is in a third range, the own vehicle has entered an emergency state and emergency measures such as emergency braking need to be taken.
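The graded prompts above can be sketched as a simple range check; the boundaries of the three ranges (5 s, 2.5 s, 1 s) are illustrative assumptions, and the same check applies to the distance variant with distance thresholds:

```python
# Illustrative mapping from time-to-collision to the three prompt levels; the
# range boundaries (5 s, 2.5 s, 1 s) are assumptions, not values from the text.

def warning_level(ttc):
    """Return 0 (no prompt) .. 3 (emergency) for a time-to-collision in seconds."""
    if ttc > 5.0:
        return 0    # collision far enough away: no prompt
    if ttc > 2.5:
        return 1    # first range: yellow, no flickering
    if ttc > 1.0:
        return 2    # second range: red, no flickering
    return 3        # third range: emergency state, e.g. emergency braking
```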
Fig. 10 is a schematic view of a driving scene reconstructed by the driving scene reconstruction method according to an embodiment of the present application. For example, as shown in fig. 10, the own vehicle 11 changes lane to the left, the first vehicle 100 among the surrounding vehicles is determined to satisfy the warning condition information of the blind zone warning and the lane change warning in combination with the state of the own vehicle 11 and the information of the surrounding vehicles, and a warning prompt corresponding to that warning condition information, for example red, is superimposed on the first vehicle 100, so that the first vehicle 100 is displayed in red. By observing the driving scene, the driver can see that the source of the danger is the first vehicle and can take the necessary measures to avoid the lane change risk.
Fig. 11 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. Illustratively, as shown in fig. 11, the own vehicle 11 is kept in a straight line, the surrounding vehicles and the first vehicle 100 are judged to satisfy the warning condition information of the forward collision warning in conjunction with the state of the own vehicle 11 and the information of the surrounding vehicles, and a warning notice corresponding to the warning condition information of the forward collision warning, for example, red is superimposed on the first vehicle 100, so that the first vehicle 100 is displayed in red. The driver can know that the source of the danger is the first vehicle by observing the driving scene, so that necessary measures can be taken to avoid collision with the first vehicle 100.
Fig. 12 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. Illustratively, as shown in fig. 12, the own vehicle 11 is kept in a straight line, the surrounding vehicles and the first vehicle 100 are judged to satisfy the warning condition information for warning of the backward collision in conjunction with the state of the own vehicle 11 and the information of the surrounding vehicles, and a warning notice corresponding to the warning condition information for warning of the backward collision, for example, yellow is superimposed on the first vehicle 100, so that the first vehicle 100 is displayed in yellow. The driver can know that the source of the danger is the first vehicle by observing the driving scene, so that necessary measures can be taken to avoid collision with the first vehicle 100.
Fig. 13 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. Illustratively, as shown in fig. 13, the own vehicle 11 is kept in a straight line, and the first vehicle 100 of the surrounding vehicles is judged to satisfy the warning condition information of the door opening warning in conjunction with the state of the own vehicle 11 and the information of the surrounding vehicles, and a warning notice corresponding to the warning condition information of the door opening warning, for example, red is superimposed on the first vehicle 100, so that the first vehicle 100 is displayed in red. The driver can know that the source of the danger of opening the door is the first vehicle by observing the driving scene, so that necessary measures can be taken to avoid collision with the first vehicle 100.
Fig. 14 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. Illustratively, as shown in fig. 14, the host vehicle 11 keeps moving straight, and based on the current position of the host vehicle 11 and the position of the lane line, it is determined whether their relative positional relationship satisfies the early warning condition of the lane keeping assist. If so, an early warning prompt corresponding to the early warning condition information of the lane keeping assist, for example the color red, is superimposed on the corresponding lane line. In fig. 14, the positional relationship between the current position of the host vehicle 11 and the left lane line 200 satisfies the early warning condition of the lane keeping assist, so the corresponding early warning prompt, for example red, is superimposed on the left lane line 200, and the left lane line 200 is displayed in red. By observing the driving scene, the driver can see that the host vehicle is deviating toward the left lane line and can take the necessary measures to return the vehicle to its normal lane position.
Fig. 15 is a schematic view of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. Illustratively, as shown in fig. 15, at an intersection the host vehicle 11 is about to turn left while the surrounding first vehicle 100 is going straight across the intersection from the side. By combining the state of the host vehicle 11 with the information of the first vehicle 100, it is judged that the first vehicle 100 may collide with the host vehicle during the left turn, that is, the first vehicle 100 satisfies the early warning condition information of the forward target crossing assistance. An early warning prompt corresponding to this condition information, for example the color yellow, is therefore superimposed on the first vehicle 100, so that the first vehicle 100 is displayed in yellow. By observing the driving scene, the driver can see that the source of the danger is the first vehicle 100 and can take the necessary measures; alternatively, if the driver does not act, the automatic driving system initiates emergency deceleration or braking to avoid a collision between the host vehicle and the first vehicle 100.
In one embodiment, the driving scenario reconstruction method may further include:
acquiring road identification information in sensor information, Internet of vehicles information and map information;
integrating the sensor information, the Internet of vehicles information and the road identification information in the map information;
and reconstructing a driving scene of the self-vehicle based on the integrated road identification information, early warning information and the state of the self-vehicle, and superposing an early warning prompt corresponding to the early warning condition information on the corresponding road identification under the condition that the state of the self-vehicle meets the early warning condition information.
Illustratively, the driving scene of the host vehicle is reconstructed based on the integrated road identification information, the early warning information, and the state of the host vehicle. By combining the state of the host vehicle (such as its speed) with the value on a speed limit sign, the vehicle may be judged to be speeding; at this time, the state of the host vehicle satisfies the early warning condition information of the speed limit warning, and an early warning prompt corresponding to that condition information (such as flickering) is superimposed on the corresponding speed limit sign. For example, if the early warning prompt is flickering at a certain frequency, the flickering is superimposed on the speed limit sign, that is, the speed limit sign flickers at that frequency in the driving scene to remind the driver that the vehicle is speeding.
Illustratively, the driving scene of the host vehicle is reconstructed based on the integrated road identification information, the early warning information, and the state of the host vehicle. By combining the state of the host vehicle (such as its speed and acceleration) with the state of a traffic light (such as the red light being on), the vehicle may be judged to be about to run the red light; at this time, the state of the host vehicle satisfies the early warning condition information of the red light running warning. For example, if the early warning prompt is flickering, the flickering is superimposed on the traffic light, that is, the traffic light flickers at a certain frequency in the driving scene to attract the driver's attention, so that the driver can take measures to correct the vehicle state and avoid running the red light.
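The two road-identification checks just described, speed limit and red light, can be sketched as follows. The data shapes, the comfortable-deceleration value, and the stopping-distance model are assumptions for illustration; the application itself only specifies the comparison against the sign value and the prediction of running the red light.

```python
# Hypothetical sketch of the speed-limit and red-light checks described
# above. The prompt dictionaries and the 3 m/s^2 deceleration are assumed.

def speed_limit_prompt(host_speed_kmh, sign_limit_kmh):
    """Return a flicker prompt for the speed limit sign when the host
    vehicle's speed exceeds the value read from the sign."""
    if host_speed_kmh > sign_limit_kmh:
        return {"target": "speed_limit_sign", "prompt": "flicker"}
    return None

def red_light_prompt(host_speed, distance_to_stop_line, light_state):
    """Predict whether the host will cross the stop line while the
    light is red. The text also mentions acceleration; this sketch
    uses a constant-deceleration stopping-distance estimate only."""
    if light_state != "red":
        return None
    comfortable_decel = 3.0  # assumed comfortable braking, m/s^2
    stopping_distance = host_speed ** 2 / (2 * comfortable_decel)
    if stopping_distance > distance_to_stop_line:
        # Cannot stop in time: superimpose flicker on the traffic light.
        return {"target": "traffic_light", "prompt": "flicker"}
    return None

print(speed_limit_prompt(72, 60))               # exceeds limit -> flicker
print(red_light_prompt(20.0, 40.0, "red"))      # 66.7 m to stop > 40 m
```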
The types of early warning prompts given here, such as color and flickering, are merely examples. Those skilled in the art will appreciate that in an actual implementation, the selected types of warning prompts should conform to standards common in the art.
It should be noted that, although detailed contents of the target information, the road identification information, and the early warning information have been enumerated, those skilled in the art will appreciate that this information is not limited to the enumerated contents. In fact, the user can flexibly set the contents of the target information, the road identification information, and the early warning information acquired from the sensor information, the internet of vehicles information, and the map information according to personal preference and/or the actual application scene, and any driving scene reconstruction method that reconstructs the driving scene of the host vehicle in this way falls within the protection scope of the present application.
Fig. 16 is a block diagram of a driving scene reconstructing apparatus according to an embodiment of the present application. An embodiment of the present application further provides a driving scenario reconstruction device, as shown in fig. 16, the driving scenario reconstruction device may include:
the acquisition module 21 is used for acquiring target information and early warning information in the sensor information, the internet of vehicles information and the map information;
the integration module 22 is connected with the acquisition module 21 and is used for integrating target information and early warning information in the sensor information, the internet of vehicles information and the map information;
and the reconstruction module 23 is connected with the integration module 22 and is used for reconstructing a driving scene of the vehicle based on the integrated target information, the early warning information and the state of the vehicle and superposing the early warning information on a target corresponding to the target information.
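The acquisition, integration, and reconstruction modules above can be sketched as a simple three-stage pipeline. The class names, method names, and data shapes below are illustrative assumptions, not interfaces defined by the application:

```python
# Illustrative skeleton of the acquisition -> integration -> reconstruction
# pipeline formed by modules 21-23. All names and data shapes are assumed.

class AcquisitionModule:
    def acquire(self, sensor_info, v2x_info, map_info):
        """Extract target information and early warning information
        from the sensor, internet of vehicles, and map sources."""
        return {
            "targets": sensor_info.get("targets", []) + v2x_info.get("targets", []),
            "warnings": sensor_info.get("warnings", []) + v2x_info.get("warnings", []),
        }

class IntegrationModule:
    def integrate(self, acquired):
        """Fuse the per-source lists and deduplicate warning conditions."""
        return {
            "targets": acquired["targets"],
            "warnings": sorted(set(acquired["warnings"])),
        }

class ReconstructionModule:
    def reconstruct(self, integrated, host_state):
        """Rebuild the scene and superimpose each warning on the target
        to which its target information corresponds."""
        scene = {"host": host_state, "objects": []}
        for target in integrated["targets"]:
            overlay = [w for w in integrated["warnings"] if w == target.get("warning")]
            scene["objects"].append({"target": target, "overlay": overlay})
        return scene

sensor = {"targets": [{"id": 1, "warning": "forward_collision"}],
          "warnings": ["forward_collision"]}
v2x = {"targets": [], "warnings": ["red_light"]}
acquired = AcquisitionModule().acquire(sensor, v2x, {})
scene = ReconstructionModule().reconstruct(
    IntegrationModule().integrate(acquired), {"speed": 20})
print(scene["objects"][0]["overlay"])  # prints: ['forward_collision']
```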
Fig. 17 is a block diagram illustrating an acquisition module of a driving scene reconstruction apparatus according to an embodiment of the present disclosure. In one embodiment, as shown in fig. 17, the obtaining module 21 may include at least one of:
the point cloud data acquisition sub-module 211 is configured to receive point cloud data from a radar, and analyze the point cloud data to obtain at least one of target information and road identification information;
and the video stream acquisition sub-module 212 is configured to receive a video stream from the image capturing device, and parse the video stream to obtain at least one of the target information and the road identification information.
In one embodiment, the obtaining module 21 may include:
and the internet of vehicles information acquisition submodule 213 is used for acquiring at least one of target information, target runaway state reminding, abnormal vehicle information, a target motion track and road identification information from the vehicle-mounted unit and/or the road side unit.
In one embodiment, the obtaining module 21 may include:
and a map information obtaining sub-module 214, configured to obtain at least one of road identification information and navigation path planning information from the map information.
In one embodiment, the obtaining module 21 may include:
a first warning information obtaining sub-module 215 for obtaining warning condition information of at least one of a blind area warning, a lane change warning, a forward collision warning, an emergency braking warning, an intersection collision warning, a lane departure warning, a speed limit warning, a door opening warning, a backward collision warning, a lane keeping assist, and an emergency lane keeping assist among the sensor information;
the second warning information obtaining sub-module 216 is configured to obtain warning condition information of at least one of a blind area warning, a lane change warning, a forward collision warning, an emergency braking warning, a crossroad collision warning, a lane departure warning, a speed limit warning, an abnormal vehicle warning, a vehicle out-of-control warning, a reverse overtaking warning, an emergency vehicle warning, a road danger state warning, a left turn assistant, a red light running warning, and a front congestion warning from the on-board unit and/or the road side unit.
Fig. 18 is a block diagram illustrating an integrated module of a driving scene reconstructing apparatus according to an embodiment of the present application. In one embodiment, as shown in FIG. 18, integration module 22 includes at least one of:
the target information integration sub-module 221 is configured to integrate the sensor information and target information in the internet of vehicles information;
a road identification information integration sub-module 222, configured to integrate the sensor information, the internet of vehicles information, and the road identification information in the map information;
a first early warning information integration sub-module 223, configured to integrate the sensor information and early warning condition information of at least one of a blind area early warning, a lane change early warning, a forward collision early warning, emergency braking, an intersection collision early warning, a lane departure early warning, and a speed limit early warning in the internet of vehicles information;
a second warning information integration sub-module 224, configured to use warning condition information of at least one of the door opening warning and the backward collision warning in the sensor information as integrated warning condition information;
and a third warning information integration sub-module 225, configured to use warning condition information of at least one of a road dangerous condition warning, a red light running warning, and a front congestion warning in the internet of vehicles information as integrated warning condition information.
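The three integration rules in sub-modules 223 to 225 can be sketched as follows: warnings that both the sensor path and the internet of vehicles path can report are fused, sensor-only warnings (door opening, backward collision) pass through as-is, and internet-of-vehicles-only warnings (road hazard, red light running, front congestion) pass through as-is. The warning names are abbreviated from the text; the set-union merge strategy itself is an assumed illustration, not specified by the application:

```python
# Sketch of the source-specific integration rules described above.
# The simple set-union fusion is an assumption for illustration.

DUAL_SOURCE = {"blind_zone", "lane_change", "forward_collision",
               "emergency_braking", "intersection_collision",
               "lane_departure", "speed_limit"}
SENSOR_ONLY = {"door_open", "rear_collision"}
V2X_ONLY = {"road_hazard", "red_light_running", "front_congestion"}

def integrate_warnings(sensor_warnings, v2x_warnings):
    integrated = set()
    # Rule 1: fuse warnings that both sources can report (sub-module 223).
    integrated |= (set(sensor_warnings) | set(v2x_warnings)) & DUAL_SOURCE
    # Rule 2: sensor-only warnings pass through unchanged (sub-module 224).
    integrated |= set(sensor_warnings) & SENSOR_ONLY
    # Rule 3: V2X-only warnings pass through unchanged (sub-module 225).
    integrated |= set(v2x_warnings) & V2X_ONLY
    return sorted(integrated)

print(integrate_warnings(["forward_collision", "door_open"],
                         ["forward_collision", "red_light_running"]))
```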
In one embodiment, the reconstruction module is configured to reconstruct a driving scene of the vehicle based on the integrated target information, the early warning information, and a state of the vehicle, and superimpose an early warning prompt corresponding to the early warning condition information on a corresponding target when the target information satisfies the early warning condition information.
In one embodiment, the obtaining module is further configured to obtain road identification information in the sensor information, the internet of vehicles information, and the map information;
the integration module is also used for integrating the sensor information, the Internet of vehicles information and the road identification information in the map information;
the reconstruction module is further used for reconstructing a driving scene of the self-vehicle based on the integrated road identification information, the early warning information and the state of the self-vehicle, and superposing an early warning prompt corresponding to the early warning condition information on the corresponding road identification under the condition that the state of the self-vehicle meets the early warning condition information.
The functions of the modules in the embodiment of the present application may refer to the corresponding descriptions in the above method, and are not described herein again.
Fig. 19 is a block diagram of a driving scene reconstruction system according to an embodiment of the present application. An embodiment of the present application further provides a driving scenario reconstruction system, as shown in fig. 19, where the driving scenario reconstruction system includes the driving scenario reconstruction device 20, and the driving scenario reconstruction system may further include:
the sensor 31 is connected with the driving scene reconstruction device 20 and used for acquiring and outputting sensor information to the driving scene reconstruction device 20;
the vehicle networking device 32 is connected with the driving scene reconstruction device 20 and used for outputting vehicle networking information to the driving scene reconstruction device;
a map device 33 connected to the driving scene reconstructing device 20, for outputting map information to the driving scene reconstructing device;
and the display device 34 is connected with the driving scene reconstruction device and used for receiving the data of the reconstructed driving scene from the driving scene reconstruction device and displaying the reconstructed driving scene.
It will be understood by those skilled in the art that the process of displaying the reconstructed driving scene after the display device obtains the data of the reconstructed driving scene is conventional in the art, and the detailed description of the displaying method is omitted here.
In one embodiment, the sensor 31, the internet of vehicles device 32, and the map device 33 are all connected to the acquisition module 21 in the driving scene reconstruction device 20. The display device 34 may be connected to the reconstruction module 23 in the driving scene reconstruction device 20. Those skilled in the art will appreciate that the "connection" here may be an electrical connection, a CAN bus connection, a Wi-Fi connection, a network connection, or the like.
In one embodiment, the driving scene reconstructing apparatus may be a controller integrated with an acquiring module, an integrating module, and a reconstructing module. The display device may be an instrument controller having a display function in the vehicle. In one embodiment, the controller is integrated with an acquisition module and an integration module. The display device may be an instrument controller having a display function in the vehicle, and the instrument controller may implement the function of the reconfiguration module and the function of display.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
The embodiment of the application also provides a vehicle, and in one implementation mode, the vehicle can comprise the driving scene reconstruction device. In one embodiment, the vehicle may include the driving scenario reconstruction system described above.
Fig. 20 is a block diagram of an electronic device according to an embodiment of the present application. An embodiment of the present application further provides an electronic device. As shown in fig. 20, the electronic device includes at least one processor 920 and a memory 910 communicatively coupled to the at least one processor 920. The memory 910 stores instructions executable by the at least one processor 920, and when the processor 920 executes the instructions, the driving scene reconstruction method in the above embodiment is implemented. There may be one or more memories 910 and processors 920. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application described and/or claimed herein.
The electronic device may further include a communication interface 930 for data exchange with external devices. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor 920 may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information for a Graphical User Interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, as desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 20, but this does not mean that there is only one bus or one type of bus.
Optionally, in an implementation, if the memory 910, the processor 920 and the communication interface 930 are integrated on a chip, the memory 910, the processor 920 and the communication interface 930 may complete communication with each other through an internal interface.
It should be understood that the processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the Advanced RISC Machine (ARM) architecture.
Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 910) storing computer instructions, which when executed by a processor implement the methods provided in embodiments of the present application.
Alternatively, the memory 910 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the driving scene reconstruction method, and the like. Further, the memory 910 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 910 may optionally include a memory remotely located from the processor 920, and these remote memories may be connected to the electronic device of the driving scenario reconstruction method through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions are performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A driving scene reconstruction method is characterized by comprising the following steps:
acquiring target information and early warning information in sensor information, Internet of vehicles information and map information;
integrating target information and early warning information in the sensor information, the Internet of vehicles information and the map information;
and reconstructing a driving scene of the vehicle based on the integrated target information, early warning information and the state of the vehicle, and superposing the early warning information on a target corresponding to the target information.
2. The method of claim 1, wherein obtaining early warning information in the sensor information comprises:
acquiring early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning, door opening early warning, backward collision early warning, lane keeping assistance and emergency lane keeping assistance in the sensor information.
3. The method of claim 1, wherein obtaining early warning information in the internet of vehicles information comprises:
acquiring early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning, abnormal vehicle reminding, vehicle out-of-control early warning, reverse overtaking early warning, emergency vehicle reminding, road dangerous condition early warning, left turn assisting, red light running early warning and front congestion reminding from a vehicle-mounted unit and/or a road side unit.
4. The method of claim 1, wherein integrating the sensor information and the early warning information in the internet of vehicles information comprises:
integrating early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking, intersection collision early warning, lane departure early warning and speed limit early warning in the sensor information and the internet of vehicles information;
taking early warning condition information of at least one of door opening early warning and backward collision early warning in the sensor information as integrated early warning condition information;
and taking the early warning condition information of at least one of road dangerous condition early warning, red light running early warning and front congestion reminding in the Internet of vehicles information as the integrated early warning condition information.
5. The method of claim 1, wherein the target information includes at least one of a type, a size, a position, an orientation, a motion trajectory, a velocity, an acceleration of a target around the host vehicle.
6. The method according to any one of claims 1 to 5, wherein reconstructing a driving scene of the own vehicle based on the integrated target information, early warning information and a state of the own vehicle, and superposing the early warning information on a target corresponding to the target information comprises:
and reconstructing a driving scene of the self-vehicle based on the integrated target information, early warning information and the state of the self-vehicle, and superposing an early warning prompt corresponding to the early warning condition information on the corresponding target under the condition that the target information meets the early warning condition information.
7. The method of claim 1, further comprising:
acquiring road identification information in sensor information, Internet of vehicles information and map information;
integrating the sensor information, the Internet of vehicles information and the road identification information in the map information;
and reconstructing a driving scene of the self-vehicle based on the integrated road identification information, early warning information and the state of the self-vehicle, and superposing an early warning prompt corresponding to the early warning condition information on the corresponding road identification under the condition that the state of the self-vehicle meets the early warning condition information.
8. A driving scene reconstructing apparatus, comprising:
the acquisition module is used for acquiring target information and early warning information in the sensor information, the Internet of vehicles information and the map information;
the integration module is used for integrating target information and early warning information in the sensor information, the Internet of vehicles information and the map information;
and the reconstruction module is used for reconstructing a driving scene of the self-vehicle based on the integrated target information, the early warning information and the state of the self-vehicle, and superposing the early warning information on a target corresponding to the target information.
9. The apparatus of claim 8, wherein the acquisition module comprises:
a first early warning information acquisition submodule configured to acquire, from the sensor information, early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning, door opening early warning, backward collision early warning, lane keeping assistance and emergency lane keeping assistance.
10. The apparatus of claim 8, wherein the acquisition module comprises:
a second early warning information acquisition submodule configured to acquire, from an on-board unit and/or a roadside unit, early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking early warning, intersection collision early warning, lane departure early warning, speed limit early warning, abnormal vehicle reminding, vehicle out-of-control early warning, reverse overtaking early warning, emergency vehicle reminding, road dangerous condition early warning, left turn assistance, red light running early warning and front congestion reminding.
11. The apparatus of claim 8, wherein the integration module comprises at least one of:
a first early warning information integration submodule configured to integrate, from the sensor information and the Internet of Vehicles information, early warning condition information of at least one of blind area early warning, lane change early warning, forward collision early warning, emergency braking early warning, intersection collision early warning, lane departure early warning and speed limit early warning;
a second early warning information integration submodule configured to take, as integrated early warning condition information, early warning condition information of at least one of door opening early warning and backward collision early warning in the sensor information;
a third early warning information integration submodule configured to take, as the integrated early warning condition information, early warning condition information of at least one of road dangerous condition early warning, red light running early warning and front congestion reminding in the Internet of Vehicles information.
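Claim 11's three integration submodules amount to a per-warning-type source policy: some warning types are fused from both the sensor and Internet of Vehicles information, some are taken from the sensor information alone, and some from the Internet of Vehicles information alone. A hypothetical sketch of that routing follows; the set names and warning identifiers are invented here for illustration and do not appear in the patent.

```python
# Hypothetical routing of warning types to their sources, mirroring the three
# integration submodules: fused, sensor-only, and V2X-only warning conditions.
FUSED = {"blind_area", "lane_change", "forward_collision", "emergency_braking",
         "intersection_collision", "lane_departure", "speed_limit"}
SENSOR_ONLY = {"door_opening", "backward_collision"}
V2X_ONLY = {"road_hazard", "red_light_running", "front_congestion"}

def integrate_warnings(sensor_warnings, v2x_warnings):
    """Return the integrated warning set according to each type's source rule:
    fused types may come from either source; the others only from their own."""
    integrated = set()
    integrated |= (sensor_warnings | v2x_warnings) & FUSED
    integrated |= sensor_warnings & SENSOR_ONLY
    integrated |= v2x_warnings & V2X_ONLY
    return integrated
```

Note that under this rule a sensor-only type reported over V2X (or vice versa) is silently dropped, which matches the claim's assignment of each warning type to a fixed source.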
12. The apparatus of claim 8, wherein the target information comprises at least one of a type, a size, a position, an orientation, a motion trajectory, a velocity and an acceleration of a target around the host vehicle.
13. The device according to any one of claims 8 to 12, wherein the reconstruction module is configured to reconstruct a driving scene of the host vehicle based on the integrated target information, the early warning information, and a state of the host vehicle, and superimpose an early warning prompt corresponding to the early warning condition information on a corresponding target when the target information satisfies the early warning condition information.
14. The apparatus of claim 8, wherein:
the acquisition module is further configured to acquire road identification information from the sensor information, the Internet of Vehicles information and the map information;
the integration module is further configured to integrate the road identification information in the sensor information, the Internet of Vehicles information and the map information;
the reconstruction module is further configured to reconstruct a driving scene of the host vehicle based on the integrated road identification information, the early warning information and the state of the host vehicle, and, when the state of the host vehicle satisfies the early warning condition information, to superimpose the early warning prompt corresponding to the early warning condition information on the corresponding road identification.
15. A driving scene reconstruction system, characterized by comprising the driving scene reconstruction apparatus of any one of claims 8 to 14, the system further comprising:
a sensor connected to the driving scene reconstruction apparatus and configured to acquire sensor information and output it to the driving scene reconstruction apparatus;
an Internet of Vehicles device connected to the driving scene reconstruction apparatus and configured to output Internet of Vehicles information to the driving scene reconstruction apparatus;
a map device connected to the driving scene reconstruction apparatus and configured to output map information to the driving scene reconstruction apparatus;
a display device connected to the driving scene reconstruction apparatus and configured to receive data of the reconstructed driving scene from the driving scene reconstruction apparatus and to display the reconstructed driving scene.
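Claim 15 wires four peripherals to the reconstruction apparatus: three information sources feed it and a display consumes its output. A minimal, hypothetical composition sketch in Python follows (sources are modeled as callables; none of these class or parameter names appear in the patent):

```python
# Hypothetical composition of the claimed system: sensor, V2X and map sources
# feed the reconstruction step, whose output is handed to the display.
class DrivingSceneReconstructionSystem:
    def __init__(self, sensor, v2x, map_device, reconstructor, display):
        self.sources = (sensor, v2x, map_device)
        self.reconstructor = reconstructor
        self.display = display

    def tick(self):
        """One update cycle: gather inputs, reconstruct the scene, display it."""
        inputs = [source() for source in self.sources]
        scene = self.reconstructor(*inputs)
        self.display(scene)
        return scene

# Toy wiring with stub sources and a pass-through reconstructor.
system = DrivingSceneReconstructionSystem(
    sensor=lambda: ["car_ahead"],
    v2x=lambda: ["red_light"],
    map_device=lambda: ["speed_limit_60"],
    reconstructor=lambda s, v, m: {"targets": s, "events": v, "map": m},
    display=lambda scene: None,
)
scene = system.tick()
```

The point of the sketch is only the data flow of the claim; a real system would poll the sources asynchronously and at different rates.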
16. A vehicle, characterized by comprising the driving scene reconstruction apparatus of any one of claims 8 to 14, or the driving scene reconstruction system of claim 15.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.
18. A computer-readable storage medium having stored therein computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 7.
CN202010685596.9A 2020-07-16 2020-07-16 Driving scene reconstruction method, device, system, vehicle, equipment and storage medium Pending CN111915915A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010685596.9A CN111915915A (en) 2020-07-16 2020-07-16 Driving scene reconstruction method, device, system, vehicle, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111915915A true CN111915915A (en) 2020-11-10

Family

ID=73281004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010685596.9A Pending CN111915915A (en) 2020-07-16 2020-07-16 Driving scene reconstruction method, device, system, vehicle, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111915915A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991764A (en) * 2021-04-26 2021-06-18 中汽研(天津)汽车工程研究院有限公司 Overtaking scene data acquisition, identification and extraction system based on camera
CN113033684A (en) * 2021-03-31 2021-06-25 浙江吉利控股集团有限公司 Vehicle early warning method, device, equipment and storage medium
CN113096427A (en) * 2021-03-30 2021-07-09 北京三快在线科技有限公司 Information display method and device
CN113240939A (en) * 2021-03-31 2021-08-10 浙江吉利控股集团有限公司 Vehicle early warning method, device, equipment and storage medium
CN113428160A (en) * 2021-07-28 2021-09-24 中汽创智科技有限公司 Dangerous scene prediction method, device and system, electronic equipment and storage medium
CN113428178A (en) * 2021-07-24 2021-09-24 中汽创智科技有限公司 Control method, device and medium for automatically driving vehicle and vehicle
CN113706870A (en) * 2021-08-30 2021-11-26 广州文远知行科技有限公司 Method for collecting main vehicle lane change data in congested scene and related equipment
CN114005271A (en) * 2021-08-05 2022-02-01 北京航空航天大学 Intersection collision risk quantification method in intelligent networking environment
CN114170805A (en) * 2021-12-23 2022-03-11 南京理工大学 Vehicle path planning method based on AEB collision speed
CN114373295A (en) * 2021-11-30 2022-04-19 江铃汽车股份有限公司 Driving safety early warning method, system, storage medium and equipment
CN114734993A (en) * 2020-12-23 2022-07-12 观致汽车有限公司 Dynamic traffic scene display system and display method
US20220234605A1 (en) * 2021-04-16 2022-07-28 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method for outputting early warning information, device, storage medium and program product
CN114863689A (en) * 2022-07-08 2022-08-05 中汽研(天津)汽车工程研究院有限公司 Method and system for collecting, identifying and extracting data of on-off ramp behavior scene
CN114999161A (en) * 2022-07-29 2022-09-02 河北博士林科技开发有限公司 Be used for intelligent traffic jam edge management system
CN115588311A (en) * 2022-11-07 2023-01-10 中国第一汽车股份有限公司 Automatic driving vehicle remote control method, system, vehicle and storage medium
CN117727183A (en) * 2024-02-18 2024-03-19 南京淼瀛科技有限公司 Automatic driving safety early warning method and system combining vehicle-road cooperation
CN117727183B (en) * 2024-02-18 2024-05-17 南京淼瀛科技有限公司 Automatic driving safety early warning method and system combining vehicle-road cooperation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108136907A (en) * 2015-10-09 2018-06-08 日产自动车株式会社 Display apparatus and vehicle display methods
US20190130765A1 (en) * 2017-10-31 2019-05-02 Cummins Inc. Sensor fusion and information sharing using inter-vehicle communication
CN110758243A (en) * 2019-10-31 2020-02-07 的卢技术有限公司 Method and system for displaying surrounding environment in vehicle driving process
CN111402588A (en) * 2020-04-10 2020-07-10 河北德冠隆电子科技有限公司 High-precision map rapid generation system and method for reconstructing abnormal roads based on space-time trajectory



Similar Documents

Publication Publication Date Title
CN111880533B (en) Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
CN111915915A (en) Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
US11314252B2 (en) Providing user assistance in a vehicle based on traffic behavior models
US10176720B2 (en) Auto driving control system
US11315418B2 (en) Providing user assistance in a vehicle based on traffic behavior models
CN110371114B (en) Vehicle control device, vehicle control method, and storage medium
US11827274B2 (en) Turn path visualization to improve spatial and situational awareness in turn maneuvers
JP6459220B2 (en) Accident prevention system, accident prevention device, accident prevention method
CN113345269B (en) Vehicle danger early warning method, device and equipment based on V2X vehicle networking cooperation
CN108275149B (en) System and method for merge assistance using vehicle communication
US11195415B2 (en) Lane change notification
US10147324B1 (en) Providing user assistance in a vehicle based on traffic behavior models
WO2020057406A1 (en) Driving aid method and system
JP2020053046A (en) Driver assistance system and method for displaying traffic information
JP2017151041A (en) Driving support device and center
JP7445882B2 (en) Driving support method, road photographic image collection method, and roadside device
WO2019116423A1 (en) Teacher data collection device
CN110949389A (en) Vehicle control device, vehicle control method, and storage medium
US20220120581A1 (en) End of trip sequence
US20220073104A1 (en) Traffic accident management device and traffic accident management method
JP2020065141A (en) Vehicle overhead image generation system and method thereof
JP7183438B2 (en) Driving support device, driving support method and program
CN117836184A (en) Complementary control system for autonomous vehicle
US11491976B2 (en) Collision warning system for safety operators of autonomous vehicles
EP3896671A1 (en) Detection of a rearward approaching emergency vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201110