WO2022012094A1 - Driving scene reconstruction method and apparatus, system, vehicle, device, and storage medium - Google Patents

Driving scene reconstruction method and apparatus, system, vehicle, device, and storage medium

Info

Publication number
WO2022012094A1
WO2022012094A1 (PCT/CN2021/085226; CN2021085226W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
vehicle
road traffic
driving scene
road
Prior art date
Application number
PCT/CN2021/085226
Other languages
French (fr)
Chinese (zh)
Inventor
丁磊 (Ding Lei)
朱兰芹 (Zhu Lanqin)
何磊 (He Lei)
胡健 (Hu Jian)
Original Assignee
华人运通(上海)自动驾驶科技有限公司 (Human Horizons (Shanghai) Autonomous Driving Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华人运通(上海)自动驾驶科技有限公司 (Human Horizons (Shanghai) Autonomous Driving Technology Co., Ltd.)
Publication of WO2022012094A1 publication Critical patent/WO2022012094A1/en

Links

Images

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
    • G05D1/028 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal

Definitions

  • the present application relates to the technical field of automatic driving, and in particular, to a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium.
  • Self-driving cars are also known as driverless cars, computer-driven cars, or wheeled mobile robots.
  • The driving scene reconstruction system can better present the vehicle's surroundings to the driver, so that the driver can clearly understand the situation around the vehicle in a relaxed state.
  • When the sensors and other components used in autonomous driving work normally, scene reconstruction can be realized and driving assistance provided for the driver.
  • However, the driving scene information provided by related-art driving scene reconstruction systems is limited, which degrades the automatic driving experience and poses latent safety hazards.
  • The embodiments of the present application provide a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium to solve the problems existing in the related art. The technical solutions are as follows:
  • An embodiment of the present application provides a driving scene reconstruction method, including: acquiring road traffic situation information from sensor information, Internet of Vehicles information, and map information; integrating the road traffic situation information from the sensor information, the Internet of Vehicles information, and the map information; and, based on the integrated road traffic situation information, reconstructing the driving scene of the own vehicle.
  • an embodiment of the present application provides a driving scene reconstruction device, including:
  • an acquisition module, configured to acquire road traffic situation information from the sensor information, the Internet of Vehicles information, and the map information;
  • an integration module, configured to integrate the road traffic situation information from the sensor information, the Internet of Vehicles information, and the map information; and
  • a reconstruction module, configured to reconstruct the driving scene of the own vehicle based on the integrated road traffic situation information.
  • an embodiment of the present application provides a driving scene reconstruction system, including the driving scene reconstruction device described above, and the system further includes:
  • a sensor connected to the driving scene reconstruction device, configured to collect and output sensor information to the driving scene reconstruction device;
  • an Internet of Vehicles device connected to the driving scene reconstruction device, configured to output Internet of Vehicles information to the driving scene reconstruction device;
  • a map device connected to the driving scene reconstruction device, configured to output map information to the driving scene reconstruction device; and
  • a display device connected to the driving scene reconstruction device, configured to receive data of the reconstructed driving scene from the driving scene reconstruction device and to display the reconstructed driving scene.
  • an embodiment of the present application provides a vehicle, which includes the above-mentioned driving scene reconstruction device, or includes the above-mentioned driving scene reconstruction system.
  • An embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, so that the at least one processor can execute the above driving scene reconstruction method.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and when the computer instructions are executed on a computer, the method in any one of the implementation manners of the above aspects is executed.
  • The driving scene reconstruction method integrates sensor information, Internet of Vehicles information, and map information, enriching the information sources about the surrounding environment of the autonomous vehicle, and combines road traffic situation information so that the reconstructed driving scene is closer to the real driving environment. This can provide more effective assistance for the driver and improve driving safety.
  • FIG. 1 is a schematic block diagram of a driving scene reconstruction system according to an exemplary embodiment.
  • FIG. 2 shows the processing procedure of the controller in FIG. 1.
  • FIG. 3 shows the types of information processed by the controller.
  • FIG. 4 is a schematic flowchart of a driving scene reconstruction method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of road traffic situation information in a driving scene reconstruction method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of target information in a driving scene reconstruction method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 9 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 10 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 11 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 12 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 13 is a structural block diagram of a driving scene reconstruction device according to an embodiment of the present application.
  • FIG. 14 is a structural block diagram of an acquisition module of a driving scene reconstruction device according to an embodiment of the present application.
  • FIG. 15 is a structural block diagram of an integration module of a driving scene reconstruction device according to an embodiment of the present application.
  • FIG. 16 is a structural block diagram of a driving scene reconstruction system according to an embodiment of the present application.
  • FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present application.
  • ADAS sensors can collect environmental data inside and outside the car through various on-board sensors and perform processing such as identification, detection, and tracking of static and dynamic objects, so that the driver can detect possible dangers in the shortest time and take corresponding measures, improving driving safety.
  • Sensors include, but are not limited to, one or more image acquisition devices (eg, cameras), inertial measurement units, radars, and the like.
  • the image acquisition device can be used to collect target information, road marking information and lane line information of the surrounding environment of the autonomous vehicle.
  • the inertial measurement unit can sense position and orientation changes of the autonomous vehicle based on inertial acceleration.
  • Radar can use radio signals to sense objects and road signs within the local environment of an autonomous vehicle.
  • the radar unit may additionally sense the speed and/or heading of the target.
  • the image capture device may include one or more devices for capturing images of the environment surrounding the autonomous vehicle.
  • the image capture device may be a still camera and/or a video camera.
  • The cameras may include infrared cameras. The camera may be moved mechanically, e.g., by mounting it on a rotating and/or tilting platform.
  • Sensors may also include, for example, sonar sensors, infrared sensors, steering sensors, accelerator sensors, brake sensors, and audio sensors (e.g., microphones). Audio sensors can be configured to pick up sound from the environment surrounding the autonomous vehicle.
  • the steering sensor may be configured to sense the steering angle of the steering wheel, the wheels of the vehicle, or a combination thereof.
  • the accelerator sensor and the brake sensor sense the accelerator position and the brake position of the vehicle, respectively. In some cases, the accelerator sensor and brake sensor may be integrated into an integrated accelerator/brake sensor.
  • V2X (vehicle-to-everything) communication is also referred to as the Internet of Vehicles.
  • V2X communication is a key technology for realizing environmental perception, information interaction, and collaborative control in the Internet of Vehicles. It adopts various communication technologies to realize vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-person (V2P) interconnection, and can effectively use information extracted and shared on the information network platform to manage and control vehicles and provide comprehensive services. In this way, a series of road traffic information, such as real-time road conditions, road sign information, lane line information, and target information, can be obtained, thereby improving driving safety, reducing congestion, improving traffic efficiency, and providing in-vehicle entertainment information.
  • FIG. 1 is a schematic block diagram of a driving scene reconstruction system according to an exemplary embodiment.
  • The controller can receive raw data from the sensors. The controller can also receive, from the V2X device, road information (also called lane line information), traffic sign information (also referred to as road sign information), and information about surrounding vehicles and/or pedestrians and other traffic participants (also referred to as target information). The controller can further receive, from the map device, road information (lane line information), traffic sign information (road sign information), own-vehicle position information, navigation information (also referred to as navigation route planning information), and the like.
  • After the controller receives the sensor information, the Internet of Vehicles information, and the map information, it obtains road traffic condition information from the received information, integrates the road traffic condition information, and transmits the integrated road traffic condition information, such as lane line information, traffic sign information (also known as road sign information), target information (including target type, direction, location, alarm, etc.), and the trajectory of the own vehicle, to the instrument controller.
  • the instrument controller processes the received information into driving scene data, and transmits the driving scene data to the instrument display device to display the reconstructed driving scene.
  • the road traffic condition information may include road sign information, lane line information, road traffic abnormal condition information, congestion condition information, road traffic scene information, navigation path planning information, target information, and the like.
  • the targets include traffic participants such as vehicles and pedestrians around the ego vehicle.
  • FIG. 2 shows the processing procedure of the controller in FIG. 1
  • FIG. 3 shows the types of information processed by the controller.
  • the sensors may include, for example, radars and image acquisition devices, among others. Radar can use radio signals to sense target information and road marking information in the surrounding environment of autonomous vehicles, and generate point cloud data such as point cloud maps. Image capture devices such as cameras can be used to capture road signs, lane lines, and objects in the surrounding environment to generate video streams.
  • the controller may include a classification processing module and an information fusion module (also referred to as an integration module).
  • the classification processing module may include a target information identification module, a traffic information identification module (also referred to as a road traffic condition information identification module), and a function alarm module.
  • the target information identification module identifies the target information from the information received by the controller, and transmits the target information to the information fusion module;
  • the traffic information identification module identifies the road traffic condition information from the information received by the controller, and transmits the road traffic condition information to the information fusion module;
  • the function alarm module identifies the function alarm information from the information received by the controller, and sends the function alarm information to the information fusion module.
  • the information fusion module integrates the received target information, road traffic condition information, and function alarm information respectively, and outputs the integrated target information, road traffic condition information, and function alarm information.
  • The target information identification module can identify information such as the type, coordinates, and orientation of the target.
  • the traffic information recognition module can identify the location and current state of traffic lights, the value and location of speed limit signs, the type and coordinates of lane lines, road traffic scenarios, road traffic anomalies, and optimized road path planning.
  • The function alarm module may cover automatic driving level, forward collision warning, emergency braking warning, intersection collision warning, blind spot warning, lane change warning, speed limit warning, lane keeping warning, emergency lane keeping, rear collision warning, rear cross traffic collision warning, door opening warning, left turn assist, red light running warning, reverse overtaking warning, vehicle loss-of-control warning, abnormal vehicle warning, vulnerable traffic participant warning, and the like.
  • The information fusion module combines the function alarm information with the target information, the automatic driving level, the lane keeping status, the lane line information, the navigation path planning, and the vehicle motion, and outputs the integrated target information, road traffic information, and function alarm information.
  • the target information may include:
  • Target status: none, present, alarm level 1, alarm level 2, alarm level 3, etc.
  • Target type: for example, cars, off-road vehicles, SUVs, passenger cars, buses, small trucks, trucks, motorcycles, two-wheelers, adults, children, etc.
  • Target orientation: for example, forward, backward, left, right, etc.
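The fields listed above can be sketched as a simple record. This is an illustrative data structure only; the field names and enum spellings are assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class TargetInfo:
    """One detected traffic participant, mirroring the patent's examples."""
    status: str       # "none", "present", "alarm_1", "alarm_2", "alarm_3"
    type: str         # "car", "suv", "bus", "truck", "motorcycle", "adult", ...
    orientation: str  # "forward", "backward", "left", "right"

# e.g. a car ahead that has triggered a level-1 alarm:
t = TargetInfo(status="alarm_1", type="car", orientation="forward")
```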
  • the lane line information may include:
  • Type: solid line, dashed line, double yellow line, curb, road edge, etc.
  • A0, A1, A2, and A3 are polynomial coefficients of the lane line on the left side of the own lane, where A0 represents the lateral distance from the center of the vehicle to the lane line; a positive value indicates that the lane line is on the left.
  • A1 represents the heading angle of the vehicle relative to the lane line; a positive value means the lane line is rotated counterclockwise.
  • A2 represents the lane line curvature; a positive value means the lane line bends to the left.
  • A3 represents the change rate of the lane line curvature; a positive value means the lane line bends to the left.
  • After the controller obtains the lane line information from the sensor information or the V2X information, it can obtain the values of A0, A1, A2, and A3.
  • When the lane line is drawn on the display device, it is drawn according to the lane line equation, so that a lane line consistent with the actual lane line can be displayed in the reconstructed driving scene.
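Drawing from the lane line equation can be sketched as sampling points along a cubic. The exact equation is not given in the patent; the form y(x) = A0 + A1·x + (A2/2)·x² + (A3/6)·x³ used below is one common ADAS parameterization consistent with A0 = lateral offset, A1 = heading, A2 = curvature, A3 = curvature rate, and is an assumption here.

```python
def lane_line_points(a0, a1, a2, a3, x_max=50.0, n=100):
    """Sample (x, y) points of a lane line from its polynomial coefficients,
    using the assumed cubic y(x) = a0 + a1*x + (a2/2)*x^2 + (a3/6)*x^3.
    x is the longitudinal distance ahead of the vehicle; y is the lateral
    offset (positive = left, per the A0 sign convention above)."""
    pts = []
    for i in range(n):
        x = x_max * i / (n - 1)
        y = a0 + a1 * x + 0.5 * a2 * x ** 2 + (a3 / 6.0) * x ** 3
        pts.append((x, y))
    return pts

# A straight lane line 1.75 m to the left of the vehicle center:
pts = lane_line_points(1.75, 0.0, 0.0, 0.0)
```

The resulting point list can then be rendered as a polyline on the display device.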
  • the traffic information may include:
  • Current lane: left 1, left 2, right 1, right 2, etc.
  • Path planning direction: none, go straight, turn left, turn right, front left, front right, U-turn, etc.
  • Traffic light status: none, red light, yellow light, green light, countdown, etc.
  • Road traffic scenarios: ramps, intersections, road merging, road bifurcations, T-junctions, etc.
  • this application proposes a driving scene reconstruction method based on actual traffic.
  • FIG. 4 is a schematic flowchart of a driving scene reconstruction method according to an embodiment of the present application.
  • the driving scene reconstruction method may include:
  • The driving scene reconstruction method of the present application integrates sensor information from ADAS sensors, Internet of Vehicles information from V2X, and map information, enriching the information sources about the surrounding environment of autonomous vehicles. Moreover, the method combines road traffic situation information, so that the reconstructed driving scene is closer to the real driving environment and better meets the actual needs of users in both the automatic driving state and the ordinary driving state, which can provide more effective assistance for the driver and improve driving safety.
  • The road traffic situation information in the sensor information, the Internet of Vehicles information, and the map information is acquired in real time and integrated; based on the integrated road traffic situation information, the driving scene of the own vehicle is reconstructed in real time. Therefore, the reconstructed driving scene can display the surrounding environment of the vehicle in real time.
  • FIG. 5 is a schematic diagram of road traffic situation information in a driving scene reconstruction method according to an embodiment of the present application.
  • acquiring road traffic condition information in the sensor information may include: receiving point cloud data from radar, and analyzing the point cloud data to obtain road sign information.
  • acquiring road traffic condition information in sensor information may include: receiving a video stream from an image acquisition device, and parsing the video stream to obtain road sign information and lane line information.
  • The road sign information may include traffic light information, speed limit sign information, and the like.
  • the traffic light information may include the status of the traffic light, the location of the traffic light, and the like.
  • the speed limit sign information may include the value of the speed limit sign, the position of the speed limit sign, and the like. For example, the value of the speed limit sign on a certain section of an urban road is 50, indicating that the maximum speed limit of this section is 50km/h.
  • road sign information is not limited to traffic light information and speed limit sign information, but may also include identifiable signs such as U-turn signs, going straight signs, turn signs and the like.
  • the lane line information may include at least one of information about the lane where the vehicle is located, information about the lane where surrounding vehicles are located, color information of the lane line, type information of the lane line, and position information of the lane line.
  • the colors of the lane lines include white, yellow, and so on.
  • the types of lane lines include solid lines, dashed lines, double yellow lines, curbs, etc.
  • the position information of the lane line may be the coordinate information of the lane line, which corresponds to the position of the lane line.
  • Lane lines delimit lanes; therefore, the lane line information can reflect the number of lanes, the lane in which the own vehicle is located, and the lanes in which surrounding vehicles are located.
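Since lane lines bound lanes, the observation above can be sketched from the detected lines' lateral offsets (the A0-style terms described earlier, positive = line to the left of the vehicle center). The helper below is an illustration under that assumed sign convention, not a procedure from the patent.

```python
def ego_lane_index(offsets):
    """offsets: lateral distance of each detected lane line from the
    vehicle center (positive = line is to the left).  Returns
    (lane_index_counted_from_left, total_number_of_lanes)."""
    lines = sorted(offsets, reverse=True)            # leftmost line first
    total_lanes = max(len(lines) - 1, 0)             # n lines bound n-1 lanes
    lines_left_of_ego = sum(1 for d in lines if d > 0)
    return lines_left_of_ego, total_lanes

# Three lines at +5.25, +1.75, -1.75 m: two lanes, ego in lane 2 from left
lane, total = ego_lane_index([5.25, 1.75, -1.75])
```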
  • Acquiring road traffic situation information from the Internet of Vehicles information may include: acquiring at least one of road sign information, lane line information, road traffic abnormality information, and congestion status information from the on-board unit and/or the roadside unit.
  • the device for V2X communication may include at least one of an on-board unit (On Board Unit, OBU) and a road side unit (Road Side Unit, RSU).
  • the source of the Internet of Vehicles information is at least one of an on-board unit (OBU) and a roadside unit (RSU).
  • the road traffic abnormal situation information may include at least one of road construction information, abnormal vehicle information, and emergency vehicle information. Therefore, through the information of the Internet of Vehicles, the optimal driving route of the self-vehicle can be planned according to the abnormal situation information of road traffic and the information of the congestion situation, so as to reach the destination efficiently.
  • acquiring road traffic situation information in the map information may include: acquiring at least one of road identification information, lane line information, road traffic scene information and navigation path planning information from the map information .
  • The map information can come from the BeiDou navigation system or the GPS navigation system.
  • the map information may include information such as road marking information, lane line information, road traffic scene information, and navigation path planning information.
  • the road traffic scene information may include road information such as intersection information, road merging information, road bifurcation information, and ramp information.
  • the navigation route planning information includes the travel route information from the origin to the destination.
  • the map information may provide a plurality of travel route information from the origin to the destination, and the navigation route planning information may include a plurality of travel route information from the origin to the destination.
  • The road traffic situation information obtained from the map information includes: the lanes where the own vehicle and surrounding vehicles are located, the type and coordinates of the lane lines, the location of the traffic lights, the distance from the vehicle to the ramp, the distance to the intersection, navigation path planning information, and the like.
  • There may be duplicate information among the sensor information, the Internet of Vehicles information, and the map information.
  • sensor information, vehicle networking information, and map information all include road sign information.
  • the road sign information can be integrated, thereby selecting the optimal road sign information for reconstructing the driving scene.
  • Exemplarily, as shown in FIG. 5, in S102, integrating the road traffic situation information in the sensor information, the Internet of Vehicles information, and the map information may include one of the following:
  • the congestion status information in the Internet of Vehicles information and the navigation path planning information in the map information are combined to obtain optimized navigation path planning information.
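That combination step can be sketched as follows, assuming the map supplies several candidate routes with base travel times and the Internet of Vehicles supplies per-route congestion delays. The route names and figures are made up for illustration; the patent does not specify the optimization criterion.

```python
def optimize_route(candidates, congestion_delay):
    """candidates: {route_name: base_travel_minutes} from map path planning.
    congestion_delay: {route_name: extra_minutes} from IoV congestion info.
    Picks the route with the smallest congestion-adjusted travel time."""
    return min(candidates,
               key=lambda r: candidates[r] + congestion_delay.get(r, 0))

best = optimize_route(
    {"route_a": 20, "route_b": 25},   # map: route_a nominally faster
    {"route_a": 15},                  # IoV: route_a currently congested
)
# best == "route_b"
```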
  • Road sign information includes, for example, the locations of speed limit signs, traffic lights, and the like.
  • Road sign information can be parsed from the point cloud data of the radar; road sign information can also be parsed from the video stream of the image acquisition device, such as the position of the speed limit sign, the speed limit value, the position of the traffic light, and the status of the traffic light (red, green, or yellow).
  • the Internet of Vehicles information includes road identification information, such as the position of the speed limit sign, the speed limit value, the position of the traffic light, the status of the traffic light, and so on.
  • the map information includes road identification information, such as the location of traffic lights, speed limit values, and the like.
  • the accuracy of the information collected by sensor information, vehicle networking information, and map information may be different.
  • for example, the position of the speed limit sign and the position of the traffic light in the point cloud data of the radar are more accurate than those in the video stream of the image acquisition device; however, the point cloud data of the radar does not contain the speed limit value, the status of the traffic light, and the like.
  • integrating sensor information, vehicle networking information, and road marking information in map information may include screening, selecting, and fusing road marking information in sensor information, vehicle networking information, and map information.
  • the output integrated road sign information may include: the position and current state of the traffic lights; the value and position of the speed limit sign, etc.
  • the information with the best accuracy can be selected as the integrated information.
  • for example, the location information of traffic lights can be obtained from the sensor information, the Internet of Vehicles information, and the map information; then, the traffic light location information with the best accuracy is selected as the integrated traffic light location information.
  • for example, if the location information of the traffic lights in the point cloud data of the radar has the best accuracy, the location information parsed from the point cloud data of the radar is selected as the integrated traffic light location information.
  • when a piece of information can be obtained from only one source, this information can be directly used as the integrated information. For example, the status of the traffic lights can only be parsed from the video stream of the image acquisition device, so the parsed status is directly used as the integrated status of the traffic lights.
  • the accuracy of each information in sensor information, vehicle networking information and map information can be known.
  • the source of each piece of information can be directly set, or the information can be screened, selected, and fused through a model.
  • after integration, the position of the speed limit sign, the speed limit value, the position of the traffic light, and the status of the traffic light are obtained and serve as the integrated road sign information.
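The accuracy-based selection described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the field names, accuracy scores, and source values are assumptions.

```python
# Hypothetical sketch: for each road sign field, pick the reading from the
# source with the best accuracy. Field names and scores are assumptions.

def integrate_road_signs(candidates):
    """candidates: {field: [(value, accuracy), ...]} from multiple sources."""
    integrated = {}
    for field, readings in candidates.items():
        if not readings:
            continue  # field unavailable from every source
        value, _ = max(readings, key=lambda r: r[1])  # best accuracy wins
        integrated[field] = value
    return integrated

candidates = {
    # radar point cloud gives the most accurate traffic-light position
    "traffic_light_pos": [((105.2, 4.1), 0.95), ((105.6, 4.3), 0.80)],
    # traffic-light status is only available from the camera video stream,
    # so it is used directly as the integrated value
    "traffic_light_status": [("red", 0.90)],
    "speed_limit_value": [(60, 0.85), (60, 0.70)],
}
result = integrate_road_signs(candidates)
```

The same per-field selection applies whether a field has one candidate (single-source fallback) or several (best-accuracy selection).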
  • the lane line information can be parsed from the video stream of the image acquisition device, for example, the color of the lane line (white or yellow, etc.), the type of the lane line (dotted line or solid line or road edge, etc.), the lane line location, etc.
  • the information of the Internet of Vehicles includes the information of the lane where the vehicle is located, and the information of the lane where the surrounding vehicles are located.
  • the map information includes information about the lane where the vehicle is located, information about the lane where the surrounding vehicles are located, the type of lane line, the location of the lane line, and other information.
  • integrating the lane line information in the sensor information, vehicle networking information, and map information may include screening, selecting, and fusing the lane line information in these sources, and outputting the type, coordinates, etc. of the integrated lane lines.
  • the information with the best accuracy can be selected as the integrated information.
  • for example, the type of the lane line and the position of the lane line can be obtained from both the sensor information and the map information; then, the lane line type and lane line position with the best accuracy are selected as the integrated lane line type and position.
  • for example, if the type of lane line parsed from the video stream of the image acquisition device has the best accuracy, it is selected as the integrated lane line type; if the position of the lane line obtained from the map information has the best accuracy, it is selected as the integrated lane line position.
  • when a piece of information can be obtained from only one of the sensor information, vehicle networking information, and map information, this information can be directly used as the integrated information. For example, if the color of the lane line can only be parsed from the video stream of the image acquisition device, the parsed color is directly used as the integrated lane line color.
  • the accuracy of each information in sensor information, vehicle networking information and map information can be known.
  • the source of each piece of information can be directly set, or the information can be screened, selected, and fused through a model.
  • after integrating the information of the lane where the self-vehicle is located, the information of the lanes where surrounding vehicles are located, and the type, color, and position of the lane lines in the sensor information, the Internet of Vehicles information, and the map information, the integrated lane information and lane line type, color, and position are obtained and serve as the integrated lane line information.
  • the road traffic scene information in the map information is integrated, and the road traffic scene information in the map information may be directly used as the integrated road traffic scene information.
  • the road traffic scene information may include road information such as intersection information, road merging information, road bifurcation information, and ramp information. Road information such as intersection information, road merging information, road bifurcation information, and ramp information obtained from the map information can be used as the integrated road traffic scene information.
  • screening and integrating the road traffic abnormality information in the Internet of Vehicles information may include: screening, selecting, and merging the road traffic abnormality information from a plurality of roadside units to obtain the road traffic abnormality information that affects the driving of the self-vehicle. Those skilled in the art can understand that the self-vehicle can communicate with multiple roadside units, so that road traffic abnormality information from multiple roadside units can be obtained.
  • the road traffic abnormal situation information that affects the driving of the vehicle can be obtained, which is beneficial to predict the driving route of the vehicle.
  • the road traffic abnormal situation information may include road construction information, abnormal vehicle information, emergency vehicle information, and the like.
  • the road construction information, abnormal vehicle information, emergency vehicle information, etc. obtained from the Internet of Vehicles information can be filtered and integrated as the integrated road traffic abnormal situation information.
  • abnormal vehicles may include malfunctioning vehicles, out-of-control vehicles, and the like.
  • Emergency vehicles may include ambulances, fire trucks, police cars, and the like.
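The screening of abnormality reports from multiple roadside units can be sketched as below. This is an illustrative assumption of the data shape (road IDs, report fields), not the patent's actual message format.

```python
# Hypothetical sketch: keep only abnormality reports (road construction,
# abnormal vehicles, emergency vehicles) that lie on the self-vehicle's
# planned route, merging duplicate reports from different roadside units.

def screen_abnormalities(reports, route_road_ids):
    """reports: list of dicts gathered from multiple roadside units."""
    seen = set()
    relevant = []
    for report in reports:
        key = (report["road_id"], report["kind"])
        if report["road_id"] in route_road_ids and key not in seen:
            seen.add(key)  # merge duplicates of the same anomaly
            relevant.append(report)
    return relevant

reports = [
    {"road_id": "R1", "kind": "road_construction"},
    {"road_id": "R1", "kind": "road_construction"},  # duplicate from another RSU
    {"road_id": "R9", "kind": "abnormal_vehicle"},   # not on the planned route
    {"road_id": "R2", "kind": "emergency_vehicle"},
]
relevant = screen_abnormalities(reports, route_road_ids={"R1", "R2"})
```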
  • combining the congestion status information in the Internet of Vehicles information with the navigation path planning information in the map information to obtain optimized navigation path planning information may include: obtaining a plurality of pieces of navigation path planning information from the origin to the destination from the map information and, combined with the congestion status information in the IoV information, discarding the navigation path planning information affected by congestion, thereby obtaining optimized navigation path planning information that can reach the destination more efficiently.
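The route optimization step above can be sketched as follows. The route structure, segment IDs, and ETA values are illustrative assumptions.

```python
# Hypothetical sketch: discard candidate routes that pass through congested
# segments, then keep the fastest remaining route.

def optimize_route(routes, congested_segments):
    clear = [r for r in routes
             if not (set(r["segments"]) & congested_segments)]
    candidates = clear or routes  # fall back if every route is congested
    return min(candidates, key=lambda r: r["eta_min"])

routes = [
    {"name": "A", "segments": ["s1", "s2"], "eta_min": 20},
    {"name": "B", "segments": ["s3", "s4"], "eta_min": 25},
    {"name": "C", "segments": ["s5"], "eta_min": 30},
]
best = optimize_route(routes, congested_segments={"s2"})
```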
  • reconstructing the driving scene of the self-vehicle may include:
  • the driving scene of the self-vehicle is reconstructed.
  • the integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information and optimized navigation path planning information are all optimized information, so that the reconstructed driving scene of the vehicle is closer to the real driving environment and can provide effective assistance for the driver.
  • the driving scene reconstruction method may further include:
  • the driving scene of the self-vehicle is reconstructed.
  • the driving scene of the self-vehicle is reconstructed, so that the obtained driving scene can not only reflect the road traffic situation, but also reflect the target information around the self-vehicle.
  • the driving scene is more in line with the real driving environment, and the targets around the vehicle can also play an early warning role for the driver, so that the driver can predict the danger of collision and better improve driving safety.
  • the target information may include size information of the target, type information of the target (eg, vehicle and vehicle type, or pedestrian and pedestrian type), direction information of the target (eg, forward, backward, left and right), the location information of the target (for example, the abscissa and ordinate of the target relative to the vehicle), etc.
  • Vehicle types may include cars, off-road vehicles, passenger cars, buses, pickup trucks, trucks, motorcycles, two-wheelers, and the like.
  • Pedestrian types may include adults, children, and the like.
  • acquiring target information in sensor information may include at least one of the following:
  • FIG. 6 is a schematic diagram of target information in a driving scene reconstruction method according to an embodiment of the present application.
  • the target information obtained from the point cloud data of the radar may include information such as the size of the target, the position of the target, and the direction of the target.
  • the target information obtained from the video stream of the image capture device may include information such as the size of the target, the type of the target, the position of the target, and the direction of the target.
  • acquiring target information in the Internet of Vehicles information may include:
  • the target information obtained from the Internet of Vehicles information may include information such as the type of the target, the location of the target, and the direction of the target.
  • integrating the target information in the sensor information and the Internet of Vehicles information may include: screening, matching, and selecting the target information in the sensor information and the Internet of Vehicles information, and outputting the integrated target type, coordinates, orientation (also called direction), etc.
  • the information with the best accuracy can be selected as the integrated information.
  • the location information of the target can be obtained from the point cloud data of the radar, the video stream of the image acquisition device, and the information of the Internet of Vehicles.
  • the position information of the target obtained from the point cloud data of the radar is more accurate, then the position information of the target obtained from the point cloud data of the radar is used as the position information of the integrated target.
  • the direction information of the target obtained from the Internet of Vehicles information is more accurate, then the direction information of the target obtained from the Internet of Vehicles information is used as the direction information of the integrated target.
  • the accuracy of the target information in the sensor information and the Internet of Vehicles information is known. Therefore, in the process of information integration, the source of each piece of information can be directly set, or the information can be screened, matched, selected, and fused through a model. After integrating the size of the target, the type of the target, the position of the target, and the direction of the target in the sensor information and the Internet of Vehicles information, the integrated size, type, position, and direction of the target are obtained and serve as the integrated target information.
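The per-attribute fusion of one target's information can be sketched as below. The source names and priority orders are illustrative assumptions following the examples above (radar position most accurate, IoV direction most accurate).

```python
# Hypothetical sketch: fuse one target's attributes, taking each attribute
# from the source known to be most accurate for it.

SOURCE_PRIORITY = {
    "position": ["radar", "camera", "v2x"],   # radar position is most accurate
    "direction": ["v2x", "camera", "radar"],  # IoV direction is most accurate
    "type": ["camera", "v2x"],                # radar cannot classify type
    "size": ["radar", "camera"],
}

def fuse_target(per_source):
    """per_source: {source_name: {attribute: value}}"""
    fused = {}
    for attr, priority in SOURCE_PRIORITY.items():
        for source in priority:
            if attr in per_source.get(source, {}):
                fused[attr] = per_source[source][attr]
                break  # first available source in priority order wins
    return fused

per_source = {
    "radar": {"position": (12.0, -1.5), "size": (4.6, 1.8)},
    "camera": {"position": (12.4, -1.2), "type": "car", "direction": 88.0},
    "v2x": {"direction": 90.0, "type": "car"},
}
target = fuse_target(per_source)
```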
  • the driving scene reconstruction method may further include: receiving data of the reconstructed driving scene, and displaying the reconstructed driving scene. In this way, the reconstructed driving scene can be presented.
  • FIG. 7 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to an embodiment of the present application.
  • the integrated road traffic situation information includes the lane in which the self-vehicle is located, the left lane, the right lane, the color of the lane line, the type of the lane line, the target located in the lane, and the like.
  • the reconstructed driving scene is shown in Figure 7. It can be seen from Figure 7 that the driving scene includes the vehicle 11, the lane where the vehicle 11 is located, the left lane of the vehicle 11, and the right lane of the vehicle 11. , related objects in the left lane, related objects in the right lane, and lane lines (color, type).
  • in the driving scene, not only the relevant targets but also the directions of the targets are displayed; for example, the direction of the vehicle 12 and the direction of the vehicle 13 can be clearly obtained from the driving scene.
  • the driving scene may include only the targets most relevant to the self-vehicle.
  • for example, the driving scene may display the type, direction and location of 3 targets in front of the vehicle, 1 target behind the vehicle, and 1 target on each of the left and right sides of the vehicle.
  • for example, the driving scene may include the type, direction and location of 3 targets in front of the vehicle, 1 target behind the vehicle, 1 target on the left side and 1 target on the right side of the vehicle, 1 target behind the vehicle on the left, 1 target in front on the left, 1 target behind the vehicle on the right, and 1 target in front on the right.
  • in this way, the target information most relevant to the self-vehicle can be displayed in the driving scene, avoiding the display of targets that are less relevant to the self-vehicle, making the display more concise and clear, allowing the driver to pay more attention to the most relevant targets, improving the driver's experience and avoiding display redundancy.
  • the driving scene may include vehicles and pedestrians located within a certain range of the vehicle.
  • the target may be a vehicle or a pedestrian whose lateral distance from the vehicle is within the range of X1 and the longitudinal distance is within the range of Y1.
  • the values of the horizontal distance X1 and the vertical distance Y1 can be determined according to actual needs.
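The range-based target filtering described above can be sketched as follows. The coordinate convention (lateral/longitudinal offsets relative to the self-vehicle) and the X1/Y1 values are illustrative assumptions.

```python
# Hypothetical sketch: keep only targets whose lateral distance is within X1
# and longitudinal distance within Y1 of the self-vehicle.

def targets_in_range(targets, x1, y1):
    return [t for t in targets
            if abs(t["lateral"]) <= x1 and abs(t["longitudinal"]) <= y1]

targets = [
    {"id": 1, "lateral": 1.8, "longitudinal": 30.0},    # nearby, ahead
    {"id": 2, "lateral": 12.0, "longitudinal": 5.0},    # too far sideways
    {"id": 3, "lateral": -2.0, "longitudinal": -80.0},  # too far behind
]
visible = targets_in_range(targets, x1=8.0, y1=60.0)
```

X1 and Y1 can be tuned per deployment, since the text notes they are determined according to actual needs.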
  • FIG. 8 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • the driving scene shown in FIG. 8 includes an intersection, a lane perpendicular to the lane where the vehicle 11 is located, vehicles 12 and 13 located in the perpendicular lane, speed limit signs 14, traffic lights 15 and the relevant targets around the vehicle.
  • in the driving scene shown in FIG. 8, not only the relevant targets but also the directions of the targets are displayed; for example, the head direction of the vehicle 12 and the head direction of the vehicle 13 can be clearly obtained from the driving scene.
  • the driving scene can include the T-junction, a lane perpendicular to the lane where the vehicle is located, vehicles located in the vertical lane, speed limit signs, traffic lights, and related objects around the vehicle.
  • FIG. 9 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • lane merging is included.
  • the driving scene shows the lane merging, the merged lane, and the related vehicles in the merged lane.
  • information about road forks, forked lanes and other target information located in the same lane as the vehicle may be displayed in the driving scene.
  • the driving scene may also include opposite lane information.
  • targets in the opposite lane usually do not affect driving; in a driving scene, the opposite lane may be displayed in a predetermined color such as gray, and target information in the opposite lane is not displayed.
  • those skilled in the art can understand that targets in the opposite lane usually do not affect driving; therefore, displaying the opposite lane in a predetermined color without displaying its target information is more in line with driving habits and avoids drawing the driver's attention to irrelevant road traffic situation information.
  • the targets shown are specific vehicle styles.
  • the contour information of the target can be obtained from sensor information and/or Internet of Vehicles information, and the contour information can be processed so that the specific style of the target can be displayed in the driving scene.
  • a dedicated target model library can be established for commonly used targets (vehicles or pedestrians); the type of the target can be obtained from sensor information and/or Internet of Vehicles information, and the corresponding target model can then be retrieved from the target model library and presented in the driving scene.
  • for example, the target model library includes a model of a Mercedes-Benz; when the logo of a Mercedes-Benz is parsed from the image acquisition device, the Mercedes-Benz model can be retrieved from the target model library and displayed in the driving scene.
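The model library lookup can be sketched as a simple mapping from parsed target type to a display model. The model names and the generic fallback are illustrative assumptions.

```python
# Hypothetical sketch: a target model library keyed by parsed target type,
# with a generic fallback model for unrecognized types.

TARGET_MODEL_LIBRARY = {
    "car": "generic_car_model",
    "mercedes_benz": "mercedes_benz_model",
    "truck": "generic_truck_model",
    "adult": "adult_pedestrian_model",
    "child": "child_pedestrian_model",
}

def model_for(target_type):
    # fall back to a generic vehicle model for unknown types
    return TARGET_MODEL_LIBRARY.get(target_type, "generic_car_model")

model = model_for("mercedes_benz")
```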
  • the sensor information, the Internet of Vehicles information and the road traffic situation information and target information in the map information are acquired in real time, so that the reconstructed driving scene can reflect the surrounding environment of the vehicle in real time and is closer to the real driving scene of the vehicle.
  • road sign information, lane line information, road traffic scene information, road traffic abnormality information, congestion status information and navigation path planning information can be combined to obtain optimized navigation path planning information and fit the optimal route for the vehicle to travel.
  • the next driving path and movement direction of the vehicle can be displayed in the driving scene, and arrows or other indicators can be used in the driving scene to indicate the next movement direction of the vehicle (for example, left turn, right turn, U-turn, lane change, left front or right front, etc.), making the reconstructed driving scene more realistic and reliable and providing effective assistance for the driver.
  • FIG. 10 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 10 , the next driving path and moving direction (turn left) of the self-vehicle 11 are displayed in the driving scene, and are indicated by means of arrows.
  • FIG. 11 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 11 , the next driving path and moving direction (lane change) of the self-vehicle 11 are displayed in the driving scene, and are indicated by means of arrows.
  • FIG. 12 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 12 , the next driving path and moving direction (turning right into the ramp) of the self-vehicle 11 are displayed in the driving scene, and are indicated by means of arrows.
  • automatic driving can be divided into six levels, namely: L0 (no automation), in which the human driver drives the car with full authority and can be warned during driving; L1 (driver assistance), in which either steering or acceleration/deceleration is supported based on the driving environment, and the rest is operated by the human; L2 (partial automation), in which both steering and acceleration/deceleration are supported based on the driving environment, and the rest is operated by the human; L3 (conditional automation), in which all driving operations are completed by the automated driving system and the human provides appropriate responses according to system requests; L4 (high automation), in which all driving operations are completed by the automated driving system, the human need not respond to all system requests, and road and environmental conditions are limited; L5 (full automation), in which all driving operations are performed by the automated driving system, the human takes over only where necessary, and road and environmental conditions are not limited. Different levels of autonomous driving require different levels of driver involvement.
  • the driving scene reconstruction method may further include:
  • the level prompt information may include at least one of color information, sound information, blinking information, and the like.
  • the level prompt information may be color information; the automatic driving level corresponds to the color information, and one automatic driving level corresponds to one color.
  • the background color of the driving scene may be displayed as a color corresponding to the automatic driving level. For example, when the automatic driving level is L1, the background color of the driving scene is displayed as light blue; when the automatic driving level is L2, the background color is displayed as light green; when the automatic driving level is L3, the background color is displayed as light purple.
  • the color of the ego vehicle may be displayed as a color corresponding to the level of autonomous driving.
  • the lane line color may be displayed as a color corresponding to the level of autonomous driving.
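The level-to-color prompt can be sketched as a lookup table, following the example colors above; the default color for unlisted levels is an illustrative assumption.

```python
# Hypothetical sketch: map the automatic driving level to the background
# color named in the examples above.

LEVEL_BACKGROUND = {
    "L1": "light_blue",
    "L2": "light_green",
    "L3": "light_purple",
}

def background_for(level, default="neutral"):
    return LEVEL_BACKGROUND.get(level, default)

bg = background_for("L2")
```

The same table-driven approach works for the ego-vehicle color or lane line color variants mentioned above.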
  • the level prompt information may be sound information; the automatic driving level corresponds to the sound information, and one automatic driving level corresponds to one sound.
  • the display of the level prompt information is not limited to the above methods, as long as the reconstructed driving scene can display the level prompt information so that the driver can feel the driving level of the vehicle, it is within the scope of protection of the present application.
  • the level prompt information may be the content displayed in the scene, and different driving levels are reflected by setting different content displayed in the scene.
  • different driving levels can be embodied by setting the type or number of elements of the road traffic situation information displayed in the driving scene. For example, at the L0 driving level, all real-time road traffic information can be displayed in the reconstructed driving scene; at the L1 driving level, if only the lateral control function is available, only the lane line information can be displayed, and if only the longitudinal control function is available, only the targets located in front of or behind the vehicle can be displayed; at the L2 driving level, the lane line information, front targets and rear targets can be displayed at the same time; at the L3 driving level, the lane line information, front targets, rear targets and road changes can be displayed at the same time; at the L4 driving level, the self-vehicle is highlighted and other elements are simplified; at the L5 driving level, only the self-vehicle may be displayed, and entertainment, meeting, game and other scenes can be added to the driving scene.
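The level-dependent display described above can be sketched as a table of element sets. The element names and groupings are illustrative assumptions, not an exhaustive specification.

```python
# Hypothetical sketch: which road traffic elements are rendered at each
# driving level, following the description above.

LEVEL_ELEMENTS = {
    "L0": {"lane_lines", "front_targets", "rear_targets",
           "side_targets", "road_changes", "signs"},
    "L1_lateral": {"lane_lines"},
    "L1_longitudinal": {"front_targets", "rear_targets"},
    "L2": {"lane_lines", "front_targets", "rear_targets"},
    "L3": {"lane_lines", "front_targets", "rear_targets", "road_changes"},
    "L4": {"ego_highlight"},
    "L5": {"ego_only"},
}

def elements_for(level):
    return LEVEL_ELEMENTS.get(level, set())

shown = elements_for("L2")
```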
  • the automatic driving functions include: no lateral and longitudinal control; only lateral control, such as Lane Keep Assist (LKA) and Lane Center Assist (LCA); only longitudinal control; Highway Assist (HWA), i.e., both lateral and longitudinal control, with which the driver can take hands and feet off; and Traffic Jam Pilot (TJP), i.e., both lateral and longitudinal control, with which the driver can take hands, feet and eyes off.
  • LKA Lane Keep Assist
  • LCA Lane Center Assist
  • HWA Highway Assist
  • TJP Traffic Jam Pilot
  • the driving scene reconstruction method may further include:
  • Such a driving scene reconstruction method can clearly distinguish different driving functions in the driving scene, so that the driver can more truly feel the difference and change of different driving functions, so that the driver can grasp the driving participation in real time.
  • the function prompt information may include at least one of cruise speed information, lane line color information, sound information, flashing information, and the like.
  • the cruising speed information can be used as the prompt information of the longitudinal control function
  • the color information of the lane line can be used as the prompt information of the lateral control function.
  • the change of cruising speed is used to reflect the longitudinal control function
  • the color change of the lane line can be used to reflect the lateral control function. For example, when there is no lateral and longitudinal control, the color of the lane line is displayed as white; when the HWA function is active, the color of the lane line is displayed as red; when the TJP function is active, the color of the lane line is displayed as yellow.
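The function-to-color prompt can be sketched as below, using the colors given above; the fallback to white for unlisted functions is an illustrative assumption.

```python
# Hypothetical sketch: lane line color as a prompt for the active driving
# function, per the mapping described above.

FUNCTION_LANE_COLOR = {
    "none": "white",  # no lateral and longitudinal control
    "HWA": "red",     # Highway Assist: hands and feet off
    "TJP": "yellow",  # Traffic Jam Pilot: hands, feet and eyes off
}

def lane_color(active_function):
    return FUNCTION_LANE_COLOR.get(active_function, "white")

color = lane_color("HWA")
```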
  • the display of the function prompt information is not limited to the above methods, as long as the reconstructed driving scene can display the function prompt information so that the driver can feel the driving function of the vehicle, it is within the scope of protection of the present application.
  • the driving scene reconstruction method of the present application can acquire, in real time, the road sign information, lane line information, road traffic scene information, road traffic abnormality information, target information and optimized navigation path planning information in the sensor information, the Internet of Vehicles information and the map information, integrate this information, and reconstruct the driving scene of the self-vehicle based on the integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information, target information and optimized navigation path planning information. Therefore, the reconstructed driving scene can adaptively adjust its content according to the actual road environment where the vehicle is located and display the information most relevant to the vehicle's automatic driving task to the user. For example, when the vehicle approaches traffic lights, the status of the traffic lights, speed limit signs and their values, the lane where the vehicle is located, the lanes where surrounding vehicles are located, and the color and type of the lane lines can be displayed to the user; when the road traffic scene changes, the intersection, road merging, road bifurcation or ramp can be shown to the user according to the actual situation; when opposite lane information is displayed in the driving scene, the opposite lane can be displayed in gray without displaying target information of the opposite lane; when there is road construction, abnormal vehicles or emergency vehicles that affect driving in the environment, the reconstructed driving scene displays the road construction, abnormal vehicles or emergency vehicles; and the displayed driving path of the self-vehicle is the optimal driving path.
  • the driving scene reconstruction method of this application integrates sensor information from ADAS sensors, vehicle networking information from V2X, and map information, enriching the information sources about the surrounding environment of autonomous vehicles. Moreover, by combining the road traffic situation information with the target information around the vehicle, the reconstructed driving scene can reflect the current real driving environment of the vehicle in real time, which better meets the actual needs of users in both the automatic driving state and the ordinary driving state, provides more effective assistance for the driver, and improves driving safety.
  • the road traffic situation information is not limited to the listed content.
  • the user can flexibly set the content of the road traffic situation information obtained from the sensor information, IoV information and map information according to personal preferences and/or actual application scenarios; as long as the driving scene of the self-vehicle is reconstructed using the driving scene reconstruction method of the present application, it falls within the scope of protection of the present application.
  • FIG. 13 is a structural block diagram of a driving scene reconstruction apparatus according to an embodiment of the present application.
  • the embodiment of the present application also provides a driving scene reconstruction device.
  • the driving scene reconstruction device may include:
  • an acquisition module 21 configured to acquire the road traffic situation information in the sensor information, the Internet of Vehicles information and the map information;
  • the integration module 22, connected with the acquisition module 21, is used to integrate the road traffic situation information in the sensor information, the vehicle networking information and the map information;
  • the reconstruction module 23 is connected with the integration module 22, and is used for reconstructing the driving scene of the self-vehicle based on the integrated road traffic situation information.
  • FIG. 14 is a structural block diagram of an acquisition module of a driving scene reconstruction apparatus according to an embodiment of the present application.
  • the acquisition module 21 may include at least one of the following:
  • the point cloud data acquisition sub-module 211 is used to receive point cloud data from radar, and analyze the point cloud data to obtain road sign information;
  • the video stream acquisition sub-module 212 is configured to receive the video stream from the image acquisition device, and parse the video stream to obtain road sign information and lane line information.
  • the obtaining module 21 may include:
  • the vehicle networking information acquisition sub-module 213 is configured to acquire at least one of road sign information, lane line information, road traffic abnormality information and congestion condition information from the vehicle-mounted unit and/or the roadside unit.
  • the obtaining module 21 may include:
  • the map information acquisition sub-module 214 is configured to acquire at least one of road identification information, lane line information, road traffic scene information and navigation path planning information from the map information.
  • FIG. 15 is a structural block diagram of an integration module of a driving scene reconstruction apparatus according to an embodiment of the present application.
  • the integration module 22 includes at least one of the following:
  • a road sign information integration sub-module 221, configured to integrate the road sign information in the sensor information, the Internet of Vehicles information, and the map information;
  • a lane line information integration sub-module 222, configured to integrate the lane line information in the sensor information, the Internet of Vehicles information, and the map information;
  • a scene information integration sub-module 223, configured to integrate the road traffic scene information in the map information;
  • an abnormal situation information integration sub-module 224, configured to screen and integrate the road traffic abnormal situation information in the Internet of Vehicles information;
  • a navigation path optimization sub-module 225, configured to combine the congestion condition information in the Internet of Vehicles information with the navigation path planning information in the map information to obtain optimized navigation path planning information.
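One plausible way to realize a per-category integration sub-module is a simple precedence merge, for example preferring live sensor detections over V2X reports, and V2X reports over static map data. The precedence order, the sign-id keying, and the field names below are assumptions for illustration, not details from the application.

```python
def integrate_road_signs(sensor_signs, v2x_signs, map_signs):
    """Merge road sign information from three sources, keyed by sign id.

    Assumed precedence (illustrative only): sensor > V2X > map, so a
    live detection overrides a broadcast report, which in turn
    overrides static map data for the same sign.
    """
    merged = {}
    for source in (map_signs, v2x_signs, sensor_signs):
        for sign_id, sign in source.items():
            merged[sign_id] = sign  # later (higher-priority) sources win
    return merged
```

The same merge pattern could serve the lane line integration sub-module by keying on lane identifiers instead of sign identifiers.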
  • the road sign information includes traffic light information and/or speed limit sign information.
  • the lane line information includes at least one of information about the lane where the vehicle is located, information about the lane where surrounding vehicles are located, color information of the lane line, type information of the lane line, and position information of the lane line.
  • the road traffic scene information includes at least one of intersection information, road merging information, road bifurcation information, and ramp information.
  • the road traffic abnormal situation information includes at least one of road construction information, abnormal vehicle information and emergency vehicle information.
  • the reconstruction module 23 is configured to reconstruct the driving scene of the self-vehicle based on at least one of the integrated road sign information, lane line information, road traffic scene information, road traffic abnormal situation information, and optimized navigation path planning information.
  • the acquisition module 21 is further configured to acquire target information from the sensor information and the Internet of Vehicles information;
  • the integration module 22 is further configured to integrate the target information in the sensor information and the Internet of Vehicles information;
  • the reconstruction module 23 is configured to reconstruct the driving scene of the self-vehicle based on the integrated road traffic situation information and the integrated target information.
  • the target information includes at least one of size information, type information, location information and orientation information of the target.
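The target information enumerated above (size, type, location, and orientation of a target) could be carried in a simple record type. The field names, units, and the coordinate convention below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class TargetInfo:
    """One detected traffic participant (vehicle, pedestrian, ...).

    Fields mirror the categories listed in the text; names, units and
    the ego-relative coordinate frame are illustrative assumptions.
    """
    target_type: str   # e.g. "vehicle" or "pedestrian"
    size_m: tuple      # (length, width, height) in metres
    position_m: tuple  # (x, y) relative to the self-vehicle, metres
    heading_deg: float # orientation, degrees

    def is_ahead(self):
        """True if the target is in front of the self-vehicle (x > 0)."""
        return self.position_m[0] > 0
```

A record like this would let the reconstruction module place each target in the rendered scene by position and orientation, and pick an icon by type and size.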
  • the driving scene reconstruction device may further include:
  • the extraction module 24 is used to extract the automatic driving level information and level prompt information of the self-vehicle;
  • the reconstruction module 23 is configured to reconstruct the driving scene of the self-vehicle based on the integrated road traffic condition information, automatic driving level information and level prompt information.
  • the driving scene reconstruction device may further include:
  • the extraction module 24 is used to extract the automatic driving function information and function prompt information of the self-vehicle;
  • the reconstruction module 23 is used to reconstruct the driving scene of the self-vehicle based on the integrated road traffic condition information, automatic driving function information and function prompt information.
  • FIG. 16 is a structural block diagram of a driving scene reconstruction system according to an embodiment of the present application.
  • the embodiment of the present application further provides a driving scene reconstruction system.
  • the driving scene reconstruction system includes the driving scene reconstruction device 20 described above.
  • the driving scene reconstruction system may further include:
  • the sensor 31 is connected to the driving scene reconstruction device 20 for collecting and outputting sensor information to the driving scene reconstruction device 20;
  • the Internet of Vehicles device 32 is connected to the driving scene reconstruction device 20, and is used for outputting Internet of Vehicles information to the driving scene reconstruction device 20;
  • a map device 33 connected to the driving scene reconstruction device 20, and used for outputting map information to the driving scene reconstruction device;
  • the display device 34 is connected to the driving scene reconstruction device, and is configured to receive data of the reconstructed driving scene from the driving scene reconstruction device, and display the reconstructed driving scene.
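A minimal sketch of how the system of FIG. 16 could be wired, with the sensor, the Internet of Vehicles device, and the map device feeding the reconstruction device and the result pushed to the display. All class names are hypothetical, and the sources are modeled simply as callables returning one information snapshot each.

```python
class DrivingSceneReconstructionSystem:
    """Wires the information sources, the fusion step, and a display.

    Illustrative sketch only: `sensor`, `v2x_device` and `map_device`
    are callables returning dict snapshots; `display` is a callable
    that receives the fused scene (e.g. an instrument cluster driver).
    """

    def __init__(self, sensor, v2x_device, map_device, display):
        self.sensor = sensor
        self.v2x_device = v2x_device
        self.map_device = map_device
        self.display = display

    def update(self):
        # Pull one snapshot from each source, fuse, and display the scene.
        fused = {}
        fused.update(self.map_device())   # static base layer
        fused.update(self.v2x_device())   # broadcast layer
        fused.update(self.sensor())       # live detection layer
        self.display(fused)
        return fused
```

In a vehicle, `update()` would run periodically; the "connections" in the text (CAN, Wi-Fi, network) would sit behind the source callables.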
  • the sensor 31, the Internet of Vehicles device 32, and the map device 33 are all connected to the acquisition module 21 in the driving scene reconstruction device 20.
  • the display device 34 may be connected to a reconstruction device in the driving scene reconstruction device 20 .
  • a "connection" is an electrical connection, which may be a CAN connection, a Wi-Fi connection, a network connection, or the like.
  • the driving scene reconstruction device may be a controller, and the controller is integrated with an acquisition module, an integration module, an extraction module and a reconstruction module.
  • the display device may be an instrument controller with a display function in the vehicle.
  • alternatively, the controller is integrated with only the acquisition module, the integration module, and the extraction module;
  • in this case, the display device may be an instrument controller with a display function in the vehicle, and the instrument controller implements both the function of the reconstruction module and the display function.
  • An embodiment of the present application further provides a vehicle.
  • the vehicle may include the above-mentioned driving scene reconstruction device.
  • the vehicle may include the driving scene reconstruction system described above.
  • FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present application.
  • An embodiment of the present application further provides an electronic device.
  • the electronic device includes: at least one processor 920 , and a memory 910 communicatively connected to the at least one processor 920 .
  • the memory 910 stores instructions executable by the at least one processor 920;
  • when the instructions are executed by the at least one processor 920, the driving scene reconstruction method in the foregoing embodiments is implemented.
  • the number of the memory 910 and the processor 920 may be one or more.
  • the electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the application described and/or claimed herein.
  • the electronic device may further include a communication interface 930 for communicating with external devices and performing interactive data transmission.
  • the various devices are interconnected using different buses and can be mounted on a common motherboard or otherwise as desired.
  • the processor 920 may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information for a Graphical User Interface (GUI) on an external input/output device, such as a display device coupled to the interface.
  • multiple processors and/or multiple buses may be used together with multiple memories, if desired.
  • multiple electronic devices may be connected, each providing some of the necessary operations (eg, as a server array, a group of blade servers, or a multiprocessor system).
  • the bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 17, but this does not mean that there is only one bus or only one type of bus.
  • if the memory 910, the processor 920, and the communication interface 930 are integrated on one chip, the memory 910, the processor 920, and the communication interface 930 can communicate with each other through an internal interface.
  • the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general-purpose processor may be a microprocessor or any conventional processor. It is worth noting that the processor may be a processor supporting the Advanced RISC Machines (ARM) architecture.
  • Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 910) that stores computer instructions; when the instructions are executed by a processor, the methods provided in the embodiments of the present application are implemented.
  • the memory 910 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like.
  • memory 910 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
  • the memory 910 may optionally include memory located remotely relative to the processor 920, and these remote memories may be connected over a network to the electronic device that performs the driving scene reconstruction method. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the terms "first" and "second" are used for descriptive purposes only, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature defined with "first" or "second" may expressly or implicitly include at least one of that feature.
  • plurality means two or more, unless otherwise expressly and specifically defined.
  • Any description of a process or method in a flowchart or otherwise described herein may be understood as representing a module, fragment, or section of code comprising one or more executable instructions for implementing the steps of a specified logical function or process.
  • the scope of the preferred embodiments of the present application includes alternative implementations in which the functions may be performed out of the order shown or discussed, including performing the functions substantially concurrently or in the reverse order depending upon the functions involved.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules may be implemented in the form of hardware or in the form of software functional modules. If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.


Abstract

A driving scene reconstruction method and apparatus, a system, a vehicle, an electronic device, and a computer-readable storage medium. The driving scene reconstruction method comprises: acquiring road traffic situation information in sensor information, Internet of Vehicles information, and map information (S101); integrating the road traffic situation information in the sensor information, the Internet of Vehicles information, and the map information (S102); and reconstructing the driving scene of the self-vehicle (11) based on the integrated road traffic situation information (S103). The method fuses sensor information, Internet of Vehicles information, and map information, enriching the information sources about the surroundings of the automatic driving vehicle, and incorporates road traffic situation information so that the reconstructed driving scene is closer to the real driving environment, providing the driver with more effective assistance and enhancing driving safety.

Description

Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
This application claims priority to Chinese patent application No. 202010685995.5, filed with the Chinese Patent Office on July 16, 2020 and entitled "Driving Scene Reconstruction Method, Device, System, Vehicle, Equipment and Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of automatic driving, and in particular to a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium.
Background
A self-driving car, also known as a driverless car, computer-driven car, or wheeled mobile robot, is an intelligent car that drives itself by means of a computer system. As the level of automation rises, the driver's role gradually shifts to that of a monitor. A driving scene reconstruction system can better present the vehicle's surroundings to the driver, so that the driver can clearly understand the situation around the vehicle while remaining relaxed. Moreover, in non-automatic driving modes, the sensors and other components used for automatic driving still operate normally, so scene reconstruction can be performed to provide driving assistance to the driver.
At present, the driving scene information provided by driving scene reconstruction systems is limited, which degrades the automatic driving experience and creates hidden safety risks.
Summary of the Invention
Embodiments of the present application provide a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium, so as to solve problems existing in the related art. The technical solutions are as follows:
In a first aspect, an embodiment of the present application provides a driving scene reconstruction method, including:
acquiring road traffic situation information in sensor information, Internet of Vehicles information, and map information;
integrating the road traffic situation information in the sensor information, the Internet of Vehicles information, and the map information;
reconstructing the driving scene of the self-vehicle based on the integrated road traffic situation information.
In a second aspect, an embodiment of the present application provides a driving scene reconstruction apparatus, including:
an acquisition module, configured to acquire road traffic situation information in sensor information, Internet of Vehicles information, and map information;
an integration module, configured to integrate the road traffic situation information in the sensor information, the Internet of Vehicles information, and the map information;
a reconstruction module, configured to reconstruct the driving scene of the self-vehicle based on the integrated road traffic situation information.
In a third aspect, an embodiment of the present application provides a driving scene reconstruction system, including the driving scene reconstruction apparatus described above. The system further includes:
a sensor, connected to the driving scene reconstruction apparatus and configured to collect and output sensor information to the driving scene reconstruction apparatus;
an Internet of Vehicles device, connected to the driving scene reconstruction apparatus and configured to output Internet of Vehicles information to the driving scene reconstruction apparatus;
a map device, connected to the driving scene reconstruction apparatus and configured to output map information to the driving scene reconstruction apparatus;
a display device, connected to the driving scene reconstruction apparatus and configured to receive data of the reconstructed driving scene from the driving scene reconstruction apparatus and display the reconstructed driving scene.
In a fourth aspect, an embodiment of the present application provides a vehicle, which includes the driving scene reconstruction apparatus described above, or includes the driving scene reconstruction system described above.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to execute the above driving scene reconstruction method.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer instructions; when the computer instructions are run on a computer, the method in any one of the implementations of the above aspects is executed.
The advantages or beneficial effects of the above technical solutions include at least the following:
The driving scene reconstruction method fuses sensor information, Internet of Vehicles information, and map information, enriching the information sources about the surroundings of the automatic driving vehicle. Moreover, by incorporating road traffic situation information, the reconstructed driving scene is closer to the real driving environment, which can provide more effective assistance to the driver and improve driving safety.
The above summary is for illustrative purposes only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will become readily apparent by reference to the drawings and the following detailed description.
Description of Drawings
In the drawings, unless otherwise specified, the same reference numerals refer to the same or similar parts or elements throughout the several figures. The drawings are not necessarily drawn to scale. It should be understood that these drawings depict only some embodiments disclosed in accordance with the present application and should not be considered as limiting the scope of the present application.
FIG. 1 is a schematic block diagram of a driving scene reconstruction system according to an exemplary embodiment;
FIG. 2 shows the processing procedure of the controller in FIG. 1;
FIG. 3 shows the types of information handled by the processor;
FIG. 4 is a schematic flowchart of a driving scene reconstruction method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of road traffic situation information in a driving scene reconstruction method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of target information in a driving scene reconstruction method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
FIG. 9 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
FIG. 10 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
FIG. 11 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
FIG. 12 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application;
FIG. 13 is a structural block diagram of a driving scene reconstruction apparatus according to an embodiment of the present application;
FIG. 14 is a structural block diagram of an acquisition module of a driving scene reconstruction apparatus according to an embodiment of the present application;
FIG. 15 is a structural block diagram of an integration module of a driving scene reconstruction apparatus according to an embodiment of the present application;
FIG. 16 is a structural block diagram of a driving scene reconstruction system according to an embodiment of the present application;
FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive.
Advanced Driver Assistance System (ADAS) sensors can collect environmental data inside and outside the car through various on-board sensors and perform technical processing such as the identification, detection, and tracking of static and dynamic objects, so that the driver can perceive possible dangers as early as possible and take corresponding measures, thereby improving driving safety.
Sensors include, but are not limited to, one or more image acquisition devices (e.g., cameras), inertial measurement units, radars, and the like. The image acquisition device can be used to collect target information, road sign information, and lane line information about the surroundings of the automatic driving vehicle. The inertial measurement unit can sense position and orientation changes of the automatic driving vehicle based on inertial acceleration. The radar can use radio signals to sense targets and road signs within the local environment of the automatic driving vehicle. In addition to sensing a target, the radar unit may additionally sense the speed and/or heading of the target. The image acquisition device may include one or more devices for capturing images of the environment surrounding the automatic driving vehicle. For example, the image acquisition device may be a still camera and/or a video camera. The camera may include an infrared camera. The camera may be mechanically movable, for example, by mounting the camera on a rotating and/or tilting platform.
Sensors may also include, for example, sonar sensors, infrared sensors, steering sensors, accelerator sensors, brake sensors, audio sensors (e.g., microphones), and the like. The audio sensor may be configured to pick up sound from the environment surrounding the automatic driving vehicle. The steering sensor may be configured to sense the steering angle of the steering wheel, the wheels of the vehicle, or a combination thereof. The accelerator sensor and the brake sensor sense the accelerator position and the brake position of the vehicle, respectively. In some cases, the accelerator sensor and the brake sensor may be integrated into a single accelerator/brake sensor.
V2X (vehicle-to-everything, also known as the Internet of Vehicles) refers to the exchange of information between a vehicle and the outside world. V2X communication is a key technology for realizing environmental perception, information interaction, and cooperative control in the Internet of Vehicles. It uses various communication technologies to interconnect vehicle with vehicle (Vehicle-To-Vehicle, V2V), vehicle with infrastructure (Vehicle-To-Infrastructure, V2I), and vehicle with person (Vehicle-To-Person, V2P), and makes effective use of information extracted and shared on an information network platform to effectively manage and control vehicles and provide comprehensive services. In this way, a series of road traffic situation information, such as real-time road conditions, road sign information, lane line information, and target information, can be obtained, thereby improving driving safety, reducing congestion, improving traffic efficiency, and providing in-vehicle entertainment information.
In the automatic driving scene reconstruction process in the embodiments of the present application, not only sensor information from ADAS sensors but also Internet of Vehicles information from V2X can be obtained. The surrounding environment information finally obtained is therefore rich and can reflect the real driving environment, thereby improving the safety of automatic driving and enhancing the automatic driving experience.
FIG. 1 is a schematic block diagram of a driving scene reconstruction system according to an exemplary embodiment. As shown in FIG. 1, after the sensor, the V2X device, and the map device collect information around the vehicle, the controller can receive raw data from the sensor; the controller can also receive, from V2X, road information (also referred to as lane line information), traffic sign information (also referred to as road sign information), and information about traffic participants such as surrounding vehicles and/or pedestrians (also referred to as target information); and the controller can further receive, from the map device, road information (also referred to as lane line information), traffic sign information (also referred to as road sign information), position information of the self-vehicle, navigation information (also referred to as navigation path planning information), and the like. After receiving the sensor information, the Internet of Vehicles information, and the map information, the controller obtains road traffic situation information from the received information, integrates the road traffic situation information, and transmits the integrated road traffic situation information, such as lane line information, traffic sign information (also referred to as road sign information), target information (including the type, direction, position, and alarms of targets), and the motion trajectory of the self-vehicle, to the instrument controller. The instrument controller processes the received information into driving scene data and transmits the driving scene data to the instrument display device to display the reconstructed driving scene.
In one embodiment, the road traffic condition information may include road sign information, lane line information, road traffic abnormality information, congestion status information, road traffic scene information, navigation path planning information, target information, and the like. Targets include traffic participants such as vehicles and pedestrians around the ego vehicle.
FIG. 2 shows the processing procedure of the controller in FIG. 1, and FIG. 3 shows the types of information processed by the controller. As shown in FIG. 2, the sensors may exemplarily include radar and image acquisition devices. The radar can use radio signals to sense target information and road sign information in the environment around the automated vehicle, generating point cloud data such as a point cloud map. An image acquisition device such as a camera can be used to capture road signs, lane lines and targets in the surrounding environment, generating a video stream. As shown in FIG. 2, the controller may exemplarily include a classification processing module and an information fusion module (also called an integration module). The classification processing module may include a target information recognition module, a traffic information recognition module (also called a road traffic condition information recognition module), and a function alarm module.
After the controller receives the point cloud map from the radar, the video stream from the camera, the V2X information and the map device information, the target information recognition module identifies target information from the received information and transmits it to the information fusion module; the traffic information recognition module identifies road traffic condition information from the received information and transmits it to the information fusion module; and the function alarm module identifies function alarm information from the received information and transmits it to the information fusion module (also called the integration module). The information fusion module integrates the received target information, road traffic condition information and function alarm information respectively, and outputs the integrated target information, road traffic condition information and function alarm information.
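The classify-then-fuse flow described above can be sketched as follows. This is a minimal illustration only; the module names, message format and fusion function are assumptions, not taken from the patent.

```python
def classify_and_fuse(frames, recognizers, fuse):
    """Route every input frame through each recognizer, then fuse per category."""
    by_category = {}
    for frame in frames:  # point clouds, video frames, V2X messages, map data
        for category, recognize in recognizers.items():
            result = recognize(frame)
            if result is not None:
                by_category.setdefault(category, []).append(result)
    # Integrate each category (targets, traffic info, function alarms) separately.
    return {category: fuse(items) for category, items in by_category.items()}

# Example: toy recognizers that each pull one field out of a message.
recognizers = {
    "target": lambda msg: msg.get("target"),
    "traffic": lambda msg: msg.get("traffic"),
}
frames = [{"target": "car"}, {"traffic": "red_light"}, {"target": "pedestrian"}]
fused = classify_and_fuse(frames, recognizers, fuse=lambda items: items)
# fused == {"target": ["car", "pedestrian"], "traffic": ["red_light"]}
```

In this sketch each recognizer sees every input and returns `None` for frames it cannot interpret, mirroring how the three recognition modules each extract only their own kind of information from the shared input stream.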
As shown in FIG. 3, the target recognition module can exemplarily identify information such as the type, coordinates and orientation of a target. The traffic information recognition module can identify the position and current state of traffic lights, the value and position of speed limit signs, the type and coordinates of lane lines, road traffic scenes, road traffic abnormalities, optimized road path planning, and so on. The function alarm module may cover the automated driving level, forward collision warning, emergency braking warning, intersection collision warning, blind spot warning, lane change warning, speed limit warning, lane keeping warning, emergency lane keeping, rear collision warning, rear cross traffic collision warning, door opening warning, left turn assist, red light running warning, oncoming overtaking warning, out-of-control vehicle warning, abnormal vehicle warning, vulnerable road user warning, and the like.
The information fusion module combines the function alarm information with the target information, combines the automated driving level and lane keeping status with the lane line information, combines the navigation path planning with the ego vehicle's motion, and outputs the integrated target information, road traffic condition information and function alarm information.
As shown in FIG. 3, the target information may exemplarily include:
Target status: none, present, alarm level 1, alarm level 2, alarm level 3, etc.;
Target type: for example, car, SUV, minibus, bus, van, truck, motorcycle, two-wheeler, adult, child, etc.;
Target orientation: for example, forward, backward, left, right, etc.;
Coordinates: abscissa, ordinate, etc.
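The enumerated target fields could be carried in a small record such as the sketch below. The field names and enum encoding are illustrative assumptions; the patent does not specify a data layout.

```python
from dataclasses import dataclass
from enum import Enum

class TargetStatus(Enum):
    """'None / present / alarm level 1-3' states from the target status field."""
    NONE = 0
    PRESENT = 1
    ALARM_1 = 2
    ALARM_2 = 3
    ALARM_3 = 4

@dataclass
class Target:
    status: TargetStatus
    kind: str          # e.g. "car", "SUV", "motorcycle", "adult", "child"
    orientation: str   # e.g. "forward", "backward", "left", "right"
    x: float           # abscissa, relative to the ego vehicle
    y: float           # ordinate, relative to the ego vehicle

# A car 20 m ahead and 1 m to the side, currently at alarm level 1:
t = Target(TargetStatus.ALARM_1, "car", "forward", 1.0, 20.0)
```

Such a record is what the target recognition module would hand to the information fusion module for each detected traffic participant.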
As shown in FIG. 3, the lane line information may exemplarily include:
Status: none, present, alarm, Adaptive Cruise Control (ACC), automated driving level L2, L3, L4, etc.;
Type: solid line, dashed line, double yellow line, curb, road edge, etc.;
Parameters: A0, A1, A2, A3 (lane line equation: y = a0 + a1·x + a2·x² + a3·x³).
Those skilled in the art will understand that A0, A1, A2 and A3 are the polynomial coefficients of the lane line on the left side of the ego lane, where A0 represents the lateral distance from the vehicle center to the lane line, a positive value indicating that the lane line is on the left; A1 represents the heading angle of the ego vehicle relative to the lane line, a positive value indicating a counterclockwise lane line; A2 represents the lane line curvature, a positive value indicating that the lane line bends to the left; and A3 represents the rate of change of the lane line curvature, a positive value likewise indicating a leftward bend. After the controller obtains the lane line information from the sensor information or V2X information, it can obtain the values of A0, A1, A2 and A3. When the lane line is drawn on the display device, it is drawn according to the lane line equation, so that a lane line consistent with the actual lane line can be displayed in the reconstructed driving scene.
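As an illustration of drawing from the lane line equation y = a0 + a1·x + a2·x² + a3·x³, the sketch below samples points along the longitudinal axis that a display device could connect into a curve. This is a minimal example, not the patent's actual rendering code; the sampling range and step are arbitrary.

```python
def lane_line_points(a0, a1, a2, a3, x_max=50.0, step=5.0):
    """Sample (x, y) points of the lane-line polynomial for drawing.

    x is the longitudinal distance ahead of the vehicle; y is the lateral
    offset (positive = lane line to the left, matching the sign convention
    of A0 described above)."""
    points = []
    x = 0.0
    while x <= x_max:
        y = a0 + a1 * x + a2 * x**2 + a3 * x**3
        points.append((x, y))
        x += step
    return points

# A straight lane line 1.8 m to the left of the vehicle center:
straight = lane_line_points(1.8, 0.0, 0.0, 0.0, x_max=10.0, step=5.0)
# straight == [(0.0, 1.8), (5.0, 1.8), (10.0, 1.8)]
```

Nonzero A2/A3 coefficients would bend the sampled curve left or right, which is how the reconstructed scene reproduces curved roads.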
As shown in FIG. 3, the traffic information may exemplarily include:
Number of lanes: 1, 2, 3, 4, 5, 6, 7, 8, etc.;
Current lane: left 1, left 2, right 1, right 2, etc.;
Path planning direction: none, straight, left turn, right turn, front left, front right, U-turn, etc.;
Speed limit information status: none, present, alarm;
Speed limit value: 5, 10, 15, ..., 130;
Traffic light status: none, red, yellow, green, countdown, etc.;
Traffic light coordinates: abscissa, ordinate;
Road traffic scenes: ramp, intersection, road merge, road fork, T-junction, etc.
Based on sensor information, Internet of Vehicles information and map information, the present application proposes a driving scene reconstruction method based on actual traffic.
FIG. 4 is a schematic flowchart of a driving scene reconstruction method according to an embodiment of the present application. As shown in FIG. 4, the driving scene reconstruction method may include:
S101: acquiring road traffic condition information from sensor information, Internet of Vehicles information and map information;
S102: integrating the road traffic condition information from the sensor information, the Internet of Vehicles information and the map information;
S103: reconstructing the driving scene of the ego vehicle based on the integrated road traffic condition information.
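Steps S101 to S103 can be sketched as a simple three-stage pipeline. The function names and data shapes below are assumptions for illustration only.

```python
def reconstruct_driving_scene(sensor_info, iov_info, map_info,
                              extract, integrate, render):
    """S101-S103 as a pipeline: extract road traffic condition information
    from each source, integrate it across sources, then reconstruct."""
    # S101: acquire road traffic condition information from each source.
    per_source = [extract(src) for src in (sensor_info, iov_info, map_info)]
    # S102: integrate the per-source road traffic condition information.
    integrated = integrate(per_source)
    # S103: reconstruct the ego vehicle's driving scene from the result.
    return render(integrated)

# Toy usage: each source carries a "traffic" list, integration concatenates.
scene = reconstruct_driving_scene(
    {"traffic": ["lane_lines"]}, {"traffic": ["congestion"]}, {"traffic": []},
    extract=lambda src: src["traffic"],
    integrate=lambda groups: [item for group in groups for item in group],
    render=lambda items: {"scene": items},
)
# scene == {"scene": ["lane_lines", "congestion"]}
```

Because the pipeline is re-run as new inputs arrive, the same structure supports the real-time reconstruction discussed below.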
The driving scene reconstruction method of the present application fuses sensor information from ADAS sensors, Internet of Vehicles information from V2X, and map information, enriching the information sources about the surroundings of the automated vehicle. Moreover, because the method incorporates road traffic condition information, the reconstructed driving scene is closer to the real driving environment, better matches the actual needs of users in both automated and ordinary driving, can provide more effective assistance to the driver, and improves driving safety.
Those skilled in the art will understand that, in the driving scene reconstruction method, the road traffic condition information in the sensor information, Internet of Vehicles information and map information is acquired in real time; the road traffic condition information from these sources is integrated; and the driving scene of the ego vehicle is reconstructed in real time based on the integrated information. The reconstructed driving scene can therefore display the surroundings of the ego vehicle in real time.
FIG. 5 is a schematic diagram of road traffic condition information in a driving scene reconstruction method according to an embodiment of the present application.
In one embodiment, in S101, acquiring the road traffic condition information in the sensor information may include: receiving point cloud data from the radar and parsing the point cloud data to obtain road sign information.
In one embodiment, in S101, acquiring the road traffic condition information in the sensor information may include: receiving a video stream from the image acquisition device and parsing the video stream to obtain road sign information and lane line information.
Exemplarily, as shown in FIG. 5, the road sign information may include traffic light information and speed limit sign information, among others. The traffic light information may include the state of the traffic light, the position of the traffic light, and so on. The speed limit sign information may include the value of the speed limit sign, the position of the speed limit sign, and so on. For example, a speed limit sign with a value of 50 on a section of an urban road indicates a maximum speed limit of 50 km/h on that section.
Those skilled in the art will understand that road sign information is not limited to traffic light information and speed limit sign information, and may also include other recognizable signs such as U-turn signs, go-straight signs and turn signs.
The lane line information may include at least one of: information about the lane in which the ego vehicle is located, information about the lanes in which surrounding vehicles are located, lane line color information, lane line type information, and lane line position information. For example, lane line colors include white, yellow, and so on. Lane line types include solid lines, dashed lines, double yellow lines, curbs, etc. The lane line position information may be the coordinate information of the lane line, corresponding to the lane line's position. Those skilled in the art will understand that lane lines delimit lanes; therefore, the lane line information can reflect the number of lanes, the lane of the ego vehicle, and the lanes of surrounding vehicles.
In one embodiment, in S101, acquiring the road traffic condition information in the Internet of Vehicles information may include: acquiring at least one of road sign information, lane line information, road traffic abnormality information and congestion status information from an on-board unit and/or a roadside unit.
In one embodiment, as shown in FIG. 5, the V2X communication devices may include at least one of an On Board Unit (OBU) and a Road Side Unit (RSU). The source of the Internet of Vehicles information is at least one of the OBU and the RSU. Through V2X communication, at least one of road sign information, lane line information, road traffic abnormality information and congestion status information can be obtained. The road traffic abnormality information may include at least one of road construction information, abnormal vehicle information and emergency vehicle information. Thus, based on the Internet of Vehicles information, the optimal driving route of the ego vehicle can be planned according to the road traffic abnormality information and the congestion status information, so as to reach the destination efficiently.
In one embodiment, in S101, acquiring the road traffic condition information in the map information may include: acquiring at least one of road sign information, lane line information, road traffic scene information and navigation path planning information from the map information.
For example, the map information can come from the BeiDou navigation system or the GPS navigation system. The map information may include road sign information, lane line information, road traffic scene information, navigation path planning information, and the like. To acquire road traffic condition information, at least one of these may be acquired from the map information. The road traffic scene information may include road information such as intersection information, road merge information, road fork information and ramp information. The navigation path planning information includes driving route information from the origin to the destination. Those skilled in the art will understand that the map information may provide multiple driving routes from the origin to the destination, and the navigation path planning information may accordingly include multiple such routes.
Exemplarily, as shown in FIG. 5, the road traffic condition information obtained from the map information includes: the lanes in which the ego vehicle and surrounding vehicles are located, the types and coordinates of lane lines, the positions of traffic lights, the distances from the ego vehicle to ramps, intersections and the like, navigation path planning information, and so on.
Duplicate information may exist among the sensor information, Internet of Vehicles information and map information; for example, all three may include road sign information. After road sign information is obtained separately from the three sources, it can be integrated so as to select the optimal road sign information for reconstructing the driving scene. To obtain optimal road traffic condition information, exemplarily, as shown in FIG. 5, integrating the road traffic condition information in the sensor information, Internet of Vehicles information and map information in S102 may include one of the following:
integrating the road sign information in the sensor information, Internet of Vehicles information and map information;
integrating the lane line information in the sensor information, Internet of Vehicles information and map information;
integrating the road traffic scene information in the map information;
screening and integrating the road traffic abnormality information in the Internet of Vehicles information;
combining the congestion status information in the Internet of Vehicles information with the navigation path planning information in the map information to obtain optimized navigation path planning information.
Within the sensor information, road sign information can be parsed from the radar point cloud data, for example the positions of speed limit signs and traffic lights; road sign information can also be parsed from the video stream of the image acquisition device, for example the positions of speed limit signs, speed limit values, the positions of traffic lights, and traffic light states (red, green or yellow). The Internet of Vehicles information includes road sign information such as the positions of speed limit signs, speed limit values, the positions of traffic lights and traffic light states. The map information includes road sign information such as the positions of traffic lights and speed limit values.
Those skilled in the art will understand that the accuracy of the information collected by the sensors, the vehicle network and the map may differ; for example, the positions of speed limit signs and traffic lights in the radar point cloud data are more accurate than those in the video stream of the image acquisition device, while the radar point cloud data does not contain speed limit values, traffic light states, and the like.
In one embodiment, integrating the road sign information in the sensor information, Internet of Vehicles information and map information may include screening, selecting and fusing the road sign information from these sources. The output integrated road sign information may include: the positions and current states of traffic lights; the values and positions of speed limit signs; and so on.
For duplicate road sign information obtained from the sensor information, Internet of Vehicles information and map information, the information with the best accuracy can be selected as the integrated information. For example, if traffic light position information can be obtained from all three sources, the traffic light position information with the best accuracy is selected as the integrated traffic light position information. If, say, the traffic light position information in the radar point cloud data has the best accuracy, the traffic light position parsed from the radar point cloud data is selected as the integrated traffic light position information.
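The "pick the most accurate source" rule can be sketched as below, assuming each item has a known per-source accuracy ranking. The ranking table, source names and values are illustrative assumptions; in practice they would come from calibration of the actual sensors, V2X link and map data.

```python
# Hypothetical per-item accuracy ranking; higher is better.
ACCURACY = {
    "traffic_light_position": {"radar": 3, "camera": 2, "v2x": 1, "map": 1},
    "traffic_light_state":    {"camera": 1},  # only one source provides it
}

def select_best(item, readings):
    """Pick the reading from the most accurate source that supplied it.

    `readings` maps source name -> value for one road-sign item; sources
    that did not supply the item are simply absent from the dict."""
    ranking = ACCURACY[item]
    best_source = max(readings, key=lambda src: ranking.get(src, 0))
    return readings[best_source]
```

When all three sources report a position, the radar reading wins under this table; when only the camera reports the light state, it is used directly, which matches the single-source case described below.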
For road sign information obtainable from only one of the sensor information, Internet of Vehicles information and map information, that information can be used directly as the integrated information. For example, if among the three sources only the video stream of the image acquisition device yields the traffic light state, the traffic light state parsed from that video stream is used directly as the integrated traffic light state.
For those skilled in the art, the accuracy of each item in the sensor information, Internet of Vehicles information and map information can be known; in the integration process, the source of each item can be set directly, or the information can be screened, selected and fused through a model. After the positions of speed limit signs, speed limit values, traffic light positions, traffic light states and other items from the three sources are integrated, the integrated speed limit sign positions, speed limit values, traffic light positions and traffic light states are obtained and serve as the integrated road sign information.
Within the sensor information, lane line information can be parsed from the video stream of the image acquisition device, for example the color of the lane line (white, yellow, etc.), the type of the lane line (dashed, solid, curb, etc.), and the position of the lane line. The Internet of Vehicles information includes information about the lane of the ego vehicle, the lanes of surrounding vehicles, and so on. The map information includes information about the lane of the ego vehicle, the lanes of surrounding vehicles, the types of lane lines, the positions of lane lines, and so on.
In one embodiment, integrating the lane line information in the sensor information, Internet of Vehicles information and map information may include screening, selecting and fusing the lane line information from these sources, and outputting the integrated lane line types, coordinates, and so on.
For duplicate lane line information obtained from the sensor information, Internet of Vehicles information and map information, the information with the best accuracy can be selected as the integrated information. For example, if the lane line type and position can be obtained from both the sensor information and the map information, the lane line type and position with the best accuracy are selected as the integrated lane line type and position. If, say, the lane line type parsed from the video stream of the image acquisition device has the best accuracy, it is selected as the integrated lane line type; if the lane line position obtained from the map information has the best accuracy, it is selected as the integrated lane line position.
For lane line information obtainable from only one of the sensor information, Internet of Vehicles information and map information, that information can be used directly as the integrated information. For example, if only the video stream of the image acquisition device yields the lane line color, the lane line color parsed from that video stream is used directly as the integrated lane line color.
For those skilled in the art, the accuracy of each item in the sensor information, Internet of Vehicles information and map information can be known; in the integration process, the source of each item can be set directly, or the information can be screened, selected and fused through a model. After the information about the ego vehicle's lane, the lanes of surrounding vehicles, lane line types, lane line colors and lane line positions from the three sources is integrated, the integrated versions of these items are obtained and serve as the integrated lane line information.
In one embodiment, to integrate the road traffic scene information in the map information, the road traffic scene information in the map information can be used directly as the integrated road traffic scene information. The road traffic scene information may include road information such as intersection information, road merge information, road fork information and ramp information; such road information obtained from the map information can serve as the integrated road traffic scene information.
In one embodiment, screening and integrating the road traffic abnormality information in the Internet of Vehicles information may include: screening, selecting and fusing the road traffic abnormality information from multiple roadside units to obtain the road traffic abnormality information that affects the driving of the ego vehicle. Those skilled in the art will understand that the ego vehicle can communicate with multiple roadside units and thus obtain road traffic abnormality information from all of them. By integrating this information, the abnormality information that affects the driving of the ego vehicle can be obtained, which helps predict the vehicle's driving route. For example, the road traffic abnormality information may include road construction information, abnormal vehicle information, emergency vehicle information, and the like; these items obtained from the Internet of Vehicles information can be screened and integrated as the integrated road traffic abnormality information. Abnormal vehicles may include broken-down vehicles, out-of-control vehicles, and the like. Emergency vehicles may include ambulances, fire trucks, police cars, and the like.
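One way to screen roadside-unit reports down to those that affect the ego vehicle is to keep only events near the planned route. The distance criterion, threshold and event format below are assumptions for illustration; the patent does not prescribe a specific screening rule.

```python
import math

def relevant_abnormalities(events, route_points, max_dist_m=200.0):
    """Keep road traffic abnormality events (road construction, broken-down
    or out-of-control vehicles, emergency vehicles) reported by roadside
    units that lie within `max_dist_m` of any point on the planned route."""
    def near_route(position):
        return any(math.dist(position, p) <= max_dist_m for p in route_points)
    return [event for event in events if near_route(event["position"])]
```

Events far from every route point are discarded, so only abnormalities that can actually influence the ego vehicle's driving reach the integration stage.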
In one embodiment, combining the congestion status information in the Internet of Vehicles information with the navigation path planning information in the map information to obtain optimized navigation path planning information may include: obtaining multiple origin-to-destination routes from the map information and, based on the congestion status information in the Internet of Vehicles information, discarding the routes affected by congestion. The resulting optimized navigation path planning information allows the destination to be reached more efficiently.
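Discarding congested candidate routes could look like the following sketch. The route and congestion representations (routes as lists of segment identifiers) are assumptions made for illustration.

```python
def optimize_routes(candidate_routes, congested_segments):
    """Drop candidate routes that pass through any congested road segment;
    if every candidate is congested, fall back to the full candidate list
    rather than returning no route at all."""
    congested = set(congested_segments)
    clear = [route for route in candidate_routes
             if not congested.intersection(route["segments"])]
    return clear or candidate_routes
```

With map-supplied candidates `A` (via segments s1, s2) and `B` (via s3), a congestion report on s2 leaves only route `B` as the optimized navigation path.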
In one embodiment, in S103, reconstructing the driving scene of the ego vehicle based on the integrated road traffic condition information may include:
reconstructing the driving scene of the ego vehicle based on at least one of the integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information and optimized navigation path planning information.
The integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information and optimized navigation path planning information are all optimized information, so the reconstructed driving scene of the ego vehicle is closer to the real driving environment and provides effective assistance to the driver.
Those skilled in the art will understand that, once the road traffic condition information is obtained, the driving scene of the ego vehicle can be reconstructed using conventional reconstruction techniques in the art; the reconstruction process is therefore not described in detail here.
In one embodiment, the driving scene reconstruction method may further include:
acquiring target information from the sensor information and the Internet of Vehicles information;
integrating the target information from the sensor information and the Internet of Vehicles information; and
reconstructing the driving scene of the ego vehicle based on the integrated road traffic situation information and the integrated target information.
Those skilled in the art will understand that, in addition to the surrounding road traffic conditions, there are many targets (vehicles or pedestrians) around the vehicle, and the type, direction, position, and other attributes of these targets also affect the driving of the ego vehicle. Reconstructing the driving scene based on both the integrated road traffic situation information and the integrated target information yields a driving scene that reflects not only the road traffic conditions but also the targets around the ego vehicle. Such a driving scene better matches the real driving environment, and the displayed targets serve as an early warning to the driver, allowing the driver to anticipate collision risks and improving driving safety.
In one embodiment, the target information may include the target's size information, type information (for example, vehicle and vehicle type, or pedestrian and pedestrian type), direction information (for example, forward, backward, left, or right), and position information (for example, the target's abscissa and ordinate relative to the ego vehicle). Vehicle types may include cars, off-road vehicles, minibuses, buses, light trucks, heavy trucks, motorcycles, two-wheelers, and the like. Pedestrian types may include adults, children, and the like.
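The target attributes listed above could be grouped into a single record, for example as follows. The field names, units, and enumerations are illustrative assumptions, not part of the disclosure.

```python
# A possible container for the target information described in the text:
# size, type, direction, and position relative to the ego vehicle.
from dataclasses import dataclass

@dataclass
class Target:
    kind: str        # "car", "bus", "truck", "motorcycle", "adult", "child", ...
    length_m: float  # size information
    width_m: float
    heading: str     # "forward", "backward", "left", "right"
    x_m: float       # abscissa relative to the ego vehicle
    y_m: float       # ordinate relative to the ego vehicle

t = Target(kind="car", length_m=4.6, width_m=1.8,
           heading="forward", x_m=-3.5, y_m=12.0)
```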
In one embodiment, acquiring target information from the sensor information may include at least one of the following:
receiving point cloud data from a radar and parsing the point cloud data to obtain target information;
receiving a video stream from an image acquisition device and parsing the video stream to obtain target information.
FIG. 6 is a schematic diagram of target information in a driving scene reconstruction method according to an embodiment of the present application. In one embodiment, as shown in FIG. 6, the target information obtained from the radar point cloud data may include the target's size, position, direction, and the like. The target information obtained from the video stream of the image acquisition device may include the target's size, type, position, direction, and the like.
In one embodiment, acquiring target information from the Internet of Vehicles information may include:
acquiring target information from an on-board unit and/or a roadside unit.
The target information obtained from the Internet of Vehicles information may include the target's type, position, direction, and the like.
In one embodiment, integrating the target information from the sensor information and the Internet of Vehicles information may include: screening, matching, and selecting the target information from the sensor information and the Internet of Vehicles information, and outputting the integrated target type, coordinates, orientation (also called direction), and so on. For duplicate target information obtained from both the sensor information and the Internet of Vehicles information, the information with the best accuracy can be selected as the integrated information. For example, the target's position information may be available from the radar point cloud data, from the video stream of the image acquisition device, and from the Internet of Vehicles information. If the position information obtained from the radar point cloud data is the most accurate, it is used as the integrated position information; likewise, if the direction information obtained from the Internet of Vehicles information is the most accurate, it is used as the integrated direction information.
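The best-accuracy selection described above can be sketched as attribute-level fusion with a per-attribute source ranking. The accuracy table, source names, and observation format are assumptions for illustration; in practice the ranking would come from the known accuracies mentioned in the text.

```python
# Sketch: for each target attribute reported by several sources, keep the
# value from the source with the best known accuracy for that attribute.

ACCURACY = {  # earlier in the list = more accurate for that attribute
    "position": ["radar", "camera", "v2x"],
    "direction": ["v2x", "camera", "radar"],
    "type": ["camera", "v2x"],
}

def fuse(observations):
    """observations: {source: {attribute: value}} for one target."""
    fused = {}
    for attr, ranking in ACCURACY.items():
        for source in ranking:
            if attr in observations.get(source, {}):
                fused[attr] = observations[source][attr]
                break  # best available source wins
    return fused

obs = {
    "radar": {"position": (12.0, 3.1)},
    "camera": {"position": (12.4, 3.0), "type": "car"},
    "v2x": {"direction": "forward", "position": (11.0, 3.5)},
}
target = fuse(obs)
```

Here the position comes from the radar, the direction from the Internet of Vehicles (V2X) source, and the type from the camera, each being the most accurate available source for that attribute.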
The accuracy of the target information in the sensor information and the Internet of Vehicles information is known to those skilled in the art. Therefore, during information integration, the source of each piece of information can be set directly, or the information can be screened, matched, selected, and fused by a model. After integrating the target's size, type, position, and direction from the sensor information and the Internet of Vehicles information, the integrated size, type, position, and direction are obtained and serve as the integrated target information.
In one embodiment, the driving scene reconstruction method may further include: receiving the data of the reconstructed driving scene and displaying the reconstructed driving scene. In this way, the reconstructed driving scene can be presented to the driver.
FIG. 7 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to an embodiment of the present application. In one embodiment, the integrated road traffic situation information includes the lane in which the ego vehicle is located, the left lane, the right lane, the lane line color, the lane line type, the targets located in the lanes, and the like. The reconstructed driving scene is shown in FIG. 7: it includes the ego vehicle 11, the lane in which the ego vehicle 11 is located, the left and right lanes of the ego vehicle 11, the relevant targets in the left and right lanes, and the lane lines (color, type). The driving scene displays not only the relevant targets but also their directions; for example, the directions of vehicle 12 and vehicle 13 can be clearly seen in the driving scene.
In one embodiment, the driving scene may include only the targets most relevant to the ego vehicle. For example, the driving scene may display the type, direction, and position of three targets in front of the ego vehicle, one target behind it, and one target on each of its left and right sides.
In one embodiment, the driving scene may include the type, direction, and position of three targets in front of the ego vehicle, one target behind it, one target on each of its left and right sides, one target at its left rear, one at its left front, one at its right rear, and one at its right front.
In this way, the target information most relevant to the ego vehicle is displayed in the driving scene while targets weakly related to the ego vehicle are omitted, making the display more concise and clear. The driver can devote more attention to the most relevant targets, which improves the driver's experience and avoids display redundancy.
Those skilled in the art will understand that while a vehicle is driving, there are sometimes no relevant targets around it; for example, there may be no vehicle within several kilometers ahead of the ego vehicle. To avoid displaying an excessively large scene range, in one embodiment, the driving scene may include only vehicles and pedestrians within a certain range of the ego vehicle. For example, a target may be a vehicle or pedestrian whose lateral distance from the ego vehicle is within X1 and whose longitudinal distance is within Y1. The values of the lateral distance X1 and the longitudinal distance Y1 can be determined according to actual needs.
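The X1/Y1 range restriction above amounts to a rectangular filter around the ego vehicle. The numeric values and coordinate convention below (x lateral, y longitudinal, ego at the origin) are illustrative assumptions; the text leaves X1 and Y1 to be chosen according to actual needs.

```python
# Sketch: keep only targets within lateral distance X1 and longitudinal
# distance Y1 of the ego vehicle, so the displayed scene stays bounded.

X1_M = 8.0    # illustrative lateral half-width of the displayed region
Y1_M = 120.0  # illustrative longitudinal half-length of the displayed region

def in_display_range(x_m, y_m, x1=X1_M, y1=Y1_M):
    return abs(x_m) <= x1 and abs(y_m) <= y1

# (x, y) positions relative to the ego vehicle
targets = [(-3.5, 40.0), (0.0, 500.0), (2.0, -15.0), (30.0, 10.0)]
shown = [t for t in targets if in_display_range(*t)]
```

The target 500 m ahead and the one 30 m to the side fall outside the rectangle and are not displayed.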
FIG. 8 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. The driving scene shown in FIG. 8 includes an intersection, a lane perpendicular to the lane in which the ego vehicle 11 is located, vehicles 12 and 13 located in that perpendicular lane, a speed limit sign 14, a traffic light 15, and the relevant targets around the ego vehicle. The driving scene in FIG. 8 displays not only the relevant targets but also their directions; for example, the heading of vehicle 12 and the heading of vehicle 13 can be clearly seen in the driving scene.
When the ego vehicle is located at a T-junction, the driving scene may include the T-junction, a lane perpendicular to the lane in which the ego vehicle is located, the vehicles in that perpendicular lane, speed limit signs, traffic lights, and the relevant targets around the ego vehicle.
FIG. 9 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. The driving scene shown in FIG. 9 includes a lane merge: other vehicles merge into the lane in which the ego vehicle 11 is located, and the driving scene displays the lane merge, the merging lane, and the relevant vehicles in the merged lane.
In one embodiment, when the vehicle enters a fork lane, the driving scene may display road fork information, the fork lanes, and information about other targets in the same lane as the ego vehicle.
Those skilled in the art will understand that a vehicle sometimes drives in the leftmost lane, in which case the lane to the vehicle's left is the oncoming lane. In one embodiment, the driving scene may also include oncoming lane information. Usually, the oncoming lane does not affect driving; in the driving scene, the oncoming lane can be displayed in a predetermined color, such as gray, and the target information of the oncoming lane is not displayed. In actual driving, targets in the oncoming lane usually do not affect driving, so displaying the oncoming lane in a predetermined color and omitting its target information better matches driving habits and prevents the driver from paying attention to irrelevant road traffic situation information.
In FIG. 7, FIG. 8, and FIG. 9, the displayed targets are rendered as specific vehicle models. Those skilled in the art will understand that the contour information of a target can be obtained from the sensor information and/or the Internet of Vehicles information and processed so that the specific appearance of the target can be displayed in the driving scene. In another embodiment, a dedicated target model library can be established for common targets (vehicles or pedestrians): the type of a target is obtained from the sensor information and/or the Internet of Vehicles information, and the corresponding target model is retrieved from the library and presented in the driving scene. For example, if the target model library includes a Mercedes-Benz model, then when the Mercedes-Benz logo is recognized from the image acquisition device, the Mercedes-Benz model can be retrieved from the library and displayed in the driving scene.
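The model library lookup described above can be sketched as a simple type-to-model mapping with a generic fallback. The model file names and type labels are illustrative placeholders.

```python
# Sketch: retrieve the display model for a recognized target type from a
# target model library, falling back to a generic model for unknown types.

MODEL_LIBRARY = {
    "car": "generic_car.glb",
    "truck": "generic_truck.glb",
    "pedestrian_adult": "adult.glb",
}

def model_for(target_type):
    return MODEL_LIBRARY.get(target_type, "generic_target.glb")

m1 = model_for("truck")
m2 = model_for("tricycle")  # not in the library -> generic fallback
```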
In one embodiment, the road traffic situation information and target information in the sensor information, the Internet of Vehicles information, and the map information are acquired in real time, so the reconstructed driving scene reflects the vehicle's surroundings in real time and can display the vehicle's driving scene in real time, bringing it closer to the vehicle's real driving scene.
In one embodiment, the road sign information, lane line information, road traffic scene information, road traffic abnormality information, congestion status information, and navigation path planning information can be combined to obtain the most optimized navigation path planning information, and the best route for the vehicle can be fitted accordingly.
In one embodiment, based on the fitted best route and the lane in which the ego vehicle is located, the vehicle's next driving path and direction of motion can be displayed in the driving scene, for example indicated by arrows (e.g., turn left, turn right, U-turn, lane change, front-left, or front-right). This makes the reconstructed driving scene more realistic and reliable and provides effective assistance to the driver.
FIG. 10 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 10, the driving scene displays the next driving path and direction of motion (turn left) of the ego vehicle 11, indicated by an arrow.
FIG. 11 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 11, the driving scene displays the next driving path and direction of motion (lane change) of the ego vehicle 11, indicated by an arrow.
FIG. 12 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 12, the driving scene displays the next driving path and direction of motion (turn right onto the ramp) of the ego vehicle 11, indicated by an arrow.
Those skilled in the art will understand that autonomous driving is currently divided into six levels: L0 (no automation), in which the human driver performs all driving and may receive warnings while driving; L1 (driver assistance), in which the system supports either steering or acceleration/deceleration based on the driving environment, with the human performing the rest; L2 (partial automation), in which the system supports multiple operations among steering and acceleration/deceleration based on the driving environment, with the human performing the rest; L3 (conditional automation), in which the automated driving system performs all driving operations and the human provides appropriate responses as the system requires; L4 (high automation), in which the automated driving system performs all driving operations, the human does not necessarily provide all responses, and road and environmental conditions are limited; and L5 (full automation), in which the automated driving system performs all driving operations, the human may take over where possible, and road and environmental conditions are not limited. Different autonomous driving levels require different degrees of driver involvement.
During driving, in order to remind the driver in real time of the driving level of the vehicle, in one embodiment the driving scene reconstruction method may further include:
extracting the autonomous driving level information and level prompt information of the ego vehicle; and
reconstructing the driving scene of the ego vehicle based on the integrated road traffic situation information, the autonomous driving level information, and the level prompt information.
By extracting the autonomous driving level information and level prompt information of the ego vehicle and reconstructing the driving scene based on the integrated road traffic situation information, the autonomous driving level information, and the level prompt information, different driving levels can be clearly distinguished in the driving scene. The driver can thus more realistically perceive the differences and changes between driving levels and grasp the required driving involvement in real time.
In one embodiment, the level prompt information may include at least one of color information, sound information, blinking information, and the like. For example, the level prompt information may be color information, with each autonomous driving level corresponding to one color. Illustratively, the background color of the driving scene may be displayed in the color corresponding to the autonomous driving level. For example, when the autonomous driving level is L1, the background of the driving scene is displayed in light blue; when the level is L2, in light green; and when the level is L3, in light purple.
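The level-to-color prompt can be sketched as a lookup table. The L1/L2/L3 entries follow the example in the text; the fallback color is an assumption.

```python
# Sketch: map the autonomous driving level to the driving scene's
# background color (L1 light blue, L2 light green, L3 light purple,
# per the example above; the default is illustrative).

LEVEL_COLORS = {
    "L1": "light_blue",
    "L2": "light_green",
    "L3": "light_purple",
}

def background_color(level, default="neutral_gray"):
    return LEVEL_COLORS.get(level, default)

c = background_color("L2")
```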
In one embodiment, the color of the ego vehicle may be displayed in the color corresponding to the autonomous driving level.
In one embodiment, the lane line color may be displayed in the color corresponding to the autonomous driving level.
In one embodiment, the level prompt information may be sound information, with each autonomous driving level corresponding to one sound.
Those skilled in the art will understand that the presentation of the level prompt information is not limited to the above manners; as long as the reconstructed driving scene presents level prompt information that lets the driver perceive the vehicle's driving level, it falls within the scope of protection of the present application.
In one embodiment, the level prompt information may be the content displayed in the scene, with different driving levels reflected by displaying different content. Illustratively, different driving levels can be reflected by setting the kinds or number of road traffic situation information elements displayed in the driving scene. For example, at driving level L0, the reconstructed driving scene may display all real-time road traffic situation information. At L1, if only the lateral control function is active, only the lane line information may be displayed; if only the longitudinal control function is active, only the targets in front of or behind the ego vehicle (for example, vehicles) may be displayed. At L2, the lane line information, front targets, and rear targets may be displayed simultaneously. At L3, the lane line information, front targets, rear targets, and road changes may be displayed simultaneously. At L4, the ego vehicle is highlighted and other elements are displayed in simplified form. At L5, only the ego vehicle may be displayed, and scenes with entertainment, meetings, games, and other activities may be added to the driving scene.
Those skilled in the art will understand that autonomous driving can be divided into multiple autonomous driving functions. For example, the autonomous driving functions include: no lateral or longitudinal control; Lane Keep Assist (LKA) and Lane Center Assist (LCA), with lateral control only; longitudinal control only; Highway Assist (HWA), with both lateral and longitudinal control, allowing the driver's hands off the wheel and feet off the pedals; and Traffic Jam Pilot (TJP), with both lateral and longitudinal control, allowing the driver's hands, feet, and eyes off the driving task.
During driving, in order to remind the driver in real time of the driving function the vehicle is using, in one embodiment the driving scene reconstruction method may further include:
extracting the autonomous driving function information and function prompt information of the ego vehicle; and
reconstructing the driving scene of the ego vehicle based on the integrated road traffic situation information, the autonomous driving function information, and the function prompt information.
Such a driving scene reconstruction method can clearly distinguish different driving functions in the driving scene, so that the driver can more realistically perceive the differences and changes between driving functions and grasp the required driving involvement in real time.
In one embodiment, the function prompt information may include at least one of cruise speed information, lane line color information, sound information, blinking information, and the like. For example, the cruise speed information can serve as the prompt for the longitudinal control function, and the lane line color information can serve as the prompt for the lateral control function: changes in cruise speed reflect the longitudinal control function, and changes in lane line color reflect the lateral control function. For example, when there is no lateral or longitudinal control, the lane lines are displayed in white; with the HWA function, in red; and with the TJP function, in yellow.
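The lane-line-color function prompt can likewise be sketched as a lookup table; the three entries follow the example in the text, and the white default for unrecognized states is an assumption.

```python
# Sketch: lane line color as the lateral-function prompt
# (no control -> white, HWA -> red, TJP -> yellow, per the text).

FUNCTION_LANE_COLORS = {
    "none": "white",
    "HWA": "red",
    "TJP": "yellow",
}

def lane_line_color(active_function):
    return FUNCTION_LANE_COLORS.get(active_function, "white")

color = lane_line_color("TJP")
```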
Those skilled in the art will understand that the presentation of the function prompt information is not limited to the above manners; as long as the reconstructed driving scene presents function prompt information that lets the driver perceive the vehicle's driving function, it falls within the scope of protection of the present application.
The driving scene reconstruction method of the present application can acquire, in real time, the road sign information, lane line information, road traffic scene information, road traffic abnormality information, target information, and optimized navigation path planning information from the sensor information, the Internet of Vehicles information, and the map information, and integrate this information; based on the integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information, target information, and optimized navigation path planning information, the driving scene of the ego vehicle is reconstructed. The reconstructed driving scene can therefore adaptively adjust its content according to the actual road environment in which the ego vehicle is located and show the user the information most relevant to the ego vehicle's autonomous driving task. For example, the user can be shown traffic lights and their states, speed limit signs and their values, the lane in which the ego vehicle is located, the lanes of surrounding vehicles, and the color and type of the lane lines. When the road traffic scene changes, the user can be shown an intersection, a road merge, a road fork, a ramp, or the like according to the actual road conditions. When oncoming lane information is displayed in the driving scene, the oncoming lane can be shown in gray and its target information omitted. When the ego vehicle's environment contains road construction, abnormal vehicles, or emergency vehicles that would affect driving, the reconstructed driving scene displays them; moreover, the displayed driving path of the ego vehicle is the optimal driving path. In short, the driving scene reconstruction method of the present application fuses sensor information from ADAS sensors, Internet of Vehicles information from V2X, and map information, enriching the information sources about the surroundings of an autonomous vehicle. By combining the road traffic situation information with the target information around the ego vehicle, the reconstructed driving scene can reflect the ego vehicle's current real driving environment in real time, better meets the actual needs of users in both the autonomous driving state and the ordinary driving state, provides more effective assistance to the driver, and improves driving safety.
It should be noted that although the detailed content of the road traffic situation information has been enumerated, those skilled in the art can understand that the road traffic situation information is not limited to the enumerated content. In fact, users can flexibly set the content of the road traffic situation information obtained from the sensor information, the Internet of Vehicles information, and the map information according to personal preferences and/or actual application scenarios; as long as the driving scene reconstruction method of the present application is used to reconstruct the driving scene of the ego vehicle, it falls within the scope of protection of the present application.
FIG. 13 is a structural block diagram of a driving scene reconstruction apparatus according to an embodiment of the present application. An embodiment of the present application further provides a driving scene reconstruction apparatus. As shown in FIG. 13, the apparatus may include:
an acquisition module 21, configured to acquire road traffic condition information from sensor information, vehicle networking information, and map information;
an integration module 22, connected to the acquisition module 21 and configured to integrate the road traffic condition information from the sensor information, the vehicle networking information, and the map information; and
a reconstruction module 23, connected to the integration module 22 and configured to reconstruct the driving scene of the ego vehicle based on the integrated road traffic condition information.
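The acquisition-integration-reconstruction split above can be sketched as a minimal pipeline. This is a sketch under assumed data shapes; the class names, dictionary keys, and merge policy are illustrative and not prescribed by the application:

```python
# Minimal sketch of the acquisition -> integration -> reconstruction pipeline
# of FIG. 13. Each source contributes a partial view of road traffic
# conditions; integration merges them. All names are illustrative.

class AcquisitionModule:
    def acquire(self, sensor, v2x, map_data):
        # Collect per-source road traffic condition information.
        return {"sensor": sensor, "v2x": v2x, "map": map_data}

class IntegrationModule:
    def integrate(self, info):
        # Merge the per-source dictionaries; later sources fill gaps
        # left by earlier ones (one possible integration policy).
        merged = {}
        for source in ("sensor", "v2x", "map"):
            for key, value in info[source].items():
                merged.setdefault(key, value)
        return merged

class ReconstructionModule:
    def reconstruct(self, integrated):
        # The reconstructed scene is rendered from the integrated information.
        return {"scene": integrated}

def reconstruct_driving_scene(sensor, v2x, map_data):
    info = AcquisitionModule().acquire(sensor, v2x, map_data)
    merged = IntegrationModule().integrate(info)
    return ReconstructionModule().reconstruct(merged)
```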
FIG. 14 is a structural block diagram of the acquisition module of the driving scene reconstruction apparatus according to an embodiment of the present application. In one embodiment, as shown in FIG. 14, the acquisition module 21 may include at least one of the following:
a point cloud data acquisition sub-module 211, configured to receive point cloud data from a radar and parse the point cloud data to obtain road sign information; and
a video stream acquisition sub-module 212, configured to receive a video stream from an image capture device and parse the video stream to obtain road sign information and lane line information.
In one embodiment, the acquisition module 21 may include:
a vehicle networking information acquisition sub-module 213, configured to acquire at least one of road sign information, lane line information, road traffic abnormality information, and congestion information from an on-board unit and/or a roadside unit.
In one embodiment, the acquisition module 21 may include:
a map information acquisition sub-module 214, configured to acquire at least one of road sign information, lane line information, road traffic scene information, and navigation path planning information from the map information.
FIG. 15 is a structural block diagram of the integration module of the driving scene reconstruction apparatus according to an embodiment of the present application. In one embodiment, as shown in FIG. 15, the integration module 22 includes at least one of the following:
a road sign information integration sub-module 221, configured to integrate the road sign information from the sensor information, the vehicle networking information, and the map information;
a lane line information integration sub-module 222, configured to integrate the lane line information from the sensor information, the vehicle networking information, and the map information;
a scene information integration sub-module 223, configured to integrate the road traffic scene information from the map information;
an abnormality information integration sub-module 224, configured to filter and integrate the road traffic abnormality information from the vehicle networking information; and
a navigation path optimization sub-module 225, configured to combine the congestion information from the vehicle networking information with the navigation path planning information from the map information to obtain optimized navigation path planning information.
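Sub-module 225's combination of congestion information with the planned route can be sketched as follows. The route representation (lists of segment identifiers) and the cost model (count of congested segments) are illustrative assumptions, not the application's specified algorithm:

```python
# Illustrative sketch of navigation path optimization (sub-module 225):
# congestion reports received via the vehicle network are used to re-rank
# candidate routes produced by the map's path planner. Data shapes and the
# cost model are assumptions for illustration.

def optimize_route(candidate_routes, congested_segments):
    """Pick the candidate route traversing the fewest congested segments.

    candidate_routes: list of routes, each a list of road segment ids.
    congested_segments: set of segment ids reported congested via V2X.
    """
    def congestion_cost(route):
        return sum(1 for seg in route if seg in congested_segments)
    return min(candidate_routes, key=congestion_cost)
```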
In one embodiment, the road sign information includes traffic light information and/or speed limit sign information.
In one embodiment, the lane line information includes at least one of: information on the lane the ego vehicle occupies, information on the lanes occupied by surrounding vehicles, lane line color information, lane line type information, and lane line position information.
In one embodiment, the road traffic scene information includes at least one of intersection information, road merge information, road fork information, and ramp information.
In one embodiment, the road traffic abnormality information includes at least one of road construction information, abnormal vehicle information, and emergency vehicle information.
In one embodiment, the reconstruction module 23 is configured to reconstruct the driving scene of the ego vehicle based on at least one of the integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information, and optimized navigation path planning information.
In one embodiment, the acquisition module 21 is further configured to acquire target information from the sensor information and the vehicle networking information; the integration module 22 is further configured to integrate the target information from the sensor information and the vehicle networking information; and the reconstruction module 23 is configured to reconstruct the driving scene of the ego vehicle based on the integrated road traffic condition information and the integrated target information.
In one embodiment, the target information includes at least one of size information, type information, position information, and direction information of a target.
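Integrating target information from the sensor and vehicle networking sources amounts to de-duplicating detections of the same physical object. The sketch below illustrates one possible policy; the field names and the 2-meter association threshold are assumptions, not values given in the application:

```python
# Illustrative sketch of merging target lists from ADAS sensors and V2X:
# two detections closer than a distance threshold are treated as the same
# physical target, with the sensor detection preferred. Field names and
# the 2 m threshold are assumptions for illustration.
import math

def merge_targets(sensor_targets, v2x_targets, threshold_m=2.0):
    merged = list(sensor_targets)
    for cand in v2x_targets:
        duplicate = any(
            math.dist(cand["pos"], t["pos"]) < threshold_m for t in merged
        )
        if not duplicate:
            merged.append(cand)
    return merged
```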
In one embodiment, as shown in FIG. 13, the driving scene reconstruction apparatus may further include:
an extraction module 24, configured to extract automated driving level information and level prompt information of the ego vehicle;
where the reconstruction module 23 is configured to reconstruct the driving scene of the ego vehicle based on the integrated road traffic condition information, the automated driving level information, and the level prompt information.
In one embodiment, the driving scene reconstruction apparatus may further include:
an extraction module 24, configured to extract automated driving function information and function prompt information of the ego vehicle;
where the reconstruction module 23 is configured to reconstruct the driving scene of the ego vehicle based on the integrated road traffic condition information, the automated driving function information, and the function prompt information.
For the functions of the modules in the embodiments of the present application, reference may be made to the corresponding descriptions in the foregoing methods, and details are not repeated here.
FIG. 16 is a structural block diagram of a driving scene reconstruction system according to an embodiment of the present application. An embodiment of the present application further provides a driving scene reconstruction system. As shown in FIG. 16, the system includes the driving scene reconstruction apparatus 20 described above and may further include:
a sensor 31, connected to the driving scene reconstruction apparatus 20 and configured to collect sensor information and output it to the apparatus 20;
a vehicle networking device 32, connected to the driving scene reconstruction apparatus 20 and configured to output vehicle networking information to the apparatus;
a map device 33, connected to the driving scene reconstruction apparatus 20 and configured to output map information to the apparatus; and
a display device 34, connected to the driving scene reconstruction apparatus and configured to receive data of the reconstructed driving scene from the apparatus and display the reconstructed driving scene.
In one embodiment, the sensor 31, the vehicle networking device 32, and the map device 33 are each connected to the acquisition module 21 in the driving scene reconstruction apparatus 20, and the display device 34 may be connected to the reconstruction module in the driving scene reconstruction apparatus 20. Those skilled in the art will understand that a "connection" here is an electrical connection, which may be a CAN connection, a WiFi connection, a network connection, or the like.
In one embodiment, the driving scene reconstruction apparatus may be a controller that integrates the acquisition module, the integration module, the extraction module, and the reconstruction module, and the display device may be an instrument-cluster controller with a display function in the vehicle. In another embodiment, the controller integrates the acquisition module, the integration module, and the extraction module, and the instrument-cluster controller implements both the reconstruction function and the display function.
For the functions of the modules in the apparatuses of the embodiments of the present application, reference may be made to the corresponding descriptions in the foregoing methods, and details are not repeated here.
An embodiment of the present application further provides a vehicle. In one embodiment, the vehicle may include the driving scene reconstruction apparatus described above. In another embodiment, the vehicle may include the driving scene reconstruction system described above.
FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present application. An embodiment of the present application further provides an electronic device. As shown in FIG. 17, the electronic device includes at least one processor 920 and a memory 910 communicatively connected to the at least one processor 920. The memory 910 stores instructions executable by the at least one processor 920; when the processor 920 executes the instructions, the driving scene reconstruction method of the foregoing embodiments is implemented. There may be one or more memories 910 and processors 920. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. It may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are examples only and are not intended to limit the implementations of the application described and/or claimed herein.
The electronic device may further include a communication interface 930 for communicating with external devices and exchanging data. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as required. The processor 920 may process instructions executed within the electronic device, including instructions stored in or on the memory for displaying graphical information of a graphical user interface (GUI) on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 17, but this does not mean there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 910, the processor 920, and the communication interface 930 are integrated on one chip, they may communicate with one another through an internal interface.
It should be understood that the above processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. Notably, the processor may be a processor supporting the Advanced RISC Machines (ARM) architecture.
An embodiment of the present application provides a computer-readable storage medium (such as the memory 910 described above) storing computer instructions that, when executed by a processor, implement the methods provided in the embodiments of the present application.
Optionally, the memory 910 may include a program storage area and a data storage area. The program storage area may store an operating system and an application required by at least one function; the data storage area may store data created by the use of the electronic device implementing the driving scene reconstruction method, and the like. In addition, the memory 910 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 910 may optionally include memory located remotely from the processor 920; such remote memories may be connected over a network to the electronic device implementing the driving scene reconstruction method. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
In the description of this specification, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, provided they do not contradict one another, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means two or more, unless expressly and specifically defined otherwise.
Any description of a process or method in a flowchart or otherwise described herein may be understood to represent a module, segment, or portion of code comprising executable instructions of one or more steps for implementing a specific logical function or process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functions involved.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them).
It should be understood that various parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods in the above embodiments may be completed by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, may exist physically as separate units, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can, within the technical scope disclosed in the present application, readily conceive of various changes or substitutions, which shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (32)

  1. A driving scene reconstruction method, comprising:
    acquiring road traffic condition information from sensor information, vehicle networking information, and map information;
    integrating the road traffic condition information from the sensor information, the vehicle networking information, and the map information; and
    reconstructing a driving scene of an ego vehicle based on the integrated road traffic condition information.
  2. The method according to claim 1, wherein acquiring the road traffic condition information from the sensor information comprises at least one of:
    receiving point cloud data from a radar and parsing the point cloud data to obtain road sign information; and
    receiving a video stream from an image capture device and parsing the video stream to obtain road sign information and lane line information.
  3. The method according to claim 1, wherein acquiring the road traffic condition information from the vehicle networking information comprises:
    acquiring at least one of road sign information, lane line information, road traffic abnormality information, and congestion information from an on-board unit and/or a roadside unit.
  4. The method according to claim 1, wherein acquiring the road traffic condition information from the map information comprises:
    acquiring at least one of road sign information, lane line information, road traffic scene information, and navigation path planning information from the map information.
  5. The method according to claim 1, wherein integrating the road traffic condition information from the sensor information, the vehicle networking information, and the map information comprises at least one of:
    integrating the road sign information from the sensor information, the vehicle networking information, and the map information;
    integrating the lane line information from the sensor information, the vehicle networking information, and the map information;
    integrating the road traffic scene information from the map information;
    filtering and integrating the road traffic abnormality information from the vehicle networking information; and
    combining the congestion information from the vehicle networking information with the navigation path planning information from the map information to obtain optimized navigation path planning information.
  6. The method according to any one of claims 2 to 5, wherein the road sign information comprises traffic light information and/or speed limit sign information.
  7. The method according to any one of claims 2 to 5, wherein the lane line information comprises at least one of: information on the lane the ego vehicle occupies, information on the lanes occupied by surrounding vehicles, lane line color information, lane line type information, and lane line position information.
  8. The method according to claim 4 or 5, wherein the road traffic scene information comprises at least one of intersection information, road merge information, road fork information, and ramp information.
  9. The method according to claim 3 or 5, wherein the road traffic abnormality information comprises at least one of road construction information, abnormal vehicle information, and emergency vehicle information.
  10. The method according to any one of claims 1 to 5, wherein reconstructing the driving scene of the ego vehicle based on the integrated road traffic condition information comprises:
    reconstructing the driving scene of the ego vehicle based on at least one of the integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information, and optimized navigation path planning information.
  11. The method according to any one of claims 1 to 5, further comprising:
    acquiring target information from the sensor information and the vehicle networking information;
    integrating the target information from the sensor information and the vehicle networking information; and
    reconstructing the driving scene of the ego vehicle based on the integrated road traffic condition information and the integrated target information.
  12. The method according to claim 11, wherein the target information comprises at least one of size information, type information, position information, and direction information of a target.
  13. The method according to any one of claims 1 to 5, further comprising:
    extracting automated driving level information and level prompt information of the ego vehicle; and
    reconstructing the driving scene of the ego vehicle based on the integrated road traffic condition information, the automated driving level information, and the level prompt information.
  14. The method according to any one of claims 1 to 5, further comprising:
    extracting automated driving function information and function prompt information of the ego vehicle; and
    reconstructing the driving scene of the ego vehicle based on the integrated road traffic condition information, the automated driving function information, and the function prompt information.
  15. A driving scene reconstruction apparatus, comprising:
    an acquisition module, configured to acquire road traffic condition information from sensor information, vehicle networking information, and map information;
    an integration module, configured to integrate the road traffic condition information from the sensor information, the vehicle networking information, and the map information; and
    a reconstruction module, configured to reconstruct a driving scene of an ego vehicle based on the integrated road traffic condition information.
  16. The apparatus according to claim 15, wherein the acquisition module comprises at least one of:
    a point cloud data acquisition sub-module, configured to receive point cloud data from a radar and parse the point cloud data to obtain road sign information; and
    a video stream acquisition sub-module, configured to receive a video stream from an image capture device and parse the video stream to obtain road sign information and lane line information.
  17. The apparatus according to claim 15, wherein the acquisition module comprises:
    a vehicle networking information acquisition sub-module, configured to acquire at least one of road sign information, lane line information, road traffic abnormality information, and congestion information from an on-board unit and/or a roadside unit.
  18. The apparatus according to claim 15, wherein the acquisition module comprises:
    a map information acquisition sub-module, configured to acquire at least one of road sign information, lane line information, road traffic scene information, and navigation path planning information from the map information.
  19. The apparatus according to claim 15, wherein the integration module comprises at least one of the following:
    a road sign information integration sub-module, configured to integrate the road sign information in the sensor information, the vehicle networking information, and the map information;
    a lane line information integration sub-module, configured to integrate the lane line information in the sensor information, the vehicle networking information, and the map information;
    a scene information integration sub-module, configured to integrate the road traffic scene information in the map information;
    an abnormality information integration sub-module, configured to screen and integrate the road traffic abnormality information in the vehicle networking information;
    a navigation path optimization sub-module, configured to combine the congestion status information in the vehicle networking information with the navigation path planning information in the map information to obtain optimized navigation path planning information.
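Two of the integration sub-modules in claim 19 can be sketched as follows. The merge priority (sensor over vehicle networking over map) and the segment-based path model are assumptions for illustration; the claim does not prescribe any particular fusion rule.

```python
# Hypothetical sketch of two integration sub-modules from claim 19.
# Merge rules and record formats are invented for illustration.

def integrate_road_signs(sensor_signs, v2x_signs, map_signs):
    """Merge road sign records from the three sources, keyed by sign id."""
    merged = {}
    # Later sources overwrite earlier ones, so priority is map < V2X < sensor.
    for source in (map_signs, v2x_signs, sensor_signs):
        for sign in source:
            merged[sign["id"]] = sign
    return merged


def optimize_navigation_path(planned_path, congestion_info):
    """Drop congested segments from the planned path (a deliberately crude rule)."""
    congested = {c["segment"] for c in congestion_info}
    return [seg for seg in planned_path if seg not in congested]
```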
  20. The apparatus according to any one of claims 16 to 19, wherein the road sign information comprises traffic light information and/or speed limit sign information.
  21. The apparatus according to any one of claims 16 to 19, wherein the lane line information comprises at least one of information on the lane in which the ego vehicle is located, information on the lanes in which surrounding vehicles are located, color information of lane lines, type information of lane lines, and position information of lane lines.
  22. The apparatus according to claim 18 or 19, wherein the road traffic scene information comprises at least one of intersection information, road merging information, road bifurcation information, and ramp information.
  23. The apparatus according to claim 17 or 19, wherein the road traffic abnormality information comprises at least one of road construction information, abnormal vehicle information, and emergency vehicle information.
  24. The apparatus according to any one of claims 15 to 19, wherein the reconstruction module is configured to reconstruct the driving scene of the ego vehicle based on at least one of the integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information, and optimized navigation path planning information.
  25. The apparatus according to any one of claims 15 to 19, wherein:
    the acquisition module is further configured to acquire target information in the sensor information and the vehicle networking information;
    the integration module is further configured to integrate the target information in the sensor information and the vehicle networking information; and
    the reconstruction module is configured to reconstruct the driving scene of the ego vehicle based on the integrated road traffic situation information and the integrated target information.
  26. The apparatus according to claim 25, wherein the target information comprises at least one of size information, type information, position information, and direction information of a target.
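The target information enumerated in claim 26, and its integration across sensor and vehicle networking sources (claim 25), might be modeled as below. Field names, units, and the coarse-position de-duplication rule are all assumptions; the claims only list the information categories.

```python
from dataclasses import dataclass

# Hypothetical representation of the target information listed in claim 26
# (size, type, position, direction). Everything concrete here is assumed.

@dataclass
class TargetInfo:
    size: tuple       # (length, width, height) in metres
    type: str         # e.g. "car", "pedestrian"
    position: tuple   # (x, y) in the ego-vehicle frame, metres
    direction: float  # heading in degrees relative to the ego vehicle


def integrate_targets(sensor_targets, v2x_targets):
    """Naive fusion: keep one record per (type, coarse position) cell,
    preferring the on-board sensor observation when both sources report it."""
    merged = {}
    for t in v2x_targets + sensor_targets:  # sensor records come last, so they win
        key = (t.type, round(t.position[0]), round(t.position[1]))
        merged[key] = t
    return list(merged.values())
```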
  27. The apparatus according to any one of claims 15 to 19, further comprising:
    an extraction module, configured to extract automated driving level information and level prompt information of the ego vehicle;
    wherein the reconstruction module is configured to reconstruct the driving scene of the ego vehicle based on the integrated road traffic situation information, the automated driving level information, and the level prompt information.
  28. The apparatus according to any one of claims 15 to 19, further comprising:
    an extraction module, configured to extract automated driving function information and function prompt information of the ego vehicle;
    wherein the reconstruction module is configured to reconstruct the driving scene of the ego vehicle based on the integrated road traffic situation information, the automated driving function information, and the function prompt information.
  29. A driving scene reconstruction system, comprising the driving scene reconstruction apparatus according to any one of claims 15 to 28, the system further comprising:
    a sensor, connected to the driving scene reconstruction apparatus and configured to collect sensor information and output it to the driving scene reconstruction apparatus;
    a vehicle networking device, connected to the driving scene reconstruction apparatus and configured to output vehicle networking information to the driving scene reconstruction apparatus;
    a map device, connected to the driving scene reconstruction apparatus and configured to output map information to the driving scene reconstruction apparatus;
    a display device, connected to the driving scene reconstruction apparatus and configured to receive data of the reconstructed driving scene from the driving scene reconstruction apparatus and display the reconstructed driving scene.
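The system topology of claim 29 (three input devices feeding the reconstruction apparatus, which drives the display) can be wired up as a minimal sketch. All class and method names are invented for illustration; a real apparatus would integrate and render rather than merely bundle its inputs.

```python
# Hypothetical wiring of the system in claim 29: sensor, vehicle networking and
# map devices feed the reconstruction apparatus, which sends the result to the
# display. Names and data shapes are assumptions.

class DrivingSceneReconstructor:
    def __init__(self):
        self.inputs = {}

    def feed(self, source, data):
        # Each connected device pushes its latest information here.
        self.inputs[source] = data

    def reconstruct(self):
        # A real apparatus would fuse and render; this sketch just bundles
        # the latest data from every connected source.
        return {"scene": dict(self.inputs)}


class Display:
    def __init__(self):
        self.shown = None

    def show(self, scene):
        self.shown = scene


reconstructor = DrivingSceneReconstructor()
for source, data in [("sensor", ["lane_lines"]),
                     ("vehicle_networking", ["congestion_ahead"]),
                     ("map", ["ramp_in_500m"])]:
    reconstructor.feed(source, data)

display = Display()
display.show(reconstructor.reconstruct())
```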
  30. A vehicle, comprising the driving scene reconstruction apparatus according to any one of claims 15 to 28, or comprising the driving scene reconstruction system according to claim 29.
  31. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to perform the method according to any one of claims 1 to 14.
  32. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the method according to any one of claims 1 to 14.
PCT/CN2021/085226 2020-07-16 2021-04-02 Driving scene reconstruction method and apparatus, system, vehicle, device, and storage medium WO2022012094A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010685995.5A CN111880533B (en) 2020-07-16 2020-07-16 Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
CN202010685995.5 2020-07-16

Publications (1)

Publication Number Publication Date
WO2022012094A1

Family

ID=73155618

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/085226 WO2022012094A1 (en) 2020-07-16 2021-04-02 Driving scene reconstruction method and apparatus, system, vehicle, device, and storage medium

Country Status (2)

Country Link
CN (1) CN111880533B (en)
WO (1) WO2022012094A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114413952A (en) * 2022-01-29 2022-04-29 重庆长安汽车股份有限公司 Test method for scene reconstruction of automobile instrument
CN114513887A (en) * 2022-02-15 2022-05-17 遥相科技发展(北京)有限公司 Driving assistance method and system based on Internet of vehicles
CN115474176A (en) * 2022-08-22 2022-12-13 武汉大学 Interaction method and equipment for vehicle-road-cloud three-terminal data in automatic driving map
CN115523939A (en) * 2022-09-21 2022-12-27 合肥工业大学智能制造技术研究院 Driving information visualization system based on cognitive map
CN116046014A (en) * 2023-03-31 2023-05-02 小米汽车科技有限公司 Track planning method, track planning device, electronic equipment and readable storage medium
CN116052454A (en) * 2023-01-06 2023-05-02 中国第一汽车股份有限公司 Vehicle driving data determining method and device and electronic equipment
CN116608879A (en) * 2023-05-19 2023-08-18 亿咖通(湖北)技术有限公司 Information display method, apparatus, storage medium, and program product

Families Citing this family (15)

Publication number Priority date Publication date Assignee Title
CN111880533B (en) * 2020-07-16 2023-03-24 华人运通(上海)自动驾驶科技有限公司 Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
CN112590670A (en) * 2020-12-07 2021-04-02 安徽江淮汽车集团股份有限公司 Three-lane environment display method, device, equipment and storage medium
CN112560253B (en) * 2020-12-08 2023-02-24 中国第一汽车股份有限公司 Method, device and equipment for reconstructing driving scene and storage medium
CN112612287B (en) * 2020-12-28 2022-03-15 清华大学 System, method, medium and device for planning local path of automatic driving automobile
CN113033684A (en) * 2021-03-31 2021-06-25 浙江吉利控股集团有限公司 Vehicle early warning method, device, equipment and storage medium
CN113460086B (en) * 2021-06-30 2022-08-09 重庆长安汽车股份有限公司 Control system, method, vehicle and storage medium for automatically driving to enter ramp
CN113547979A (en) * 2021-07-01 2021-10-26 深圳元戎启行科技有限公司 Vehicle behavior information prompting method and device, computer equipment and storage medium
CN113706870B (en) * 2021-08-30 2022-06-10 广州文远知行科技有限公司 Method for collecting main vehicle lane change data in congested scene and related equipment
CN113947893A (en) * 2021-09-03 2022-01-18 网络通信与安全紫金山实验室 Method and system for restoring driving scene of automatic driving vehicle
CN114013452B (en) * 2021-09-29 2024-02-06 江铃汽车股份有限公司 Automatic driving control method, system, readable storage medium and vehicle
CN114291102A (en) * 2021-12-13 2022-04-08 浙江华锐捷技术有限公司 Auxiliary driving strategy fusion method, system, vehicle and readable storage medium
CN114170803B (en) * 2021-12-15 2023-06-16 阿波罗智联(北京)科技有限公司 Road side sensing system and traffic control method
CN114104002B (en) * 2021-12-21 2023-11-21 华人运通(江苏)技术有限公司 Automatic driving system monitoring method, device, equipment and storage medium
CN114435403B (en) * 2022-02-22 2023-11-03 重庆长安汽车股份有限公司 Navigation positioning checking system and method based on environment information
CN115115779A (en) * 2022-06-28 2022-09-27 重庆长安汽车股份有限公司 Road scene reconstruction method, system, equipment and medium

Citations (6)

Publication number Priority date Publication date Assignee Title
US20190130765A1 (en) * 2017-10-31 2019-05-02 Cummins Inc. Sensor fusion and information sharing using inter-vehicle communication
CN110083163A (en) * 2019-05-20 2019-08-02 三亚学院 A kind of 5G C-V2X bus or train route cloud cooperation perceptive method and system for autonomous driving vehicle
CN110758243A (en) * 2019-10-31 2020-02-07 的卢技术有限公司 Method and system for displaying surrounding environment in vehicle driving process
CN110926487A (en) * 2018-09-19 2020-03-27 阿里巴巴集团控股有限公司 Driving assistance method, driving assistance system, computing device, and storage medium
CN111402588A (en) * 2020-04-10 2020-07-10 河北德冠隆电子科技有限公司 High-precision map rapid generation system and method for reconstructing abnormal roads based on space-time trajectory
CN111880533A (en) * 2020-07-16 2020-11-03 华人运通(上海)自动驾驶科技有限公司 Driving scene reconstruction method, device, system, vehicle, equipment and storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
WO2016151750A1 (en) * 2015-03-24 2016-09-29 パイオニア株式会社 Map information storage device, automatic drive control device, control method, program, and storage medium
JP6625148B2 (en) * 2018-02-09 2019-12-25 本田技研工業株式会社 Self-driving vehicle and vehicle control method
CN110069064B (en) * 2019-03-19 2021-01-29 驭势科技(北京)有限公司 Method for upgrading automatic driving system, automatic driving system and vehicle-mounted equipment
CN110232257B (en) * 2019-07-02 2020-10-23 吉林大学 Construction method of automatic driving test scene and difficulty coefficient calculation method thereof

Cited By (11)

Publication number Priority date Publication date Assignee Title
CN114413952A (en) * 2022-01-29 2022-04-29 重庆长安汽车股份有限公司 Test method for scene reconstruction of automobile instrument
CN114413952B (en) * 2022-01-29 2023-06-16 重庆长安汽车股份有限公司 Test method for automobile instrument scene reconstruction
CN114513887A (en) * 2022-02-15 2022-05-17 遥相科技发展(北京)有限公司 Driving assistance method and system based on Internet of vehicles
CN115474176A (en) * 2022-08-22 2022-12-13 武汉大学 Interaction method and equipment for vehicle-road-cloud three-terminal data in automatic driving map
CN115474176B (en) * 2022-08-22 2024-03-08 武汉大学 Interaction method and device for vehicle-road-cloud three-terminal data in automatic driving map
CN115523939A (en) * 2022-09-21 2022-12-27 合肥工业大学智能制造技术研究院 Driving information visualization system based on cognitive map
CN115523939B (en) * 2022-09-21 2023-10-20 合肥工业大学智能制造技术研究院 Driving information visualization system based on cognitive map
CN116052454A (en) * 2023-01-06 2023-05-02 中国第一汽车股份有限公司 Vehicle driving data determining method and device and electronic equipment
CN116046014A (en) * 2023-03-31 2023-05-02 小米汽车科技有限公司 Track planning method, track planning device, electronic equipment and readable storage medium
CN116046014B (en) * 2023-03-31 2023-06-30 小米汽车科技有限公司 Track planning method, track planning device, electronic equipment and readable storage medium
CN116608879A (en) * 2023-05-19 2023-08-18 亿咖通(湖北)技术有限公司 Information display method, apparatus, storage medium, and program product

Also Published As

Publication number Publication date
CN111880533B (en) 2023-03-24
CN111880533A (en) 2020-11-03

Similar Documents

Publication Publication Date Title
WO2022012094A1 (en) Driving scene reconstruction method and apparatus, system, vehicle, device, and storage medium
US11789445B2 (en) Remote control system for training deep neural networks in autonomous machine applications
JP7399164B2 (en) Object detection using skewed polygons suitable for parking space detection
US10810872B2 (en) Use sub-system of autonomous driving vehicles (ADV) for police car patrol
CN111123933B (en) Vehicle track planning method and device, intelligent driving area controller and intelligent vehicle
US10625676B1 (en) Interactive driving system and method
CN111915915A (en) Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
DE102021117456A1 (en) SYSTEMS AND PROCEDURES FOR RISK ASSESSMENT AND DIRECTIONAL WARNING FOR PEDESTRIAN CROSSINGS
DE102018104801A1 (en) SUPPORT FOR DRIVERS ON TRIP CHANGES
CN111133448A (en) Controlling autonomous vehicles using safe arrival times
US20210191394A1 (en) Systems and methods for presenting curated autonomy-system information of a vehicle
CN112543876B (en) System for sensor synchronicity data analysis in an autonomous vehicle
WO2022134364A1 (en) Vehicle control method, apparatus and system, device, and storage medium
DE102019113114A1 (en) BEHAVIOR-CONTROLLED ROUTE PLANNING IN AUTONOMOUS MACHINE APPLICATIONS
US20240037964A1 (en) Systems and methods for performing operations in a vehicle using gaze detection
JP2020053046A (en) Driver assistance system and method for displaying traffic information
US11285974B2 (en) Vehicle control system and vehicle
EP3627110B1 (en) Method for planning trajectory of vehicle
US11338819B2 (en) Cloud-based vehicle calibration system for autonomous driving
US20220073104A1 (en) Traffic accident management device and traffic accident management method
JP2022132075A (en) Ground Truth Data Generation for Deep Neural Network Perception in Autonomous Driving Applications
DE102020131353A1 (en) FACIAL ANALYSIS BASED ON A NEURAL NETWORK USING FACE LANDMARKS AND RELATED TRUSTED VALUES
CN114475656B (en) Travel track prediction method, apparatus, electronic device and storage medium
WO2021053763A1 (en) Driving assistance device, driving assistance method, and program
US20220324490A1 (en) System and method for providing an rnn-based human trust model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21842429

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14-06-2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21842429

Country of ref document: EP

Kind code of ref document: A1