WO2022012094A1 - Driving scene reconstruction method and apparatus, system, vehicle, device, and recording medium

Driving scene reconstruction method and apparatus, system, vehicle, device, and recording medium

Info

Publication number
WO2022012094A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
vehicle
road traffic
driving scene
road
Prior art date
Application number
PCT/CN2021/085226
Other languages
English (en)
Chinese (zh)
Inventor
丁磊
朱兰芹
何磊
胡健
Original Assignee
华人运通(上海)自动驾驶科技有限公司
Priority date
Filing date
Publication date
Application filed by 华人运通(上海)自动驾驶科技有限公司
Publication of WO2022012094A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
    • G05D1/028 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal

Definitions

  • the present application relates to the technical field of automatic driving, and in particular, to a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium.
  • A self-driving car is also known as a driverless car, computer-driven car, or wheeled mobile robot.
  • The driving scene reconstruction system presents the surroundings of the vehicle to the driver, so that the driver can clearly understand the situation around the vehicle while remaining relaxed.
  • When the sensors and other components used in autonomous driving work normally, scene reconstruction can be realized and driving assistance provided to the driver.
  • However, the driving scene information provided by such a driving scene reconstruction system is limited, which degrades the automatic driving experience and leaves latent safety hazards.
  • The embodiments of the present application provide a driving scene reconstruction method, apparatus, system, vehicle, electronic device, and computer-readable storage medium to solve the problems existing in the related art. The technical solutions are as follows:
  • an embodiment of the present application provides a driving scene reconstruction method, including:
  • the driving scene of the self-vehicle is reconstructed.
  • an embodiment of the present application provides a driving scene reconstruction device, including:
  • the acquisition module is used to acquire the road traffic situation information in the sensor information, the Internet of Vehicles information and the map information;
  • an integration module configured to integrate the road traffic situation information in the sensor information, the Internet of Vehicles information, and the map information;
  • the reconstruction module is used to reconstruct the driving scene of the self-vehicle based on the integrated road traffic situation information.
  • an embodiment of the present application provides a driving scene reconstruction system, including the driving scene reconstruction device described above, and the system further includes:
  • a sensor connected to the driving scene reconstruction device, for collecting and outputting sensor information to the driving scene reconstruction device;
  • an Internet of Vehicles device connected to the driving scene reconstruction device, and used for outputting the Internet of Vehicles information to the driving scene reconstruction device;
  • a map device connected to the driving scene reconstruction device, for outputting map information to the driving scene reconstruction device;
  • a display device connected to the driving scene reconstruction device, for receiving data of the reconstructed driving scene from the driving scene reconstruction device and displaying the reconstructed driving scene.
  • an embodiment of the present application provides a vehicle, which includes the above-mentioned driving scene reconstruction device, or includes the above-mentioned driving scene reconstruction system.
  • an embodiment of the present application provides an electronic device, which includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, so that the at least one processor can execute the above driving scene reconstruction method.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and when the computer instructions are executed on a computer, the method in any one of the implementation manners of the above aspects is executed.
  • the driving scene reconstruction method integrates sensor information, Internet of Vehicles information, and map information, enriches the information sources for the surrounding environment of the autonomous vehicle, and combines road traffic situation information, so that the reconstructed driving scene is closer to the real driving environment, can provide more effective assistance for the driver, and improves driving safety.
  • FIG. 1 is a schematic block diagram of a driving scene reconstruction system according to an exemplary embodiment
  • Fig. 2 shows the processing procedure of the controller in Fig. 1;
  • FIG. 3 shows the types of information processed by the controller;
  • FIG. 4 is a schematic flowchart of a driving scene reconstruction method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of road traffic situation information in a driving scene reconstruction method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of target information in a driving scene reconstruction method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 9 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 10 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 11 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 12 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • FIG. 13 is a structural block diagram of a driving scene reconstruction device according to an embodiment of the present application.
  • FIG. 14 is a structural block diagram of an acquisition module of a driving scene reconstruction device according to an embodiment of the present application.
  • FIG. 15 is a structural block diagram of an integration module of a driving scene reconstruction device according to an embodiment of the present application.
  • FIG. 16 is a structural block diagram of a driving scene reconstruction system according to an embodiment of the present application.
  • FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present application.
  • ADAS sensors can collect environmental data inside and outside the car through various on-board sensors and perform technical processing such as identification, detection, and tracking of static and dynamic objects, so that drivers can detect possible dangers in the shortest time and take corresponding measures, improving driving safety.
  • Sensors include, but are not limited to, one or more image acquisition devices (eg, cameras), inertial measurement units, radars, and the like.
  • the image acquisition device can be used to collect target information, road marking information and lane line information of the surrounding environment of the autonomous vehicle.
  • the inertial measurement unit can sense position and orientation changes of the autonomous vehicle based on inertial acceleration.
  • Radar can use radio signals to sense objects and road signs within the local environment of an autonomous vehicle.
  • the radar unit may additionally sense the speed and/or heading of the target.
  • the image capture device may include one or more devices for capturing images of the environment surrounding the autonomous vehicle.
  • the image capture device may be a still camera and/or a video camera.
  • the cameras may include infrared cameras. The camera may be moved mechanically, eg by mounting the camera on a rotating and/or tilting platform.
  • Sensors may also include, for example, sonar sensors, infrared sensors, steering sensors, accelerator sensors, brake sensors, audio sensors (eg, microphones), and the like. Audio sensors can be configured to pick up sound from the environment surrounding the autonomous vehicle.
  • the steering sensor may be configured to sense the steering angle of the steering wheel, the wheels of the vehicle, or a combination thereof.
  • the accelerator sensor and the brake sensor sense the accelerator position and the brake position of the vehicle, respectively. In some cases, the accelerator sensor and brake sensor may be integrated into an integrated accelerator/brake sensor.
  • V2X (vehicle-to-everything) is also referred to as the Internet of Vehicles.
  • V2X communication is a key technology for realizing environmental perception, information interaction, and collaborative control in the Internet of Vehicles. It adopts various communication technologies to realize vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-person (V2P) interconnection, and makes effective use of information extracted and shared on the information network platform to manage and control vehicles and provide comprehensive services. In this way, a series of road traffic information such as real-time road conditions, road sign information, lane line information, and target information can be obtained, thereby improving driving safety, reducing congestion, improving traffic efficiency, and providing in-vehicle entertainment information.
  • V2V vehicle-to-vehicle
  • V2I vehicle-to-infrastructure
  • FIG. 1 is a schematic block diagram of a driving scene reconstruction system according to an exemplary embodiment.
  • The controller can receive raw data from the sensors. It can also receive, from V2X, road information (also called lane line information), traffic sign information (also referred to as road sign information), and information about surrounding vehicles and/or pedestrians and other traffic participants (also referred to as target information). It can further receive, from the map device, road information (also referred to as lane line information), traffic sign information (also referred to as road sign information), own-vehicle position information, navigation information (also referred to as navigation route planning information), and the like.
  • After the controller receives the sensor information, the Internet of Vehicles information, and the map information, it obtains road traffic condition information from the received information and integrates it. The integrated road traffic condition information, such as lane line information, traffic sign information (also known as road sign information), target information (including target type, direction, location, alarm, etc.), and the vehicle's trajectory, is then transmitted to the instrument controller.
  • the instrument controller processes the received information into driving scene data, and transmits the driving scene data to the instrument display device to display the reconstructed driving scene.
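  • As a rough illustration of this controller-to-instrument-controller flow, the following Python sketch shows road traffic condition information being extracted from each source and integrated before being handed off for display. All class, function, and field names here are illustrative assumptions, not interfaces from the patent.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class RoadTrafficInfo:
    lane_lines: List[Dict[str, Any]] = field(default_factory=list)
    traffic_signs: List[Dict[str, Any]] = field(default_factory=list)
    targets: List[Dict[str, Any]] = field(default_factory=list)  # type, direction, position, alarm

def extract_road_traffic_info(source: Dict[str, Any]) -> RoadTrafficInfo:
    """Pull the road-traffic-related fields out of one information source."""
    return RoadTrafficInfo(
        lane_lines=source.get("lane_lines", []),
        traffic_signs=source.get("traffic_signs", []),
        targets=source.get("targets", []),
    )

def integrate(parts: List[RoadTrafficInfo]) -> RoadTrafficInfo:
    """Naive integration: concatenate everything (the embodiments select per field by accuracy)."""
    merged = RoadTrafficInfo()
    for p in parts:
        merged.lane_lines += p.lane_lines
        merged.traffic_signs += p.traffic_signs
        merged.targets += p.targets
    return merged

# Controller step: receive sensor, V2X, and map information, integrate, hand off for display.
sensor_info = {"targets": [{"type": "car", "position": (12.0, 0.5)}]}
v2x_info = {"traffic_signs": [{"kind": "speed_limit", "value": 50}]}
map_info = {"lane_lines": [{"type": "solid", "color": "white"}]}

integrated = integrate([extract_road_traffic_info(s) for s in (sensor_info, v2x_info, map_info)])
print(integrated)  # the instrument controller would turn this into scene data and display it
```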
  • the road traffic condition information may include road sign information, lane line information, road traffic abnormal condition information, congestion condition information, road traffic scene information, navigation path planning information, target information, and the like.
  • the targets include traffic participants such as vehicles and pedestrians around the ego vehicle.
  • FIG. 2 shows the processing procedure of the controller in FIG. 1
  • FIG. 3 shows the types of information processed by the controller.
  • the sensors may include, for example, radars and image acquisition devices, among others. Radar can use radio signals to sense target information and road marking information in the surrounding environment of autonomous vehicles, and generate point cloud data such as point cloud maps. Image capture devices such as cameras can be used to capture road signs, lane lines, and objects in the surrounding environment to generate video streams.
  • the controller may include a classification processing module and an information fusion module (also referred to as an integration module).
  • the classification processing module may include a target information identification module, a traffic information identification module (also referred to as a road traffic condition information identification module), and a function alarm module.
  • the target information identification module identifies the target information from the information received by the controller, and transmits the target information to the information fusion module;
  • the traffic information identification module identifies the road traffic condition information from the information received by the controller, and transmits the road traffic condition information to the information fusion module;
  • the function alarm module identifies the function alarm information from the information received by the controller, and sends the function alarm information to the information fusion module.
  • the information fusion module integrates the received target information, road traffic condition information, and function alarm information respectively, and outputs the integrated target information, road traffic condition information, and function alarm information.
  • the target identification module can identify information such as the type, coordinates, and orientation of the target.
  • the traffic information recognition module can identify the location and current state of traffic lights, the value and location of speed limit signs, the type and coordinates of lane lines, road traffic scenarios, road traffic anomalies, and optimized road path planning.
  • the function alarm module can include automatic driving level, forward collision warning, emergency braking warning, intersection collision warning, blind spot warning, lane change warning, speed limit warning, lane keeping warning, emergency lane keeping, rear collision warning, rear cross traffic collision warning, door opening warning, left turn assist, red light running warning, reverse overtaking warning, vehicle loss-of-control warning, abnormal vehicle warning, vulnerable traffic participant warning, etc.
  • the information fusion module combines the function alarm information with the target information, the automatic driving level, the lane keeping status and lane line information, the navigation path planning, and the vehicle's movement, and outputs the integrated target information, road traffic information, and function alarm information.
  • the target information may include:
  • Target status: none, present, alarm level 1, alarm level 2, alarm level 3, etc.;
  • Target type: for example, cars, off-road vehicles, SUVs, passenger cars, buses, small trucks, trucks, motorcycles, two-wheelers, adults, children, etc.;
  • Target orientation: for example, forward, backward, left, right, etc.
  • the lane line information may include:
  • Type: solid line, dashed line, double yellow line, road edge, etc.
  • A0, A1, A2, and A3 are the polynomial coefficients of the lane line on the left side of the own lane, where A0 represents the lateral distance from the center of the vehicle to the lane line, with a positive value indicating that the lane line is on the left;
  • A1 represents the heading angle of the vehicle relative to the lane line, with a positive value meaning the lane line is rotated counterclockwise;
  • A2 represents the lane line curvature, with a positive value meaning the lane line bends to the left;
  • A3 represents the rate of change of the lane line curvature, with a positive value meaning the lane line bends to the left.
  • After the controller obtains the lane line information from the sensor information or V2X information, it can obtain the values of A0, A1, A2, and A3.
  • When the lane line is drawn on the display device, it is drawn according to the lane line equation, so that a lane line consistent with the actual lane line can be displayed in the reconstructed driving scene, as in the sketch below.
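  • A minimal sketch of drawing a lane line from the coefficients A0–A3 described above, assuming the common cubic lane model y(x) = A0 + A1·x + A2·x² + A3·x³ (x: longitudinal distance ahead of the ego vehicle, y: lateral offset, both in metres); the coefficient values and sampling range below are illustrative assumptions.

```python
def lane_line_points(a0, a1, a2, a3, max_range=60.0, step=1.0):
    """Sample the lane-line polynomial so the display device can draw it as a polyline."""
    points = []
    x = 0.0
    while x <= max_range:
        y = a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3
        points.append((x, y))
        x += step
    return points

# Example: lane line 1.8 m to the left (positive A0), nearly straight, slight left bend.
left_lane = lane_line_points(a0=1.8, a1=0.01, a2=2e-4, a3=1e-6)
print(left_lane[:3])  # first few sampled points of the drawable polyline
```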
  • the traffic information may include:
  • Lane: current lane, left 1, left 2, right 1, right 2, etc.;
  • Path planning direction: none, go straight, turn left, turn right, front left, front right, U-turn, etc.;
  • Traffic light status: none, red light, yellow light, green light, countdown, etc.;
  • Road traffic scenarios: ramps, intersections, road merging, road bifurcations, T-junctions, etc.
  • this application proposes a driving scene reconstruction method based on actual traffic.
  • FIG. 4 is a schematic flowchart of a driving scene reconstruction method according to an embodiment of the present application.
  • the driving scene reconstruction method may include:
  • The driving scene reconstruction method of the present application integrates sensor information from ADAS sensors, Internet of Vehicles information from V2X, and map information, enriching the information sources for the surrounding environment of the autonomous vehicle. Moreover, the method combines road traffic situation information, so the reconstructed driving scene is closer to the real driving environment and better matches the actual needs of users in both the automatic driving state and the ordinary driving state, which can provide more effective assistance for the driver and improve driving safety.
  • The road traffic situation information in the sensor information, the Internet of Vehicles information, and the map information is acquired in real time and integrated; based on the integrated road traffic situation information, the driving scene of the ego vehicle is reconstructed in real time. Therefore, the reconstructed driving scene can display the surrounding environment of the vehicle in real time.
  • FIG. 5 is a schematic diagram of road traffic situation information in a driving scene reconstruction method according to an embodiment of the present application.
  • acquiring road traffic condition information in the sensor information may include: receiving point cloud data from radar, and analyzing the point cloud data to obtain road sign information.
  • acquiring road traffic condition information in sensor information may include: receiving a video stream from an image acquisition device, and parsing the video stream to obtain road sign information and lane line information.
  • the road sign information may include traffic light information, speed limit sign information, and the like.
  • the traffic light information may include the status of the traffic light, the location of the traffic light, and the like.
  • the speed limit sign information may include the value of the speed limit sign, the position of the speed limit sign, and the like. For example, the value of the speed limit sign on a certain section of an urban road is 50, indicating that the maximum speed limit of this section is 50km/h.
  • road sign information is not limited to traffic light information and speed limit sign information, but may also include identifiable signs such as U-turn signs, going straight signs, turn signs and the like.
  • the lane line information may include at least one of information about the lane where the vehicle is located, information about the lane where surrounding vehicles are located, color information of the lane line, type information of the lane line, and position information of the lane line.
  • the colors of the lane lines include white, yellow, and so on.
  • the types of lane lines include solid lines, dashed lines, double yellow lines, curbs, etc.
  • the position information of the lane line may be the coordinate information of the lane line, which corresponds to the position of the lane line.
  • the lane line can determine the lane, and therefore, the lane line information can reflect the number of lanes, the lane in which the self-vehicle is located, and the lanes in which surrounding vehicles are located.
  • acquiring road traffic situation information in the Internet of Vehicles information may include: acquiring, from the on-board unit and/or the roadside unit, at least one of road sign information, lane line information, road traffic abnormality information, and congestion status information.
  • the device for V2X communication may include at least one of an on-board unit (On Board Unit, OBU) and a road side unit (Road Side Unit, RSU).
  • the source of the Internet of Vehicles information is at least one of an on-board unit (OBU) and a roadside unit (RSU).
  • OBU on-board unit
  • RSU roadside unit
  • the road traffic abnormal situation information may include at least one of road construction information, abnormal vehicle information, and emergency vehicle information. Therefore, through the information of the Internet of Vehicles, the optimal driving route of the self-vehicle can be planned according to the abnormal situation information of road traffic and the information of the congestion situation, so as to reach the destination efficiently.
  • acquiring road traffic situation information in the map information may include: acquiring at least one of road sign information, lane line information, road traffic scene information, and navigation path planning information from the map information.
  • the map information can come from a Beidou navigation system or a GPS navigation system.
  • the map information may include information such as road marking information, lane line information, road traffic scene information, and navigation path planning information.
  • In order to acquire road traffic situation information, at least one of road sign information, lane line information, road traffic scene information, and navigation path planning information may be acquired from the map information.
  • the road traffic scene information may include road information such as intersection information, road merging information, road bifurcation information, and ramp information.
  • the navigation route planning information includes the travel route information from the origin to the destination.
  • the map information may provide a plurality of travel route information from the origin to the destination, and the navigation route planning information may include a plurality of travel route information from the origin to the destination.
  • the road traffic situation information obtained from the map information includes: the lanes where the vehicle and surrounding vehicles are located, the type and coordinates of the lane lines, the location of the traffic lights, the distance from the vehicle to the ramp, the distance to the intersection, navigation path planning information, and so on.
  • There may be duplicate information among the sensor information, the Internet of Vehicles information, and the map information.
  • sensor information, vehicle networking information, and map information all include road sign information.
  • the road sign information can be integrated, thereby selecting the optimal road sign information for reconstructing the driving scene.
  • Exemplarily, as shown in FIG. 5, in S102, integrating the road traffic situation information in the sensor information, the Internet of Vehicles information, and the map information may include one of the following:
  • the congestion status information in the Internet of Vehicles information and the navigation path planning information in the map information are combined to obtain optimized navigation path planning information.
  • road sign information such as the location of speed limit signs, traffic lights, etc.
  • Road sign information can be parsed from the point cloud data of the radar; road sign information can also be parsed from the video stream of the image acquisition device, such as the position of the speed limit sign, the speed limit value, the position of the traffic light, the status of the traffic light (red, green, or yellow), etc.
  • the Internet of Vehicles information includes road sign information, such as the position of the speed limit sign, the speed limit value, the position of the traffic light, the status of the traffic light, and so on.
  • the map information includes road sign information, such as the location of traffic lights, speed limit values, and the like.
  • The accuracy of the information collected from the sensors, the Internet of Vehicles, and the map may differ.
  • For example, the positions of the speed limit sign and the traffic light in the radar point cloud data are more accurate than those in the video stream of the image acquisition device; however, the radar point cloud data does not contain the speed limit value, the status of the traffic light, etc.
  • integrating sensor information, vehicle networking information, and road marking information in map information may include screening, selecting, and fusing road marking information in sensor information, vehicle networking information, and map information.
  • the output integrated road sign information may include: the position and current state of the traffic lights; the value and position of the speed limit sign, etc.
  • the information with the best accuracy can be selected as the integrated information.
  • the location information of traffic lights can be obtained from sensor information, Internet of Vehicles information, and map information. Then, the location information of traffic lights with the best accuracy is selected as the integrated location information of traffic lights.
  • the location information of the traffic lights in the point cloud data of the radar has the best accuracy. Then, the location information of the traffic lights parsed from the point cloud data of the radar is selected as the integrated traffic light location information.
  • this information can be directly used as the integrated information.
  • For example, the status of traffic lights can only be parsed from the video stream of the image acquisition device; in that case, it is directly used as the integrated status of the traffic lights.
  • the accuracy of each information in sensor information, vehicle networking information and map information can be known.
  • the source of the information can be set directly, or the information can be screened, selected, and fused through a model.
  • After integration, the position and value of the speed limit sign, the position of the traffic light, and the status of the traffic light are obtained and serve as the integrated road sign information, as sketched below.
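  • A minimal sketch of this per-field selection: for each road sign attribute, keep the value from the source with the best known accuracy, falling back to whichever source has it. The source priorities below are illustrative assumptions consistent with the radar/camera example above, not values from the patent.

```python
# Per-field source priority (best-known accuracy first); illustrative assumptions only.
FIELD_PRIORITY = {
    "traffic_light_position": ["radar", "camera", "v2x", "map"],
    "traffic_light_status":   ["camera", "v2x"],              # radar point clouds cannot see the light colour
    "speed_limit_value":      ["camera", "v2x", "map"],
    "speed_limit_position":   ["radar", "camera", "v2x", "map"],
}

def integrate_road_signs(per_source: dict) -> dict:
    """per_source maps source name -> {field: value}; returns one fused road sign record."""
    fused = {}
    for name, priority in FIELD_PRIORITY.items():
        for source in priority:
            value = per_source.get(source, {}).get(name)
            if value is not None:
                fused[name] = value
                break
    return fused

fused = integrate_road_signs({
    "radar":  {"traffic_light_position": (85.2, 3.1)},
    "camera": {"traffic_light_status": "red", "speed_limit_value": 50},
    "map":    {"speed_limit_position": (120.0, -3.5)},
})
print(fused)  # integrated road sign information
```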
  • the lane line information can be parsed from the video stream of the image acquisition device, for example, the color of the lane line (white or yellow, etc.), the type of the lane line (dotted line or solid line or road edge, etc.), the lane line location, etc.
  • the information of the Internet of Vehicles includes the information of the lane where the vehicle is located, and the information of the lane where the surrounding vehicles are located.
  • the map information includes information about the lane where the vehicle is located, information about the lane where the surrounding vehicles are located, the type of lane line, the location of the lane line, and other information.
  • Integrating the lane line information in the sensor information, the Internet of Vehicles information, and the map information may include screening, selecting, and fusing that lane line information, and outputting the type, coordinates, etc. of the integrated lane lines.
  • the information with the best accuracy can be selected as the integrated information.
  • Both the lane line type and the lane line position can be obtained from the sensor information and the map information; the lane line type and lane line position with the best accuracy are then selected as the integrated lane line type and position.
  • For example, if the lane line type parsed from the video stream of the image acquisition device has the best accuracy, it is selected as the integrated lane line type; if the lane line position obtained from the map information has the best accuracy, it is selected as the integrated lane line position.
  • this information can be directly used as the integrated information.
  • For example, if the color of the lane line can only be parsed from the video stream of the image acquisition device, then that color is directly used as the integrated lane line color.
  • the accuracy of each information in sensor information, vehicle networking information and map information can be known.
  • the source of the information can be set directly, or the information can be screened, selected, and fused through a model.
  • After integrating the lane line information in the sensor information, the Internet of Vehicles information, and the map information, the information of the lane where the ego vehicle is located, the information of the lanes where surrounding vehicles are located, the lane line type, the lane line color, the lane line position, and other information are obtained and used as the integrated lane line information.
  • the road traffic scene information in the map information is integrated, and the road traffic scene information in the map information may be directly used as the integrated road traffic scene information.
  • the road traffic scene information may include road information such as intersection information, road merging information, road bifurcation information, and ramp information. Road information such as intersection information, road merging information, road bifurcation information, and ramp information obtained from the map information can be used as the integrated road traffic scene information.
  • Screening and integrating the road traffic abnormality information in the Internet of Vehicles information may include: screening, selecting, and merging the road traffic abnormality information from a plurality of roadside units to obtain the road traffic abnormality information that affects the driving of the ego vehicle.
  • Those skilled in the art can understand that the ego vehicle can communicate with multiple roadside units, so that road traffic abnormality information from multiple roadside units can be obtained.
  • the road traffic abnormal situation information that affects the driving of the vehicle can be obtained, which is beneficial to predict the driving route of the vehicle.
  • the road traffic abnormal situation information may include road construction information, abnormal vehicle information, emergency vehicle information, and the like.
  • the road construction information, abnormal vehicle information, emergency vehicle information, etc. obtained from the Internet of Vehicles information can be filtered and integrated as the integrated road traffic abnormal situation information.
  • abnormal vehicles may include malfunctioning vehicles, out-of-control vehicles, and the like.
  • Emergency vehicles may include ambulances, fire trucks, police cars, and the like.
  • Combining the congestion status information in the Internet of Vehicles information with the navigation path planning information in the map information to obtain optimized navigation path planning information may include: obtaining, from the map information, a plurality of pieces of navigation path planning information from the origin to the destination, and combining them with the congestion status information in the Internet of Vehicles information to discard the navigation path planning information affected by congestion, thereby obtaining optimized navigation path planning information with which the destination can be reached more efficiently, as in the sketch below.
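  • A minimal sketch of combining congestion status information with the candidate routes from the map, as described above: routes passing through congested segments are discarded and the shortest remaining route is kept. The data shapes and the fallback rule are illustrative assumptions.

```python
def optimize_route(candidate_routes, congested_segments):
    """candidate_routes: list of (route_id, [segment_ids], length_km); returns the chosen route."""
    viable = [r for r in candidate_routes
              if not any(seg in congested_segments for seg in r[1])]
    if not viable:  # every candidate is congested: fall back to the shortest overall (assumption)
        viable = candidate_routes
    return min(viable, key=lambda r: r[2])

routes = [("A", ["s1", "s2", "s3"], 12.4),
          ("B", ["s1", "s4", "s5"], 13.1),
          ("C", ["s6", "s7"], 15.0)]
print(optimize_route(routes, congested_segments={"s2"}))  # route "B": "A" is congested, "B" is shorter than "C"
```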
  • reconstructing the driving scene of the self-vehicle may include:
  • the driving scene of the self-vehicle is reconstructed.
  • the integrated road sign information, lane line information, road traffic scene information, road traffic abnormality information and optimized navigation path planning information are all optimized information, so that the reconstructed driving scene of the vehicle is closer to the real one
  • the driving environment can provide effective assistance for the driver.
  • the driving scene reconstruction method may further include:
  • the driving scene of the self-vehicle is reconstructed.
  • the driving scene of the self-vehicle is reconstructed, so that the obtained driving scene can not only reflect the road traffic situation, but also reflect the target information around the self-vehicle.
  • the driving scene is more in line with the real driving environment, and the targets around the vehicle can also play an early warning role for the driver, so that the driver can predict the danger of collision and better improve driving safety.
  • the target information may include size information of the target, type information of the target (e.g., vehicle and vehicle type, or pedestrian and pedestrian type), direction information of the target (e.g., forward, backward, left, and right), location information of the target (for example, the abscissa and ordinate of the target relative to the vehicle), etc.
  • Vehicle types may include cars, off-road vehicles, passenger cars, buses, pickup trucks, trucks, motorcycles, two-wheelers, and the like.
  • Pedestrian types may include adults, children, and the like.
  • acquiring target information in sensor information may include at least one of the following:
  • FIG. 6 is a schematic diagram of target information in a driving scene reconstruction method according to an embodiment of the present application.
  • the target information obtained from the point cloud data of the radar may include information such as the size of the target, the position of the target, and the direction of the target.
  • the target information obtained from the video stream of the image capture device may include information such as the size of the target, the type of the target, the position of the target, and the direction of the target.
  • acquiring target information in the Internet of Vehicles information may include:
  • the target information obtained from the Internet of Vehicles information may include information such as the type of the target, the location of the target, and the direction of the target.
  • Integrating the target information in the sensor information and the Internet of Vehicles information may include screening, associating, and selecting that target information, and outputting the integrated target type, coordinates, orientation (also called direction), etc.
  • the information with the best accuracy can be selected as the integrated information.
  • the location information of the target can be obtained from the point cloud data of the radar, the video stream of the image acquisition device, and the information of the Internet of Vehicles.
  • the position information of the target obtained from the point cloud data of the radar is more accurate, then the position information of the target obtained from the point cloud data of the radar is used as the position information of the integrated target.
  • the direction information of the target obtained from the Internet of Vehicles information is more accurate, then the direction information of the target obtained from the Internet of Vehicles information is used as the direction information of the integrated target.
  • The accuracy of the target information in the sensor information and in the Internet of Vehicles information is known. Therefore, in the process of information integration, the source of the information can be set directly, or the information can be screened, associated, selected, and fused through a model. After integrating the size, type, position, and direction of the target in the sensor information and the Internet of Vehicles information, the integrated target size, type, position, and direction are obtained and serve as the integrated target information.
  • the driving scene reconstruction method may further include: receiving data of the reconstructed driving scene, and displaying the reconstructed driving scene. In this way, the reconstructed driving scene can be presented.
  • FIG. 7 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to an embodiment of the present application.
  • the integrated road traffic situation information includes the lane in which the self-vehicle is located, the left lane, the right lane, the color of the lane line, the type of the lane line, the target located in the lane, and the like.
  • the reconstructed driving scene is shown in FIG. 7. It can be seen from FIG. 7 that the driving scene includes the vehicle 11, the lane where the vehicle 11 is located, the left lane of the vehicle 11, the right lane of the vehicle 11, related targets in the left lane, related targets in the right lane, and the lane lines (color, type).
  • In the driving scene, not only the relevant targets but also their directions are displayed; for example, the heading of the vehicle 12 and the heading of the vehicle 13 can be clearly obtained from the driving scene.
  • the driving scene may only include the most relevant targets of the vehicle.
  • For example, the driving scene may display the type, direction, and location of 3 targets in front of the vehicle, 1 target behind the vehicle, and 1 target on each of the left and right sides of the vehicle.
  • Alternatively, the driving scene may display the type, direction, and location of 3 targets in front of the vehicle, 1 target behind the vehicle, 1 target on the left side and 1 target on the right side of the vehicle, 1 target behind the vehicle on the left, 1 target in front of the vehicle on the left, 1 target behind the vehicle on the right, and 1 target in front of the vehicle on the right.
  • In this way, the target information most relevant to the ego vehicle can be displayed in the driving scene while targets less relevant to the ego vehicle are not displayed, making the display more concise and clear, allowing the driver to pay more attention to the most relevant targets, improving the driver's experience, and avoiding display redundancy.
  • the driving scene may include vehicles and pedestrians located within a certain range of the vehicle.
  • the target may be a vehicle or a pedestrian whose lateral distance from the vehicle is within the range of X1 and the longitudinal distance is within the range of Y1.
  • the values of the lateral distance X1 and the longitudinal distance Y1 can be determined according to actual needs, as in the sketch below.
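  • A minimal sketch of the relevance filter described above: only targets whose lateral distance from the ego vehicle is within X1 and whose longitudinal distance is within Y1 are kept for display. The X1/Y1 values and the coordinate convention (x longitudinal, y lateral) are illustrative assumptions.

```python
X1 = 8.0   # lateral half-width of the region of interest, metres (assumed value)
Y1 = 80.0  # longitudinal extent of the region of interest, metres (assumed value)

def relevant_targets(targets):
    """targets: list of dicts with 'x' (longitudinal) and 'y' (lateral) in ego coordinates."""
    return [t for t in targets if abs(t["y"]) <= X1 and abs(t["x"]) <= Y1]

targets = [{"id": 1, "x": 25.0, "y": 0.2,  "type": "car"},
           {"id": 2, "x": 95.0, "y": -1.0, "type": "truck"},      # too far ahead, dropped
           {"id": 3, "x": -10.0, "y": 3.4, "type": "pedestrian"}]
print(relevant_targets(targets))  # keeps targets 1 and 3 for the reconstructed scene
```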
  • FIG. 8 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • the driving scene shown in FIG. 8 includes an intersection, a lane perpendicular to the lane where the vehicle 11 is located, vehicles 12 and 13 located in the perpendicular lane, a speed limit sign 14, traffic lights 15, and related targets around the vehicle.
  • In the driving scene shown in FIG. 8, not only the relevant targets but also their directions are displayed. For example, the heading of the vehicle 12 and the heading of the vehicle 13 can be clearly obtained from the driving scene.
  • the driving scene can include the T-junction, a lane perpendicular to the lane where the vehicle is located, vehicles located in the vertical lane, speed limit signs, traffic lights, and related objects around the vehicle.
  • FIG. 9 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application.
  • When lane merging is involved, the driving scene shows the lane merge, the merged lane, and the related vehicles in the merged lane.
  • information about road forks, forked lanes and other target information located in the same lane as the vehicle may be displayed in the driving scene.
  • the driving scene may also include opposite lane information.
  • The opposite lane does not affect driving, so in the driving scene the opposite lane may be displayed in a predetermined color such as gray, and the target information of the opposite lane is not displayed.
  • Targets in the opposite lane usually do not affect driving. Therefore, displaying the opposite lane in a predetermined color and not displaying its target information is more in line with driving habits and prevents the driver from paying attention to irrelevant road traffic situation information.
  • the targets shown are specific vehicle styles.
  • the contour information of the target can be obtained from sensor information and/or Internet of Vehicles information, and the contour information can be processed so that the specific style of the target can be displayed in the driving scene.
  • A dedicated target model library can be established for commonly used targets (vehicles or pedestrians); the type of the target can be obtained from sensor information and/or Internet of Vehicles information, and the corresponding target model can then be retrieved from the target model library and presented in the driving scene.
  • For example, the target model library includes a Mercedes-Benz model. When the Mercedes-Benz logo is parsed from the image acquisition device, the Mercedes-Benz model can be retrieved from the target model library and displayed in the driving scene, as in the sketch below.
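  • A minimal sketch of such a target model library: the target type (or a recognised brand such as the Mercedes-Benz logo) parsed from sensor or Internet of Vehicles information selects a display model, with a generic fallback. The library contents and file names are illustrative assumptions.

```python
# Illustrative model library; entries and paths are assumptions, not from the patent.
TARGET_MODEL_LIBRARY = {
    "car": "models/generic_car.glb",
    "truck": "models/generic_truck.glb",
    "pedestrian_adult": "models/adult.glb",
    "mercedes_benz": "models/mercedes_benz.glb",
}

def model_for_target(target_type, brand=None):
    """Prefer a brand-specific model when a brand logo was recognised; otherwise fall back by type."""
    if brand and brand in TARGET_MODEL_LIBRARY:
        return TARGET_MODEL_LIBRARY[brand]
    return TARGET_MODEL_LIBRARY.get(target_type, "models/generic_target.glb")

print(model_for_target("car", brand="mercedes_benz"))  # models/mercedes_benz.glb
```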
  • The road traffic condition information and target information in the sensor information, the Internet of Vehicles information, and the map information are acquired in real time, so that the reconstructed driving scene can feed back the surrounding environment of the vehicle in real time and display the vehicle's real-time status, making the driving scene closer to the real driving scene of the vehicle.
  • road sign information, lane line information, road traffic scene information, road traffic abnormality information, congestion status information, and navigation path planning information can be combined to obtain optimized navigation path planning information and to fit the best route for the vehicle to travel.
  • the next driving path and movement direction of the vehicle can be displayed in the driving scene, and arrows or other means can be used in the driving scene to indicate the next movement direction of the vehicle (for example, left turn, right turn, U-turn, lane change, front left, or front right), making the reconstructed driving scene more realistic and reliable and providing effective assistance for the driver.
  • FIG. 10 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 10 , the next driving path and moving direction (turn left) of the self-vehicle 11 are displayed in the driving scene, and are indicated by means of arrows.
  • FIG. 11 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 11 , the next driving path and moving direction (lane change) of the self-vehicle 11 are displayed in the driving scene, and are indicated by means of arrows.
  • FIG. 12 is a schematic diagram of a driving scene reconstructed by a driving scene reconstruction method according to another embodiment of the present application. As shown in FIG. 12 , the next driving path and moving direction (turning right into the ramp) of the self-vehicle 11 are displayed in the driving scene, and are indicated by means of arrows.
  • Automatic driving can be divided into six levels: L0 (no automation), in which the human driver drives the car with full authority and can be warned during driving; L1 (driver assistance), in which either steering or acceleration/deceleration is supported based on the driving environment and the human performs the rest; L2 (partial automation), in which both steering and acceleration/deceleration are supported based on the driving environment and the human performs the rest; L3 (conditional automation), in which all driving operations are completed by the automated driving system and the human provides appropriate responses when requested by the system; L4 (high automation), in which all driving operations are completed by the automated driving system, the human need not provide all responses requested by the system, and road and environmental conditions are limited; and L5 (full automation), in which all driving operations are performed by the automated driving system, the human takes over where possible, and road and environmental conditions are not limited. Different levels of autonomous driving require different degrees of driver involvement.
  • the driving scene reconstruction method may further include:
  • the level prompt information may include at least one of color information, sound information, blinking information, and the like.
  • the level prompt information may be color information
  • the automatic driving level corresponds to the color information
  • one automatic driving level corresponds to one color.
  • the background color of the driving scene may be displayed as the color corresponding to the automatic driving level. For example, when the automatic driving level is L1, the background color of the driving scene is displayed as light blue; when the automatic driving level is L2, the background color is displayed as light green; when the automatic driving level is L3, the background color is displayed as light purple.
  • the color of the ego vehicle may be displayed as a color corresponding to the level of autonomous driving.
  • the lane line color may be displayed as a color corresponding to the level of autonomous driving.
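  • A minimal sketch of level prompt information expressed as colour: one automatic driving level maps to one colour, applied to the scene background (or, equivalently, to the ego vehicle or lane lines). The L1–L3 colours follow the example above; the remaining entries are illustrative assumptions.

```python
LEVEL_COLOURS = {
    "L0": "white",        # assumed
    "L1": "light blue",   # from the example above
    "L2": "light green",  # from the example above
    "L3": "light purple", # from the example above
    "L4": "light orange", # assumed
    "L5": "light gold",   # assumed
}

def scene_background_colour(driving_level: str) -> str:
    """Return the colour used as the level prompt for the given automatic driving level."""
    return LEVEL_COLOURS.get(driving_level, "white")

print(scene_background_colour("L2"))  # light green
```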
  • the level prompt information may be sound information
  • the automatic driving level corresponds to the sound information
  • one automatic driving level corresponds to one sound.
  • the display of the level prompt information is not limited to the above methods, as long as the reconstructed driving scene can display the level prompt information so that the driver can feel the driving level of the vehicle, it is within the scope of protection of the present application.
  • the level prompt information may be the content displayed in the scene, and different driving levels are reflected by setting different content displayed in the scene.
  • Different driving levels can be embodied by setting the type or number of elements of the road traffic situation information displayed in the driving scene. For example, at the L0 driving level, all real-time road traffic information can be displayed in the reconstructed driving scene; at the L1 driving level, if only the lateral control function is available, only the lane line information may be displayed, and if only the longitudinal control function is available, only the targets (such as vehicles) located in front of or behind the vehicle may be displayed; at the L2 driving level, lane line information, front targets, and rear targets can be displayed at the same time; at the L3 driving level, lane line information, front targets, rear targets, and road changes can be displayed at the same time; at the L4 driving level, the ego vehicle is highlighted and other elements are simplified; at the L5 driving level, only the ego vehicle may be displayed, and entertainment, meeting, game, and other content can be added to the driving scene.
  • The automatic driving functions may include: no lateral or longitudinal control; lateral control only, such as Lane Keep Assist (LKA) and Lane Center Assist (LCA); longitudinal control only; Highway Assist (HWA), i.e., both lateral and longitudinal control, in which the driver may take hands and feet off; and Traffic Jam Pilot (TJP), i.e., both lateral and longitudinal control, in which the driver may take hands, feet and eyes off.
  • LKA Lane Keep Assist
  • LCDA Lane Center Assist
  • HWA Highway Assist
  • TJP Traffic Jam Pilot
  • the driving scene reconstruction method may further include: extracting the automatic driving function information and the function prompt information of the ego vehicle, and reconstructing the driving scene of the ego vehicle based on the integrated road traffic situation information, the automatic driving function information and the function prompt information.
  • Such a driving scene reconstruction method can clearly distinguish different driving functions within the driving scene, so that the driver can more directly perceive the differences and changes between driving functions and thus grasp the required degree of driving participation in real time.
  • the function prompt information may include at least one of cruise speed information, lane line color information, sound information, flashing information, and the like.
  • the cruising speed information can be used as the prompt information of the longitudinal control function
  • the color information of the lane line can be used as the prompt information of the lateral control function.
  • the change of cruising speed is used to reflect the longitudinal control function
  • the color change of the lane line can be used to reflect the lateral control function, as sketched below. For example, when there is no lateral or longitudinal control, the lane line is displayed in white; when the HWA function is active, the lane line is displayed in red; and when the TJP function is active, the lane line is displayed in yellow.
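  • The sketch below combines the two prompts, assuming the scene object exposes set_lane_line_color and set_cruise_speed_label calls; these calls, the function keys and the fallback color are illustrative assumptions rather than the claimed implementation.

      # Example colors follow the description above; "NONE" denotes no lateral or
      # longitudinal control.
      FUNCTION_LANE_COLOR = {
          "NONE": "white",
          "HWA": "red",      # Highway Assist
          "TJP": "yellow",   # Traffic Jam Pilot
      }

      def apply_function_prompt(scene, active_function, cruise_speed_kph=None):
          # Lateral prompt: lane line color reflects the active lateral-control function.
          scene.set_lane_line_color(FUNCTION_LANE_COLOR.get(active_function, "white"))
          # Longitudinal prompt: the displayed cruise speed reflects longitudinal control.
          if cruise_speed_kph is not None:
              scene.set_cruise_speed_label(f"{cruise_speed_kph:.0f} km/h")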
  • the display of the function prompt information is not limited to the above methods; as long as the reconstructed driving scene can display the function prompt information so that the driver can perceive the driving function of the vehicle, the approach falls within the scope of protection of the present application.
  • The driving scene reconstruction method of the present application can acquire, in real time, road sign information, lane line information, road traffic scene information, road traffic abnormal situation information, target information and optimized navigation path planning information from the sensor information, the Internet of Vehicles information and the map information, integrate this information, and reconstruct the driving scene of the ego vehicle based on the integrated road sign information, lane line information, road traffic scene information, road traffic abnormal situation information, target information and optimized navigation path planning information. Therefore, the reconstructed driving scene can adaptively adjust its content according to the actual road environment in which the vehicle is located, and display to the user the information most relevant to the vehicle's automatic driving task.
  • For example, the user can be shown the status of traffic lights, speed limit signs and their values, the lane in which the ego vehicle is located, the lanes in which surrounding vehicles are located, and the color and type of lane lines; when the road traffic scene changes, the user can be shown, according to the actual situation, an intersection, a road merge, a road bifurcation or a ramp; when opposite-lane information is displayed in the driving scene, the opposite lane can be displayed in gray and target information in the opposite lane is not displayed; when there are road constructions, abnormal vehicles or emergency vehicles affecting driving in the environment, the reconstructed driving scene displays these road constructions, abnormal vehicles or emergency vehicles; and the displayed driving path of the ego vehicle is the optimal driving path.
  • The driving scene reconstruction method of the present application integrates the sensor information from the ADAS sensors, the Internet of Vehicles information from V2X, and the map information, thereby enriching the information sources describing the surroundings of the autonomous vehicle; a per-frame sketch of this pipeline is given below. Moreover, by combining the road traffic situation information with the target information around the vehicle, the reconstructed driving scene can reflect the current real driving environment of the vehicle in real time, which better matches the actual needs of users both in the automatic driving state and in the ordinary driving state, provides the driver with more effective assistance, and improves driving safety.
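  • The sketch below illustrates one possible per-frame flow of the method (acquire, integrate, reconstruct); the function names, data shapes and the simple dictionary merge are illustrative assumptions and are not intended to limit the embodiments.

      # Per-frame sketch: gather road traffic situation information from the three
      # sources, integrate it, and hand the result to scene reconstruction.
      def acquire(sensor_info, iov_info, map_info):
          return {"sensor": sensor_info, "iov": iov_info, "map": map_info}

      def integrate(raw):
          # Naive merge: later sources override earlier ones per key (assumption);
          # a real system would fuse by element type, confidence and timestamp.
          integrated = {}
          for source in ("sensor", "iov", "map"):
              integrated.update(raw.get(source) or {})
          return integrated

      def reconstruct_frame(sensor_info, iov_info, map_info, render):
          integrated = integrate(acquire(sensor_info, iov_info, map_info))
          return render(integrated)   # render() stands in for the display device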
  • the road traffic situation information is not limited to the listed content.
  • The user can flexibly set the content of the road traffic situation information obtained from the sensor information, the Internet of Vehicles information and the map information according to personal preferences and/or actual application scenarios; as long as the driving scene reconstruction method of the present application is used to reconstruct the driving scene of the ego vehicle, it is within the scope of protection of the present application.
  • FIG. 13 is a structural block diagram of a driving scene reconstruction apparatus according to an embodiment of the present application.
  • the embodiment of the present application also provides a driving scene reconstruction device.
  • the driving scene reconstruction device may include:
  • an acquisition module 21 configured to acquire the road traffic situation information in the sensor information, the Internet of Vehicles information and the map information;
  • the integration module 22, connected with the acquisition module 21, is used to integrate the road traffic situation information in the sensor information, the vehicle networking information and the map information;
  • the reconstruction module 23 is connected with the integration module 22, and is used for reconstructing the driving scene of the self-vehicle based on the integrated road traffic situation information.
  • FIG. 14 is a structural block diagram of an acquisition module of a driving scene reconstruction apparatus according to an embodiment of the present application.
  • the acquisition module 21 may include at least one of the following:
  • the point cloud data acquisition sub-module 211 is used to receive point cloud data from radar, and analyze the point cloud data to obtain road sign information;
  • the video stream acquisition sub-module 212 is configured to receive the video stream from the image acquisition device, and parse the video stream to obtain road sign information and lane line information.
  • the obtaining module 21 may include:
  • the vehicle networking information acquisition sub-module 213 is configured to acquire at least one of road sign information, lane line information, road traffic abnormality information and congestion condition information from the vehicle-mounted unit and/or the roadside unit.
  • the obtaining module 21 may include:
  • the map information acquisition sub-module 214 is configured to acquire at least one of road identification information, lane line information, road traffic scene information and navigation path planning information from the map information.
  • FIG. 15 is a structural block diagram of an integration module of a driving scene reconstruction apparatus according to an embodiment of the present application.
  • the integration module 22 includes at least one of the following:
  • a road sign information integration sub-module 221, configured to integrate the sensor information, the vehicle networking information and the road sign information in the map information;
  • a lane line information integration sub-module 222 configured to integrate the sensor information, the vehicle networking information and the lane line information in the map information;
  • the scene information integration sub-module 223 is used to integrate the road traffic scene information in the map information
  • the abnormal situation information integration sub-module 224 is used to screen and integrate the road traffic abnormal situation information in the Internet of Vehicles information;
  • the navigation path optimization sub-module 225 is configured to combine the congestion condition information in the Internet of Vehicles information with the navigation path planning information in the map information to obtain optimized navigation path planning information, one possible sketch of which is given below.
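  • The sketch below shows one simple way such an optimization could work, assuming each candidate route from the navigation planner carries a nominal travel time and per-segment congestion factors reported over the Internet of Vehicles; the data structures and the cost formula are illustrative assumptions only.

      # Choose the candidate route whose congestion-weighted travel time is lowest.
      def optimize_navigation_path(candidate_routes, congestion_by_segment):
          """candidate_routes: list of dicts like
               {"segments": [seg_id, ...], "nominal_time_s": float}
             congestion_by_segment: dict seg_id -> delay factor (1.0 = free flow)."""
          def congested_time(route):
              factors = [congestion_by_segment.get(seg, 1.0) for seg in route["segments"]]
              avg_factor = sum(factors) / len(factors) if factors else 1.0
              return route["nominal_time_s"] * avg_factor

          return min(candidate_routes, key=congested_time)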
  • the road sign information includes traffic light information and/or speed limit sign information.
  • the lane line information includes at least one of information about the lane where the vehicle is located, information about the lane where surrounding vehicles are located, color information of the lane line, type information of the lane line, and position information of the lane line.
  • the road traffic scene information includes at least one of intersection information, road merging information, road bifurcation information, and ramp information.
  • the road traffic abnormal situation information includes at least one of road construction information, abnormal vehicle information and emergency vehicle information.
  • the reconstruction module 23 is configured to reconstruct the driving scene of the ego vehicle based on at least one of the integrated road sign information, lane line information, road traffic scene information, road traffic abnormal situation information and optimized navigation path planning information.
  • the acquiring module 21 is further configured to acquire the target information in the sensor information and the Internet of Vehicles information;
  • the integration module 22 is further configured to integrate the target information in the sensor information and the Internet of Vehicles information (a target fusion sketch is given below);
  • the reconstruction module 23 is used to reconstruct the driving scene of the self-vehicle based on the integrated road traffic situation information and the integrated target information.
  • the target information includes at least one of size information, type information, location information and orientation information of the target.
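  • A minimal target-fusion sketch under the above attributes is given below: targets reported by the sensors and by the Internet of Vehicles are merged, and reports that appear to describe the same object (same type, nearby position) are de-duplicated. The matching threshold and the data layout are illustrative assumptions, not the claimed fusion method.

      import math

      # Each target is a dict with the attributes listed above, e.g.
      #   {"type": "car", "size": (4.5, 1.8), "position": (x, y), "orientation": 1.57}
      def fuse_targets(sensor_targets, iov_targets, match_radius_m=2.0):
          fused = list(sensor_targets)
          for cand in iov_targets:
              duplicate = any(
                  cand["type"] == t["type"]
                  and math.dist(cand["position"], t["position"]) <= match_radius_m
                  for t in fused
              )
              if not duplicate:
                  fused.append(cand)
          return fused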
  • the driving scene reconstruction device may further include:
  • the extraction module 24 is used to extract the automatic driving level information and level prompt information of the self-vehicle;
  • the reconstruction module 23 is configured to reconstruct the driving scene of the self-vehicle based on the integrated road traffic condition information, automatic driving level information and level prompt information.
  • the driving scene reconstruction device may further include:
  • the extraction module 24 is used to extract the automatic driving function information and function prompt information of the self-vehicle;
  • the reconstruction module 23 is used to reconstruct the driving scene of the self-vehicle based on the integrated road traffic condition information, automatic driving function information and function prompt information.
  • FIG. 16 is a structural block diagram of a driving scene reconstruction system according to an embodiment of the present application.
  • the embodiment of the present application further provides a driving scene reconstruction system.
  • the driving scene reconstruction system includes the driving scene reconstruction device 20 described above.
  • the driving scene reconstruction system may further include:
  • the sensor 31 is connected to the driving scene reconstruction device 20 for collecting and outputting sensor information to the driving scene reconstruction device 20;
  • the vehicle networking device 32 is connected to the driving scene reconstruction device 20, and is used for outputting the vehicle networking information to the driving scene reconstruction device;
  • a map device 33 connected to the driving scene reconstruction device 20, and used for outputting map information to the driving scene reconstruction device;
  • the display device 34 is connected to the driving scene reconstruction device, and is configured to receive data of the reconstructed driving scene from the driving scene reconstruction device, and display the reconstructed driving scene.
  • the sensor 31, the Internet of Vehicles device 32 and the map device 33 are all connected to the acquisition module 21 in the driving scene reconstruction device 20.
  • the display device 34 may be connected to a reconstruction device in the driving scene reconstruction device 20 .
  • a "connection" is an electrical connection, which may be a CAN connection, or a WIFI connection, or a network connection, or the like.
  • the driving scene reconstruction device may be a controller, and the controller is integrated with an acquisition module, an integration module, an extraction module and a reconstruction module.
  • the display device may be an instrument controller with a display function in the vehicle.
  • the controller is integrated with an acquisition module, an integration module, and an extraction module.
  • the display device can be an instrument controller with a display function in the vehicle, and the instrument controller can realize the function of the reconstruction module and the function of display.
  • An embodiment of the present application further provides a vehicle.
  • the vehicle may include the above-mentioned driving scene reconstruction device.
  • the vehicle may include the driving scene reconstruction system described above.
  • FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present application.
  • An embodiment of the present application further provides an electronic device.
  • the electronic device includes: at least one processor 920 , and a memory 910 communicatively connected to the at least one processor 920 .
  • the memory 910 has stored therein instructions executable by at least one processor 920 .
  • the instructions are executed by at least one processor 920 .
  • the processor 920 executes the instruction, the driving scene reconstruction method in the foregoing embodiment is implemented.
  • the number of the memory 910 and the processor 920 may be one or more.
  • the electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the application described and/or claimed herein.
  • the electronic device may further include a communication interface 930 for communicating with external devices and performing interactive data transmission.
  • the various devices are interconnected using different buses and can be mounted on a common motherboard or otherwise as desired.
  • the processor 920 may process instructions for execution within the electronic device, including instructions stored in or on the memory for displaying graphical information of a Graphical User Interface (GUI) on an external input/output device, such as a display device coupled to the interface.
  • multiple processors and/or multiple buses may be used with multiple memories, if desired.
  • multiple electronic devices may be connected, each providing some of the necessary operations (eg, as a server array, a group of blade servers, or a multiprocessor system).
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one thick line is shown in FIG. 17, but it does not mean that there is only one bus or one type of bus.
  • if the memory 910, the processor 920 and the communication interface 930 are integrated on one chip, the memory 910, the processor 920 and the communication interface 930 can communicate with each other through an internal interface.
  • processor may be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or any conventional processor or the like. It is worth noting that the processor may be a processor supporting an advanced reduced instruction set machine (Advanced RISC Machines, ARM) architecture.
  • ARM Advanced RISC Machines
  • Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 910), which stores computer instructions that, when executed by a processor, implement the methods provided in the embodiments of the present application.
  • the memory 910 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like.
  • memory 910 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
  • the memory 910 may optionally include memory located remotely relative to the processor 920, and these remote memories may be connected to the electronic device implementing the driving scene reconstruction method via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature delimited with “first”, “second” may expressly or implicitly include at least one of that feature.
  • plurality means two or more, unless otherwise expressly and specifically defined.
  • Any description of a process or method in a flowchart or otherwise described herein may be understood as representing a module, fragment or section of code that comprises one or more executable instructions for implementing the steps of a specified logical function or process.
  • the scope of the preferred embodiments of the present application includes alternative implementations in which the functions may be performed out of the order shown or discussed, including performing the functions substantially concurrently or in the reverse order depending upon the functions involved.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. If the above-mentioned integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

Driving scene reconstruction method and apparatus, system, vehicle, electronic device and computer-readable storage medium. A driving scene reconstruction method, comprising the steps of: acquiring road traffic situation information from sensor information, Internet of Vehicles information and map information (S101); integrating the road traffic situation information in the sensor information, the Internet of Vehicles information and the map information (S102); and reconstructing the driving scene of an ego vehicle (11) based on the integrated road traffic situation information (S103). The present driving scene reconstruction method fuses sensor information, Internet of Vehicles information and map information, enriching the information sources describing the surroundings of an autonomous vehicle and incorporating road traffic situation information, so that the reconstructed driving scene is closer to the real driving environment, providing the driver with more effective assistance and improving driving safety.
PCT/CN2021/085226 2020-07-16 2021-04-02 Procédé et appareil de reconstruction de scène de conduite, système, véhicule, dispositif et support d'enregistrement WO2022012094A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010685995.5A CN111880533B (zh) 2020-07-16 2020-07-16 驾驶场景重构方法、装置、系统、车辆、设备及存储介质
CN202010685995.5 2020-07-16

Publications (1)

Publication Number Publication Date
WO2022012094A1 true WO2022012094A1 (fr) 2022-01-20

Family

ID=73155618

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/085226 WO2022012094A1 (fr) 2020-07-16 2021-04-02 Procédé et appareil de reconstruction de scène de conduite, système, véhicule, dispositif et support d'enregistrement

Country Status (2)

Country Link
CN (1) CN111880533B (fr)
WO (1) WO2022012094A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114413952A (zh) * 2022-01-29 2022-04-29 重庆长安汽车股份有限公司 一种汽车仪表场景重构的测试方法
CN114513887A (zh) * 2022-02-15 2022-05-17 遥相科技发展(北京)有限公司 一种基于车联网的驾驶辅助方法及系统
CN115474176A (zh) * 2022-08-22 2022-12-13 武汉大学 自动驾驶地图中车-路-云三端数据的交互方法及设备
CN115523939A (zh) * 2022-09-21 2022-12-27 合肥工业大学智能制造技术研究院 一种基于认知地图的驾驶信息可视化系统
CN116046014A (zh) * 2023-03-31 2023-05-02 小米汽车科技有限公司 道线规划方法、装置、电子设备及可读存储介质
CN116052454A (zh) * 2023-01-06 2023-05-02 中国第一汽车股份有限公司 车辆行驶数据确定方法、装置及电子设备
CN116608879A (zh) * 2023-05-19 2023-08-18 亿咖通(湖北)技术有限公司 信息显示方法、设备、存储介质及程序产品

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111880533B (zh) * 2020-07-16 2023-03-24 华人运通(上海)自动驾驶科技有限公司 驾驶场景重构方法、装置、系统、车辆、设备及存储介质
CN112590670A (zh) * 2020-12-07 2021-04-02 安徽江淮汽车集团股份有限公司 三车道环境显示方法、装置、设备及存储介质
CN112560253B (zh) * 2020-12-08 2023-02-24 中国第一汽车股份有限公司 驾驶场景的重构方法、装置、设备及存储介质
CN112612287B (zh) * 2020-12-28 2022-03-15 清华大学 一种自动驾驶汽车局部路径规划系统、方法、介质及设备
CN113033684A (zh) * 2021-03-31 2021-06-25 浙江吉利控股集团有限公司 一种车辆预警方法、装置、设备及存储介质
CN113460086B (zh) * 2021-06-30 2022-08-09 重庆长安汽车股份有限公司 自动驾驶进入匝道的控制系统、方法、车辆及存储介质
CN113547979A (zh) * 2021-07-01 2021-10-26 深圳元戎启行科技有限公司 车辆行为信息提示方法、装置、计算机设备和存储介质
CN113706870B (zh) * 2021-08-30 2022-06-10 广州文远知行科技有限公司 一种拥堵场景下主车换道数据的收集方法及相关设备
CN113947893A (zh) * 2021-09-03 2022-01-18 网络通信与安全紫金山实验室 一种自动驾驶车辆行车场景还原方法及系统
CN114013452B (zh) * 2021-09-29 2024-02-06 江铃汽车股份有限公司 自动驾驶控制方法、系统、可读存储介质及车辆
CN114291102A (zh) * 2021-12-13 2022-04-08 浙江华锐捷技术有限公司 辅助驾驶策略融合方法、系统、车辆和可读存储介质
CN114170803B (zh) * 2021-12-15 2023-06-16 阿波罗智联(北京)科技有限公司 路侧感知系统和交通控制方法
CN114104002B (zh) * 2021-12-21 2023-11-21 华人运通(江苏)技术有限公司 自动驾驶系统监控方法、装置、设备和存储介质
CN114435403B (zh) * 2022-02-22 2023-11-03 重庆长安汽车股份有限公司 一种基于环境信息的导航定位校核系统及方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190130765A1 (en) * 2017-10-31 2019-05-02 Cummins Inc. Sensor fusion and information sharing using inter-vehicle communication
CN110083163A (zh) * 2019-05-20 2019-08-02 三亚学院 一种用于自动驾驶汽车的5g c-v2x车路云协同感知方法及系统
CN110758243A (zh) * 2019-10-31 2020-02-07 的卢技术有限公司 一种车辆行驶过程中的周围环境显示方法和系统
CN110926487A (zh) * 2018-09-19 2020-03-27 阿里巴巴集团控股有限公司 辅助驾驶方法、辅助驾驶系统、计算设备及存储介质
CN111402588A (zh) * 2020-04-10 2020-07-10 河北德冠隆电子科技有限公司 基于时空轨迹重构异常道路高精地图快速生成系统与方法
CN111880533A (zh) * 2020-07-16 2020-11-03 华人运通(上海)自动驾驶科技有限公司 驾驶场景重构方法、装置、系统、车辆、设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016151750A1 (fr) * 2015-03-24 2016-09-29 パイオニア株式会社 Dispositif de mémorisation d'informations de carte, dispositif de commande de conduite automatique, procédé de commande, programme et support d'informations
JP6625148B2 (ja) * 2018-02-09 2019-12-25 本田技研工業株式会社 自動運転車両、及び車両制御方法
CN110069064B (zh) * 2019-03-19 2021-01-29 驭势科技(北京)有限公司 一种自动驾驶系统升级的方法、自动驾驶系统及车载设备
CN110232257B (zh) * 2019-07-02 2020-10-23 吉林大学 一种自动驾驶测试场景的构建方法及其难度系数计算方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190130765A1 (en) * 2017-10-31 2019-05-02 Cummins Inc. Sensor fusion and information sharing using inter-vehicle communication
CN110926487A (zh) * 2018-09-19 2020-03-27 阿里巴巴集团控股有限公司 辅助驾驶方法、辅助驾驶系统、计算设备及存储介质
CN110083163A (zh) * 2019-05-20 2019-08-02 三亚学院 一种用于自动驾驶汽车的5g c-v2x车路云协同感知方法及系统
CN110758243A (zh) * 2019-10-31 2020-02-07 的卢技术有限公司 一种车辆行驶过程中的周围环境显示方法和系统
CN111402588A (zh) * 2020-04-10 2020-07-10 河北德冠隆电子科技有限公司 基于时空轨迹重构异常道路高精地图快速生成系统与方法
CN111880533A (zh) * 2020-07-16 2020-11-03 华人运通(上海)自动驾驶科技有限公司 驾驶场景重构方法、装置、系统、车辆、设备及存储介质

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114413952A (zh) * 2022-01-29 2022-04-29 重庆长安汽车股份有限公司 一种汽车仪表场景重构的测试方法
CN114413952B (zh) * 2022-01-29 2023-06-16 重庆长安汽车股份有限公司 一种汽车仪表场景重构的测试方法
CN114513887A (zh) * 2022-02-15 2022-05-17 遥相科技发展(北京)有限公司 一种基于车联网的驾驶辅助方法及系统
CN115474176A (zh) * 2022-08-22 2022-12-13 武汉大学 自动驾驶地图中车-路-云三端数据的交互方法及设备
CN115474176B (zh) * 2022-08-22 2024-03-08 武汉大学 自动驾驶地图中车-路-云三端数据的交互方法及设备
CN115523939A (zh) * 2022-09-21 2022-12-27 合肥工业大学智能制造技术研究院 一种基于认知地图的驾驶信息可视化系统
CN115523939B (zh) * 2022-09-21 2023-10-20 合肥工业大学智能制造技术研究院 一种基于认知地图的驾驶信息可视化系统
CN116052454A (zh) * 2023-01-06 2023-05-02 中国第一汽车股份有限公司 车辆行驶数据确定方法、装置及电子设备
CN116046014A (zh) * 2023-03-31 2023-05-02 小米汽车科技有限公司 道线规划方法、装置、电子设备及可读存储介质
CN116046014B (zh) * 2023-03-31 2023-06-30 小米汽车科技有限公司 道线规划方法、装置、电子设备及可读存储介质
CN116608879A (zh) * 2023-05-19 2023-08-18 亿咖通(湖北)技术有限公司 信息显示方法、设备、存储介质及程序产品

Also Published As

Publication number Publication date
CN111880533A (zh) 2020-11-03
CN111880533B (zh) 2023-03-24

Similar Documents

Publication Publication Date Title
WO2022012094A1 (fr) Procédé et appareil de reconstruction de scène de conduite, système, véhicule, dispositif et support d'enregistrement
US11789445B2 (en) Remote control system for training deep neural networks in autonomous machine applications
JP7399164B2 (ja) 駐車スペース検出に適したスキューされたポリゴンを使用した物体検出
US10810872B2 (en) Use sub-system of autonomous driving vehicles (ADV) for police car patrol
CN111123933B (zh) 车辆轨迹规划的方法、装置、智能驾驶域控制器和智能车
US10625676B1 (en) Interactive driving system and method
CN111915915A (zh) 驾驶场景重构方法、装置、系统、车辆、设备及存储介质
DE102021117456A1 (de) Systeme und verfahren zur risikobewertung und gerichteten warnung bei fussgängerüberwegen
DE102018104801A1 (de) Unterstützen von fahrern bei fahrbahnspurwechseln
US20210191394A1 (en) Systems and methods for presenting curated autonomy-system information of a vehicle
WO2022134364A1 (fr) Procédé de commande de véhicule, appareil et système, dispositif et support de stockage
DE102019113114A1 (de) Verhaltensgesteuerte wegplanung in autonomen maschinenanwendungen
JP2020053046A (ja) 交通情報を表示するための運転者支援システム及び方法
US11285974B2 (en) Vehicle control system and vehicle
EP3627110B1 (fr) Procédé de planification de la trajectoire d'un véhicule
US11338819B2 (en) Cloud-based vehicle calibration system for autonomous driving
US20240037964A1 (en) Systems and methods for performing operations in a vehicle using gaze detection
US20220073104A1 (en) Traffic accident management device and traffic accident management method
DE112021001882T5 (de) Informationsverarbeitungseinrichtung, informationsverarbeitungsverfahren und programm
JP2022132075A (ja) 自律運転アプリケーションにおけるディープ・ニューラル・ネットワーク知覚のためのグラウンド・トゥルース・データ生成
CN116030652A (zh) 用于自主系统的让行场景编码
WO2021053763A1 (fr) Dispositif d'aide à la conduite, procédé d'aide à la conduite et programme
US20220324490A1 (en) System and method for providing an rnn-based human trust model
DE102020131353A1 (de) Auf einem neuronalen netz basierende gesichtsanalyse mittels gesichtslandmarken und zugehörigen vertrauenswerten
DE102022117475A1 (de) Übermitteln von fehlern an einen isolierten sicherheitsbereich eines systems auf einem chip

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21842429

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14-06-2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21842429

Country of ref document: EP

Kind code of ref document: A1