CN115221260B - Data processing method, device, vehicle and storage medium - Google Patents

Data processing method, device, vehicle and storage medium

Info

Publication number
CN115221260B
Authority
CN
China
Prior art keywords
data
map data
road condition
vehicle
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210843898.3A
Other languages
Chinese (zh)
Other versions
CN115221260A (en)
Inventor
Li Zhiming (李志明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210843898.3A priority Critical patent/CN115221260B/en
Publication of CN115221260A publication Critical patent/CN115221260A/en
Application granted granted Critical
Publication of CN115221260B publication Critical patent/CN115221260B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models

Abstract

The present disclosure relates to a data processing method, apparatus, vehicle, and storage medium. The data processing method comprises: determining the road condition mode in which a vehicle is located, and acquiring perception data and standard-definition (SD) map data when the vehicle is determined to be in a first road condition mode, wherein the first road condition mode corresponds to road conditions not covered by high-definition (HD) map data; performing data fusion on the perception data and the SD map data to obtain fused map data; and displaying the fused map data. With the method and apparatus, even on road conditions not covered by HD map data, the driving environment can be fed back to the user comprehensively and in real time, the displayed environment is more realistic and vivid, and the user experience is improved.

Description

Data processing method, device, vehicle and storage medium
Technical Field
The present disclosure relates to the field of autonomous driving, and in particular to a data processing method, apparatus, vehicle, and storage medium.
Background
With the development of autonomous driving technology, synthesized reality (SR) driving environment simulation and display technology has been increasingly applied to vehicles.
At present, SR driving environment simulation and display technology can render a high-definition (HD) map for specific road conditions and present lane-level road condition and traffic information to the driver.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a data processing method, apparatus, vehicle, and storage medium.
According to a first aspect of an embodiment of the present disclosure, there is provided a data processing method, including:
determining a road condition mode of a vehicle, and acquiring perception data and standard definition SD map data when the vehicle is determined to be in a first road condition mode, wherein the first road condition mode is a road condition which is not covered by high-precision HD map data;
performing data fusion on the perception data and the SD map data to obtain fusion map data;
and displaying the fusion map data.
Optionally, the method further comprises:
when the vehicle is determined to be switched from the first road condition mode to a second road condition mode, acquiring high-definition HD map data, wherein the second road condition mode is a road condition covered by the high-definition HD map data;
and displaying the high-definition HD map data.
Optionally, the data fusion of the perception data and the SD map data is performed to obtain fused map data, which includes:
three-dimensional rendering is carried out on the perception data to obtain three-dimensional perception data, and three-dimensional rendering is carried out on the SD map data to obtain three-dimensional SD map data;
performing coordinate transformation on the three-dimensional perception data and the three-dimensional SD map data to obtain target three-dimensional perception data and target three-dimensional SD map data under a target coordinate system;
and fusing the target three-dimensional perception data and the target three-dimensional SD map data to obtain the fused map data.
Optionally, the SD map data includes surrounding environment data outside of a road where the vehicle is located;
the three-dimensional rendering of the SD map data includes:
and performing three-dimensional rendering on the surrounding environment data based on an ear clipping method.
Optionally, the perceived data includes at least road data and obstacle data of a road on which the vehicle is located.
Optionally, the determining the road condition mode of the vehicle includes:
receiving a selection instruction of a road condition mode, wherein the road condition mode comprises the first road condition mode and the second road condition mode;
and determining the road condition mode of the vehicle according to the selection instruction.
According to a second aspect of embodiments of the present disclosure, there is provided a data processing apparatus comprising:
the determining module is used for determining a road condition mode of a vehicle, and acquiring perception data and standard definition SD map data when the vehicle is determined to be in a first road condition mode, wherein the first road condition mode is a road condition which is not covered by high-precision HD map data;
the fusion module is used for carrying out data fusion on the perception data and the SD map data to obtain fusion map data;
and the display module is used for displaying the fusion map data.
Optionally, the determining module is further configured to:
when the vehicle is determined to be switched from the first road condition mode to a second road condition mode, acquiring high-definition HD map data, wherein the second road condition mode is a road condition covered by the high-definition HD map data;
and the display module is also used for displaying the high-definition HD map data.
Optionally, the fusion module performs data fusion on the perceived data and the SD map data in the following manner to obtain fused map data:
three-dimensional rendering is carried out on the perception data to obtain three-dimensional perception data, and three-dimensional rendering is carried out on the SD map data to obtain three-dimensional SD map data;
performing coordinate transformation on the three-dimensional perception data and the three-dimensional SD map data to obtain target three-dimensional perception data and target three-dimensional SD map data under a target coordinate system;
and fusing the target three-dimensional perception data and the target three-dimensional SD map data to obtain the fused map data.
Optionally, the SD map data includes surrounding environment data outside of a road where the vehicle is located;
the fusion module performs three-dimensional rendering on the SD map data in the following manner:
and performing three-dimensional rendering on the surrounding environment data based on an ear clipping method.
Optionally, the perceived data includes at least road data and obstacle data of a road on which the vehicle is located.
Optionally, the determining module determines the road condition mode in which the vehicle is located by:
receiving a selection instruction of a road condition mode, wherein the road condition mode comprises the first road condition mode and the second road condition mode;
and determining the road condition mode of the vehicle according to the selection instruction.
According to a third aspect of embodiments of the present disclosure, there is provided a vehicle comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of the first aspects of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of any of the first aspects of the present disclosure.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: when the vehicle is determined to be driving in a first road condition mode not covered by HD map data, perception data and SD map data are acquired; road data and obstacle data of the road on which the vehicle is driving are obtained from the perception data; and the perception data and the SD map data are then fused, so that the resulting fused map data contain the road data and obstacle data of the driving road as well as the SD map data describing the surrounding environment beyond the road. This ensures that, even for road conditions not covered by HD map data, the driving environment can be fed back to the user comprehensively and in real time, the displayed environment is more realistic and vivid, and the user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of data processing according to an exemplary embodiment.
Fig. 2 is a block diagram of a data processing apparatus according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating an apparatus for data processing according to an exemplary embodiment.
FIG. 4 is a functional block diagram of a vehicle, shown in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed in compliance with the applicable data protection laws and policies of the relevant country and with the authorization of the owner of the corresponding device.
In the related art, when a vehicle travels on closed road conditions such as highways and urban expressways, SR driving environment simulation and display technology can render an HD map of the highway or urban expressway and present lane-level road condition data of the driving road to the driver.
However, when the vehicle travels on road conditions that HD map data do not cover, lane-level road condition data such as the road surface and lane lines cannot be simulated and restored in real time. The present disclosure therefore provides a data processing method: when the vehicle is determined to be driving in a first road condition mode not covered by HD map data, perception data and SD map data are acquired; road data and obstacle data of the driving road are obtained from the perception data; and the perception data and the SD map data are then fused, so that the resulting fused map data contain the road data and obstacle data of the driving road as well as the SD map data describing the surrounding environment beyond the road. In this way, even for road conditions not covered by HD map data, the driving environment can be fed back to the user comprehensively and in real time, the displayed environment is more realistic and vivid, and the user experience is improved.
Fig. 1 is a flow chart of a data processing method according to an exemplary embodiment. As shown in Fig. 1, the method includes the following steps.
In step S11, a road condition mode in which the vehicle is located is determined, and when it is determined that the vehicle is in the first road condition mode, the sensing data and the standard definition SD map data are acquired.
The road condition modes may include a first road condition mode not covered by HD map data, for example urban inner-city roads, and a second road condition mode covered by HD map data, for example highways and urban expressways.
In one embodiment, the road condition mode in which the vehicle is driving may be determined by receiving a user selection instruction for the road condition mode. The selection instruction may be triggered from a touch area on the central control panel or from a button on the steering wheel, and may be a selection instruction for the first road condition mode (not covered by HD map data) or for the second road condition mode (covered by HD map data).
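A minimal sketch of such mode handling is shown below; the enum values, instruction strings, and function name are illustrative assumptions and not part of the disclosed implementation:

from enum import Enum, auto

class RoadConditionMode(Enum):
    FIRST = auto()   # road conditions not covered by HD map data (e.g. urban inner roads)
    SECOND = auto()  # road conditions covered by HD map data (e.g. highways, expressways)

def determine_mode(selection_instruction: str) -> RoadConditionMode:
    """Map a user selection instruction (from the touch area or the
    steering-wheel button) to the road condition mode of the vehicle."""
    if selection_instruction == "first_mode":
        return RoadConditionMode.FIRST
    return RoadConditionMode.SECOND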
The perception data and the standard-definition (SD) map data are data that have undergone gridding processing, which facilitates subsequent rendering and other operations.
The perception data may be, for example, data collected by a camera, a lidar, and other sensors of the vehicle, and may include road data and obstacle data of the road on which the vehicle is located.
In step S12, the perception data and the SD map data are fused to obtain fused map data.
In one embodiment, the perception data and the SD map data may be fused to obtain the fused map data, for example, as follows:
the perception data are rendered in three dimensions to obtain three-dimensional perception data, and the SD map data are rendered in three dimensions to obtain three-dimensional SD map data; the three-dimensional perception data and the three-dimensional SD map data are coordinate-transformed into a target coordinate system to obtain target three-dimensional perception data and target three-dimensional SD map data; and the target three-dimensional perception data and the target three-dimensional SD map data are merged to obtain the fused map data.
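The following sketch illustrates this render-transform-merge flow under assumed data structures; the class and function names (Mesh3D, FusedScene, to_target_frame, fuse) are hypothetical stand-ins for whatever scene representation the rendering engine actually uses:

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class Mesh3D:
    vertices: List[Point3D]   # vertex positions in the source coordinate system
    source: str               # "perception" or "sd_map"

@dataclass
class FusedScene:
    meshes: List[Mesh3D] = field(default_factory=list)

def to_target_frame(mesh: Mesh3D, transform: Callable[[Point3D], Point3D]) -> Mesh3D:
    """Re-express every vertex of a rendered mesh in the target coordinate
    system (e.g. a UTM frame), leaving the mesh topology unchanged."""
    return Mesh3D([transform(v) for v in mesh.vertices], mesh.source)

def fuse(perception_meshes: List[Mesh3D],
         sd_map_meshes: List[Mesh3D],
         transform: Callable[[Point3D], Point3D]) -> FusedScene:
    """Merge perception meshes and SD map meshes into one scene after bringing
    both into the same target frame; the scene is what is displayed in step S13."""
    scene = FusedScene()
    for mesh in perception_meshes + sd_map_meshes:
        scene.meshes.append(to_target_frame(mesh, transform))
    return scene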
The perception data at least comprise road data and obstacle data of the road on which the vehicle is located. The road data may include the road surface and lane line data of the road. The obstacle data may include other vehicles on the road, pedestrians, motorcycles, traffic lights, and the like. The three-dimensional rendering of the perception data may include, for example, rendering the road surface of the road on which the vehicle is located, rendering the lane lines of the road, and rendering obstacles such as other vehicles, pedestrians, motorcycles, and traffic lights on the road.
The SD map data include surrounding environment data beyond the road on which the vehicle is located, such as buildings, green areas, and water systems. In one embodiment, the surrounding environment data may be rendered in three dimensions based on an ear clipping (ear-cut) triangulation method.
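As an illustration of ear clipping (not the patented implementation), the sketch below triangulates a simple counter-clockwise 2-D footprint polygon, such as a building outline from the SD map, into triangles that a renderer could extrude and draw; all names and the sample coordinates are assumptions:

def _cross(o, a, b):
    """2-D cross product of vectors o->a and o->b (positive for a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _inside(p, a, b, c):
    """True if point p lies inside or on triangle (a, b, c)."""
    d1, d2, d3 = _cross(a, b, p), _cross(b, c, p), _cross(c, a, p)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def ear_clip(polygon):
    """Triangulate a simple polygon given as counter-clockwise (x, y) vertices.
    Returns a list of index triples into the input polygon."""
    idx = list(range(len(polygon)))
    triangles = []
    while len(idx) > 3:
        for i in range(len(idx)):
            p, c, n = idx[i - 1], idx[i], idx[(i + 1) % len(idx)]
            a, b, v = polygon[p], polygon[c], polygon[n]
            if _cross(a, b, v) <= 0:
                continue  # reflex or degenerate corner: not an ear
            if any(_inside(polygon[k], a, b, v) for k in idx if k not in (p, c, n)):
                continue  # another polygon vertex lies inside: not an ear
            triangles.append((p, c, n))
            del idx[i]
            break
        else:
            break  # self-intersecting or degenerate input: stop early
    triangles.append(tuple(idx))
    return triangles

# e.g. a rectangular building footprint (assumed coordinates)
print(ear_clip([(0, 0), (4, 0), (4, 3), (0, 3)]))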
To make the fused map data more accurate and the restored driving environment more realistic, in the present disclosure the three-dimensional perception data and the three-dimensional SD map data may both be coordinate-transformed into a Universal Transverse Mercator (UTM) grid coordinate system, regardless of the coordinate system in which they were originally expressed, for example the GCJ-02 ("Mars") coordinate system or the World Geodetic System 1984 (WGS-84) coordinate system.
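As an illustrative sketch (assuming the widely used pyproj library, which the disclosure does not name), WGS-84 longitude/latitude can be projected into a UTM zone so that both data sources share one planar metric frame; a GCJ-02 source would first have to be converted to WGS-84, which is omitted here:

from pyproj import Transformer

# EPSG:32650 is UTM zone 50N; in practice the zone would be chosen from the
# vehicle's current longitude.
_to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32650", always_xy=True)

def wgs84_to_utm(lon_deg: float, lat_deg: float):
    """Return (easting, northing) in metres for a WGS-84 point."""
    return _to_utm.transform(lon_deg, lat_deg)

# e.g. an SD map vertex near 116.40 E, 39.90 N (assumed coordinates)
easting, northing = wgs84_to_utm(116.40, 39.90)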
In step S13, the fused map data is displayed.
In the exemplary embodiments of the present disclosure, when the vehicle is determined to be driving in the first road condition mode not covered by HD map data, perception data and SD map data are acquired; road data and obstacle data of the driving road are obtained from the perception data; and the perception data and the SD map data are fused, so that the resulting fused map data contain the road data and obstacle data of the driving road as well as the SD map data describing the surrounding environment beyond the road. This ensures that, even for road conditions not covered by HD map data, the driving environment can be fed back to the user comprehensively and in real time, the displayed environment is more realistic and vivid, and the user experience is improved.
Fig. 2 is a block diagram of a data processing apparatus 200, shown according to an exemplary embodiment. Referring to fig. 2, the apparatus includes a determination module 201, a fusion module 202, and a presentation module 203.
The determining module 201 is configured to determine a road condition mode in which a vehicle is located, and obtain sensing data and standard definition SD map data when the vehicle is determined to be in a first road condition mode, where the first road condition mode is a road condition not covered by high-precision HD map data;
the fusion module 202 is configured to perform data fusion on the perceived data and the SD map data to obtain fused map data;
and the display module 203 is configured to display the fused map data.
Optionally, the determining module 201 is further configured to:
when the vehicle is determined to be switched from the first road condition mode to a second road condition mode, acquiring high-definition HD map data, wherein the second road condition mode is a road condition covered by the high-definition HD map data;
the display module 203 is further configured to display the high-definition HD map data.
Optionally, the fusion module 202 performs data fusion on the sensing data and the SD map data in the following manner to obtain fused map data:
three-dimensional rendering is carried out on the perception data to obtain three-dimensional perception data, and three-dimensional rendering is carried out on the SD map data to obtain three-dimensional SD map data;
performing coordinate transformation on the three-dimensional perception data and the three-dimensional SD map data to obtain target three-dimensional perception data and target three-dimensional SD map data under a target coordinate system;
and fusing the target three-dimensional perception data and the target three-dimensional SD map data to obtain the fused map data.
Optionally, the SD map data includes surrounding environment data outside of a road where the vehicle is located;
the fusion module 202 performs three-dimensional rendering on the SD map data in the following manner:
and performing three-dimensional rendering on the surrounding environment data based on an ear clipping method.
Optionally, the perceived data includes at least road data and obstacle data of a road on which the vehicle is located.
Optionally, the determining module 201 determines the road condition mode in which the vehicle is located by:
receiving a selection instruction of a road condition mode, wherein the road condition mode comprises the first road condition mode and the second road condition mode;
and determining the road condition mode of the vehicle according to the selection instruction.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be repeated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the data processing method provided by the present disclosure.
Fig. 3 is a block diagram illustrating an apparatus 800 for data processing according to an example embodiment. For example, apparatus 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 3, apparatus 800 may include one or more of the following components: a processing component 802, a first memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more first processors 820 to execute instructions to perform all or part of the steps of the data processing methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The first memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The first memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the first memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
Input/output interface 812 provides an interface between processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect the on/off state of the device 800 and the relative positioning of components such as the display and keypad of the device 800, and may also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the apparatus 800 and other devices, either in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the data processing methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as first memory 804, including instructions executable by first processor 820 of apparatus 800 to perform the data processing method described above. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Referring to fig. 4, fig. 4 is a functional block diagram of a vehicle 600 according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous mode. For example, the vehicle 600 may obtain environmental information of its surroundings through the perception system 620 and derive an automatic driving strategy based on analysis of the surrounding environmental information to achieve full automatic driving, or present the analysis results to the user to achieve partial automatic driving.
The vehicle 600 may include various subsystems, such as an infotainment system 610, a perception system 620, a decision control system 630, a drive system 640, and a computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the subsystems and components of vehicle 600 may be interconnected via wires or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system, which may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee, or may use other wireless protocols, such as various vehicle communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may support public and/or private data communication between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone, and audio equipment. Based on the entertainment system, a user may listen to the radio or play music in the vehicle; alternatively, a mobile phone may communicate with the vehicle and have its screen projected onto the display device. The display device may be a touch screen, and the user may operate it by touching the screen.
In some cases, the user's voice signal may be acquired through the microphone, and certain controls of the vehicle 600 may be carried out by the user based on analysis of the voice signal, such as adjusting the temperature inside the vehicle. In other cases, music may be played to the user through the speakers.
The navigation system 613 may include a map service provided by a map provider to provide navigation of a travel route for the vehicle 600, and the navigation system 613 may be used with the global positioning system 621 and the inertial measurement unit 622 of the vehicle. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The perception system 620 may include several types of sensors that sense information about the environment surrounding the vehicle 600. For example, the perception system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 622, a lidar 623, a millimeter wave radar 624, an ultrasonic radar 625, and a camera 626. The perception system 620 may also include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (location, shape, direction, speed, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 600.
The global positioning system 621 is used to estimate the geographic location of the vehicle 600.
The inertial measurement unit 622 is configured to sense a change in the pose of the vehicle 600 based on inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of an accelerometer and a gyroscope.
The lidar 623 uses a laser to sense objects in the environment in which the vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, millimeter-wave radar 624 may be used to sense the speed and/or heading of an object in addition to sensing the object.
The ultrasonic radar 625 may utilize ultrasonic signals to sense objects around the vehicle 600.
The image pickup device 626 is used to capture image information of the surrounding environment of the vehicle 600. The image capturing device 626 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, etc., and the image information acquired by the image capturing device 626 may include still images or video stream information.
The decision control system 630 includes a computing system 631 that makes analysis decisions based on information acquired by the perception system 620, and the decision control system 630 also includes a vehicle controller 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, throttle 634, and braking system 635 for controlling the vehicle 600.
The computing system 631 may be operable to process and analyze the various information acquired by the perception system 620 in order to identify targets, objects, and/or features in the environment surrounding the vehicle 600. The targets may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles. The computing system 631 may use object recognition algorithms, structure from motion (SfM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the various information acquired and derive control strategies for the vehicle.
The vehicle controller 632 may be configured to coordinate control of the power battery and the engine 641 of the vehicle to enhance the power performance of the vehicle 600.
Steering system 633 is operable to adjust the direction of travel of vehicle 600. For example, in one embodiment, it may be a steering wheel system.
Throttle 634 is used to control the operating speed of engine 641 and thereby the speed of vehicle 600.
The braking system 635 is used to control deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheels 644. In some embodiments, the braking system 635 may convert kinetic energy of the wheels 644 into electrical current. The braking system 635 may take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered movement of the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, or a hybrid engine of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transfer mechanical power from the engine 641 to wheels 644. The transmission 643 may include a gearbox, a differential, and a driveshaft. In one embodiment, the transmission 643 may also include other devices, such as a clutch. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 644.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 may include at least one second processor 651, and the second processor 651 may execute instructions 653 stored in a non-transitory computer-readable medium, such as a second memory 652. In some embodiments, computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of vehicle 600 in a distributed manner.
The second processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the second processor 651 may also include, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof. Although FIG. 4 functionally illustrates a processor, memory, and other elements of a computer in the same block, it will be understood by those of ordinary skill in the art that the processor, computer, or memory may in fact comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard disk drive or other storage medium located in a different housing than the computer. Thus, references to a processor or computer will be understood to include references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the retarding component, may each have their own processor that performs only calculations related to the component-specific functions.
In the present disclosure, the second processor 651 may perform the above-described data processing method.
In various aspects described herein, the second processor 651 can be located remotely from and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle and others are performed by a remote processor, including taking the necessary steps to perform a single maneuver.
In some embodiments, the second memory 652 may contain instructions 653 (e.g., program logic), the instructions 653 being executable by the second processor 651 to perform various functions of the vehicle 600. The second memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, the drive system 640.
In addition to instructions 653, the second memory 652 may also store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 650 may control the functions of the vehicle 600 based on inputs received from various subsystems (e.g., the drive system 640, the perception system 620, and the decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, computing platform 650 is operable to provide control over many aspects of vehicle 600 and its subsystems.
Alternatively, one or more of these components may be mounted separately from or associated with vehicle 600. For example, the second memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
Alternatively, the above components are only an example, and in practical applications, components in the above modules may be added or deleted according to actual needs, and fig. 4 should not be construed as limiting the embodiments of the present disclosure.
An autonomous car traveling on a road, such as the vehicle 600 above, may identify objects within its surrounding environment to determine adjustments to the current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently and based on its respective characteristics, such as its current speed, acceleration, spacing from the vehicle, etc., may be used to determine the speed at which the autonomous car is to adjust.
Alternatively, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., computing system 631, computing platform 650) may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Since the behaviors of the identified objects may depend on one another, all of the identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified object; in other words, the autonomous car can determine what stable state the vehicle needs to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road on which it is traveling, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on a roadway).
The vehicle 600 may be various types of traveling tools, such as a car, a truck, a motorcycle, a bus, a ship, an airplane, a helicopter, a recreational vehicle, a train, etc., and embodiments of the present disclosure are not particularly limited.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned data processing method when being executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. A method of data processing, comprising:
determining a road condition mode of a vehicle, and acquiring perception data and standard definition SD map data when the vehicle is determined to be in a first road condition mode, wherein the first road condition mode is a road condition which is not covered by high-precision HD map data;
performing data fusion on the perception data and the SD map data to obtain fusion map data;
displaying the fusion map data;
the data fusion of the perception data and the SD map data is performed to obtain fused map data, which comprises the following steps:
three-dimensional rendering is carried out on the perception data to obtain three-dimensional perception data, and three-dimensional rendering is carried out on the SD map data to obtain three-dimensional SD map data;
performing coordinate transformation on the three-dimensional perception data and the three-dimensional SD map data to obtain target three-dimensional perception data and target three-dimensional SD map data under a target coordinate system;
and fusing the target three-dimensional perception data and the target three-dimensional SD map data to obtain the fused map data.
2. The method according to claim 1, wherein the method further comprises:
when the vehicle is determined to be switched from the first road condition mode to a second road condition mode, acquiring high-definition HD map data, wherein the second road condition mode is a road condition covered by the high-definition HD map data;
and displaying the high-definition HD map data.
3. The method according to claim 1, wherein the SD map data includes surrounding environment data outside a road on which the vehicle is located;
the three-dimensional rendering of the SD map data includes:
and performing three-dimensional rendering on the surrounding environment data based on an ear clipping method.
4. The method of claim 1, wherein the perceived data includes at least road data and obstacle data for a road on which the vehicle is located.
5. The method of claim 2, wherein determining the road condition mode in which the vehicle is located comprises:
receiving a selection instruction of a road condition mode, wherein the road condition mode comprises the first road condition mode and the second road condition mode;
and determining the road condition mode of the vehicle according to the selection instruction.
6. A data processing apparatus, comprising:
the determining module is used for determining a road condition mode of a vehicle, and acquiring perception data and standard definition SD map data when the vehicle is determined to be in a first road condition mode, wherein the first road condition mode is a road condition which is not covered by high-precision HD map data;
the fusion module is used for carrying out data fusion on the perception data and the SD map data to obtain fusion map data;
the display module is used for displaying the fusion map data;
the fusion module performs data fusion on the perception data and the SD map data in the following manner to obtain fusion map data:
three-dimensional rendering is carried out on the perception data to obtain three-dimensional perception data, and three-dimensional rendering is carried out on the SD map data to obtain three-dimensional SD map data;
performing coordinate transformation on the three-dimensional perception data and the three-dimensional SD map data to obtain target three-dimensional perception data and target three-dimensional SD map data under a target coordinate system;
and fusing the target three-dimensional perception data and the target three-dimensional SD map data to obtain the fused map data.
7. The apparatus of claim 6, wherein the determining module is further configured to:
when the vehicle is determined to be switched from the first road condition mode to a second road condition mode, acquiring high-definition HD map data, wherein the second road condition mode is a road condition covered by the high-definition HD map data;
and the display module is also used for displaying the high-definition HD map data.
8. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 5.
9. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1 to 5.
CN202210843898.3A 2022-07-18 2022-07-18 Data processing method, device, vehicle and storage medium Active CN115221260B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210843898.3A CN115221260B (en) 2022-07-18 2022-07-18 Data processing method, device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210843898.3A CN115221260B (en) 2022-07-18 2022-07-18 Data processing method, device, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN115221260A (en) 2022-10-21
CN115221260B (en) 2024-02-09

Family

ID=83612527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210843898.3A Active CN115221260B (en) 2022-07-18 2022-07-18 Data processing method, device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115221260B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110873568A (en) * 2018-08-30 2020-03-10 百度在线网络技术(北京)有限公司 High-precision map generation method and device and computer equipment
CN112884892A (en) * 2021-02-26 2021-06-01 武汉理工大学 Unmanned mine car position information processing system and method based on road side device
WO2021226921A1 (en) * 2020-05-14 2021-11-18 Harman International Industries, Incorporated Method and system of data processing for autonomous driving
CN113706702A (en) * 2021-08-11 2021-11-26 重庆九洲星熠导航设备有限公司 Mining area three-dimensional map construction system and method

Also Published As

Publication number Publication date
CN115221260A (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
CN115330923B (en) Point cloud data rendering method and device, vehicle, readable storage medium and chip
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN114863717B (en) Parking stall recommendation method and device, storage medium and vehicle
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN114756700B (en) Scene library establishing method and device, vehicle, storage medium and chip
CN114880408A (en) Scene construction method, device, medium and chip
CN114537450A (en) Vehicle control method, device, medium, chip, electronic device and vehicle
CN115221260B (en) Data processing method, device, vehicle and storage medium
CN114842454B (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
CN115221261A (en) Map data fusion method and device, vehicle and storage medium
CN114802217B (en) Method and device for determining parking mode, storage medium and vehicle
CN115219151B (en) Vehicle testing method, system, electronic equipment and medium
CN114789723B (en) Vehicle running control method and device, vehicle, storage medium and chip
CN114771514B (en) Vehicle running control method, device, equipment, medium, chip and vehicle
CN115535004B (en) Distance generation method, device, storage medium and vehicle
CN115042813B (en) Vehicle control method and device, storage medium and vehicle
CN115205804A (en) Image processing method, image processing apparatus, vehicle, medium, and chip
CN114964294A (en) Navigation method, navigation device, storage medium, electronic equipment, chip and vehicle
CN114954528A (en) Vehicle control method, device, vehicle, storage medium and chip
CN114987549A (en) Vehicle control method, device, storage medium and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant