WO2021134357A1 - Perception information processing method and apparatus, computer device, and storage medium - Google Patents

Perception information processing method and apparatus, computer device, and storage medium

Info

Publication number
WO2021134357A1
WO2021134357A1 · PCT/CN2019/130191 · CN2019130191W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
feature
map
prediction
obstacle
Prior art date
Application number
PCT/CN2019/130191
Other languages
English (en)
French (fr)
Inventor
何明
叶茂盛
邹晓艺
吴伟
许双杰
许家妙
曹通易
Original Assignee
深圳元戎启行科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳元戎启行科技有限公司
Priority to CN201980037292.7A (granted as CN113383283B)
Priority to PCT/CN2019/130191
Publication of WO2021134357A1

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 — Control of position or course in two dimensions

Definitions

  • This application relates to a perception information processing method and apparatus, a computer device, and a storage medium.
  • A perception information processing method, apparatus, computer device, and storage medium are provided that can improve the computing efficiency of a computer device.
  • a method for processing perception information including:
  • the perception information includes original point cloud signals and map information
  • the point cloud feature information and the map feature image are input into the trained prediction model, which performs prediction operations on them and outputs the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
  • a perceptual information processing device including:
  • the first acquisition module is used to acquire obstacle detection tasks, obstacle trajectory prediction tasks, and driving path planning tasks;
  • the second acquisition module is configured to acquire perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving route planning task, where the perception information includes original point cloud signals and map information;
  • the first extraction module is configured to perform feature extraction on the original point cloud signal to obtain point cloud feature information
  • the second extraction module is used to perform feature extraction on the map information to obtain a map feature image
  • the calculation module is used to input the point cloud feature information and the map feature image into the trained prediction model, perform prediction calculations on them through the prediction model, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
  • a computer device including a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps:
  • the perception information includes original point cloud signals and map information
  • the point cloud feature information and the map feature image are input into the trained prediction model, which performs prediction operations on them and outputs the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, they cause the one or more processors to perform the following steps:
  • the perception information includes original point cloud signals and map information
  • the point cloud feature information and the map feature image are input into the trained prediction model, which performs prediction operations on them and outputs the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
  • Fig. 1 is a diagram of an application environment of the perception information processing method in one or more embodiments.
  • Fig. 2 is a schematic flowchart of a perception information processing method in one or more embodiments.
  • Fig. 3 is a schematic flowchart of the steps of performing prediction operations on point cloud feature information and map feature images through a trained prediction model in one or more embodiments.
  • Fig. 4 is a block diagram of a perception information processing apparatus in one or more embodiments.
  • Fig. 5 is a block diagram of a computer device in one or more embodiments.
  • the perception information processing method provided in this application can be applied to the application environment shown in Fig. 1.
  • the on-board sensor 102 and the first on-board computer device 104 are connected through the network
  • the second on-board computer device 106 and the first on-board computer device 104 are connected through the network.
  • the first vehicle-mounted computer device 104 acquires obstacle detection tasks, obstacle trajectory prediction tasks, and travel path planning tasks.
  • the first vehicle-mounted computer device may be referred to as a computer device for short.
  • the computer device 104 obtains perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving route planning task.
  • the perception information includes the original point cloud signal obtained from the on-board sensor 102 according to the obstacle detection task, and the map information obtained from the second on-board computer device 106 according to the obstacle trajectory prediction task and the driving route planning task.
  • the vehicle-mounted sensor may be a lidar.
  • the second onboard computer device may be a positioning device.
  • the computer device 104 performs feature extraction on the original point cloud signal to obtain point cloud feature information.
  • the computer device 104 performs feature extraction on the map information to obtain a map feature image.
  • the computer device 104 inputs the point cloud feature information and the map feature image into the trained prediction model, performs prediction operations on them through the prediction model, and outputs the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
  • a method for processing perception information is provided. Taking the method as applied to the computer device in Fig. 1 as an example, the method includes the following steps:
  • Step 202 Obtain an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task.
  • Step 204 Obtain perception information according to obstacle detection tasks, obstacle trajectory prediction tasks, and driving route planning tasks, where the perception information includes original point cloud signals and map information.
  • the surrounding environment can be scanned by the lidar installed on the vehicle to obtain the corresponding original point cloud signal.
  • the original point cloud signal may be a three-dimensional point cloud signal.
  • Map information can also be generated by a positioning device installed on the vehicle.
  • the map information may include road information, location information of the vehicle on the map, and so on.
  • the map information may be a high-precision map.
  • after the computer device obtains the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, it can obtain the corresponding perception information according to these tasks.
  • Perception information includes original point cloud signals and map information.
  • the computer device obtains the original point cloud data collected by the lidar according to the obstacle detection task.
  • the original point cloud signal is the point cloud signal collected by the lidar in the visible range. The visible range of different lidars can be different.
  • the computer equipment obtains the map information from the positioning equipment according to the obstacle trajectory prediction task and the driving route planning task.
  • Step 206 Perform feature extraction on the original point cloud signal to obtain point cloud feature information.
  • the computer equipment performs feature extraction on the original point cloud signal.
  • a rasterization processing method can be used to extract the point cloud feature information from the original point cloud signal.
  • for example, in the scenario of real-time monitoring during autonomous driving, rasterization processing can be used.
  • the computer device determines the signal area corresponding to the original point cloud signal according to the acquired original point cloud signal.
  • the signal area can be the smallest signal space that contains all of the original point cloud signals. For example, for an original point cloud signal collected by a lidar with a visible range of 200 m, the corresponding signal area is 400 m × 400 m × 10 m (length × width × height).
  • the computer device can divide the signal area where the original point cloud signal is located according to the preset size, thereby obtaining multiple grid cells.
  • the size of the preset size may represent the size of the grid unit.
  • after the computer device divides the signal area, it can assign the original point cloud data to the corresponding grid cells.
  • the computer device performs feature extraction on the original point cloud signal in the grid unit, and then obtains the point cloud feature information.
  • the point cloud feature information may include the number of points in the original point cloud signal, the maximum height of the original point cloud signal, the minimum height of the original point cloud signal, the average height of the original point cloud signal, and the height variance of the original point cloud signal.
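  • the per-cell feature extraction described above can be sketched as follows; numpy, the 2 m cell size, and the 400 m × 400 m area are illustrative assumptions, not values fixed by this application:

```python
import numpy as np

def extract_grid_features(points, cell_size=2.0, area=(400.0, 400.0)):
    """Rasterize a point cloud into 2-D grid cells and compute per-cell
    height statistics.

    points: (N, 3) array of x/y/z coordinates in metres.
    cell_size and area are illustrative; the text only requires a preset
    size and a signal area covering the lidar's visible range.
    """
    half_x, half_y = area[0] / 2.0, area[1] / 2.0
    nx, ny = int(area[0] / cell_size), int(area[1] / cell_size)
    # Assign every point to a grid cell, clipping to keep indices valid.
    ix = np.clip(((points[:, 0] + half_x) / cell_size).astype(int), 0, nx - 1)
    iy = np.clip(((points[:, 1] + half_y) / cell_size).astype(int), 0, ny - 1)
    # Five feature channels named in the text: point count, max height,
    # min height, mean height, height variance.
    feats = np.zeros((nx, ny, 5), dtype=np.float32)
    flat = ix * ny + iy
    for cell in np.unique(flat):
        z = points[flat == cell, 2]
        cx, cy = divmod(int(cell), ny)
        feats[cx, cy] = (len(z), z.max(), z.min(), z.mean(), z.var())
    return feats
```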
  • Step 208 Perform feature extraction on the map information to obtain a map feature image.
  • the computer equipment extracts map elements from the map information and renders the map elements to obtain map feature images.
  • Map elements can include lane lines, stop lines, pedestrian passages, traffic lights, traffic signs, etc.
  • performing feature extraction on map information to obtain map feature images includes: extracting map elements from the map information; rendering corresponding map elements according to multiple element channels to obtain map feature images.
  • the computer device obtains the element channel corresponding to each map element and renders the map element into its corresponding target color value according to that channel, thereby rendering map elements such as lane lines, stop lines, pedestrian crossings, traffic lights, and traffic signs into a map feature image.
  • the element channel may include three color channels of red (Red), green (Green), and blue (Blue).
  • the map feature image may be an RGB (Red, Green, Blue) image.
  • the computer device extracts map elements from the map information and renders the corresponding map elements according to multiple element channels to obtain the map feature image. Since the map feature image contains the road information the vehicle encounters while driving, it can be used to predict target trajectories and plan the vehicle's driving path.
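  • a minimal sketch of rendering map elements into an RGB map feature image; the colour table, function name, and pixel-list input format are illustrative assumptions, not defined in this application:

```python
import numpy as np

# Hypothetical colour assignments; the text only states that each map
# element is rendered to a target colour value via its element channel.
ELEMENT_COLORS = {
    "lane_line":     (255, 255, 255),
    "stop_line":     (255, 0, 0),
    "crosswalk":     (0, 0, 255),
    "traffic_light": (0, 255, 0),
}

def render_map_feature_image(elements, size=(512, 512)):
    """elements: iterable of (element_type, pixel_coords) pairs, where
    pixel_coords lists (row, col) positions already projected into image
    space. Returns an RGB map feature image."""
    img = np.zeros((size[0], size[1], 3), dtype=np.uint8)
    for etype, pixels in elements:
        for r, c in pixels:
            img[r, c] = ELEMENT_COLORS[etype]
    return img
```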
  • Step 210 Input the point cloud feature information and the map feature image into the trained prediction model, perform prediction calculations on them through the prediction model, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
  • A pre-trained prediction model is stored on the computer device.
  • the prediction model is obtained by training on a large amount of sample data.
  • the prediction model can use a variety of deep learning neural network models, for example, deep convolutional neural network models, Hopfield networks, etc.
  • the computer equipment converts the point cloud feature information to obtain the point cloud feature vector.
  • the computer equipment converts the map feature image to obtain the map feature vector.
  • the computer device thus inputs the point cloud feature vector and the map feature vector into the trained prediction model.
  • the computer equipment fuses the point cloud feature vector with the map feature vector through the prediction model, and the fusion feature information can be obtained.
  • performing prediction calculations on the fusion feature information through the prediction model yields the obstacles in the surrounding environment and the location of each obstacle, the driving direction and corresponding location of each obstacle within a preset time period, and the vehicle's multiple driving paths within the preset time period together with the weight corresponding to each path.
  • the computer device outputs, through the prediction model, the obstacles in the surrounding environment and their locations as the obstacle detection result, each obstacle's driving direction and corresponding location within the preset time period as the obstacle trajectory prediction result, and the vehicle's multiple driving paths within the preset time period with their weights as the driving path planning result.
  • the obstacles in the obstacle detection result may include dynamic foreground obstacles, static foreground obstacles, road line signs, and the like.
  • the computer device obtains the obstacle detection task, the trajectory prediction task, and the driving path planning task; obtains the original point cloud signal according to the obstacle detection task and the map information according to the trajectory prediction and driving path planning tasks; and extracts the point cloud feature information corresponding to the original point cloud signal and the map feature image corresponding to the map information.
  • the computer device inputs the point cloud feature information and the map feature image into the same trained prediction model for prediction calculations, so that obstacle detection, obstacle trajectory prediction, driving path planning, and other tasks are processed in parallel within the same prediction model.
  • Parallel processing eliminates the need to repeatedly process the original point cloud signal and the map information, thereby reducing the amount of task data, improving the computing efficiency of the computer device, and improving the efficiency with which prediction results are computed.
  • the step of performing prediction operations on the point cloud feature information and the map feature image through the trained prediction model, and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task, includes:
  • Step 302 Extract the point cloud context feature corresponding to the point cloud feature information and the map context feature corresponding to the map feature image through the perception layer of the prediction model.
  • Step 304 Input the point cloud context feature and the map context feature to the semantic analysis layer, and merge the point cloud context feature and the map context feature through the semantic analysis layer to obtain fusion feature information.
  • Step 306 Input the fusion feature information into the prediction layers, predict on the fusion feature information through the prediction layers, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task. The prediction layers include a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
  • the computer device converts the point cloud feature information and the map feature image to obtain a point cloud feature vector corresponding to the point cloud feature information and a map feature vector corresponding to the map feature image.
  • the trained prediction model may include a perception layer, a semantic analysis layer, a prediction layer, and so on.
  • the computer device thus inputs the point cloud feature vector and the map feature vector to the perception layer of the trained prediction model, and extracts the point cloud context feature corresponding to the point cloud feature vector and the map context feature corresponding to the map feature vector through the perception layer.
  • the point cloud context feature and the map context feature are used as the input of the semantic analysis layer, and the point cloud context feature and the map context feature are fused through the semantic analysis layer to obtain fusion feature information.
  • the fusion feature information is used as the input of multiple prediction layers through the prediction model.
  • the prediction layer includes the prediction layer corresponding to the obstacle detection task, the prediction layer corresponding to the obstacle trajectory prediction task, and the prediction layer corresponding to the driving path planning task.
  • the prediction model performs the corresponding prediction operations on the fusion feature information through the multiple prediction layers, obtaining the obstacles in the surrounding environment and the location of each obstacle, the driving direction and corresponding location of each obstacle within the preset time period, and the vehicle's multiple driving paths within the preset time period with the weight corresponding to each path.
  • the obstacles in the surrounding environment and the position information of each obstacle are output as the obstacle detection result, and the obstacle's driving direction and corresponding position information within the preset time period are output as the obstacle trajectory prediction result.
  • the multiple driving paths of the vehicle in the preset time period and the weight corresponding to each driving path are output as the driving path planning result.
  • the computer device extracts the point cloud context feature corresponding to the point cloud feature information and the map context feature corresponding to the map feature image through the perception layer of the prediction model.
  • the point cloud context feature and the map context feature are fused through the semantic analysis layer, and the fusion feature information is then input into the multiple prediction layers, which perform the corresponding prediction operations and output the obstacle detection result, the obstacle trajectory prediction result, and the driving path planning result. Obstacle detection requires the point cloud context feature, while obstacle trajectory prediction and driving path planning require both the point cloud context feature and the map context feature.
  • the point cloud context feature and the map context feature are therefore fused through the semantic analysis layer of the prediction model, and the fused feature information is input into the prediction layer corresponding to each task, realizing parallel processing of obstacle detection, obstacle trajectory prediction, and driving path planning and further improving the computing efficiency of the computer device.
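  • the shared-trunk, multi-head structure described above (a perception branch per modality, a fusing semantic analysis layer, and one prediction layer per task) can be sketched as follows; all layer sizes, the random dense layers, and the equal fusion weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(dim_in, dim_out):
    # A single random dense layer with ReLU; stands in for the learned
    # layers of the real network, whose sizes the text does not fix.
    W = rng.normal(size=(dim_in, dim_out)) * 0.1
    return lambda x: np.maximum(x @ W, 0.0)

# Perception layer: one branch per input modality.
pc_branch, map_branch = dense(64, 32), dense(64, 32)

# One prediction head per task, all sharing the fused features.
detect_head, traj_head, path_head = dense(32, 8), dense(32, 8), dense(32, 8)

def forward(pc_feat, map_feat, w_pc=0.5, w_map=0.5):
    pc_ctx, map_ctx = pc_branch(pc_feat), map_branch(map_feat)
    # Semantic analysis layer: weighted fusion of the two context features.
    fused = (w_pc * pc_ctx + w_map * map_ctx) / 2.0
    # All three tasks are computed in parallel from the same fused features.
    return detect_head(fused), traj_head(fused), path_head(fused)
```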
  • fusing the point cloud context feature and the map context feature through the semantic analysis layer to obtain the fusion feature information includes: obtaining, through the semantic analysis layer, the point cloud weight corresponding to the point cloud context feature and the map weight corresponding to the map context feature; and calculating the fusion feature information from the point cloud weight and point cloud context feature together with the map weight and map context feature.
  • the point cloud context feature and the map context feature are merged through the semantic analysis layer. Specifically, the semantic analysis layer of the prediction model obtains the point cloud weight corresponding to the point cloud context feature and the map weight corresponding to the map context feature, and the fusion feature information is calculated from these weights and features according to a preset relationship.
  • the preset relationship may be to compute a weighted sum of the point cloud context feature and the map context feature and then average the sum.
  • the computer device obtains, through the semantic analysis layer of the prediction model, the point cloud weight corresponding to the point cloud context feature and the map weight corresponding to the map context feature, and calculates the fusion feature information from the point cloud weight and point cloud context feature together with the map weight and map context feature.
  • this effectively improves the accuracy of the fusion feature information and, at the same time, allows obstacle detection, obstacle trajectory prediction, and driving path planning to be processed in parallel during automatic driving, further improving the computing efficiency of the computer device.
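  • assuming the preset relationship is the weighted sum followed by an average over the two modalities, the fusion can be written as:

```python
import numpy as np

def fuse(pc_ctx, map_ctx, w_pc, w_map):
    # Weighted sum of the two context features, then averaged over the
    # two modalities, per the preset relationship described above.
    return (w_pc * pc_ctx + w_map * map_ctx) / 2.0
```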
  • performing feature extraction on the original point cloud signal to obtain point cloud feature information includes: determining the signal area corresponding to the original point cloud signal; dividing the signal area into a plurality of grid units according to a preset size; and performing feature extraction on the corresponding original point cloud signal in each grid unit to obtain the point cloud feature information.
  • to perform feature extraction on the original point cloud signal, the computer device first needs to determine the signal area corresponding to the original point cloud signal.
  • the computer device calculates the difference between the maximum value and the minimum value of the coordinates of the original point cloud signal in the three directions of the X direction, the Y direction and the Z direction according to the original point cloud signal.
  • the computer equipment determines the length, width, and height of the signal area based on the three differences.
  • the signal area can be the smallest signal space that contains all the original point cloud signals.
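  • a minimal sketch of computing the signal area as the smallest enclosing box, assuming the point cloud is given as an N × 3 numpy array:

```python
import numpy as np

def signal_area(points):
    """Smallest axis-aligned box enclosing all points: the per-axis
    difference between the maximum and minimum coordinates, as described
    above for the X, Y, and Z directions."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return hi - lo  # (length, width, height)
```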
  • the computer device may perform rasterization processing on the original point cloud signal, thereby dividing the signal area into multiple grid units.
  • the preset size used for rasterization processing may be length × width; the length and width in the preset size can differ.
  • the computer device divides the signal area along the X direction according to the length in the preset size to obtain the first grid unit.
  • the first grid unit may be a plurality of grid units equally divided along the X direction.
  • the computer device divides the signal area along the Y direction according to the width in the preset size to obtain the second grid unit.
  • the second grid unit may be a plurality of grid units equally divided along the Y direction.
  • the height of the grid cells can be the same.
  • the computer device obtains the target grid unit according to the first grid unit and the second grid unit.
  • the order in which the signal area is divided along the directions is not limited. This approach improves the extraction efficiency of point cloud feature information when computing resources are limited and real-time requirements are high in the automatic driving mode.
  • the computer device can voxelize the original point cloud signal. Specifically, the computer device performs voxelization processing on the original point cloud signal according to the preset size.
  • the preset size can be length × width × height.
  • the length, width, and height of the preset size can be the same.
  • the computer device divides the signal area along the X direction according to the length in the preset size to obtain the first grid unit.
  • the computer device divides the signal area along the Y direction according to the width in the preset size to obtain the second grid unit.
  • the computer device can divide the signal area along the Z direction according to the height in the preset size to obtain the third grid unit.
  • the computer device generates the target grid unit according to the first grid unit, the second grid unit, and the third grid unit.
  • the order of the direction division of the signal area is not limited.
  • when the computing resources of the computer device are greater than or equal to a preset threshold, the signal area corresponding to the original point cloud signal is divided along all three directions (X, Y, and Z). This handles situations where other point cloud signals are occluded above an obstacle, so the extraction accuracy of point cloud feature information can be further improved in the automatic driving mode.
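  • the three-direction division can be sketched as a voxel-index computation; the voxel size and origin below are illustrative assumptions, since the text only requires a preset length × width × height:

```python
import numpy as np

def voxelize(points, voxel=(0.2, 0.2, 0.2), origin=(-200.0, -200.0, -5.0)):
    """Map each point (x, y, z) to an integer (ix, iy, iz) voxel index by
    offsetting from the signal-area origin and dividing by the preset size."""
    return np.floor((points - np.asarray(origin)) / np.asarray(voxel)).astype(int)
```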
  • the computer device divides the original point cloud signal into corresponding target grid units, and performs feature extraction on the location information of each point in the original point cloud signal in the target grid unit to obtain point cloud feature information.
  • the point cloud feature information includes: the number of points in the point cloud data, the maximum height of the point cloud data, the minimum height of the point cloud data, the average height of the point cloud data, and the height variance of the point cloud data.
  • the computer equipment can also arrange the point cloud feature information in rows to generate matrix units.
  • the computer equipment arranges the matrix units according to preset rules to generate a point cloud feature matrix.
  • the computer device extracts point cloud feature information by dividing the signal area corresponding to the original point cloud signal, which facilitates the subsequent parallel processing of the obstacle detection task, obstacle trajectory prediction task, and driving path planning task through the prediction model.
  • the above method further includes: determining the obstacle detection results that meet a preset condition among the obstacle detection results; extracting the corresponding target trajectories from the obstacle trajectory prediction results according to the obstacle detection results that meet the preset condition; and determining the target driving path in the driving path planning result according to the extracted target trajectories and the obstacle detection results that meet the preset condition.
  • the computer equipment obtains the obstacle detection result, the obstacle trajectory prediction result and the driving path planning result output by the prediction model.
  • Obstacle detection results can include obstacles in the surrounding environment and the location of the obstacles. Obstacles may include dynamic foreground obstacles, static foreground obstacles, road markings, backgrounds, and so on.
  • the obstacle trajectory prediction result may include the driving direction of the obstacle within the preset time period and corresponding location information.
  • the driving path planning result may include multiple driving paths of the vehicle in a preset time period and the weight corresponding to each driving path.
  • the computer device determines the obstacle detection result that meets the preset condition from the obstacle detection result.
  • the preset condition may be obstacles such as dynamic foreground obstacles and static foreground obstacles.
  • the computer device extracts the target trajectory corresponding to the obstacle detection result that meets the preset condition from the obstacle trajectory prediction result. Furthermore, the computer device extracts the corresponding driving path from the driving path planning result according to the extracted target trajectory and the obstacle detection result that meets the preset condition. The computer device selects the driving path with the largest weight among the extracted driving paths as the target driving path.
  • the computer device determines an obstacle detection result that meets a preset condition in the obstacle detection result, and extracts a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result that meets the preset condition. Then, the target driving path is determined in the driving path planning result according to the extracted target trajectory and the obstacle detection result that meets the preset conditions.
  • the optimal driving path can be selected from multiple driving paths, thereby improving the planning accuracy of the driving path.
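  • the selection procedure above can be sketched as follows; the data structures, function name, and category names are illustrative assumptions, not defined in this application:

```python
def select_target_path(detections, trajectories, paths,
                       foreground=("dynamic_foreground", "static_foreground")):
    """Pick the target trajectories and the target driving path.

    detections:   list of (obstacle_id, category) pairs;
    trajectories: dict mapping obstacle_id -> predicted trajectory;
    paths:        list of (path, weight) pairs.
    """
    # Keep only detections meeting the preset condition (foreground obstacles).
    kept = [oid for oid, cat in detections if cat in foreground]
    target_trajs = {oid: trajectories[oid] for oid in kept if oid in trajectories}
    # Among the planned paths, the one with the largest weight is the target.
    target_path = max(paths, key=lambda pw: pw[1])[0]
    return target_trajs, target_path
```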
  • a perception information processing apparatus is provided, which includes: a first acquisition module 402, a second acquisition module 404, a first extraction module 406, a second extraction module 408, and an arithmetic module 410, wherein:
  • the first acquisition module 402 is used to acquire obstacle detection tasks, obstacle trajectory prediction tasks, and driving path planning tasks.
  • the second acquisition module 404 is configured to acquire perception information according to obstacle detection tasks, obstacle trajectory prediction tasks, and driving route planning tasks.
  • the perception information includes original point cloud signals and map information.
  • the first extraction module 406 is configured to perform feature extraction on the original point cloud signal to obtain point cloud feature information.
  • the second extraction module 408 is used to perform feature extraction on map information to obtain map feature images.
  • the arithmetic module 410 is used to input the point cloud feature information and the map feature image into the trained prediction model, perform a prediction operation on them through the prediction model, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
  • the arithmetic module 410 is also used to extract, through the perception layer of the prediction model, the point cloud context feature corresponding to the point cloud feature information and the map context feature corresponding to the map feature image; to input the point cloud context feature and the map context feature into the semantic analysis layer, where they are fused to obtain fused feature information; and to input the fused feature information into the prediction layers, which perform prediction operations on it and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
  • the prediction layers include a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
  • the arithmetic module 410 is further configured to obtain, through the semantic analysis layer, the point cloud weight corresponding to the point cloud context feature and the map weight corresponding to the map context feature, and to compute the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
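One plausible reading of the weighted fusion described above — a weighted sum of the two context features, then averaged over the two sources — is sketched below. The concrete weights and the averaging step are illustrative assumptions; the application does not fix an exact formula.

```python
import numpy as np

def fuse_features(pc_feat, map_feat, w_pc, w_map):
    """Weighted-sum fusion of point cloud and map context features,
    averaged over the two feature sources (an assumed reading of the
    'weighted sum, then average' relation in the text)."""
    return (w_pc * pc_feat + w_map * map_feat) / 2.0

pc = np.array([1.0, 2.0, 3.0])   # point cloud context feature (toy values)
mp = np.array([3.0, 2.0, 1.0])   # map context feature (toy values)
fused = fuse_features(pc, mp, w_pc=0.8, w_map=0.2)
```

Here `fused` equals `(0.8*pc + 0.2*mp) / 2`, i.e. `[0.7, 1.0, 1.3]` for the toy inputs.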
  • the above-mentioned device further includes a determining module, configured to determine, among the obstacle detection results, an obstacle detection result that meets a preset condition; extract a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result that meets the preset condition; and determine a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result that meets the preset condition.
  • the first extraction module 406 is further configured to determine, according to the original point cloud signal, the signal region corresponding to the original point cloud signal; divide the signal region into a plurality of grid cells according to a preset size; and perform feature extraction on the corresponding original point cloud signal in each grid cell to obtain point cloud feature information.
  • the second extraction module 408 is used to extract map elements from the map information and render the corresponding map elements according to multiple element channels to obtain a map feature image.
  • Each module in the above perception information processing apparatus can be implemented in whole or in part by software, hardware, or a combination thereof.
  • the above-mentioned modules may be embedded, in hardware form, in or independent of the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
  • a computer device is provided, and its internal structure diagram may be as shown in FIG. 5.
  • the computer equipment includes a processor, a memory, a communication interface and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store obstacle detection results, obstacle trajectory prediction results and driving path planning results.
  • the communication interface of the computer device is used to connect and communicate with the vehicle-mounted sensor and the second vehicle-mounted computer device.
  • the computer-readable instruction is executed by the processor to realize a method for processing perceptual information.
  • FIG. 5 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied.
  • a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device is provided that includes a memory and one or more processors.
  • the memory stores computer-readable instructions.
  • when the computer-readable instructions are executed, the one or more processors perform the steps in each of the foregoing method embodiments.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions are provided.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps in each of the foregoing method embodiments.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

A perception information processing method, including: acquiring an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task; acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information; performing feature extraction on the original point cloud signal to obtain point cloud feature information; performing feature extraction on the map information to obtain a map feature image; and inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.

Description

Perception Information Processing Method, Apparatus, Computer Device, and Storage Medium — Technical Field
This application relates to a perception information processing method, apparatus, computer device, and storage medium.
Background Art
The development of artificial intelligence has driven the development of autonomous driving. During autonomous driving, the obstacles around the vehicle must be detected, their trajectories predicted, and the driving path of the vehicle planned at all times, so that the vehicle can be controlled to drive automatically. In the traditional approach, a computer device processes the obstacle detection, obstacle trajectory prediction, driving path planning, and other tasks independently, each in its own task-specific manner. Because every task needs to process certain perception information, the traditional approach processes the same perception information repeatedly, which increases the data volume of the tasks and in turn reduces the computational efficiency of the computer device.
Summary of the Invention
According to various embodiments disclosed in this application, a perception information processing method, apparatus, computer device, and storage medium capable of improving the computational efficiency of a computer device are provided.
A perception information processing method, including:
acquiring an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task;
acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information;
performing feature extraction on the original point cloud signal to obtain point cloud feature information;
performing feature extraction on the map information to obtain a map feature image; and
inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
A perception information processing apparatus, including:
a first acquisition module, configured to acquire an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task;
a second acquisition module, configured to acquire perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information;
a first extraction module, configured to perform feature extraction on the original point cloud signal to obtain point cloud feature information;
a second extraction module, configured to perform feature extraction on the map information to obtain a map feature image; and
an arithmetic module, configured to input the point cloud feature information and the map feature image into a trained prediction model, perform a prediction operation on the point cloud feature information and the map feature image through the prediction model, and output an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
A computer device, including a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps:
acquiring an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task;
acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information;
performing feature extraction on the original point cloud signal to obtain point cloud feature information;
performing feature extraction on the map information to obtain a map feature image; and
inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
acquiring an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task;
acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information;
performing feature extraction on the original point cloud signal to obtain point cloud feature information;
performing feature extraction on the map information to obtain a map feature image; and
inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
Details of one or more embodiments of this application are set forth in the drawings and the description below. Other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of this application more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of the application environment of the perception information processing method in one or more embodiments.
FIG. 2 is a schematic flowchart of the perception information processing method in one or more embodiments.
FIG. 3 is a schematic flowchart of the step of performing a prediction operation on the point cloud feature information and the map feature image through the trained prediction model in one or more embodiments.
FIG. 4 is a block diagram of the perception information processing apparatus in one or more embodiments.
FIG. 5 is a block diagram of the computer device in one or more embodiments.
Detailed Description of the Embodiments
To make the technical solutions and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it.
The perception information processing method provided by this application can be applied in the application environment shown in FIG. 1. During autonomous driving, a vehicle-mounted sensor 102 is connected to a first vehicle-mounted computer device 104 through a network, and a second vehicle-mounted computer device 106 is connected to the first vehicle-mounted computer device 104 through a network. The first vehicle-mounted computer device 104 acquires an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task. The first vehicle-mounted computer device may be referred to simply as the computer device. The computer device 104 acquires perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task. The perception information includes an original point cloud signal acquired from the vehicle-mounted sensor 102 according to the obstacle detection task, and map information acquired from the second vehicle-mounted computer device 106 according to the obstacle trajectory prediction task and the driving path planning task. The vehicle-mounted sensor may be a lidar; the second vehicle-mounted computer device may be a positioning device. The computer device 104 performs feature extraction on the original point cloud signal to obtain point cloud feature information, and performs feature extraction on the map information to obtain a map feature image. The computer device 104 inputs the point cloud feature information and the map feature image into a trained prediction model, performs a prediction operation on them through the prediction model, and outputs the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
In one embodiment, as shown in FIG. 2, a perception information processing method is provided. The method is described by taking its application to the computer device in FIG. 1 as an example, and includes the following steps:
Step 202: acquire an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task.
Step 204: acquire perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information.
During autonomous driving, the vehicle can scan the surrounding environment with a lidar mounted on it to obtain the corresponding original point cloud signal. The original point cloud signal may be a three-dimensional point cloud signal. Map information may also be generated by a positioning device mounted on the vehicle. The map information may include road information, the position of the vehicle on the map, and so on; for example, the map information may be a high-definition map. When the computer device acquires the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, it can acquire the corresponding perception information according to these tasks. The perception information includes the original point cloud signal and the map information. Specifically, the computer device acquires the original point cloud data collected by the lidar according to the obstacle detection task. The original point cloud signal is the point cloud signal collected by the lidar within its visible range; the visible range may differ between lidars. The computer device acquires the map information from the positioning device according to the obstacle trajectory prediction task and the driving path planning task.
Step 206: perform feature extraction on the original point cloud signal to obtain point cloud feature information.
The computer device performs feature extraction on the original point cloud signal. Rasterization may be used to extract the point cloud feature information from the original point cloud signal. In autonomous driving mode, rasterization may be used when the computing resources of the computer device are below a preset threshold — for example, in real-time monitoring scenarios during autonomous driving.
Specifically, the computer device determines, according to the acquired original point cloud signal, the signal region corresponding to the original point cloud signal. The signal region may be the smallest signal space containing all of the original point cloud signal. For example, the original point cloud signal collected by a lidar with a visible range of 200 m corresponds to a signal region of 400 m × 400 m × 10 m (length × width × height). The computer device can divide the signal region in which the original point cloud signal is located according to a preset size, obtaining multiple grid cells; the preset size represents the size of a grid cell. When dividing the signal region, the computer device can assign the original point cloud data to the corresponding grid cells. The computer device performs feature extraction on the original point cloud signal in each grid cell to obtain the point cloud feature information, which may include the number of points in the original point cloud signal, its maximum height, minimum height, mean height, height variance, and so on.
Step 208: perform feature extraction on the map information to obtain a map feature image.
The computer device extracts map elements from the map information and renders the map elements to obtain a map feature image. The map elements may include lane lines, stop lines, pedestrian crossings, traffic lights, traffic signs, and so on. In one embodiment, performing feature extraction on the map information to obtain the map feature image includes: extracting map elements from the map information, and rendering the corresponding map elements according to multiple element channels to obtain the map feature image. After extracting the map elements, the computer device obtains the element channels corresponding to the map elements and renders each map element into its corresponding target color value according to its element channels, thereby rendering map elements such as lane lines, stop lines, pedestrian crossings, traffic lights, and traffic signs into the map feature image. The element channels may include the three color channels red, green, and blue, and the map feature image may be an RGB image. By extracting the map elements from the map information and rendering the corresponding map elements according to multiple element channels, the computer device obtains a map feature image. Because the map feature image contains the road information encountered while the vehicle is driving, it can be used to predict target trajectories and to plan the vehicle's driving path.
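Rendering map elements into an RGB map feature image, as described above, can be sketched as follows. The particular element-to-color assignments and the pixel-list representation are hypothetical; the application only specifies that elements are rendered to target color values over RGB element channels.

```python
import numpy as np

# Hypothetical color assignment per map element type; the real
# channel/color mapping is not specified in the text.
ELEMENT_COLORS = {
    "lane_line": (255, 255, 255),
    "stop_line": (255, 0, 0),
    "crosswalk": (0, 0, 255),
}

def render_map(elements, size=(64, 64)):
    """Render map elements (each a dict with a type and a list of
    (row, col) pixels) into an RGB map feature image."""
    img = np.zeros((size[0], size[1], 3), dtype=np.uint8)
    for elem in elements:
        color = ELEMENT_COLORS[elem["type"]]
        for r, c in elem["pixels"]:
            img[r, c] = color   # paint the element's target color value
    return img

img = render_map([{"type": "stop_line", "pixels": [(10, c) for c in range(5)]}])
```

The resulting image is a 64 × 64 RGB raster with a short red stop line on row 10 and black everywhere else.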
Step 210: input the point cloud feature information and the map feature image into the trained prediction model, perform a prediction operation on the point cloud feature information and the map feature image through the prediction model, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
A pre-trained prediction model is stored on the computer device. The prediction model is obtained by training on a large amount of sample data, and can adopt a variety of deep learning neural network models, for example, a deep convolutional neural network model, a Hopfield network, and so on.
The computer device converts the point cloud feature information into a point cloud feature vector and converts the map feature image into a map feature vector, and then inputs the point cloud feature vector and the map feature vector into the trained prediction model. By fusing the point cloud feature vector and the map feature vector through the prediction model, fused feature information can be obtained. The prediction model performs prediction operations on the fused feature information to obtain the obstacles in the surrounding environment and the position of each obstacle, the driving direction and corresponding position of each obstacle within a preset time period, and multiple driving paths of the vehicle within the preset time period together with the weight of each driving path. Through the prediction model, the computer device outputs the obstacles in the surrounding environment and their positions as the obstacle detection result, outputs the driving directions and corresponding positions of the obstacles within the preset time period as the obstacle trajectory prediction result, and outputs the multiple driving paths of the vehicle within the preset time period with their weights as the driving path planning result. The obstacles in the obstacle detection result may include dynamic foreground obstacles, static foreground obstacles, road line markings, and so on.
In this embodiment, the computer device acquires the obstacle detection task, the trajectory prediction task, and the driving path planning task; acquires the original point cloud signal according to the obstacle detection task; acquires the map information according to the trajectory prediction task and the driving path planning task; and extracts the point cloud feature information corresponding to the original point cloud signal and the map feature image corresponding to the map information. Feature extraction filters out unnecessary information from the original point cloud signal and the map information, which helps improve the prediction accuracy of the subsequent prediction model. The computer device inputs the point cloud feature information and the map feature information into a single trained prediction model and performs prediction operations, so that obstacle detection, obstacle trajectory prediction, driving path planning, and the other tasks are processed in parallel in the same prediction model. The original point cloud signal and the map information do not need to be processed repeatedly, which reduces the data volume of the tasks and improves the computational efficiency of the computer device, while also improving the efficiency of producing the prediction results.
In one embodiment, as shown in FIG. 3, the step of performing a prediction operation on the point cloud feature information and the map feature image through the trained prediction model and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task includes:
Step 302: extract, through the perception layer of the prediction model, the point cloud context feature corresponding to the point cloud feature information and the map context feature corresponding to the map feature image.
Step 304: input the point cloud context feature and the map context feature into the semantic analysis layer, and fuse the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information.
Step 306: input the fused feature information into the prediction layers, predict on the fused feature information through the prediction layers, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task; the prediction layers include a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
The computer device converts the point cloud feature information and the map feature image to obtain the point cloud feature vector corresponding to the point cloud feature information and the map feature vector corresponding to the map feature image. The trained prediction model may include a perception layer, a semantic analysis layer, prediction layers, and so on. The computer device inputs the point cloud feature vector and the map feature vector into the perception layer of the trained prediction model, and extracts through the perception layer the point cloud context feature corresponding to the point cloud feature vector and the map context feature corresponding to the map feature vector. The point cloud context feature and the map context feature are used as the input of the semantic analysis layer, which fuses them to obtain fused feature information. The prediction model uses the fused feature information as the input of multiple prediction layers: a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task. The prediction model performs the corresponding prediction operations on the fused feature information through the multiple prediction layers to obtain the obstacles in the surrounding environment and the position of each obstacle, the driving direction and corresponding position of each obstacle within a preset time period, and multiple driving paths of the vehicle within the preset time period with the weight of each driving path. Through the prediction model, the obstacles and their positions are output as the obstacle detection result, the driving directions and corresponding positions of the obstacles within the preset time period are output as the obstacle trajectory prediction result, and the multiple driving paths with their weights are output as the driving path planning result.
In this embodiment, the computer device extracts, through the perception layer of the prediction model, the point cloud context feature corresponding to the point cloud feature information and the map context feature corresponding to the map feature image; fuses them through the semantic analysis layer; inputs the fused feature information into the multiple prediction layers; performs the corresponding prediction operations through the multiple prediction layers; and outputs the obstacle detection result, the obstacle trajectory prediction result, and the driving path planning result. Obstacle detection needs the point cloud context feature, while obstacle trajectory prediction and driving path planning need both the point cloud context feature and the map context feature. The semantic analysis layer of the prediction model fuses the point cloud context feature with the map context feature, and the fused feature information is input into the prediction layer corresponding to each task, so that the obstacle detection, obstacle trajectory prediction, and driving path planning tasks are processed in parallel, further improving the computational efficiency of the computer device.
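The shared-trunk, multi-head structure described above — one fused feature vector feeding a separate prediction layer per task — can be sketched in miniature as follows. The linear heads, their sizes, and the random initialization are illustrative assumptions; the application leaves the concrete network architecture open (e.g. deep convolutional networks).

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiTaskPredictor:
    """Toy sketch of the multi-head prediction stage: one fused feature
    vector is consumed in parallel by three task-specific linear
    'prediction layers' (detection, trajectory, path planning)."""
    def __init__(self, dim, out_det, out_traj, out_path):
        self.heads = {
            "detection": rng.normal(size=(dim, out_det)),
            "trajectory": rng.normal(size=(dim, out_traj)),
            "path": rng.normal(size=(dim, out_path)),
        }

    def forward(self, fused):
        # Every head sees the same fused features; no input is reprocessed.
        return {name: fused @ w for name, w in self.heads.items()}

model = MultiTaskPredictor(dim=8, out_det=4, out_traj=6, out_path=3)
outputs = model.forward(np.ones(8))   # one fused vector, three outputs
```

The point of the design is visible in the sketch: the fused features are computed once, and each task head only adds its own small prediction layer on top.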
In one embodiment, fusing the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information includes: obtaining, through the semantic analysis layer, the point cloud weight corresponding to the point cloud context feature and the map weight corresponding to the map context feature; and computing the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
After the prediction model inputs the point cloud context feature and the map context feature into the semantic analysis layer, the semantic analysis layer fuses them. Specifically, the semantic analysis layer of the prediction model obtains the point cloud weight corresponding to the point cloud context feature and the map weight corresponding to the map context feature, and then computes on the point cloud weight and point cloud context feature, and the map weight and map context feature, according to a preset relationship to obtain the fused feature information. The preset relationship may be a weighted sum of the point cloud context feature and the map context feature, with the total then averaged.
In this embodiment, the computer device obtains, through the semantic analysis layer of the prediction model, the point cloud weight corresponding to the point cloud context feature and the map weight corresponding to the map context feature, and computes the fused feature information from the point cloud weight and point cloud context feature together with the map weight and map context feature. Fusing the features according to their respective weights effectively improves the accuracy of the fused feature information, and at the same time allows obstacle detection, obstacle trajectory prediction, and driving path planning to be processed in parallel during autonomous driving, further improving the computational efficiency of the computer device.
In one embodiment, performing feature extraction on the original point cloud signal to obtain point cloud feature information includes: determining, according to the original point cloud signal, the signal region corresponding to the original point cloud signal; dividing the signal region into multiple grid cells according to a preset size; and performing feature extraction on the corresponding original point cloud signal in each grid cell to obtain the point cloud feature information.
To perform feature extraction on the original point cloud signal, the computer device first needs to determine the signal region corresponding to the original point cloud signal. The computer device computes, from the original point cloud signal, the differences between the maximum and minimum coordinate values in the X, Y, and Z directions, and determines the length, width, and height of the signal region from these three differences. The signal region may be the smallest signal space containing all of the original point cloud signal.
The signal region can be divided in multiple ways, such as rasterization or voxelization. When the computing resources of the computer device are below a preset threshold, the computer device can rasterize the original point cloud signal, dividing the signal region into multiple grid cells. Specifically, the preset size for rasterization may be length × width, where the length and width may differ. The computer device divides the signal region along the X direction according to the length in the preset size to obtain first grid cells, which may be multiple grid cells evenly divided along the X direction, and divides the signal region along the Y direction according to the width in the preset size to obtain second grid cells, which may be multiple grid cells evenly divided along the Y direction. The heights of the grid cells may be the same. The computer device obtains the target grid cells from the first grid cells and the second grid cells; the order in which the signal region is divided along the directions is not limited. This improves the efficiency of point cloud feature extraction in autonomous driving mode, when computing resources are limited and real-time requirements are high.
When the computing resources of the computer device are greater than or equal to the preset threshold, the computer device can voxelize the original point cloud signal. Specifically, the computer device voxelizes the original point cloud signal according to a preset size, which may be length × width × height; the length, width, and height may be the same. The computer device divides the signal region along the X direction according to the length in the preset size to obtain first grid cells, divides the signal region along the Y direction according to the width to obtain second grid cells, and divides the signal region along the Z direction according to the height to obtain third grid cells, and generates the target grid cells from the first, second, and third grid cells. The order in which the signal region is divided along the directions is not limited. Dividing the signal region corresponding to the original point cloud signal along all three of the X, Y, and Z directions when computing resources are greater than or equal to the preset threshold handles well the case in which point cloud signals above an obstacle occlude other point cloud signals, further improving the accuracy of point cloud feature extraction in autonomous driving mode.
The computer device assigns the original point cloud signal to the corresponding target grid cells and performs feature extraction on the position of each point of the original point cloud signal within its target grid cell to obtain point cloud feature information, including: the number of points in the point cloud data, the maximum height, the minimum height, the mean height, the height variance, and so on. The computer device can also arrange the point cloud feature information by rows to generate matrix units, and arrange the matrix units according to a preset rule to generate a point cloud feature matrix.
In this embodiment, the computer device extracts the point cloud feature information by dividing the signal region corresponding to the original point cloud signal, which facilitates the subsequent parallel processing of the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task through the prediction model.
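The signal-region computation (per-axis maximum minus minimum) and the voxelization alternative described above can be sketched as follows; the 1 m voxel edge in the example is an arbitrary value chosen for illustration.

```python
import numpy as np

def signal_region(points):
    """Smallest axis-aligned extent containing all points: per-axis
    (max - min) gives the region's length, width, and height."""
    return points.max(axis=0) - points.min(axis=0)

def voxel_index(points, voxel=1.0):
    """Assign each point to a 3-D voxel (dividing along X, Y, and Z),
    the higher-resource alternative to 2-D rasterization."""
    origin = points.min(axis=0)
    return np.floor((points - origin) / voxel).astype(int)

pts = np.array([[0.0, 0.0, 0.0],
                [3.5, 1.0, 2.2],
                [1.2, 0.4, 0.9]])
extent = signal_region(pts)        # length, width, height of the region
vox = voxel_index(pts, voxel=1.0)  # integer (ix, iy, iz) per point
```

For the toy points, the region extent is 3.5 × 1.0 × 2.2, and the three points land in voxels (0, 0, 0), (3, 1, 2), and (1, 0, 0).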
In one embodiment, the above method further includes: determining, among the obstacle detection results, an obstacle detection result that meets a preset condition; extracting a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result that meets the preset condition; and determining a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result that meets the preset condition.
The computer device obtains the obstacle detection result, the obstacle trajectory prediction result, and the driving path planning result output by the prediction model. The obstacle detection result may include the obstacles in the surrounding environment and their positions; the obstacles may include dynamic foreground obstacles, static foreground obstacles, road line markings, background, and so on. The obstacle trajectory prediction result may include the driving direction and corresponding position of each obstacle within a preset time period. The driving path planning result may include multiple driving paths of the vehicle within the preset time period and the weight of each driving path. The computer device determines, among the obstacle detection results, those that meet a preset condition; the preset condition may specify obstacle types such as dynamic foreground obstacles and static foreground obstacles. The computer device extracts from the obstacle trajectory prediction result the target trajectories corresponding to the qualifying obstacle detection results, then extracts the corresponding driving paths from the driving path planning result according to the extracted target trajectories and the qualifying obstacle detection results, and selects the driving path with the largest weight among the extracted driving paths as the target driving path.
In this embodiment, the computer device determines, among the obstacle detection results, those that meet the preset condition; extracts the corresponding target trajectory from the obstacle trajectory prediction result according to the qualifying obstacle detection results; and then determines the target driving path in the driving path planning result according to the extracted target trajectory and the qualifying obstacle detection results. The optimal driving path can thus be selected from the multiple driving paths, improving the planning accuracy of the driving path.
In one embodiment, as shown in FIG. 4, a perception information processing apparatus is provided, including: a first acquisition module 402, a second acquisition module 404, a first extraction module 406, a second extraction module 408, and an arithmetic module 410, where:
The first acquisition module 402 is configured to acquire an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task.
The second acquisition module 404 is configured to acquire perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information.
The first extraction module 406 is configured to perform feature extraction on the original point cloud signal to obtain point cloud feature information.
The second extraction module 408 is configured to perform feature extraction on the map information to obtain a map feature image.
The arithmetic module 410 is configured to input the point cloud feature information and the map feature image into the trained prediction model, perform a prediction operation on the point cloud feature information and the map feature image through the prediction model, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task.
In one embodiment, the arithmetic module 410 is further configured to extract, through the perception layer of the prediction model, the point cloud context feature corresponding to the point cloud feature information and the map context feature corresponding to the map feature image; input the point cloud context feature and the map context feature into the semantic analysis layer, and fuse them through the semantic analysis layer to obtain fused feature information; and input the fused feature information into the prediction layers, perform a prediction operation on the fused feature information through the prediction layers, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task, the prediction layers including a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
In one embodiment, the arithmetic module 410 is further configured to obtain, through the semantic analysis layer, the point cloud weight corresponding to the point cloud context feature and the map weight corresponding to the map context feature, and compute the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
In one embodiment, the above apparatus further includes a determining module, configured to determine, among the obstacle detection results, an obstacle detection result that meets a preset condition; extract the corresponding target trajectory from the obstacle trajectory prediction result according to the qualifying obstacle detection result; and determine the target driving path in the driving path planning result according to the extracted target trajectory and the qualifying obstacle detection result.
In one embodiment, the first extraction module 406 is further configured to determine, according to the original point cloud signal, the signal region corresponding to the original point cloud signal; divide the signal region into multiple grid cells according to a preset size; and perform feature extraction on the corresponding original point cloud signal in each grid cell to obtain point cloud feature information.
In one embodiment, the second extraction module 408 extracts map elements from the map information and renders the corresponding map elements according to multiple element channels to obtain a map feature image.
For the specific limitations of the perception information processing apparatus, refer to the limitations of the perception information processing method above, which will not be repeated here. Each module in the above perception information processing apparatus can be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded, in hardware form, in or independent of the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, and its internal structure diagram may be as shown in FIG. 5. The computer device includes a processor, a memory, a communication interface, and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium. The database of the computer device is used to store the obstacle detection result, the obstacle trajectory prediction result, and the driving path planning result. The communication interface of the computer device is used to connect and communicate with the vehicle-mounted sensor and the second vehicle-mounted computer device. The computer-readable instructions, when executed by the processor, implement a perception information processing method.
Those skilled in the art can understand that the structure shown in FIG. 5 is only a block diagram of part of the structure related to the solution of this application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
A computer device, including a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps in each of the above method embodiments.
One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in each of the above method embodiments.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through computer-readable instructions. The computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of this application; their descriptions are relatively specific and detailed, but they should not therefore be understood as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

  1. A perception information processing method, comprising:
    acquiring an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task;
    acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information comprising an original point cloud signal and map information;
    performing feature extraction on the original point cloud signal to obtain point cloud feature information;
    performing feature extraction on the map information to obtain a map feature image; and
    inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
  2. The method according to claim 1, wherein the trained prediction model comprises a perception layer, a semantic analysis layer, and prediction layers, and performing the prediction operation on the point cloud feature information and the map feature image through the trained prediction model and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task comprises:
    extracting, through the perception layer of the prediction model, a point cloud context feature corresponding to the point cloud feature information and a map context feature corresponding to the map feature image;
    inputting the point cloud context feature and the map context feature into the semantic analysis layer, and fusing the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information; and
    inputting the fused feature information into the prediction layers, performing a prediction operation on the fused feature information through the prediction layers, and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task, the prediction layers comprising a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
  3. The method according to claim 2, wherein fusing the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information comprises:
    obtaining, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature; and
    computing the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
  4. The method according to claim 1, wherein performing feature extraction on the original point cloud signal to obtain point cloud feature information comprises:
    determining, according to the original point cloud signal, a signal region corresponding to the original point cloud signal;
    dividing the signal region into a plurality of grid cells according to a preset size; and
    performing feature extraction on the corresponding original point cloud signal in each grid cell to obtain the point cloud feature information.
  5. The method according to claim 1, wherein performing feature extraction on the map information to obtain a map feature image comprises:
    extracting map elements from the map information; and
    rendering the corresponding map elements according to a plurality of element channels to obtain the map feature image.
  6. The method according to any one of claims 1 to 5, further comprising:
    determining, among the obstacle detection results, an obstacle detection result that meets a preset condition;
    extracting a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result that meets the preset condition; and
    determining a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result that meets the preset condition.
  7. A perception information processing apparatus, comprising:
    a first acquisition module, configured to acquire an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task;
    a second acquisition module, configured to acquire perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information comprising an original point cloud signal and map information;
    a first extraction module, configured to perform feature extraction on the original point cloud signal to obtain point cloud feature information;
    a second extraction module, configured to perform feature extraction on the map information to obtain a map feature image; and
    an arithmetic module, configured to input the point cloud feature information and the map feature image into a trained prediction model, perform a prediction operation on the point cloud feature information and the map feature image through the prediction model, and output an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
  8. The apparatus according to claim 7, wherein the arithmetic module is further configured to extract, through a perception layer of the prediction model, a point cloud context feature corresponding to the point cloud feature information and a map context feature corresponding to the map feature image; input the point cloud context feature and the map context feature into a semantic analysis layer, and fuse the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information; and input the fused feature information into prediction layers, perform a prediction operation on the fused feature information through the prediction layers, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task, the prediction layers comprising a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
  9. The apparatus according to claim 7, wherein the arithmetic module is further configured to obtain, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature; and compute the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
  10. The apparatus according to any one of claims 7 to 9, further comprising a determining module, configured to determine, among the obstacle detection results, an obstacle detection result that meets a preset condition; extract a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result that meets the preset condition; and determine a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result that meets the preset condition.
  11. A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps: acquiring an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task; acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information comprising an original point cloud signal and map information; performing feature extraction on the original point cloud signal to obtain point cloud feature information; performing feature extraction on the map information to obtain a map feature image; and inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
  12. The computer device according to claim 11, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: extracting, through a perception layer of the prediction model, a point cloud context feature corresponding to the point cloud feature information and a map context feature corresponding to the map feature image; inputting the point cloud context feature and the map context feature into a semantic analysis layer, and fusing the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information; and inputting the fused feature information into prediction layers, performing a prediction operation on the fused feature information through the prediction layers, and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task, the prediction layers comprising a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
  13. The computer device according to claim 11, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: obtaining, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature; and computing the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
  14. The computer device according to claim 11, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: determining, according to the original point cloud signal, a signal region corresponding to the original point cloud signal; dividing the signal region into a plurality of grid cells according to a preset size; and performing feature extraction on the corresponding original point cloud signal in each grid cell to obtain the point cloud feature information.
  15. The computer device according to any one of claims 11 to 14, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: determining, among the obstacle detection results, an obstacle detection result that meets a preset condition; extracting a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result that meets the preset condition; and determining a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result that meets the preset condition.
  16. One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: acquiring an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task; acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information comprising an original point cloud signal and map information; performing feature extraction on the original point cloud signal to obtain point cloud feature information; performing feature extraction on the map information to obtain a map feature image; and inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
  17. The storage media according to claim 16, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: extracting, through a perception layer of the prediction model, a point cloud context feature corresponding to the point cloud feature information and a map context feature corresponding to the map feature image; inputting the point cloud context feature and the map context feature into a semantic analysis layer, and fusing the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information; and inputting the fused feature information into prediction layers, performing a prediction operation on the fused feature information through the prediction layers, and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task, the prediction layers comprising a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
  18. The storage media according to claim 16, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: obtaining, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature; and computing the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
  19. The storage media according to claim 16, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: determining, according to the original point cloud signal, a signal region corresponding to the original point cloud signal; dividing the signal region into a plurality of grid cells according to a preset size; and performing feature extraction on the corresponding original point cloud signal in each grid cell to obtain the point cloud feature information.
  20. The storage media according to any one of claims 16 to 19, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: determining, among the obstacle detection results, an obstacle detection result that meets a preset condition; extracting a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result that meets the preset condition; and determining a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result that meets the preset condition.
PCT/CN2019/130191 2019-12-30 2019-12-30 Perception information processing method and apparatus, computer device, and storage medium WO2021134357A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980037292.7A 2019-12-30 2019-12-30 Perception information processing method and apparatus, computer device, and storage medium
PCT/CN2019/130191 2019-12-30 2019-12-30 Perception information processing method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130191 2019-12-30 2019-12-30 Perception information processing method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021134357A1 true WO2021134357A1 (zh) 2021-07-08

Family

ID=76687487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130191 WO2021134357A1 (zh) 2019-12-30 2019-12-30 感知信息处理方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN113383283B (zh)
WO (1) WO2021134357A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848947B (zh) * 2021-10-20 2024-06-28 上海擎朗智能科技有限公司 Path planning method and apparatus, computer device, and storage medium
CN113920166B (zh) * 2021-10-29 2024-05-28 广州文远知行科技有限公司 Method and apparatus for selecting an object motion model, vehicle, and storage medium
CN115164931B (zh) * 2022-09-08 2022-12-09 南开大学 Travel assistance system, method, and device for the blind
CN117407694B (zh) * 2023-11-06 2024-08-27 九识(苏州)智能科技有限公司 Multimodal information processing method, apparatus, device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106428000A (zh) * 2016-09-07 2017-02-22 清华大学 Vehicle speed control device and method
CN108196535A (zh) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Autonomous driving system based on reinforcement learning and multi-sensor fusion
US20180201273A1 (en) * 2017-01-17 2018-07-19 NextEv USA, Inc. Machine learning for personalized driving
CN109029422A (zh) * 2018-07-10 2018-12-18 北京木业邦科技有限公司 Method and apparatus for collaboratively constructing a three-dimensional survey map with multiple unmanned aerial vehicles
CN109029417A (zh) * 2018-05-21 2018-12-18 南京航空航天大学 UAV SLAM method based on hybrid visual odometry and a multi-scale map
US20180364725A1 (en) * 2017-06-19 2018-12-20 Hitachi, Ltd. Real-time vehicle state trajectory prediction for vehicle energy management and autonomous drive
CN109556615A (zh) * 2018-10-10 2019-04-02 吉林大学 Driving map generation method based on multi-sensor fusion cognition for autonomous driving
CN110542908A (zh) * 2019-09-09 2019-12-06 阿尔法巴人工智能(深圳)有限公司 Lidar dynamic object perception method applied to intelligent driving vehicles

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018232680A1 (en) * 2017-06-22 2018-12-27 Baidu.Com Times Technology (Beijing) Co., Ltd. Evaluation framework for predicted trajectories in autonomous vehicle traffic prediction
CN108981726A (zh) * 2018-06-09 2018-12-11 安徽宇锋智能科技有限公司 Semantic map modeling and construction application method for unmanned vehicles based on perception positioning monitoring
CN110286387B (zh) * 2019-06-25 2021-09-24 深兰科技(上海)有限公司 Obstacle detection method, apparatus, and storage medium applied to an autonomous driving system


Also Published As

Publication number Publication date
CN113383283A (zh) 2021-09-10
CN113383283B (zh) 2024-06-18

Similar Documents

Publication Publication Date Title
WO2021134357A1 (zh) Perception information processing method and apparatus, computer device, and storage medium
CN111666921B (zh) Vehicle control method and apparatus, computer device, and computer-readable storage medium
US10915793B2 (en) Method and system for converting point cloud data for use with 2D convolutional neural networks
CN113678136B (zh) Obstacle detection method and apparatus based on driverless technology, and computer device
CN111142557B (zh) Unmanned aerial vehicle path planning method and system, computer device, and readable storage medium
EP3822852B1 (en) Method, apparatus, computer storage medium and program for training a trajectory planning model
CN113424121A (zh) Autonomous-driving-based vehicle speed control method and apparatus, and computer device
CN114930401A (zh) Point-cloud-based three-dimensional reconstruction method and apparatus, and computer device
CN113811830B (zh) Trajectory prediction method and apparatus, computer device, and storage medium
CN111874006A (zh) Route planning processing method and apparatus
CN111986128A (zh) Eccentric image fusion
CN110723072B (zh) Assisted driving method and apparatus, computer device, and storage medium
EP3620945A1 (en) Obstacle distribution simulation method, device and terminal based on multiple models
KR20190131207A (ko) Deep-learning-based camera and lidar sensor fusion perception method and system robust to sensor quality degradation
KR20230070253A (ko) Efficient three-dimensional object detection from point clouds
JP7520444B2 (ja) Vehicle-based data processing method, data processing apparatus, computer device, and computer program
US20210237737A1 (en) Method for Determining a Lane Change Indication of a Vehicle
CN110751040B (zh) Three-dimensional object detection method and apparatus, electronic device, and storage medium
CN111062405A (zh) Method and apparatus for training an image recognition model, and image recognition method and apparatus
US20230278587A1 (en) Method and apparatus for detecting drivable area, mobile device and storage medium
CN111098850A (zh) Automatic parking assistance system and automatic parking method
KR20200095357A (ko) Learning method and learning device for heterogeneous sensor fusion using a merging network that learns non-maximum suppression
JP7119197B2 (ja) Lane attribute detection
US12008743B2 (en) Hazard detection ensemble architecture system and method
CN110097077B (zh) Point cloud data classification method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19958659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19958659

Country of ref document: EP

Kind code of ref document: A1