CN113383283B - Perceptual information processing method, apparatus, computer device, and storage medium

Info

Publication number: CN113383283B
Application number: CN201980037292.7A
Authority: CN (China)
Prior art keywords: point cloud, map, task, prediction, obstacle
Inventor: Name withheld at inventor's request
Current Assignee: DeepRoute AI Ltd
Original Assignee: DeepRoute AI Ltd
Other languages: Chinese (zh)
Other versions: CN113383283A
Legal status: Active; application filed by DeepRoute AI Ltd, published as CN113383283A, granted and published as CN113383283B.

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

A perception information processing method, comprising: acquiring an obstacle detection task, an obstacle track prediction task and a driving path planning task; obtaining perception information according to an obstacle detection task, an obstacle track prediction task and a driving path planning task, wherein the perception information comprises an original point cloud signal and map information; extracting features of the original point cloud signals to obtain point cloud feature information; extracting features of the map information to obtain a map feature image; and inputting the point cloud characteristic information and the map characteristic image into a trained prediction model, performing prediction operation on the point cloud characteristic information and the map characteristic image through the prediction model, and outputting an obstacle detection result corresponding to an obstacle detection task, an obstacle track prediction result corresponding to an obstacle track prediction task and a driving path planning result corresponding to a driving path planning task.

Description

Perceptual information processing method, apparatus, computer device, and storage medium
Technical Field
The application relates to a perception information processing method, a perception information processing device, computer equipment and a storage medium.
Background
The development of artificial intelligence technology has driven the development of autonomous driving technology. During automatic driving, the vehicle must continuously detect surrounding obstacles, predict their trajectories, and plan its travel path in order to drive automatically. In the conventional approach, tasks such as obstacle detection, obstacle trajectory prediction, and travel path planning are each processed independently by a computer device in a task-specific manner. Because each task processes its own perception information, the same perception information is processed repeatedly, which increases the data volume of the tasks and lowers the operation efficiency of the computer device.
Disclosure of Invention
According to various embodiments of the present disclosure, there are provided a perception information processing method, apparatus, computer device, and storage medium capable of improving the operation efficiency of the computer device.
A perception information processing method, comprising:
Acquiring an obstacle detection task, an obstacle track prediction task and a driving path planning task;
Obtaining perception information according to the obstacle detection task, the obstacle track prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
Extracting features of the original point cloud signals to obtain point cloud feature information;
extracting features of the map information to obtain a map feature image; and
inputting the point cloud characteristic information and the map characteristic image into a trained prediction model, performing prediction operation on the point cloud characteristic information and the map characteristic image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a driving path planning result corresponding to the driving path planning task.
A perception information processing apparatus comprising:
the first acquisition module is used for acquiring an obstacle detection task, an obstacle track prediction task and a driving path planning task;
The second acquisition module is used for acquiring perception information according to the obstacle detection task, the obstacle track prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
The first extraction module is used for extracting the characteristics of the original point cloud signals to obtain point cloud characteristic information;
The second extraction module is used for extracting the characteristics of the map information to obtain a map characteristic image; and
The operation module is used for inputting the point cloud characteristic information and the map characteristic image into a trained prediction model, performing prediction operation on the point cloud characteristic information and the map characteristic image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a travel path planning result corresponding to the travel path planning task.
A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of:
Acquiring an obstacle detection task, an obstacle track prediction task and a driving path planning task;
Obtaining perception information according to the obstacle detection task, the obstacle track prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
Extracting features of the original point cloud signals to obtain point cloud feature information;
extracting features of the map information to obtain a map feature image; and
inputting the point cloud characteristic information and the map characteristic image into a trained prediction model, performing prediction operation on the point cloud characteristic information and the map characteristic image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a driving path planning result corresponding to the driving path planning task.
One or more non-transitory computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of:
Acquiring an obstacle detection task, an obstacle track prediction task and a driving path planning task;
Obtaining perception information according to the obstacle detection task, the obstacle track prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
Extracting features of the original point cloud signals to obtain point cloud feature information;
extracting features of the map information to obtain a map feature image; and
inputting the point cloud characteristic information and the map characteristic image into a trained prediction model, performing prediction operation on the point cloud characteristic information and the map characteristic image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a driving path planning result corresponding to the driving path planning task.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features and advantages of the application will be apparent from the description and drawings, and from the claims.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a diagram of an application environment for a method of perceptual information processing in one or more embodiments.
FIG. 2 is a flow diagram of a method of perceptual information processing in one or more embodiments.
Fig. 3 is a flowchart illustrating a prediction operation step performed on point cloud feature information and a map feature image by a trained prediction model in one or more embodiments.
Fig. 4 is a block diagram of a perceptual information processing device in one or more embodiments.
FIG. 5 is a block diagram of a computer device in one or more embodiments.
Detailed Description
In order to make the technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The perception information processing method provided by the application can be applied to the application environment shown in FIG. 1. During automatic driving, the in-vehicle sensor 102 is connected to the first in-vehicle computer device 104 via a network, and the second in-vehicle computer device 106 is connected to the first in-vehicle computer device 104 via a network. The first in-vehicle computer device 104, hereinafter simply the computer device 104, acquires an obstacle detection task, an obstacle trajectory prediction task, and a travel path planning task, and obtains perception information based on these tasks. The perception information includes the original point cloud signal acquired from the in-vehicle sensor 102 according to the obstacle detection task and the map information acquired from the second in-vehicle computer device 106 according to the obstacle trajectory prediction task and the travel path planning task. The in-vehicle sensor may be a lidar, and the second in-vehicle computer device may be a positioning device. The computer device 104 performs feature extraction on the original point cloud signal to obtain point cloud feature information, and performs feature extraction on the map information to obtain a map feature image. The computer device 104 then inputs the point cloud feature information and the map feature image into a trained prediction model, performs a prediction operation on them through the model, and outputs an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a travel path planning result corresponding to the travel path planning task.
In one embodiment, as shown in FIG. 2, a perception information processing method is provided. Taking the application of the method to the computer device in FIG. 1 as an example, the method includes the following steps:
Step 202: acquire an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task.
Step 204: obtain perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information.
During automatic driving, the surroundings of the vehicle can be scanned by a lidar mounted on the vehicle to obtain the corresponding original point cloud signal, which may be a three-dimensional point cloud signal. Map information may be generated by a positioning device mounted on the vehicle and may include road information, the position of the vehicle in the map, and the like; for example, the map information may be a high-definition map. When the computer device acquires the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, it obtains the corresponding perception information according to these tasks. The perception information includes the original point cloud signal and the map information. Specifically, the computer device acquires, according to the obstacle detection task, the original point cloud signal collected by the lidar within its visible range; the visible range may differ between lidars. The computer device obtains the map information from the positioning device according to the obstacle trajectory prediction task and the driving path planning task.
Step 206: perform feature extraction on the original point cloud signal to obtain point cloud feature information.
The computer device performs feature extraction on the original point cloud signal. The point cloud feature information of the original point cloud signal can be extracted by rasterization. In the automatic driving mode, rasterization may be adopted when the computing resources of the computer device are below a preset threshold, for example in real-time monitoring scenarios of automatic driving.
Specifically, the computer device determines the signal region corresponding to the original point cloud signal. The signal region may be the minimum signal space containing all of the original point cloud signal; for example, the original point cloud signal acquired by a lidar with a visible range of 200 m corresponds to a signal region of 400 m × 400 m × 10 m (length × width × height). The computer device may divide the signal region according to a preset size, obtaining a plurality of grid cells; the preset size determines the size of each grid cell. When dividing the signal region, the computer device may assign the original point cloud signal to the corresponding grid cells. The computer device then performs feature extraction on the original point cloud signal within each grid cell to obtain the point cloud feature information, which may include the number of points in the original point cloud signal, together with the maximum height, the minimum height, the average height, and the height variance of the points.
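A minimal sketch of this per-cell feature extraction follows, assuming the 400 m × 400 m region above and a uniform cell size; the function name, default cell size, and array layout are illustrative assumptions, not details from the patent.

```python
import numpy as np

def extract_grid_features(points, cell_size=0.5,
                          x_range=(-200.0, 200.0), y_range=(-200.0, 200.0)):
    """points: (N, 3) array of x, y, z coordinates from the original point cloud.

    Returns an (H, W, 5) feature map holding, per grid cell, the five
    statistics named above: point count, max height, min height,
    mean height, and height variance."""
    w = int((x_range[1] - x_range[0]) / cell_size)
    h = int((y_range[1] - y_range[0]) / cell_size)

    # Assign each point to a grid cell and drop points outside the region.
    ix = ((points[:, 0] - x_range[0]) // cell_size).astype(int)
    iy = ((points[:, 1] - y_range[0]) // cell_size).astype(int)
    inside = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
    ix, iy, z = ix[inside], iy[inside], points[inside, 2]

    features = np.zeros((h, w, 5), dtype=np.float32)
    for r, c in set(zip(iy.tolist(), ix.tolist())):
        zs = z[(iy == r) & (ix == c)]  # heights of the points in this cell
        features[r, c] = (zs.size, zs.max(), zs.min(), zs.mean(), zs.var())
    return features
```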
Step 208: perform feature extraction on the map information to obtain a map feature image.
The computer device extracts map elements from the map information and renders them to obtain the map feature image. Map elements may include lane lines, stop lines, pedestrian crossings, traffic lights, traffic signs, and the like. In one embodiment, performing feature extraction on the map information to obtain a map feature image includes: extracting the map elements from the map information; and rendering the corresponding map elements according to element channels to obtain the map feature image. After extracting the map elements, the computer device obtains the element channel corresponding to each map element and renders each element into its target color value according to that channel, so that map elements such as lane lines, stop lines, pedestrian crossings, traffic lights, and traffic signs are rendered into the map feature image. The element channels may be the three color channels Red, Green, and Blue, and the map feature image may be an RGB image. Because the map feature image contains the road information encountered by the vehicle while driving, it can be used to predict target trajectories and to plan the travel path of the vehicle.
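A sketch of the rendering step follows. The patent states only that each map element is rendered to a target color value through its element channel, so the palette and data layout below are assumptions.

```python
import numpy as np

# Assumed element-to-color palette; the patent does not fix concrete values.
ELEMENT_COLORS = {
    "lane_line":     (255, 255, 255),
    "stop_line":     (255, 0, 0),
    "crosswalk":     (0, 255, 0),
    "traffic_light": (255, 255, 0),
    "traffic_sign":  (0, 0, 255),
}

def render_map_feature_image(elements, size=(800, 800)):
    """elements: list of (element_type, coords) pairs, where coords is an
    (M, 2) integer array of row/col pixel indices already projected into
    the image frame. Returns an RGB map feature image."""
    image = np.zeros((size[0], size[1], 3), dtype=np.uint8)
    for element_type, coords in elements:
        color = ELEMENT_COLORS.get(element_type, (128, 128, 128))
        image[coords[:, 0], coords[:, 1]] = color  # paint the element pixels
    return image
```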
Step 210: input the point cloud feature information and the map feature image into a trained prediction model, perform a prediction operation on them through the prediction model, and output an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a travel path planning result corresponding to the travel path planning task.
The computer device stores a pre-trained prediction model, obtained by training on a large amount of sample data. The prediction model may be any of various deep learning neural network models, such as a deep convolutional neural network or a Hopfield network.
The computer device converts the point cloud feature information into a point cloud feature vector and the map feature image into a map feature vector, and inputs both vectors into the trained prediction model. The prediction model fuses the point cloud feature vector and the map feature vector to obtain fused feature information, and performs a prediction operation on the fused feature information to obtain the obstacles in the surrounding environment and the position of each obstacle, the travel direction and corresponding position of each obstacle within a preset time period, and a plurality of travel paths of the vehicle within the preset time period together with a weight for each path. Through the prediction model, the computer device outputs the obstacles and their positions as the obstacle detection result, the travel direction and corresponding position of each obstacle within the preset time period as the obstacle trajectory prediction result, and the travel paths and their weights as the travel path planning result. The obstacles in the obstacle detection result may include dynamic foreground obstacles, static foreground obstacles, road line signs, and the like.
In this embodiment, the computer device acquires the obstacle detection task, the trajectory prediction task, and the travel path planning task, acquires the original point cloud signal according to the obstacle detection task, acquires the map information according to the trajectory prediction task and the travel path planning task, extracts the point cloud feature information corresponding to the original point cloud signal, and extracts the map feature image corresponding to the map information. Extracting features from the original point cloud signal and the map information filters out unnecessary information, improving the prediction accuracy of the subsequent prediction model. The computer device inputs the point cloud feature information and the map feature image into the same trained prediction model for the prediction operation, so that obstacle detection, obstacle trajectory prediction, and travel path planning are processed in parallel within the same model and the original point cloud signal and map information need not be processed repeatedly. This reduces the data volume of the tasks and improves both the operation efficiency of the computer device and the efficiency with which prediction results are produced.
In one embodiment, as shown in FIG. 3, performing the prediction operation on the point cloud feature information and the map feature image through the trained prediction model, and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the travel path planning result corresponding to the travel path planning task, includes:
Step 302: extract, through the perception layer of the prediction model, the point cloud context features corresponding to the point cloud feature information and the map context features corresponding to the map feature image.
Step 304: input the point cloud context features and the map context features into the semantic analysis layer, and fuse them through the semantic analysis layer to obtain fused feature information.
Step 306: input the fused feature information into the prediction layers, perform the prediction operation on it through the prediction layers, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the travel path planning result corresponding to the travel path planning task, where the prediction layers include a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the travel path planning task.
The computer device converts the point cloud feature information and the map feature image to obtain a point cloud feature vector corresponding to the point cloud feature information and a map feature vector corresponding to the map feature image. The trained prediction model may include a perception layer, a semantic analysis layer, and prediction layers. The computer device inputs the point cloud feature vector and the map feature vector into the perception layer of the trained prediction model, which extracts the point cloud context features corresponding to the point cloud feature vector and the map context features corresponding to the map feature vector. The point cloud context features and the map context features are then taken as input to the semantic analysis layer, which fuses them to obtain fused feature information. The prediction model feeds the fused feature information to a plurality of prediction layers: one corresponding to the obstacle detection task, one corresponding to the obstacle trajectory prediction task, and one corresponding to the travel path planning task. Through these prediction layers, the model performs the corresponding prediction operations on the fused feature information to obtain the obstacles in the surrounding environment and the position of each obstacle, the travel direction and corresponding position of each obstacle within a preset time period, and a plurality of travel paths of the vehicle within the preset time period with a weight for each path. The model outputs the obstacles and their positions as the obstacle detection result, the travel direction and corresponding position of each obstacle within the preset time period as the obstacle trajectory prediction result, and the travel paths and their weights as the travel path planning result.
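The layered structure described above might look like the following PyTorch-style sketch. The convolutional backbones, channel widths, and head output sizes are assumptions for illustration; the patent fixes only the division into perception, semantic analysis, and prediction layers.

```python
import torch
import torch.nn as nn

class MultiTaskPerceptionModel(nn.Module):
    """One perception encoder per input, a semantic analysis (fusion)
    layer, and one prediction head per task run in parallel on the
    fused features. Assumes both inputs share spatial dimensions."""

    def __init__(self, pc_channels=5, map_channels=3, hidden=64):
        super().__init__()
        # Perception layer: context features for each input modality.
        self.pc_encoder = nn.Sequential(
            nn.Conv2d(pc_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU())
        self.map_encoder = nn.Sequential(
            nn.Conv2d(map_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU())
        # Semantic analysis layer: fuse the two context features.
        self.fusion = nn.Conv2d(2 * hidden, hidden, 1)
        # Prediction layers, one per task (output sizes are placeholders).
        self.detect_head = nn.Conv2d(hidden, 8, 1)  # obstacle detection
        self.track_head = nn.Conv2d(hidden, 8, 1)   # trajectory prediction
        self.plan_head = nn.Conv2d(hidden, 8, 1)    # travel path planning

    def forward(self, pc_features, map_image):
        pc_ctx = self.pc_encoder(pc_features)
        map_ctx = self.map_encoder(map_image)
        fused = self.fusion(torch.cat([pc_ctx, map_ctx], dim=1))
        return (self.detect_head(fused),
                self.track_head(fused),
                self.plan_head(fused))
```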
In this embodiment, the computer device extracts, through the perception layer of the prediction model, the point cloud context features corresponding to the point cloud feature information and the map context features corresponding to the map feature image, fuses the two through the semantic analysis layer, feeds the fused feature information to the plurality of prediction layers, performs the corresponding prediction operations through those layers, and outputs the obstacle detection result, the obstacle trajectory prediction result, and the travel path planning result. Obstacle detection needs the point cloud context features, while obstacle trajectory prediction and travel path planning need both the point cloud context features and the map context features; fusing the two through the semantic analysis layer and feeding the fused feature information to the prediction layer of each task therefore enables parallel processing of obstacle detection, obstacle trajectory prediction, and travel path planning, further improving the operation efficiency of the computer device.
In one embodiment, fusing the point cloud context features and the map context features through the semantic analysis layer to obtain the fused feature information includes: obtaining, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context features and a map weight corresponding to the map context features; and computing the fused feature information from the point cloud weight and the point cloud context features together with the map weight and the map context features.
After the prediction model inputs the point cloud context features and the map context features into the semantic analysis layer, the semantic analysis layer fuses them. Specifically, the semantic analysis layer obtains the point cloud weight corresponding to the point cloud context features and the map weight corresponding to the map context features, and combines the weights with their context features according to a preset relationship to obtain the fused feature information. The preset relationship may be to weight and sum the point cloud context features and the map context features and then average the sum.
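Under that preset relationship the fusion reduces to a weighted sum divided by two. A one-line sketch follows; whether the weights are scalars or per-feature tensors is an assumption, as the patent leaves this open.

```python
def fuse_context_features(pc_ctx, map_ctx, pc_weight, map_weight):
    """Weight each context feature, sum, and average, per the preset
    relationship described above. Weights may be scalars or arrays
    broadcastable against the context features (an assumption)."""
    return (pc_weight * pc_ctx + map_weight * map_ctx) / 2.0
```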
In this embodiment, the computer device obtains, through the semantic analysis layer of the prediction model, the point cloud weight corresponding to the point cloud context features and the map weight corresponding to the map context features, and computes the fused feature information from the point cloud weight and point cloud context features together with the map weight and map context features. Fusing the features according to their respective weights effectively improves the accuracy of the fused feature information, while still allowing obstacle detection, obstacle trajectory prediction, and travel path planning to be processed in parallel during automatic driving, further improving the operation efficiency of the computer device.
In one embodiment, performing feature extraction on the original point cloud signal to obtain the point cloud feature information includes: determining the signal region corresponding to the original point cloud signal; dividing the signal region into a plurality of grid cells according to a preset size; and performing feature extraction on the corresponding original point cloud signal in each grid cell to obtain the point cloud feature information.
To perform feature extraction on the original point cloud signal, the computer device first determines the signal region corresponding to the signal. It computes, for each of the X, Y, and Z directions, the difference between the maximum and minimum coordinates of the original point cloud signal, and determines the length, width, and height of the signal region from these three differences. The signal region may be the minimum signal space containing all of the original point cloud signal.
The signal region may be divided either by rasterization or by voxelization. When the computing resources of the computer device are below the preset threshold, the computer device may rasterize the original point cloud signal, dividing the signal region into a plurality of grid cells. Specifically, the preset size for rasterization may be length × width, and the length and width may differ. The computer device divides the signal region along the X direction according to the length in the preset size to obtain first grid cells, which may be a plurality of equal divisions along X, and divides the signal region along the Y direction according to the width in the preset size to obtain second grid cells, a plurality of equal divisions along Y; the heights of the grid cells may all be the same. The target grid cells are obtained from the first and second grid cells, and the order in which the directions are divided is not limited. In the automatic driving mode, this improves the extraction efficiency of the point cloud feature information when computing resources are limited and real-time requirements are high.
When the computing resources of the computer device are greater than or equal to the preset threshold, the computer device may voxelize the original point cloud signal. Specifically, it voxelizes the signal according to a preset size of length × width × height, where the three may be equal. The computer device divides the signal region along the X direction according to the length to obtain first grid cells, along the Y direction according to the width to obtain second grid cells, and along the Z direction according to the height to obtain third grid cells, then generates the target grid cells from the first, second, and third grid cells; again, the order of division is not limited. Dividing the signal region along the X, Y, and Z directions handles well the case where other point cloud signals are occluded above an obstacle, so the extraction accuracy of the point cloud feature information can be further improved in the automatic driving mode.
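A sketch of this resource-dependent choice between rasterization and voxelization follows, assuming the computing budget and threshold are comparable scalar quantities (the patent does not define their units) and that `preset` carries the cell length, width, and height.

```python
import numpy as np

def divide_signal_region(points, preset, compute_budget, threshold):
    """Return the grid shape for the signal region: 2D (rasterized) when
    computing resources are below the threshold, 3D (voxelized) otherwise.
    `points` is an (N, 3) array; `preset` holds the cell dimensions."""
    # The signal region is the minimal box containing all points; its
    # length, width, and height are the per-axis max-min differences.
    extents = points.max(axis=0) - points.min(axis=0)

    nx = int(np.ceil(extents[0] / preset["length"]))
    ny = int(np.ceil(extents[1] / preset["width"]))
    if compute_budget < threshold:
        return (nx, ny)            # rasterize: divide along X and Y only
    nz = int(np.ceil(extents[2] / preset["height"]))
    return (nx, ny, nz)            # voxelize: also divide along Z
```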
The computer device assigns the original point cloud signal to the corresponding target grid cells and performs feature extraction on the position of each point within each target grid cell to obtain the point cloud feature information, which includes the number of points in the cell and the maximum, minimum, average, and variance of their heights. The computer device may also arrange the point cloud feature information in rows to generate matrix units, and arrange the matrix units according to a preset rule to generate a point cloud feature matrix.
In this embodiment, the computer device extracts the point cloud feature information by dividing the signal region corresponding to the original point cloud signal, which facilitates parallel processing of the obstacle detection task, the obstacle trajectory prediction task, and the travel path planning task through the prediction model.
In one embodiment, the method further comprises: determining an obstacle detection result meeting a preset condition from the obstacle detection results; extracting a corresponding target track from the obstacle track prediction result according to the obstacle detection result meeting the preset condition; and determining a target driving path in the driving path planning result according to the extracted target track and the obstacle detection result meeting the preset condition.
The computer device obtains the obstacle detection result, the obstacle trajectory prediction result, and the travel path planning result output by the prediction model. The obstacle detection result may include the obstacles in the surrounding environment and their positions; the obstacles may include dynamic foreground obstacles, static foreground obstacles, road line signs, background, and the like. The obstacle trajectory prediction result may include the travel direction and corresponding position of each obstacle within a preset time period. The travel path planning result may include a plurality of travel paths of the vehicle within the preset time period and a weight for each path. The computer device determines, among the obstacle detection results, those satisfying a preset condition, for example that the obstacle is a dynamic or static foreground obstacle. It extracts from the obstacle trajectory prediction result the target trajectories corresponding to the obstacle detection results satisfying the preset condition, then extracts the corresponding travel paths from the travel path planning result according to the extracted target trajectories and those detection results, and finally selects the travel path with the largest weight among the extracted paths as the target travel path.
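A sketch of this selection step follows, with assumed data structures (detection dicts, a trajectory lookup, and (path, weight) pairs); the filtering of candidate paths against the target trajectories is elided because the patent does not spell out that check.

```python
def select_target_path(detections, trajectories, planned_paths):
    """detections: list of dicts with 'id' and 'type'; trajectories: dict
    mapping obstacle id -> predicted track; planned_paths: list of
    (path, weight) pairs from the planning prediction layer."""
    # Preset condition: keep only foreground obstacles.
    foreground = [d for d in detections
                  if d["type"] in ("dynamic_foreground", "static_foreground")]
    # Target trajectories of the obstacles satisfying the condition.
    target_tracks = [trajectories[d["id"]]
                     for d in foreground if d["id"] in trajectories]
    # Paths would be filtered against target_tracks here (check elided);
    # the target travel path is the remaining candidate of largest weight.
    best_path, _ = max(planned_paths, key=lambda pw: pw[1])
    return best_path, target_tracks
```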
In this embodiment, the computer device determines the obstacle detection results satisfying the preset condition, extracts the corresponding target trajectories from the obstacle trajectory prediction result, and determines the target travel path in the travel path planning result according to the extracted target trajectories and those detection results. An optimal travel path can thus be selected from the plurality of travel paths, improving the accuracy of travel path planning.
In one embodiment, as shown in fig. 4, there is provided a perception information processing apparatus including: a first acquisition module 402, a second acquisition module 404, a first extraction module 406, a second extraction module 408, and an operation module 410, wherein:
a first acquisition module 402 is configured to acquire an obstacle detection task, an obstacle trajectory prediction task, and a travel path planning task.
The second obtaining module 404 is configured to obtain perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, where the perception information includes an original point cloud signal and map information.
The first extraction module 406 is configured to perform feature extraction on the original point cloud signal, so as to obtain point cloud feature information.
The second extraction module 408 is configured to perform feature extraction on the map information, so as to obtain a map feature image.
The operation module 410 is configured to input the point cloud feature information and the map feature image into a trained prediction model, perform a prediction operation on the point cloud feature information and the map feature image through the prediction model, and output an obstacle detection result corresponding to an obstacle detection task, an obstacle track prediction result corresponding to an obstacle track prediction task, and a travel path planning result corresponding to a travel path planning task.
In one embodiment, the operation module 410 is further configured to extract, through the perception layer of the prediction model, the point cloud context features corresponding to the point cloud feature information and the map context features corresponding to the map feature image; to input the point cloud context features and the map context features into the semantic analysis layer and fuse them through the semantic analysis layer to obtain fused feature information; and to input the fused feature information into the prediction layers, perform the prediction operation on it through the prediction layers, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the travel path planning result corresponding to the travel path planning task.
In one embodiment, the operation module 410 is further configured to obtain, through the semantic analysis layer, the point cloud weight corresponding to the point cloud context features and the map weight corresponding to the map context features, and to compute the fused feature information from the point cloud weight and the point cloud context features together with the map weight and the map context features.
In one embodiment, the apparatus further includes: a determining module, configured to determine an obstacle detection result that meets a preset condition from the obstacle detection results; extracting a corresponding target track from the obstacle track prediction result according to the obstacle detection result meeting the preset condition; and determining a target driving path in the driving path planning result according to the extracted target track and the obstacle detection result meeting the preset condition.
In one embodiment, the first extraction module 406 is further configured to determine a signal area corresponding to the original point cloud signal according to the original point cloud signal; dividing a signal area into a plurality of grid cells according to a preset size; and carrying out feature extraction on the corresponding original point cloud signals in each grid unit to obtain point cloud feature information.
In one embodiment, the second extraction module 408 is further configured to extract the map elements from the map information and to render the corresponding map elements according to the element channels to obtain the map feature image.
For specific limitations of the perception information processing apparatus, reference may be made to the limitations of the perception information processing method above, which are not repeated here. The modules in the above perception information processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 5. The computer device includes a processor, a memory, a communication interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer readable instructions, and a database. The internal memory provides an environment for the execution of an operating system and computer-readable instructions in a non-volatile storage medium. The database of the computer device is used for storing obstacle detection results, obstacle track prediction results and driving path planning results. The communication interface of the computer device is used for connecting and communicating with the vehicle-mounted sensor and the second vehicle-mounted computer device. The computer readable instructions when executed by a processor implement a method of perceptual information processing.
It will be appreciated by those skilled in the art that the structure shown in FIG. 5 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
A computer device comprising a memory and one or more processors, the memory having stored thereon computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the various method embodiments described above.
One or more non-transitory computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the various method embodiments described above.
Those skilled in the art will appreciate that all or part of the processes of the methods of the embodiments described above may be implemented by instructing the relevant hardware through computer-readable instructions stored on a non-transitory computer-readable storage medium; when executed, these instructions may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (20)

1. A perception information processing method, comprising:
Acquiring an obstacle detection task, an obstacle track prediction task and a driving path planning task;
Obtaining perception information according to the obstacle detection task, the obstacle track prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
dividing a signal area where the original point cloud signal is located according to a preset size to obtain a plurality of grid cells;
extracting features of the corresponding original point cloud signals in each grid unit to obtain point cloud feature information;
extracting features of the map information to obtain a map feature image;
Inputting the point cloud characteristic information and the map characteristic image into a trained prediction model, performing prediction operation on the point cloud characteristic information and the map characteristic image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a travel path planning result corresponding to the travel path planning task; the obstacle track prediction result is the running direction and the corresponding position information of the obstacle in a preset time period, and the running path planning result is a plurality of running paths and weights corresponding to each running path in the preset time period;
Determining an obstacle detection result meeting a preset condition from the obstacle detection results;
extracting a corresponding target track from the obstacle track prediction result according to the obstacle detection result meeting the preset condition; and
And determining a target driving path in the driving path planning result according to the extracted target track and the obstacle detection result meeting the preset condition.
2. The method according to claim 1, wherein the trained prediction model includes a perception layer, a semantic analysis layer, and a prediction layer, the predicting the point cloud feature information and the map feature image by the trained prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a travel path planning result corresponding to the travel path planning task, includes:
extracting point cloud context characteristics corresponding to the point cloud characteristic information through a perception layer of the prediction model, and map context characteristics corresponding to the map characteristic image;
inputting the point cloud context features and the map context features to a semantic analysis layer, and fusing the point cloud context features and the map context features through the semantic analysis layer to obtain fused feature information; and
And inputting the fusion characteristic information to a prediction layer, performing prediction operation on the fusion characteristic information through the prediction layer, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a driving path planning result corresponding to the driving path planning task, wherein the prediction layer comprises a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle track prediction task and a prediction layer corresponding to the driving path planning task.
3. The method according to claim 2, wherein the fusing, by the semantic analysis layer, the point cloud context feature and the map context feature to obtain fused feature information includes:
acquiring point cloud weights corresponding to the point cloud context features through the semantic analysis layer, and map weights corresponding to the map context feature images; and
And calculating fusion characteristic information according to the point cloud weight and the point cloud context characteristics and the map weight and the map context characteristics.
4. The method of claim 1, wherein before the dividing the signal area where the original point cloud signal is located according to the preset size, the method further includes:
calculating the difference value between the maximum value and the minimum value of the coordinates of the original point cloud signals in the X direction, the Y direction and the Z direction; and
Determining the length, width and height of the signal area according to the three difference values; the signal region is the minimum signal space that contains all of the original point cloud signals.
5. The method according to claim 1, wherein the feature extracting the map information to obtain a map feature image includes:
Extracting map elements from the map information; and
And rendering corresponding map elements according to the element channels to obtain a map feature image.
6. The method of claim 1, wherein the dividing the signal area where the original point cloud signal is located according to a preset size to obtain a plurality of grid cells includes:
When the computing resource is smaller than a preset threshold value, dividing the signal area along the X direction according to the length in the preset size to obtain a first grid unit, dividing the signal area along the Y direction according to the width in the preset size to obtain a second grid unit, and obtaining a target grid unit according to the first grid unit and the second grid unit; and
When the computing resource is greater than or equal to a preset threshold value, dividing a signal area along the X direction according to the length in the preset size to obtain a first grid unit, dividing the signal area along the Y direction according to the width in the preset size to obtain a second grid unit, dividing the signal area along the Z direction by the height in the preset size to obtain a third grid unit, and generating a target grid unit according to the first grid unit, the second grid unit and the third grid unit.
7. A perception information processing apparatus comprising:
the first acquisition module is used for acquiring an obstacle detection task, an obstacle track prediction task and a driving path planning task;
The second acquisition module is used for acquiring perception information according to the obstacle detection task, the obstacle track prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
The first extraction module is used for dividing a signal area where the original point cloud signal is located according to a preset size to obtain a plurality of grid cells;
The first extraction module is further used for extracting characteristics of the original point cloud signals in the grid cells to obtain point cloud characteristic information;
the second extraction module is used for extracting the characteristics of the map information to obtain a map characteristic image;
The operation module is used for inputting the point cloud characteristic information and the map characteristic image into a trained prediction model, performing prediction operation on the point cloud characteristic information and the map characteristic image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a travel path planning result corresponding to the travel path planning task; the obstacle track prediction result is the running direction and the corresponding position information of the obstacle in a preset time period, and the running path planning result is a plurality of running paths and weights corresponding to each running path in the preset time period;
a determining module, configured to determine an obstacle detection result that meets a preset condition from the obstacle detection results;
The determining module is further configured to extract a corresponding target track from the obstacle track prediction result according to the obstacle detection result that meets the preset condition; and
The determining module is further configured to determine a target driving path from the driving path planning result according to the extracted target track and the obstacle detection result that meets the preset condition.
8. The apparatus of claim 7, wherein the operation module is further configured to extract, through a perception layer of the prediction model, a point cloud context feature corresponding to the point cloud feature information, a map context feature corresponding to the map feature image; inputting the point cloud context features and the map context features to a semantic analysis layer, and fusing the point cloud context features and the map context features through the semantic analysis layer to obtain fused feature information; and inputting the fusion characteristic information to a prediction layer, performing prediction operation on the fusion characteristic information through the prediction layer, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a travel path planning result corresponding to the travel path planning task, wherein the prediction layer comprises a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle track prediction task and a prediction layer corresponding to the travel path planning task.
9. The apparatus of claim 8, wherein the operation module is further configured to obtain, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature, and to compute the fused feature information from the point cloud weight and the point cloud context feature together with the map weight and the map context feature.
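Claim 9 computes the fused feature information from a point cloud weight and a map weight applied to the respective context features. One plausible realization — a learned gate over globally pooled features, normalized with a softmax so the two weights sum to one — is sketched below; the gating design is an assumption, as the claim does not state how the weights are obtained.

import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Linear(2 * channels, 2)  # predicts the two weights

    def forward(self, pc_ctx, map_ctx):
        # Global average pooling yields one descriptor per modality.
        pooled = torch.cat([pc_ctx.mean(dim=(2, 3)),
                            map_ctx.mean(dim=(2, 3))], dim=1)
        w = torch.softmax(self.gate(pooled), dim=1)  # (batch, 2), sums to 1
        w_pc = w[:, 0].view(-1, 1, 1, 1)             # point cloud weight
        w_map = w[:, 1].view(-1, 1, 1, 1)            # map weight
        return w_pc * pc_ctx + w_map * map_ctx       # fused feature information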
10. The apparatus of claim 7, wherein the first extraction module is further configured to: when the computing resource is less than a preset threshold, divide the signal area along the X direction according to the length in the preset size to obtain first grid cells, divide the signal area along the Y direction according to the width in the preset size to obtain second grid cells, and obtain target grid cells according to the first grid cells and the second grid cells; and when the computing resource is greater than or equal to the preset threshold, divide the signal area along the X direction according to the length in the preset size to obtain first grid cells, divide the signal area along the Y direction according to the width in the preset size to obtain second grid cells, divide the signal area along the Z direction according to the height in the preset size to obtain third grid cells, and generate target grid cells according to the first grid cells, the second grid cells, and the third grid cells.
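The resource-dependent division of claims 10 and 15 reduces to a choice between a two-dimensional (X, Y) grid and a three-dimensional (X, Y, Z) grid. A small Python sketch, with the signal-area extent and the resource metric as assumed placeholders:

def build_target_grid(preset_size, compute_resource, threshold,
                      area=(70.4, 80.0, 4.0)):
    # preset_size = (length, width, height) of one grid cell along X, Y, Z;
    # area is an assumed signal-area extent in metres.
    length, width, height = preset_size
    nx = int(area[0] / length)    # first grid cells, along X
    ny = int(area[1] / width)     # second grid cells, along Y
    if compute_resource < threshold:
        return (nx, ny)           # 2-D target grid: cheaper to process
    nz = int(area[2] / height)    # third grid cells, along Z
    return (nx, ny, nz)           # 3-D target grid: finer vertical resolution

Because the same preset length and width are used in both branches, switching between the two grids changes only how the vertical extent of the point cloud is handled.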
11. A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of: acquiring an obstacle detection task, an obstacle track prediction task, and a driving path planning task; obtaining perception information according to the obstacle detection task, the obstacle track prediction task, and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information; dividing a signal area where the original point cloud signal is located according to a preset size to obtain a plurality of grid cells; extracting features of the original point cloud signal in each grid cell to obtain point cloud feature information; extracting features of the map information to obtain a map feature image; inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on them through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task, and a driving path planning result corresponding to the driving path planning task, wherein the obstacle track prediction result comprises the driving direction and corresponding position information of an obstacle within a preset time period, and the driving path planning result comprises a plurality of driving paths within the preset time period and a weight corresponding to each driving path; determining an obstacle detection result meeting a preset condition from the obstacle detection results; extracting a corresponding target track from the obstacle track prediction result according to the obstacle detection result meeting the preset condition; and determining a target driving path from the driving path planning result according to the extracted target track and the obstacle detection result meeting the preset condition.
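The final three steps of claim 11 — keeping the detections that meet the preset condition, extracting their target tracks, and choosing a target driving path — amount to plain post-processing of the model outputs. In the sketch below, the preset condition is assumed to be a confidence threshold, and conflicts is a hypothetical helper standing in for whatever path/track comparison an implementation would use:

def conflicts(path, track, min_gap=2.0):
    # Hypothetical check: any path point within min_gap metres (per axis)
    # of any predicted track point counts as a conflict.
    return any(abs(px - tx) < min_gap and abs(py - ty) < min_gap
               for (px, py) in path for (tx, ty) in track)

def select_target_path(detections, track_predictions, planned_paths,
                       score_threshold=0.5):
    # Keep detections meeting the preset condition (here: a score threshold).
    valid = [d for d in detections if d["score"] >= score_threshold]
    # Extract the corresponding target track for each retained obstacle.
    target_tracks = [track_predictions[d["id"]] for d in valid]
    # Choose the highest-weight planned path that avoids every target track.
    for cand in sorted(planned_paths, key=lambda p: p["weight"], reverse=True):
        if not any(conflicts(cand["path"], t) for t in target_tracks):
            return cand["path"]
    return None  # no planned path clears all predicted target tracks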
12. The computer device of claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: extracting, through a perception layer of the prediction model, a point cloud context feature corresponding to the point cloud feature information and a map context feature corresponding to the map feature image; inputting the point cloud context feature and the map context feature to a semantic analysis layer, and fusing them through the semantic analysis layer to obtain fused feature information; and inputting the fused feature information to a prediction layer, performing a prediction operation on the fused feature information through the prediction layer, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task, and a driving path planning result corresponding to the driving path planning task, wherein the prediction layer comprises a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle track prediction task, and a prediction layer corresponding to the driving path planning task.
13. The computer device of claim 12, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: obtaining, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature; and computing the fused feature information from the point cloud weight and the point cloud context feature together with the map weight and the map context feature.
14. The computer device of claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: determining, from the original point cloud signal, the signal area corresponding to the original point cloud signal; dividing the signal area into a plurality of grid cells according to the preset size; and extracting features of the original point cloud signal in each grid cell to obtain the point cloud feature information.
15. The computer device of claim 14, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: when the computing resource is less than a preset threshold, dividing the signal area along the X direction according to the length in the preset size to obtain first grid cells, dividing the signal area along the Y direction according to the width in the preset size to obtain second grid cells, and obtaining target grid cells according to the first grid cells and the second grid cells; and when the computing resource is greater than or equal to the preset threshold, dividing the signal area along the X direction according to the length in the preset size to obtain first grid cells, dividing the signal area along the Y direction according to the width in the preset size to obtain second grid cells, dividing the signal area along the Z direction according to the height in the preset size to obtain third grid cells, and generating target grid cells according to the first grid cells, the second grid cells, and the third grid cells.
16. One or more non-transitory computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: acquiring an obstacle detection task, an obstacle track prediction task, and a driving path planning task; obtaining perception information according to the obstacle detection task, the obstacle track prediction task, and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information; dividing a signal area where the original point cloud signal is located according to a preset size to obtain a plurality of grid cells; extracting features of the original point cloud signal in each grid cell to obtain point cloud feature information; extracting features of the map information to obtain a map feature image; inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on them through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task, and a driving path planning result corresponding to the driving path planning task, wherein the obstacle track prediction result comprises the driving direction and corresponding position information of an obstacle within a preset time period, and the driving path planning result comprises a plurality of driving paths within the preset time period and a weight corresponding to each driving path; determining an obstacle detection result meeting a preset condition from the obstacle detection results; extracting a corresponding target track from the obstacle track prediction result according to the obstacle detection result meeting the preset condition; and determining a target driving path from the driving path planning result according to the extracted target track and the obstacle detection result meeting the preset condition.
17. The storage medium of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: extracting, through a perception layer of the prediction model, a point cloud context feature corresponding to the point cloud feature information and a map context feature corresponding to the map feature image; inputting the point cloud context feature and the map context feature to a semantic analysis layer, and fusing them through the semantic analysis layer to obtain fused feature information; and inputting the fused feature information to a prediction layer, performing a prediction operation on the fused feature information through the prediction layer, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task, and a driving path planning result corresponding to the driving path planning task, wherein the prediction layer comprises a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle track prediction task, and a prediction layer corresponding to the driving path planning task.
18. The storage medium of claim 17, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: obtaining, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature; and computing the fused feature information from the point cloud weight and the point cloud context feature together with the map weight and the map context feature.
19. The storage medium of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: determining, from the original point cloud signal, the signal area corresponding to the original point cloud signal; dividing the signal area into a plurality of grid cells according to the preset size; and extracting features of the original point cloud signal in each grid cell to obtain the point cloud feature information.
20. The storage medium of claim 19, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: when the computing resource is less than a preset threshold, dividing the signal area along the X direction according to the length in the preset size to obtain first grid cells, dividing the signal area along the Y direction according to the width in the preset size to obtain second grid cells, and obtaining target grid cells according to the first grid cells and the second grid cells; and when the computing resource is greater than or equal to the preset threshold, dividing the signal area along the X direction according to the length in the preset size to obtain first grid cells, dividing the signal area along the Y direction according to the width in the preset size to obtain second grid cells, dividing the signal area along the Z direction according to the height in the preset size to obtain third grid cells, and generating target grid cells according to the first grid cells, the second grid cells, and the third grid cells.
CN201980037292.7A 2019-12-30 2019-12-30 Perceptual information processing method, apparatus, computer device, and storage medium Active CN113383283B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130191 WO2021134357A1 (en) 2019-12-30 2019-12-30 Perception information processing method and apparatus, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN113383283A (en) 2021-09-10
CN113383283B (en) 2024-06-18

Family

ID=76687487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980037292.7A Active CN113383283B (en) 2019-12-30 2019-12-30 Perceptual information processing method, apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN113383283B (en)
WO (1) WO2021134357A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920166B (en) * 2021-10-29 2024-05-28 广州文远知行科技有限公司 Method, device, vehicle and storage medium for selecting object motion model
CN115164931B (en) * 2022-09-08 2022-12-09 南开大学 System, method and equipment for assisting blind person in going out
CN117407694A (en) * 2023-11-06 2024-01-16 九识(苏州)智能科技有限公司 Multi-mode information processing method, device, equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106428000B (en) * 2016-09-07 2018-12-21 清华大学 A kind of vehicle speed control device and method
US10286915B2 (en) * 2017-01-17 2019-05-14 Nio Usa, Inc. Machine learning for personalized driving
US10551842B2 (en) * 2017-06-19 2020-02-04 Hitachi, Ltd. Real-time vehicle state trajectory prediction for vehicle energy management and autonomous drive
KR102070527B1 (en) * 2017-06-22 2020-01-28 바이두닷컴 타임즈 테크놀로지(베이징) 컴퍼니 리미티드 Evaluation Framework for Trajectories Predicted in Autonomous Vehicle Traffic Prediction
CN109029417B (en) * 2018-05-21 2021-08-10 南京航空航天大学 Unmanned aerial vehicle SLAM method based on mixed visual odometer and multi-scale map
CN108981726A (en) * 2018-06-09 2018-12-11 安徽宇锋智能科技有限公司 Unmanned vehicle semanteme Map building and building application method based on perceptual positioning monitoring
CN109029422B (en) * 2018-07-10 2021-03-05 北京木业邦科技有限公司 Method and device for building three-dimensional survey map through cooperation of multiple unmanned aerial vehicles
CN109556615B (en) * 2018-10-10 2022-10-04 吉林大学 Driving map generation method based on multi-sensor fusion cognition of automatic driving
CN110286387B (en) * 2019-06-25 2021-09-24 深兰科技(上海)有限公司 Obstacle detection method and device applied to automatic driving system and storage medium
CN110542908B (en) * 2019-09-09 2023-04-25 深圳市海梁科技有限公司 Laser radar dynamic object sensing method applied to intelligent driving vehicle

Also Published As

Publication number Publication date
WO2021134357A1 (en) 2021-07-08
CN113383283A (en) 2021-09-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant