CN113383283A - Perception information processing method and device, computer equipment and storage medium


Info

Publication number
CN113383283A
CN113383283A (application number CN201980037292.7A)
Authority
CN
China
Prior art keywords
point cloud
map
prediction
task
obstacle
Prior art date
Legal status
Granted
Application number
CN201980037292.7A
Other languages
Chinese (zh)
Other versions
CN113383283B (en)
Inventor
Inventor not disclosed
Current Assignee
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Priority date
Filing date
Publication date
Application filed by DeepRoute AI Ltd
Publication of CN113383283A
Application granted
Publication of CN113383283B
Status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

A perception information processing method includes: acquiring an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task; acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task and the driving path planning task, the perception information including an original point cloud signal and map information; performing feature extraction on the original point cloud signal to obtain point cloud feature information; performing feature extraction on the map information to obtain a map feature image; and inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and a driving path planning result corresponding to the driving path planning task.

Description

Perception information processing method and device, computer equipment and storage medium

Technical Field
The application relates to a perception information processing method, a perception information processing device, computer equipment and a storage medium.
Background
The development of artificial intelligence technology has driven the development of autonomous driving technology. During autonomous driving, obstacles around the vehicle must be detected at all times, the trajectories of those obstacles predicted, and the driving path of the vehicle planned, so that the vehicle can be controlled to drive automatically. Conventionally, obstacle detection, obstacle trajectory prediction, and driving path planning are processed independently by the computer device, each according to its own task processing scheme. Because each task processes its own specific perception information, the same perception information is processed repeatedly; the resulting task data volume is large, and the computational efficiency of the computer device is consequently low.
Disclosure of Invention
According to various embodiments disclosed in the present application, a perception information processing method, apparatus, computer device, and storage medium capable of improving the computational efficiency of the computer device are provided.
A perception information processing method comprising:
acquiring an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task;
acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
performing feature extraction on the original point cloud signal to obtain point cloud feature information;
performing feature extraction on the map information to obtain a map feature image; and
inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and a driving path planning result corresponding to the driving path planning task.
A perception information processing apparatus comprising:
a first acquisition module, configured to acquire an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task;
a second acquisition module, configured to acquire perception information according to the obstacle detection task, the obstacle trajectory prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
a first extraction module, configured to perform feature extraction on the original point cloud signal to obtain point cloud feature information;
a second extraction module, configured to perform feature extraction on the map information to obtain a map feature image; and
an operation module, configured to input the point cloud feature information and the map feature image into a trained prediction model, perform a prediction operation on the point cloud feature information and the map feature image through the prediction model, and output an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and a driving path planning result corresponding to the driving path planning task.
A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of:
acquiring an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task;
acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
performing feature extraction on the original point cloud signal to obtain point cloud feature information;
performing feature extraction on the map information to obtain a map feature image; and
inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and a driving path planning result corresponding to the driving path planning task.
One or more non-transitory computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of:
acquiring an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task;
acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
performing feature extraction on the original point cloud signal to obtain point cloud feature information;
performing feature extraction on the map information to obtain a map feature image; and
inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and a driving path planning result corresponding to the driving path planning task.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features and advantages of the application will be apparent from the description and drawings, and from the claims.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application environment of a method for perceptual information processing in one or more embodiments.
FIG. 2 is a flow diagram of a method for perceptual information processing in one or more embodiments.
FIG. 3 is a schematic flowchart of the steps of performing a prediction operation on point cloud feature information and a map feature image through a trained prediction model in one or more embodiments.
FIG. 4 is a block diagram of a perceptual information processing apparatus in one or more embodiments.
FIG. 5 is a block diagram of a computer device in one or more embodiments.
Detailed Description
In order to make the technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The perception information processing method provided by the application can be applied to the application environment shown in FIG. 1. During autonomous driving, the onboard sensor 102 and the second onboard computer device 106 are each connected to the first onboard computer device 104 over a network. The first onboard computer device 104, referred to below simply as the computer device, acquires an obstacle detection task, an obstacle trajectory prediction task, and a driving path planning task. The computer device 104 acquires perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task. The perception information includes the original point cloud signal, acquired from the onboard sensor 102 according to the obstacle detection task, and the map information, acquired from the second onboard computer device 106 according to the obstacle trajectory prediction task and the driving path planning task. The onboard sensor may be a lidar, and the second onboard computer device may be a positioning device. The computer device 104 performs feature extraction on the original point cloud signal to obtain point cloud feature information, and performs feature extraction on the map information to obtain a map feature image. The computer device 104 then inputs the point cloud feature information and the map feature image into a trained prediction model, performs a prediction operation on them through the prediction model, and outputs an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
In one embodiment, as shown in FIG. 2, a perception information processing method is provided. The method is described below using the example in which it is applied to the computer device in FIG. 1, and includes the following steps:
step 202, obtaining an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task.
Step 204, acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, the perception information including an original point cloud signal and map information.
While the vehicle drives automatically, a lidar mounted on the vehicle can scan the surrounding environment and obtain a corresponding original point cloud signal. The original point cloud signal may be a three-dimensional point cloud signal. Map information may also be generated by a positioning device mounted on the vehicle. The map information may include road information, the position of the vehicle in the map, and the like; for example, the map information may be a high-precision map. When the computer device obtains the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, it can acquire the corresponding perception information according to these tasks. The perception information includes the original point cloud signal and the map information. Specifically, the computer device acquires, according to the obstacle detection task, the original point cloud signal collected by the lidar within its visible range; the visible range may differ between lidars. The computer device acquires the map information from the positioning device according to the obstacle trajectory prediction task and the driving path planning task.
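As an illustration of the data flow described above (not part of the patent itself), the shared perception information might be bundled as follows; the structure, field names, and function are hypothetical:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PerceptionInfo:
        """Perception information shared by all three tasks (hypothetical structure)."""
        raw_point_cloud: np.ndarray  # (N, 3) XYZ points within the lidar's visible range
        map_info: dict               # road information, ego position in the map, map elements

    def acquire_perception_info(lidar_scan: np.ndarray, positioning_map: dict) -> PerceptionInfo:
        # The point cloud is fetched once for the obstacle detection task and the map
        # once for the trajectory prediction and path planning tasks; all three tasks
        # then share this single bundle instead of re-reading the sensors per task.
        return PerceptionInfo(raw_point_cloud=lidar_scan, map_info=positioning_map)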
Step 206, performing feature extraction on the original point cloud signal to obtain point cloud feature information.
The computer device performs feature extraction on the original point cloud signal. The point cloud feature information of the original point cloud signal can be extracted by rasterization. In the autonomous driving mode, rasterization can be used when the computing resources of the computer device are below a preset threshold, for example in real-time monitoring scenarios during autonomous driving.
Specifically, the computer device determines the signal region corresponding to the original point cloud signal. The signal region may be the smallest signal space that contains all of the original point cloud signal; for example, an original point cloud signal collected by a lidar with a visible range of 200 m corresponds to a signal region of roughly 400 m x 400 m x 10 m (length x width x height). The computer device can divide the signal region according to a preset size, obtaining a plurality of grid cells; the preset size determines the size of each grid cell. When the computer device divides the signal region, the original point cloud signal is partitioned into the corresponding grid cells. The computer device then performs feature extraction on the original point cloud signal within each grid cell to obtain the point cloud feature information. The point cloud feature information may include the number of points in the cell, and the maximum height, minimum height, average height, and height variance of the points in the cell.
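A minimal sketch of this per-cell statistic extraction, assuming NumPy, a 0.5 m cell, and a 400 m x 400 m region (all values are placeholders, not taken from the patent):

    import numpy as np
    from collections import defaultdict

    def rasterize_features(points, cell_size=0.5, x_range=(-200.0, 200.0), y_range=(-200.0, 200.0)):
        """Compute per-grid-cell statistics for an (N, 3) XYZ point cloud.

        Returns an (H, W, 5) array holding, for each cell: point count, maximum
        height, minimum height, average height, and height variance.
        """
        w = int((x_range[1] - x_range[0]) / cell_size)
        h = int((y_range[1] - y_range[0]) / cell_size)

        cells = defaultdict(list)  # (row, col) -> list of Z values falling in that cell
        for x, y, z in points:
            col = int((x - x_range[0]) / cell_size)
            row = int((y - y_range[0]) / cell_size)
            if 0 <= col < w and 0 <= row < h:
                cells[(row, col)].append(z)

        features = np.zeros((h, w, 5), dtype=np.float32)
        for (row, col), heights in cells.items():
            hs = np.asarray(heights)
            features[row, col] = (len(hs), hs.max(), hs.min(), hs.mean(), hs.var())
        return features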
Step 208, performing feature extraction on the map information to obtain a map feature image.
The computer device obtains the map feature image by extracting map elements from the map information and rendering those elements. Map elements may include lane lines, stop lines, crosswalks, traffic lights, traffic signs, and the like. In one embodiment, performing feature extraction on the map information to obtain a map feature image includes: extracting the map elements from the map information, and rendering the corresponding map elements according to a plurality of element channels to obtain the map feature image. After extracting the map elements, the computer device obtains the element channel corresponding to each map element and renders each element in its target color value according to that channel, so that elements such as lane lines, stop lines, crosswalks, traffic lights, and traffic signs are rendered into the map feature image. The element channels may include the three color channels Red, Green, and Blue, so the map feature image may be an RGB image. Because the map feature image contains the road information along the vehicle's route, it can be used both to predict target trajectories and to plan the vehicle's driving path.
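For illustration, rendering extracted map elements into an RGB map feature image could look like the following sketch; the element-to-color assignments are assumptions, since the patent does not fix them:

    import numpy as np

    # Hypothetical target color values per element type.
    ELEMENT_COLORS = {
        "lane_line":     (255, 255, 255),
        "stop_line":     (255, 0, 0),
        "crosswalk":     (0, 0, 255),
        "traffic_light": (0, 255, 0),
        "traffic_sign":  (255, 255, 0),
    }

    def render_map_features(elements, image_size=(400, 400)):
        """Render map elements into an RGB image.

        elements: list of (element_type, coords) pairs, where coords is an (M, 2)
        integer array of (row, col) pixel positions already projected into the
        image frame.
        """
        image = np.zeros((image_size[0], image_size[1], 3), dtype=np.uint8)
        for element_type, coords in elements:
            color = ELEMENT_COLORS.get(element_type)
            if color is not None:
                image[coords[:, 0], coords[:, 1]] = color  # write the target color value
        return image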
Step 210, inputting the point cloud feature information and the map feature image into the trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
A pre-trained prediction model is stored on the computer device. The prediction model is obtained by training on a large amount of sample data, and may employ any of various deep learning neural network models, such as a deep convolutional neural network or a Hopfield network.
The computer device converts the point cloud feature information into a point cloud feature vector and converts the map feature image into a map feature vector. The computer device then inputs the point cloud feature vector and the map feature vector into the trained prediction model. Through the prediction model, the computer device fuses the point cloud feature vector and the map feature vector to obtain fused feature information, and performs a prediction operation on the fused feature information to obtain: the obstacles in the surrounding environment and the position of each obstacle; the driving direction and corresponding position of each obstacle within a preset time period; and a plurality of driving paths of the vehicle within the preset time period, together with a weight for each path. The computer device then outputs, through the prediction model, the obstacles and their positions as the obstacle detection result, the driving directions and corresponding positions within the preset time period as the obstacle trajectory prediction result, and the driving paths and their weights as the driving path planning result. The obstacles in the obstacle detection result may include dynamic foreground obstacles, static foreground obstacles, road line markers, and the like.
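A small sketch of the conversion step, assuming the feature maps from the previous steps are NumPy arrays and the prediction model is a PyTorch module (the (1, C, H, W) layout is an assumption):

    import numpy as np
    import torch

    def to_model_inputs(point_cloud_features: np.ndarray, map_feature_image: np.ndarray):
        """Convert (H, W, C) feature maps into batched (1, C, H, W) float tensors."""
        pc = torch.from_numpy(point_cloud_features).permute(2, 0, 1).unsqueeze(0).float()
        mp = torch.from_numpy(map_feature_image).permute(2, 0, 1).unsqueeze(0).float() / 255.0
        return pc, mp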
In this embodiment, the computer device obtains the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task; obtains the original point cloud signal according to the obstacle detection task and the map information according to the trajectory prediction and driving path planning tasks; and extracts the point cloud feature information corresponding to the original point cloud signal and the map feature image corresponding to the map information. Performing feature extraction on the original point cloud signal and the map information filters out unnecessary information and can improve the prediction accuracy of the subsequent prediction model. The computer device inputs the point cloud feature information and the map feature image into one trained prediction model for the prediction operation, so that obstacle detection, obstacle trajectory prediction, and driving path planning are processed in parallel within the same model and the original point cloud signal and map information do not need to be processed repeatedly. This reduces the task data volume, improves the computational efficiency of the computer device, and produces the prediction results more efficiently.
In one embodiment, as shown in FIG. 3, the step of performing a prediction operation on the point cloud feature information and the map feature image through the trained prediction model and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task includes:
step 302, extracting the point cloud context features corresponding to the point cloud feature information and the map context features corresponding to the map feature image through the perception layer of the prediction model.
Step 304, inputting the point cloud context features and the map context features into a semantic analysis layer, and fusing the point cloud context features and the map context features through the semantic analysis layer to obtain fused feature information.
Step 306, inputting the fused feature information into the prediction layers, performing a prediction operation on the fused feature information through the prediction layers, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task, where the prediction layers include a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
The computer device converts the point cloud feature information and the map feature image into a point cloud feature vector and a map feature vector, respectively. The trained prediction model may include a perception layer, a semantic analysis layer, and prediction layers. The computer device inputs the point cloud feature vector and the map feature vector into the perception layer of the trained prediction model, and extracts through the perception layer the point cloud context features corresponding to the point cloud feature vector and the map context features corresponding to the map feature vector. The point cloud context features and the map context features serve as the input of the semantic analysis layer, which fuses them into the fused feature information. The prediction model then feeds the fused feature information into a plurality of prediction layers: one corresponding to the obstacle detection task, one corresponding to the obstacle trajectory prediction task, and one corresponding to the driving path planning task. Through these prediction layers, the prediction model performs the corresponding prediction operations on the fused feature information to obtain the obstacles in the surrounding environment and the position of each obstacle, the driving direction and corresponding position of each obstacle within a preset time period, and a plurality of driving paths of the vehicle within the preset time period together with the weight of each path. The prediction model outputs the obstacles and their positions as the obstacle detection result, the driving directions and corresponding positions as the obstacle trajectory prediction result, and the driving paths and their weights as the driving path planning result.
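The layered structure described here could be sketched as a multi-head convolutional network. The layer widths and per-head output channels below are illustrative assumptions, not the patent's actual architecture, and the two inputs are assumed to share the same spatial resolution:

    import torch
    import torch.nn as nn

    class MultiTaskPredictionModel(nn.Module):
        def __init__(self, pc_channels=5, map_channels=3, hidden=64):
            super().__init__()
            # Perception layer: extract context features from each input separately.
            self.pc_encoder = nn.Sequential(
                nn.Conv2d(pc_channels, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU())
            self.map_encoder = nn.Sequential(
                nn.Conv2d(map_channels, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU())
            # Semantic analysis layer: fuse the point cloud and map context features.
            self.fusion = nn.Conv2d(2 * hidden, hidden, 1)
            # One prediction head per task (output channel counts are placeholders).
            self.detection_head = nn.Conv2d(hidden, 8, 1)   # obstacle classes and positions
            self.trajectory_head = nn.Conv2d(hidden, 4, 1)  # driving direction and future positions
            self.planning_head = nn.Conv2d(hidden, 2, 1)    # candidate paths and their weights

        def forward(self, pc, mp):
            pc_ctx = self.pc_encoder(pc)    # point cloud context features
            map_ctx = self.map_encoder(mp)  # map context features
            fused = self.fusion(torch.cat([pc_ctx, map_ctx], dim=1))
            return (self.detection_head(fused),
                    self.trajectory_head(fused),
                    self.planning_head(fused))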
In this embodiment, the computer device extracts, through the perception layer of the prediction model, the point cloud context features corresponding to the point cloud feature information and the map context features corresponding to the map feature image; fuses the point cloud context features and the map context features through the semantic analysis layer; and then inputs the fused feature information into the plurality of prediction layers, which perform the corresponding prediction operations and output the obstacle detection result, the obstacle trajectory prediction result, and the driving path planning result. Obstacle detection needs the point cloud context features, while obstacle trajectory prediction and driving path planning need both the point cloud context features and the map context features. By fusing the two through the semantic analysis layer of the prediction model and feeding the fused feature information into the prediction layer of each task, the model processes obstacle detection, obstacle trajectory prediction, and driving path planning in parallel, further improving the computational efficiency of the computer device.
In one embodiment, fusing the point cloud context features with the map context features through the semantic analysis layer to obtain fused feature information includes: acquiring, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context features and a map weight corresponding to the map context features; and calculating the fused feature information from the point cloud weight and the point cloud context features, and the map weight and the map context features.
After the prediction model inputs the point cloud context features and the map context features into the semantic analysis layer, the semantic analysis layer fuses them. Specifically, the semantic analysis layer of the prediction model obtains the point cloud weight corresponding to the point cloud context features and the map weight corresponding to the map context features, and combines the weights and features according to a preset relationship to obtain the fused feature information. The preset relationship may be a weighted sum of the point cloud context features and the map context features, followed by averaging the sum.
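Read literally, the preset relationship is a weighted sum of the two context features followed by averaging; a one-line sketch (how the weights themselves are produced is left to the semantic analysis layer):

    import torch

    def fuse_context_features(pc_ctx, map_ctx, pc_weight, map_weight):
        """fused = (w_pc * f_pc + w_map * f_map) / 2, applied element-wise."""
        return (pc_weight * pc_ctx + map_weight * map_ctx) / 2.0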
In this embodiment, the computer device obtains, through the semantic analysis layer of the prediction model, the point cloud weight corresponding to the point cloud context features and the map weight corresponding to the map context features, and calculates the fused feature information from the point cloud weight and point cloud context features and the map weight and map context features. Fusing the features according to their respective weights effectively improves the accuracy of the fused feature information, while still allowing obstacle detection, obstacle trajectory prediction, and driving path planning to be processed in parallel during autonomous driving, further improving the computational efficiency of the computer device.
In one embodiment, performing feature extraction on the original point cloud signal to obtain point cloud feature information includes: determining a signal region corresponding to the original point cloud signal according to the original point cloud signal; dividing the signal region into a plurality of grid cells according to a preset size; and performing feature extraction on the corresponding original point cloud signal in each grid cell to obtain the point cloud feature information.
To perform feature extraction on the original point cloud signal, the computer device first determines the signal region corresponding to the signal. From the original point cloud signal, the computer device calculates the difference between the maximum and minimum coordinate values along each of the X, Y, and Z directions, and takes these three differences as the length, width, and height of the signal region. The signal region may be the smallest signal space containing all of the original point cloud signal.
The signal region may be divided in several ways, for example by rasterization or voxelization. When the computing resources of the computer device are below the preset threshold, the computer device may rasterize the original point cloud signal, dividing the signal region into a plurality of grid cells. Specifically, the preset size for rasterization may be a length and a width, which may differ from each other. The computer device divides the signal region along the X direction according to the length in the preset size, obtaining first grid cells: a plurality of cells of equal extent along X. It divides the signal region along the Y direction according to the width in the preset size, obtaining second grid cells: a plurality of cells of equal extent along Y. The heights of the grid cells may all be the same. The computer device obtains the target grid cells from the first grid cells and the second grid cells; the order in which the directions are divided is not limited. In the autonomous driving mode, when computing resources are limited and real-time requirements are high, rasterization improves the efficiency of point cloud feature extraction.
When the computing resources of the computer device are greater than or equal to the preset threshold, the computer device may voxelize the original point cloud signal. Specifically, the computer device voxelizes the signal according to a preset size consisting of a length, a width, and a height, which may all be equal. The computer device divides the signal region along the X direction according to the length to obtain first grid cells, along the Y direction according to the width to obtain second grid cells, and along the Z direction according to the height to obtain third grid cells, and then generates the target grid cells from the first, second, and third grid cells; the order in which the directions are divided is not limited. Dividing the signal region along all of the X, Y, and Z directions when sufficient computing resources are available handles well the case in which other point cloud returns lie above an obstacle and would otherwise be merged with it, so the accuracy of point cloud feature extraction in the autonomous driving mode can be further improved.
The computer device assigns the original point cloud signal to the corresponding target grid cells and performs feature extraction on the position of each point within each target grid cell to obtain the point cloud feature information, which includes the number of points in the cell and the maximum height, minimum height, average height, and height variance of those points. The computer device can also arrange the point cloud feature information by rows into matrix units, and arrange the matrix units according to a preset rule to generate a point cloud feature matrix.
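Combining the region determination and the two division schemes, a sketch under the assumption of axis-aligned bounds (the preset sizes are placeholders):

    import numpy as np

    def signal_region(points):
        """Smallest axis-aligned box containing all points: per-axis (min, max)."""
        return points.min(axis=0), points.max(axis=0)

    def grid_indices(points, cell, voxelize=False):
        """Assign each point of an (N, 3) cloud to a grid cell.

        cell: (length, width) for rasterization or (length, width, height) for
        voxelization. Rasterization ignores Z, matching the limited-compute path;
        voxelization also divides along Z, separating returns that lie above an
        obstacle from the obstacle itself.
        """
        lo, _ = signal_region(points)
        dims = 3 if voxelize else 2
        return np.floor((points[:, :dims] - lo[:dims]) / np.asarray(cell[:dims])).astype(int)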
In this embodiment, the computer device extracts the point cloud feature information by dividing the signal region corresponding to the original point cloud signal, which facilitates the subsequent parallel processing of the obstacle detection, obstacle trajectory prediction, and driving path planning tasks by the prediction model.
In one embodiment, the method further includes: determining, among the obstacle detection results, an obstacle detection result satisfying a preset condition; extracting the corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result satisfying the preset condition; and determining a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result satisfying the preset condition.
The computer device obtains the obstacle detection result, obstacle trajectory prediction result, and driving path planning result output by the prediction model. The obstacle detection result may include the obstacles in the surrounding environment and their positions; the obstacles may include dynamic foreground obstacles, static foreground obstacles, road line markings, background, and the like. The obstacle trajectory prediction result may include the driving direction and corresponding position of each obstacle within a preset time period. The driving path planning result may include a plurality of driving paths of the vehicle within the preset time period and the weight of each path. The computer device determines, among the obstacle detection results, the results satisfying a preset condition; the preset condition may be that the obstacle is a dynamic foreground obstacle, a static foreground obstacle, or the like. The computer device extracts from the obstacle trajectory prediction result the target trajectories corresponding to the detection results satisfying the preset condition, then extracts the corresponding driving paths from the driving path planning result according to the extracted target trajectories and those detection results, and finally selects the extracted driving path with the largest weight as the target driving path.
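A sketch of this selection logic; the dictionary keys and result structures are illustrative assumptions, and the step of narrowing the candidate paths against the target trajectories is elided:

    def select_target_path(detections, trajectories, planned_paths,
                           foreground_types=("dynamic_foreground", "static_foreground")):
        """Pick the target driving path from the three prediction results.

        detections: list of dicts with "id" and "type" keys.
        trajectories: dict mapping obstacle id -> predicted trajectory.
        planned_paths: list of (path, weight) pairs.
        """
        # 1. Keep the detection results satisfying the preset condition.
        foreground = [d for d in detections if d["type"] in foreground_types]
        # 2. Extract the corresponding target trajectories.
        target_trajectories = [trajectories[d["id"]] for d in foreground if d["id"] in trajectories]
        # 3. Among the candidate paths, choose the one with the largest weight.
        target_path, _ = max(planned_paths, key=lambda pw: pw[1])
        return foreground, target_trajectories, target_path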
In this embodiment, the computer device determines the obstacle detection results satisfying the preset condition, extracts the corresponding target trajectories from the obstacle trajectory prediction result, and determines the target driving path in the driving path planning result according to the extracted target trajectories and the qualifying detection results. An optimal driving path can thus be selected from the multiple candidate paths, improving the accuracy of driving path planning.
In one embodiment, as shown in FIG. 4, there is provided a perception information processing apparatus including: a first obtaining module 402, a second obtaining module 404, a first extraction module 406, a second extraction module 408, and an operation module 410, wherein:
the first obtaining module 402 is configured to obtain an obstacle detection task, an obstacle trajectory prediction task, and a travel path planning task.
And a second obtaining module 404, configured to obtain perception information according to the obstacle detection task, the obstacle trajectory prediction task, and the driving path planning task, where the perception information includes an original point cloud signal and map information.
The first extraction module 406 is configured to perform feature extraction on the original point cloud signal to obtain point cloud feature information.
The second extraction module 408 is configured to perform feature extraction on the map information to obtain a map feature image.
And the operation module 410 is configured to input the point cloud feature information and the map feature image into the trained prediction model, perform prediction operation on the point cloud feature information and the map feature image through the prediction model, and output an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task.
In one embodiment, the operation module 410 is further configured to extract point cloud context features corresponding to the point cloud feature information and map context features corresponding to the map feature image through a perception layer of the prediction model; input the point cloud context features and the map context features into a semantic analysis layer, and fuse them through the semantic analysis layer to obtain fused feature information; and input the fused feature information into the prediction layers, perform a prediction operation on it through the prediction layers, and output an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and a driving path planning result corresponding to the driving path planning task, where the prediction layers include a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
In one embodiment, the operation module 410 is further configured to obtain, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context features and a map weight corresponding to the map context features, and to calculate the fused feature information from the point cloud weight and the point cloud context features, and the map weight and the map context features.
In one embodiment, the apparatus further includes a determining module, configured to determine, among the obstacle detection results, an obstacle detection result satisfying a preset condition; extract the corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result satisfying the preset condition; and determine a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result satisfying the preset condition.
In one embodiment, the first extraction module 406 is further configured to determine a signal region corresponding to the original point cloud signal according to the original point cloud signal; divide the signal region into a plurality of grid cells according to a preset size; and perform feature extraction on the corresponding original point cloud signal in each grid cell to obtain point cloud feature information.
In one embodiment, the second extraction module 408 is further configured to extract map elements from the map information, and render the corresponding map elements according to a plurality of element channels to obtain a map feature image.
For the specific limitations of the perception information processing apparatus, reference may be made to the limitations of the perception information processing method above; details are not repeated here. Each module in the perception information processing apparatus may be implemented wholly or partially in software, hardware, or a combination of the two. The modules may be embedded in hardware in, or independent of, a processor in the computer device, or stored in software in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, whose internal structure may be as shown in FIG. 5. The computer device includes a processor, a memory, a communication interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, computer-readable instructions, and a database, and the internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The database of the computer device stores the obstacle detection result, the obstacle trajectory prediction result, and the driving path planning result. The communication interface of the computer device connects and communicates with the onboard sensor and the second onboard computer device. The computer-readable instructions, when executed by the processor, implement a perception information processing method.
Those skilled in the art will appreciate that the structure shown in FIG. 5 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
A computer device comprising a memory and one or more processors, the memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of the various method embodiments described above.
One or more non-transitory computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the various method embodiments described above.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the embodiments above can be implemented by computer-readable instructions instructing the relevant hardware. The instructions can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (20)

  1. A perception information processing method, comprising:
    acquiring an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task;
    acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
    performing feature extraction on the original point cloud signal to obtain point cloud feature information;
    performing feature extraction on the map information to obtain a map feature image; and
    inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and a driving path planning result corresponding to the driving path planning task.
  2. The method according to claim 1, wherein the trained prediction model comprises a perception layer, a semantic analysis layer and prediction layers, and wherein the performing the prediction operation on the point cloud feature information and the map feature image through the trained prediction model and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and the driving path planning result corresponding to the driving path planning task comprises:
    extracting point cloud context features corresponding to the point cloud feature information and map context features corresponding to the map feature image through a perception layer of the prediction model;
    inputting the point cloud context features and the map context features into a semantic analysis layer, and fusing the point cloud context features and the map context features through the semantic analysis layer to obtain fused feature information; and
    inputting the fused feature information into the prediction layers, performing a prediction operation on the fused feature information through the prediction layers, and outputting the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task, wherein the prediction layers comprise a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
  3. The method of claim 2, wherein the fusing the point cloud context feature with the map context feature by the semantic analysis layer to obtain fused feature information comprises:
    acquiring a point cloud weight corresponding to the point cloud context features and a map weight corresponding to the map context features through the semantic analysis layer; and
    calculating fused feature information according to the point cloud weight and the point cloud context features, and the map weight and the map context features.
  4. The method of claim 1, wherein the performing feature extraction on the original point cloud signal to obtain point cloud feature information comprises:
    determining a signal region corresponding to the original point cloud signal according to the original point cloud signal;
    dividing the signal region into a plurality of grid cells according to a preset size; and
    performing feature extraction on the corresponding original point cloud signal in each grid cell to obtain point cloud feature information.
  5. The method of claim 1, wherein the performing feature extraction on the map information to obtain a map feature image comprises:
    extracting map elements from the map information; and
    rendering the corresponding map elements according to a plurality of element channels to obtain the map feature image.
  6. The method according to any one of claims 1 to 5, further comprising:
    determining, among obstacle detection results, an obstacle detection result satisfying a preset condition;
    extracting a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result satisfying the preset condition; and
    determining a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result satisfying the preset condition.
  7. A perception information processing apparatus comprising:
    a first acquisition module, configured to acquire an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task;
    a second acquisition module, configured to acquire perception information according to the obstacle detection task, the obstacle trajectory prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information;
    a first extraction module, configured to perform feature extraction on the original point cloud signal to obtain point cloud feature information;
    a second extraction module, configured to perform feature extraction on the map information to obtain a map feature image; and
    an operation module, configured to input the point cloud feature information and the map feature image into a trained prediction model, perform a prediction operation on the point cloud feature information and the map feature image through the prediction model, and output an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and a driving path planning result corresponding to the driving path planning task.
  8. The apparatus of claim 7, wherein the operation module is further configured to extract point cloud context features corresponding to the point cloud feature information and map context features corresponding to the map feature image through a perception layer of the prediction model; input the point cloud context features and the map context features into a semantic analysis layer, and fuse the point cloud context features with the map context features through the semantic analysis layer to obtain fused feature information; and input the fused feature information into prediction layers, perform a prediction operation on the fused feature information through the prediction layers, and output the obstacle detection result corresponding to the obstacle detection task, the obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task, and the driving path planning result corresponding to the driving path planning task, wherein the prediction layers comprise a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle trajectory prediction task, and a prediction layer corresponding to the driving path planning task.
  9. The apparatus of claim 7, wherein the operation module is further configured to obtain, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context features and a map weight corresponding to the map context features; and calculate fused feature information according to the point cloud weight and the point cloud context features, and the map weight and the map context features.
  10. The apparatus of any one of claims 7 to 9, further comprising: a determining module, configured to determine, among obstacle detection results, an obstacle detection result satisfying a preset condition; extract a corresponding target trajectory from the obstacle trajectory prediction result according to the obstacle detection result satisfying the preset condition; and determine a target driving path in the driving path planning result according to the extracted target trajectory and the obstacle detection result satisfying the preset condition.
  11. A computer device comprising one or more processors and memory having stored therein computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the steps of: acquiring an obstacle detection task, an obstacle trajectory prediction task and a driving path planning task; acquiring perception information according to the obstacle detection task, the obstacle trajectory prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information; performing feature extraction on the original point cloud signal to obtain point cloud feature information; performing feature extraction on the map information to obtain a map feature image; and inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle trajectory prediction result corresponding to the obstacle trajectory prediction task and a driving path planning result corresponding to the driving path planning task.
  12. The computer device of claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: extracting a point cloud context feature corresponding to the point cloud feature information and a map context feature corresponding to the map feature image through a perception layer of the prediction model; inputting the point cloud context feature and the map context feature into a semantic analysis layer, and fusing the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information; and inputting the fused feature information into a prediction layer, performing a prediction operation on the fused feature information through the prediction layer, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task, and a driving path planning result corresponding to the driving path planning task, wherein the prediction layer comprises a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle track prediction task, and a prediction layer corresponding to the driving path planning task.
  13. The computer device of claim 12, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: acquiring, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature; and calculating the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
  14. The computer device of claim 11, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: determining a signal area corresponding to the original point cloud signal; dividing the signal area into a plurality of grid units according to a preset size; and performing feature extraction on the original point cloud signal within each grid unit to obtain the point cloud feature information.
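The grid division in claim 14 resembles the pillar/voxel feature extraction common in point cloud pipelines. A minimal NumPy sketch, assuming a 0.5 m cell and per-cell point count plus mean height as the extracted features (the claim fixes neither the cell size nor the per-cell statistic):

    import numpy as np

    def grid_features(points, cell_size=0.5):
        # points: (N, 3) array of x, y, z from the original point cloud signal.
        # Signal area: the bounding region of the points in the x-y plane.
        x_min, y_min = points[:, 0].min(), points[:, 1].min()
        # Grid units of a preset size.
        ix = ((points[:, 0] - x_min) // cell_size).astype(int)
        iy = ((points[:, 1] - y_min) // cell_size).astype(int)
        n_x, n_y = ix.max() + 1, iy.max() + 1
        counts = np.zeros((n_x, n_y))
        heights = np.zeros((n_x, n_y))
        np.add.at(counts, (ix, iy), 1)
        np.add.at(heights, (ix, iy), points[:, 2])
        mean_height = np.divide(heights, counts,
                                out=np.zeros_like(heights), where=counts > 0)
        # Per-cell features: point count and mean height, stacked channel-wise.
        return np.stack([counts, mean_height], axis=-1)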
  15. The computer device of any one of claims 11 to 14, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: determining an obstacle detection result meeting a preset condition among the obstacle detection results; extracting a corresponding target track from the obstacle track prediction result according to the obstacle detection result meeting the preset condition; and determining a target driving path in the driving path planning result according to the extracted target track and the obstacle detection result meeting the preset condition.
  16. One or more non-transitory computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: acquiring an obstacle detection task, an obstacle track prediction task and a driving path planning task; acquiring perception information according to the obstacle detection task, the obstacle track prediction task and the driving path planning task, wherein the perception information comprises an original point cloud signal and map information; performing feature extraction on the original point cloud signal to obtain point cloud feature information; performing feature extraction on the map information to obtain a map feature image; and inputting the point cloud feature information and the map feature image into a trained prediction model, performing a prediction operation on the point cloud feature information and the map feature image through the prediction model, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task and a driving path planning result corresponding to the driving path planning task.
  17. The storage media of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: extracting a point cloud context feature corresponding to the point cloud feature information and a map context feature corresponding to the map feature image through a perception layer of the prediction model; inputting the point cloud context feature and the map context feature into a semantic analysis layer, and fusing the point cloud context feature and the map context feature through the semantic analysis layer to obtain fused feature information; and inputting the fused feature information into a prediction layer, performing a prediction operation on the fused feature information through the prediction layer, and outputting an obstacle detection result corresponding to the obstacle detection task, an obstacle track prediction result corresponding to the obstacle track prediction task, and a driving path planning result corresponding to the driving path planning task, wherein the prediction layer comprises a prediction layer corresponding to the obstacle detection task, a prediction layer corresponding to the obstacle track prediction task, and a prediction layer corresponding to the driving path planning task.
  18. The storage media of claim 17, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: acquiring, through the semantic analysis layer, a point cloud weight corresponding to the point cloud context feature and a map weight corresponding to the map context feature; and calculating the fused feature information according to the point cloud weight and the point cloud context feature, and the map weight and the map context feature.
  19. The storage media of claim 16, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: determining a signal area corresponding to the original point cloud signal; dividing the signal area into a plurality of grid units according to a preset size; and performing feature extraction on the original point cloud signal within each grid unit to obtain the point cloud feature information.
  20. The storage media of any one of claims 16 to 19, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the steps of: determining an obstacle detection result meeting a preset condition among the obstacle detection results; extracting a corresponding target track from the obstacle track prediction result according to the obstacle detection result meeting the preset condition; and determining a target driving path in the driving path planning result according to the extracted target track and the obstacle detection result meeting the preset condition.
CN201980037292.7A 2019-12-30 2019-12-30 Perceptual information processing method, apparatus, computer device, and storage medium Active CN113383283B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130191 WO2021134357A1 (en) 2019-12-30 2019-12-30 Perception information processing method and apparatus, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN113383283A true CN113383283A (en) 2021-09-10
CN113383283B CN113383283B (en) 2024-06-18

Family ID: 76687487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980037292.7A Active CN113383283B (en) 2019-12-30 2019-12-30 Perceptual information processing method, apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN113383283B (en)
WO (1) WO2021134357A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118608913A (en) * 2024-08-08 2024-09-06 浙江吉利控股集团有限公司 Feature fusion method, device, apparatus, medium and program product

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109029417B * 2018-05-21 2021-08-10 南京航空航天大学 Unmanned aerial vehicle SLAM method based on hybrid visual odometry and multi-scale map
CN109029422B (en) * 2018-07-10 2021-03-05 北京木业邦科技有限公司 Method and device for building three-dimensional survey map through cooperation of multiple unmanned aerial vehicles
CN110542908B (en) * 2019-09-09 2023-04-25 深圳市海梁科技有限公司 Laser radar dynamic object sensing method applied to intelligent driving vehicle

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106428000A (en) * 2016-09-07 2017-02-22 清华大学 Vehicle speed control device and method
US20180201273A1 (en) * 2017-01-17 2018-07-19 NextEv USA, Inc. Machine learning for personalized driving
US20180364725A1 (en) * 2017-06-19 2018-12-20 Hitachi, Ltd. Real-time vehicle state trajectory prediction for vehicle energy management and autonomous drive
WO2018232680A1 (en) * 2017-06-22 2018-12-27 Baidu.Com Times Technology (Beijing) Co., Ltd. Evaluation framework for predicted trajectories in autonomous driving vehicle traffic prediction
CN108196535A * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on reinforcement learning and multi-sensor fusion
CN108981726A * 2018-06-09 2018-12-11 安徽宇锋智能科技有限公司 Unmanned vehicle semantic map construction and application method based on perceptual positioning monitoring
CN109556615A * 2018-10-10 2019-04-02 吉林大学 Driving map generation method based on multi-sensor fusion cognition for automatic driving
CN110286387A (en) * 2019-06-25 2019-09-27 深兰科技(上海)有限公司 Obstacle detection method, device and storage medium applied to automated driving system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848947A (en) * 2021-10-20 2021-12-28 上海擎朗智能科技有限公司 Path planning method and device, computer equipment and storage medium
CN113920166A (en) * 2021-10-29 2022-01-11 广州文远知行科技有限公司 Method and device for selecting object motion model, vehicle and storage medium
CN113920166B (en) * 2021-10-29 2024-05-28 广州文远知行科技有限公司 Method, device, vehicle and storage medium for selecting object motion model
CN115164931A * 2022-09-08 2022-10-11 南开大学 System, method and device for assisting blind people in traveling
CN117407694A (en) * 2023-11-06 2024-01-16 九识(苏州)智能科技有限公司 Multi-mode information processing method, device, equipment and storage medium
CN117407694B (en) * 2023-11-06 2024-08-27 九识(苏州)智能科技有限公司 Multi-mode information processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113383283B (en) 2024-06-18
WO2021134357A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
CN113383283B (en) Perceptual information processing method, apparatus, computer device, and storage medium
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
CN114930401A (en) Point cloud-based three-dimensional reconstruction method and device and computer equipment
CN111142557B (en) Unmanned aerial vehicle path planning method and system, computer equipment and readable storage medium
CN113678136B (en) Obstacle detection method and device based on unmanned technology and computer equipment
EP4152204A1 (en) Lane line detection method, and related apparatus
CN113811830B (en) Trajectory prediction method, apparatus, computer device and storage medium
CN113424121A (en) Vehicle speed control method and device based on automatic driving and computer equipment
CN110843789B (en) Vehicle lane change intention prediction method based on temporal convolutional network
KR20210074193A (en) Systems and methods for trajectory prediction
CN115917559A (en) Trajectory prediction method, apparatus, computer device and storage medium
CN113366486A (en) Object classification using out-of-region context
CN111986128A (en) Off-center image fusion
CN112660128B (en) Apparatus for determining lane change path of autonomous vehicle and method thereof
CN110751040B (en) Three-dimensional object detection method and device, electronic equipment and storage medium
JP7520444B2 (en) Vehicle-based data processing method, data processing device, computer device, and computer program
JP7376992B2 (en) Information processing device, information processing method, and program
CN110097077B (en) Point cloud data classification method and device, computer equipment and storage medium
CN116880462A (en) Automatic driving model, training method, automatic driving method and vehicle
CN115269371A (en) Platform for path planning system development of an autonomous driving system
CN115053277B (en) Method, system, computer device and storage medium for lane change classification of surrounding moving object
CN117387647A (en) Road planning method integrating vehicle-mounted sensor data and road sensor data
CN113632100A (en) Traffic light state identification method and device, computer equipment and storage medium
CN112710316A (en) Dynamic map generation focused on the field of construction and localization technology
CN114274980B (en) Track control method, track control device, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant