CN118046921A - Vehicle control method, device, vehicle-mounted equipment, vehicle and storage medium - Google Patents

Vehicle control method, device, vehicle-mounted equipment, vehicle and storage medium

Info

Publication number
CN118046921A
Authority
CN
China
Prior art keywords
vehicle
information
data
target object
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410210724.2A
Other languages
Chinese (zh)
Inventor
范圣印
贾砚波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jidu Technology Co Ltd
Original Assignee
Beijing Jidu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jidu Technology Co Ltd filed Critical Beijing Jidu Technology Co Ltd
Priority to CN202410210724.2A priority Critical patent/CN118046921A/en
Publication of CN118046921A publication Critical patent/CN118046921A/en
Pending legal-status Critical Current

Classifications

    • B60W: Conjoint control of vehicle sub-units of different type or different function; control systems specially adapted for hybrid vehicles; road vehicle drive control systems for purposes not related to the control of a particular sub-unit
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001: Planning or execution of driving tasks
    • B60W60/0027: Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60W2554/00: Input parameters relating to objects
    • B60W2554/20: Static objects
    • B60W2554/40: Dynamic objects, e.g. animals, windblown objects
    • B60W2554/402: Type
    • B60W2554/4029: Pedestrians
    • B60W2556/00: Input parameters relating to data
    • B60W2556/40: High definition maps

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a vehicle control method, device, vehicle-mounted apparatus, vehicle, and storage medium. The vehicle control method includes: determining behavior track information of a target object in the environment where a vehicle is located, based on environment perception data, position data of the vehicle, and map data; sending the behavior track information of the target object and the vehicle track information of the vehicle to a server, so that the server processes the behavior track information of the target object and the vehicle track information based on a target model to generate planning decision information, the target model being a Generative Pre-trained Transformer (GPT) trained on various kinds of training sample data; receiving the planning decision information fed back by the server, and generating planned trajectory information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information; and controlling the vehicle to travel based on the planned trajectory information and the driving strategy information. According to the embodiments of the present disclosure, the safety of automatic driving and the riding experience of the user can be improved.

Description

Vehicle control method, device, vehicle-mounted equipment, vehicle and storage medium
Technical Field
The disclosure relates to the technical field of automatic driving, and in particular relates to a vehicle control method, a vehicle control device, vehicle-mounted equipment, a vehicle and a storage medium.
Background
Vehicle intelligence is one of the main development directions of current vehicle technology, and automatic driving is a key technology in that process. In automatic driving, the vehicle itself plans a travel trajectory and travels in accordance with the planned trajectory.
However, in practical applications, an unsuitable planned trajectory not only degrades the automatic driving experience but can also seriously affect the safety of the vehicle itself and of other vehicles. How to improve the effectiveness and comfort of trajectory planning is therefore a constantly pursued goal in the field of automatic driving.
Disclosure of Invention
The embodiment of the disclosure at least provides a vehicle control method, a vehicle control device, vehicle-mounted equipment, a vehicle and a storage medium, which can improve the experience of automatic driving.
In a first aspect, an embodiment of the present disclosure provides a vehicle control method, including:
acquiring environment sensing data of an environment where a vehicle is located, position data of the vehicle and map data of the environment where the vehicle is located, and determining behavior track information of a target object in the environment where the vehicle is located based on the environment sensing data, the position data of the vehicle and the map data;
The behavior track information of the target object and the vehicle track information of the vehicle are sent to a server, so that the server processes the behavior track information of the target object and the vehicle track information based on a trained target model to generate planning decision information; the target model is a Generative Pre-trained Transformer (GPT) obtained through self-supervised training on various kinds of training sample data;
The planning decision information fed back by the server is received, and planning track information and driving strategy information aiming at the vehicle are generated according to the behavior track information of the target object and the planning decision information;
And controlling the vehicle to run based on the planned trajectory information and the driving strategy information.
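The four steps above (determining target-object trajectories, off-boarding the planning decision to the server, refining the decision locally, and controlling the vehicle) can be sketched as follows. This is an illustrative sketch only: the patent discloses no API, so every name and data shape below is a hypothetical stand-in.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of the first-aspect method; names are not from the patent.

@dataclass
class PlanningDecision:
    predicted_trajectory: List[Tuple[float, float]]  # coarse long-horizon path from the server model
    driving_instruction: str                         # e.g. "keep_lane", "yield"

class StubServer:
    """Stands in for the server-side GPT-based planner (step S102's peer)."""
    def plan(self, behavior_tracks, ego_track):
        # Pretend the large model extends the ego track by one step.
        x, y = ego_track[-1]
        return PlanningDecision([(x, y), (x + 1.0, y)], "keep_lane")

def vehicle_control_step(behavior_tracks, ego_track, server):
    # S102: send ego and target-object trajectories to the server
    decision = server.plan(behavior_tracks, ego_track)
    # S103: locally combine target behavior with the server's decision; here
    # we simply adopt the server trajectory and its instruction as the strategy
    planned = decision.predicted_trajectory
    strategy = decision.driving_instruction
    # S104: the caller hands (planned, strategy) to the motion controller
    return planned, strategy

planned, strategy = vehicle_control_step(
    behavior_tracks={"car_1": [(5.0, 0.0)]},  # S101 output, stubbed here
    ego_track=[(0.0, 0.0)],
    server=StubServer(),
)
```

The point of the split is that the heavy model reasoning happens off-board while the vehicle retains a local planner that can run in real time.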
In a possible implementation manner, the planning decision information includes predicted track information for the vehicle and driving instruction information, wherein the track length indicated by the predicted track information is greater than a preset length, and the driving instruction information is used for indicating driving behavior of the vehicle;
the generating the planning track information and the driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information includes:
And generating planning track information and driving strategy information for the vehicle according to the behavior track information, the predicted track information and the driving instruction information of the target object.
In a possible embodiment, the planning decision information further includes driving style information for indicating a degree of smoothness of driving of the vehicle; the controlling the vehicle to travel based on the planned trajectory information and the driving strategy information includes:
And controlling the vehicle to run based on the planned trajectory information, the driving strategy information and the driving style information.
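As one hypothetical realization (the patent does not define the encoding), the driving style information indicating smoothness could modulate how aggressively the controller tracks commanded speed, e.g. via exponential smoothing:

```python
# Hypothetical illustration: driving style encoded as a smoothness value in
# [0, 1], where higher values yield gentler speed changes. This encoding is
# an assumption, not disclosed by the patent.

def apply_driving_style(current_speed, target_speed, smoothness):
    """Blend toward the target speed; smoothness=0 jumps immediately,
    smoothness near 1 changes speed very gradually."""
    return smoothness * current_speed + (1.0 - smoothness) * target_speed

v = apply_driving_style(current_speed=10.0, target_speed=20.0, smoothness=0.8)
```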
In one possible implementation, the behavior track information of the target object includes tracking track information of the target object, predicted track information of the target object, and predicted behavior information of the target object.
In one possible implementation manner, the determining the behavior track information of the target object in the environment where the vehicle is located based on the environment perception data, the position data of the vehicle, and the map data includes:
Extracting features from the image data to obtain bird's-eye-view (BEV) features of the image data, and determining BEV features of the map data based on the BEV features of the image data;
Fusing the BEV features of the image data with the BEV features of the map data based on the position data of the vehicle to obtain a target fusion feature;
And determining the behavior track information of the target object based on the target fusion feature.
In one possible implementation manner, the fusing the BEV features of the image data with the BEV features of the map data based on the position data of the vehicle to obtain the target fusion feature includes:
Fusing the BEV features of the image data with the BEV features of the map data at corresponding moments to obtain single-frame fused BEV features;
And determining relative pose data of the vehicle between adjacent moments based on the position data of the vehicle, and aligning and fusing the multi-frame fused BEV features at and before the current moment based on the relative pose data, to obtain the target fusion feature.
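The temporal alignment step above can be sketched in a simplified 2-D form. The representation of the BEV features and all function names are assumptions, since the patent does not disclose them; here a BEV "feature" is just a list of (x, y, value) samples in the ego frame.

```python
import math

# Hedged sketch: align a previous frame's fused BEV feature samples into the
# current ego frame using the relative pose between adjacent moments, then
# fuse with the current frame's features.

def warp_to_current_frame(points, rel_pose):
    """rel_pose = (dx, dy, dyaw): ego motion between the two frames.
    A point fixed in the world moves by the inverse of the ego motion."""
    dx, dy, dyaw = rel_pose
    c, s = math.cos(-dyaw), math.sin(-dyaw)
    warped = []
    for x, y, v in points:
        # translate into the new ego origin, then rotate
        tx, ty = x - dx, y - dy
        warped.append((c * tx - s * ty, s * tx + c * ty, v))
    return warped

def fuse_frames(current, previous, rel_pose):
    # temporal fusion: aligned history concatenated with current features
    return current + warp_to_current_frame(previous, rel_pose)

# Ego moved 1 m forward; a feature seen 2 m ahead is now 1 m ahead.
fused = fuse_frames([(0.0, 0.0, 1.0)], [(2.0, 0.0, 0.5)], (1.0, 0.0, 0.0))
```

A real system would warp dense feature grids rather than point lists, but the pose-based alignment is the same idea.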
In one possible embodiment, after the obtaining the target fusion feature, the method further comprises:
Determining a plurality of perceptual information based on the target fusion feature, the plurality of perceptual information comprising at least two of:
three-dimensional object perception information, three-dimensional road structure perception information, occupancy grid perception information, traffic signal lamp perception information, three-dimensional map perception information, and fused positioning perception information.
In one possible implementation manner, the generating the planned trajectory information and the driving strategy information for the vehicle according to the behavior trajectory information of the target object and the planning decision information includes:
and generating planning track information and driving strategy information for the vehicle according to the behavior track information, the traffic signal lamp perception information, the three-dimensional map perception information and the planning decision information of the target object.
In a second aspect, an embodiment of the present disclosure provides a vehicle control method, including:
Receiving self-vehicle track information for a target vehicle and behavior track information of a target object in the environment where the target vehicle is located, both sent by the target vehicle; the behavior track information of the target object is determined based on environment perception data of the environment where the vehicle is located, position data of the vehicle, and map data of the environment where the vehicle is located;
Processing the behavior track information of the target object and the self-vehicle track information based on a trained target model to generate planning decision information; the target model is a Generative Pre-trained Transformer (GPT) obtained through self-supervised training on various kinds of training sample data;
And sending the planning decision information to the target vehicle, wherein the planning decision information is used for indicating the target vehicle to generate planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information.
In one possible embodiment, the target model is trained by:
Acquiring a basic network to be trained, wherein the parameter scale of the basic network is larger than a preset scale;
Acquiring training sample data, and performing self-supervised training on the basic network based on the training sample data to obtain the trained target model; the training sample data includes at least two of the following:
the system comprises vehicle track sample data, driving state sample data, weather condition sample data in the running process of the vehicle, traffic condition sample data in the running process of the vehicle, navigation route sample data corresponding to the running of the vehicle, road structure state sample data perceived by the vehicle, track sample data of dynamic and static obstacles and Internet driving video sample data.
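The patent does not disclose the self-supervised objective, but a common way to self-supervise a GPT-style model on sequence data such as trajectories is next-token prediction: each prefix of a tokenized trajectory is trained to predict the following token. The sketch below only illustrates how such (context, target) pairs are built; the tokenization is hypothetical.

```python
# Hedged sketch of next-token self-supervision on trajectory sequences.
# Vocabulary and token values are illustrative, not from the patent.

def make_next_token_pairs(token_sequence):
    """Turn one trajectory sample into (context, target) training pairs."""
    pairs = []
    for i in range(1, len(token_sequence)):
        pairs.append((token_sequence[:i], token_sequence[i]))
    return pairs

# A trajectory discretized into grid-cell tokens (hypothetical vocabulary).
sample = [101, 102, 102, 103]
pairs = make_next_token_pairs(sample)
# pairs[0] == ([101], 102); pairs[-1] == ([101, 102, 102], 103)
```

The same recipe applies to any of the listed sample types once they are serialized into token sequences, which is what makes the training "self-supervised": no manual labels are needed.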
In a third aspect, an embodiment of the present disclosure provides a vehicle control apparatus including:
The information determining module is used for acquiring environment sensing data of an environment where a vehicle is located, position data of the vehicle and map data of the environment where the vehicle is located, and determining behavior track information of a target object in the environment where the vehicle is located based on the environment sensing data, the position data of the vehicle and the map data;
The first sending module is used for sending the behavior track information of the target object and the vehicle track information of the vehicle to a server, so that the server processes the behavior track information of the target object and the vehicle track information based on the trained target model to generate planning decision information; the target model is a Generative Pre-trained Transformer (GPT) obtained through self-supervised training on various kinds of training sample data;
The information generation module is used for receiving planning decision information fed back by the server and generating planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information;
And the vehicle control module is used for controlling the vehicle to run based on the planned track information and the driving strategy information.
In a fourth aspect, an embodiment of the present disclosure provides a vehicle control apparatus including:
The information receiving module is used for receiving the self-vehicle track information for the target vehicle and the behavior track information of a target object in the environment where the target vehicle is located, both sent by the target vehicle; the behavior track information of the target object is determined based on environment perception data of the environment where the vehicle is located, position data of the vehicle, and map data of the environment where the vehicle is located;
The information processing module is used for processing the behavior track information of the target object and the self-vehicle track information based on the trained target model to generate planning decision information; the target model is a Generative Pre-trained Transformer (GPT) obtained through self-supervised training on various kinds of training sample data;
The second sending module is used for sending the planning decision information to the target vehicle, and the planning decision information is used for indicating the target vehicle to generate planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information.
In a fifth aspect, an embodiment of the present disclosure provides an in-vehicle apparatus, including: the vehicle control system comprises a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, when the vehicle-mounted device is running, the processor and the memory are communicated through the bus, and the machine-readable instructions are executed by the processor to execute the vehicle control method according to any possible implementation mode of the first aspect.
In a sixth aspect, embodiments of the present disclosure provide a vehicle comprising a controller comprising:
A memory configured to store instructions; and
A processor configured to invoke the instructions from the memory and to enable the vehicle control method described in any of the possible implementation manners of the first aspect above when executing the instructions.
In a seventh aspect, the disclosed embodiments provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs a vehicle control method as described in any one of the possible embodiments above.
According to the vehicle control method, the vehicle control device, the vehicle-mounted equipment, the vehicle, and the computer-readable storage medium provided by the embodiments of the present disclosure, the behavior track information of the target object in the environment where the vehicle is located and the self-vehicle track information are first sent to the server, so that the server can generate planning decision information of guiding significance for the vehicle based on the driving experience learned by the pre-trained large model (a Generative Pre-trained Transformer). Therefore, when the vehicle performs trajectory planning, it relies not only on the behavior track information of the target object in its environment but also on the guiding planning decision information given by the large model, which improves the safety of the generated planned trajectory information and the applicability of the driving strategy information.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. These drawings are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure, and together with the description serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be regarded as limiting its scope; a person of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 illustrates a flow chart of a vehicle control method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a vehicle control process provided by an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of another vehicle control process provided by an embodiment of the present disclosure;
FIG. 4 illustrates a flow chart of another vehicle control method provided by an embodiment of the present disclosure;
FIG. 5 illustrates a process diagram of object model training provided by embodiments of the present disclosure;
FIG. 6 shows a functional block diagram of a vehicle control apparatus provided by an embodiment of the present disclosure;
FIG. 7 illustrates a functional block diagram of another vehicle control apparatus provided by an embodiment of the present disclosure;
FIG. 8 illustrates a functional block diagram of yet another vehicle control apparatus provided by an embodiment of the present disclosure;
Fig. 9 shows a schematic structural diagram of an in-vehicle apparatus provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
An automatic driving vehicle accurately controls, calculates, and analyzes each part of the vehicle through vehicle-mounted terminal equipment such as an ECU (Electronic Control Unit), so as to realize fully automatic operation of the vehicle and achieve the purpose of unmanned driving. With the rapid development of deep learning and the deepening of research on artificial intelligence, end-to-end deep learning has become a main research direction in the field of automatic driving.
A Generative Pre-trained Transformer (GPT), also called a large model, is increasingly applied in various practical scenarios. It has the two properties of large scale and pre-training: it can be pre-trained on massive general data, which can greatly improve the generalization, universality, and practicability of artificial intelligence. Research shows that a large model has high resource requirements, and its real-time performance and robustness cannot be guaranteed; its current applications in automatic driving are therefore mostly data preprocessing for automatic driving training (such as automatic labeling tasks) and assisting the training of automatic driving models. Facing the high real-time and high-robustness requirements of automatic driving and the resource constraints at the vehicle end, the large model is not used directly in the vehicle-end automatic driving system. Therefore, how to apply the superior performance of the large model to the automatic driving system to improve the automatic driving experience is the focus of the research of the present application.
Based on the above study, an embodiment of the present disclosure provides a vehicle control method, which includes: determining behavior track information of a target object in the environment where a vehicle is located, based on environment perception data of the environment, position data of the vehicle, and map data; sending the behavior track information of the target object and the self-vehicle track information of the vehicle to a server, so that the server processes the behavior track information of the target object and the self-vehicle track information based on a trained target model to generate planning decision information; generating planned trajectory information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information fed back by the server; and finally controlling the vehicle to travel based on the planned trajectory information and the driving strategy information.
In the embodiment of the present disclosure, the behavior track information of the target object in the environment where the vehicle is located and the self-vehicle track information are sent to the server, so that the server can generate planning decision information of guiding significance for the vehicle based on the driving experience learned by the pre-trained large model (a Generative Pre-trained Transformer). Therefore, when the vehicle performs trajectory planning, it relies not only on the behavior track information of the target object in its environment but also on the guiding planning decision information fed back by the large model, which improves the safety of the generated planned trajectory information and the applicability of the driving strategy information, thereby improving the safety of automatic driving and the riding experience of the user.
For ease of understanding the present embodiment, the execution subject of the vehicle control method disclosed in the embodiments of the present disclosure is first described in detail. For example, the execution subject of the vehicle control method provided by the embodiments of the present disclosure may be a vehicle, and the vehicle may include various controllers; for example, the controller may be a whole-vehicle controller, a vehicle body domain controller, a cabin domain controller, an intelligent driving domain controller, or the like, which is not particularly limited.
The controller may include a processor and a memory for storing instructions from which the processor invokes and when executing the instructions is capable of implementing the vehicle control method of the various embodiments of the disclosure.
In other embodiments, the execution subject of the vehicle control method may also be an on-board device or a chip in the on-board device, where the on-board device may be a head unit or another device located on the vehicle. Specifically, the on-board device may include, for example, a terminal device or other processing device, such as a user terminal, a handheld device, a computing device, a vehicle-mounted device, or a wearable device.
In addition, the vehicle control method may also be implemented by way of a processor invoking computer readable instructions stored in a memory.
A vehicle control method provided by an embodiment of the present disclosure is described below with reference to the accompanying drawings, and as shown in fig. 1, the vehicle control method includes the following S101 to S104:
s101, acquiring environment sensing data of an environment where a vehicle is located, position data of the vehicle and map data of the environment where the vehicle is located, and determining behavior track information of a target object in the environment where the vehicle is located based on the environment sensing data, the position data of the vehicle and the map data.
Illustratively, the vehicle is an autonomous vehicle having an automatic driving function. By means of artificial intelligence, visual computing, radar, monitoring devices, and the like, in cooperation with a global positioning system, an autonomous vehicle can operate automatically and safely without any active human operation.
The vehicle may include a Battery Electric Vehicle (BEV), a Hybrid Electric Vehicle (HEV), a fuel vehicle, etc., and the specific vehicle type is not particularly limited as long as it has an automatic driving function.
Specifically, environmental perception data of an environment in which a vehicle is located can be obtained through a perception component arranged on the vehicle. The perception component may comprise radar means and/or image acquisition means, i.e. in embodiments of the present disclosure, the ambient awareness data may comprise image data and/or radar data.
The radar device is used for collecting point cloud data. In one example, the radar device may be a lidar that emits a laser beam to detect characteristic quantities such as the position and speed of a target. Each point in the point cloud data acquired by the radar device contains three-dimensional coordinate information, and may also contain information such as the position, speed, and acceleration of the target, reflection intensity information, echo frequency information, and the like. In other examples, the radar device may also be a millimeter-wave radar, an ultrasonic radar, or the like.
The image acquisition device is used for acquiring environment image data of the vehicle. In one example, the image capturing device may be a camera, and in order to make the capturing more accurate, the vehicle may be provided with a plurality of image capturing devices, and the type of the image capturing device is not particularly limited, for example, may be a monocular camera or a binocular camera.
In addition, the position data of the vehicle and the map data of the environment in which the vehicle is located can be acquired by the global positioning system and the navigation system. Of course, in other embodiments, the location data and the map data may be obtained by other manners, which are not limited herein.
After the environment sensing data of the environment where the vehicle is located, the position data of the vehicle and the map data of the environment where the vehicle is located are obtained, the behavior track information of the target object in the environment where the vehicle is located can be determined based on the environment sensing data, the position data of the vehicle and the map data.
The target object comprises various dynamic obstacles and static obstacles in the environment in which the vehicle is located. Dynamic obstacles may include other moving motor vehicles, non-motor vehicles, pedestrians, and the like; static obstacles include other stationary vehicles and various static objects on the road (such as flower beds, trees and roadblocks).
In some embodiments, the behavior track information of the target object includes tracking track information of the target object, predicted track information of the target object, and predicted behavior information of the target object. The tracking track information refers to the track of the target object before the current moment, the predicted track information refers to the track of the target object within a preset time period after the current moment, and the predicted behavior information refers to the action behavior that the target object is predicted to perform after the current moment. For example, in the case where the target object is another vehicle, the predicted behavior may be a lane change behavior or a parking behavior of that vehicle, or the like.
In the embodiment of the disclosure, since the behavior track information of the target object includes the tracking track information, the predicted track information and the predicted behavior information, the behavior track of the target object can be evaluated from multiple aspects, which is beneficial to improving the accuracy of judging the target object's track and behavior.
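As a purely illustrative sketch (not part of the disclosure), the three kinds of behavior track information described above can be grouped into a single structure; the field names and types are assumptions for illustration only:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical grouping of the three kinds of behavior track information;
# the field names are illustrative, not taken from the disclosure.
@dataclass
class BehaviorTrackInfo:
    # (x, y) positions of the target object before the current moment
    tracking_track: List[Tuple[float, float]] = field(default_factory=list)
    # (x, y) positions predicted for a preset period after the current moment
    predicted_track: List[Tuple[float, float]] = field(default_factory=list)
    # predicted action behavior, e.g. "lane_change" or "parking"
    predicted_behavior: str = "unknown"

info = BehaviorTrackInfo(
    tracking_track=[(0.0, 0.0), (1.0, 0.1)],
    predicted_track=[(2.0, 0.3), (3.0, 0.8)],
    predicted_behavior="lane_change",
)
```

A structure of this shape is also a plausible payload for the vehicle-to-server transmission described in step S102 below.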
In addition, a process of how to determine behavior trace information of a target object based on the environment-aware data, the position data of the vehicle, and the map data will be described in detail later.
S102, sending the behavior track information of the target object and the self-vehicle track information of the vehicle to a server, so that the server processes the behavior track information of the target object and the self-vehicle track information based on a trained target model to generate planning decision information; the target model is obtained by performing self-supervised training of a generative pre-trained transformer on various kinds of training sample data.
In this step, after the vehicle end determines the behavior track information of the target object, the behavior track information of the target object and the vehicle track information of the vehicle may be sent to a server, so that the server processes (predicts) the behavior track information of the target object and the vehicle track information based on the trained target model to generate planning decision information.
In the embodiment of the disclosure, the target model is obtained by performing self-supervised training of a generative pre-trained transformer (also called a large model) on various kinds of training sample data. In this way, after receiving the behavior track information of the target object and the self-vehicle track information of the vehicle, the server can input them into the pre-trained large model so that the large model outputs corresponding planning decision information.
It can be understood that the large model has a large parameter scale, so deploying it in the cloud ensures that its computational performance requirements are met. In addition, because the large model is obtained through self-supervised training on various kinds of training data, it can output planning decision information with guiding significance according to the behavior track information of the target object and the self-vehicle track information of the vehicle; this planning decision information is non-real-time data.
In other embodiments, type information of the vehicle may also be sent to the server, so that the server takes the vehicle type into account when generating the planning decision information based on the target model, which can improve the applicability of the planning decision information.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDNs), big data and artificial intelligence platforms.
In some embodiments, the planning decision information may include at least one of predicted track information, driving instruction information, and driving style information for the vehicle. The track length indicated by the predicted track information is greater than a preset length; that is, the predicted track information is long track information planned by the server for the vehicle. The preset length can be set according to actual requirements, but is greater than the length of the track information planned by the vehicle itself.
The driving instruction information is used for indicating driving behaviors of the vehicle. The driving instruction information may include, for example, instruction information of straight, left turn, right turn, on-ramp, off-ramp, and the like.
The driving style information is used to indicate how smoothly the vehicle is driven, and may include, for example, smooth, aggressive, and courteous (yielding) styles.
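The composition of the planning decision information described in this and the preceding paragraphs can be sketched as follows; the enumerations and field names are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

class DrivingStyle(Enum):
    SMOOTH = "smooth"
    AGGRESSIVE = "aggressive"
    COURTEOUS = "courteous"

class DrivingInstruction(Enum):
    STRAIGHT = "straight"
    LEFT_TURN = "left_turn"
    RIGHT_TURN = "right_turn"
    ON_RAMP = "on_ramp"
    OFF_RAMP = "off_ramp"

@dataclass
class PlanningDecision:
    # long predicted track planned by the server; its length exceeds the
    # preset length used for vehicle-side planning
    predicted_track: Optional[List[Tuple[float, float]]] = None
    instruction: Optional[DrivingInstruction] = None
    style: Optional[DrivingStyle] = None

decision = PlanningDecision(
    predicted_track=[(0.0, 0.0), (50.0, 0.0)],
    instruction=DrivingInstruction.STRAIGHT,
    style=DrivingStyle.SMOOTH,
)
```

All three fields are optional, matching the "at least one of" wording above.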
It should be noted that the training process for the target model will be described in detail later.
And S103, receiving planning decision information fed back by the server, and generating planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information.
For example, after receiving the planning decision information fed back by the server, the planning track information and driving strategy information for the vehicle can be generated according to the behavior track information of the target object and the planning decision information. In particular, the generation of planning trajectory information and driving strategy information may be implemented based on a deep learning model or decision tree.
The planned track information is used for indicating at least one piece of route information that the vehicle can travel, and the driving strategy information is used for reflecting driving behaviors of the vehicle, and specifically, the driving strategy information can include travel speed information, acceleration information, vehicle gear information, lane change information and the like, and is not limited specifically.
In one example, as described above, the planning decision information includes predicted track information and driving instruction information for the vehicle. Therefore, generating the planning track information and the driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information may include: generating the planning track information and the driving strategy information for the vehicle according to the behavior track information of the target object, the predicted track information and the driving instruction information. In this way, the comfort and smoothness of the generated planning track information and driving strategy information can be improved, thereby improving the automatic driving experience.
And S104, controlling the vehicle to run based on the planned track information and the driving strategy information.
For example, after the planned track information and the driving strategy information for the vehicle are obtained, the vehicle can be controlled to run along the planned track in accordance with the driving strategy information.
In some embodiments, the planning decision information further includes driving style information, and thus, when controlling the vehicle to travel based on the planning trajectory information and the driving strategy information, may include: and controlling the vehicle to run based on the planned trajectory information, the driving strategy information and the driving style information. In this way, driving style information given by the large model is used as guidance in the process of controlling the vehicle, so that the comfort of controlling the vehicle can be further improved.
Specifically, optimal driving energy consumption of the vehicle can also be considered, and the vehicle can be controlled according to the planned track information, the driving strategy information and the driving style information given by the large model. For example, throttle, brake and steering wheel signals of the vehicle are calculated from the planned track information to control the host vehicle to travel along the prescribed track, with feedback control to cope with vehicle dynamics and environmental changes.
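The disclosure does not specify a particular trajectory-following controller; as one hedged example, the steering component of the throttle/brake/steering computation described above could use a pure-pursuit law (the lookahead distance and wheelbase below are assumed values):

```python
import math

def pure_pursuit_steering(pose, trajectory, lookahead=5.0, wheelbase=2.8):
    """Steering angle (rad) that drives the vehicle toward the first
    trajectory point at least `lookahead` metres ahead.
    pose = (x, y, heading); trajectory = [(x, y), ...] in world frame."""
    x, y, heading = pose
    # pick the first point at or beyond the lookahead distance
    target = trajectory[-1]
    for px, py in trajectory:
        if math.hypot(px - x, py - y) >= lookahead:
            target = (px, py)
            break
    # bearing of the target relative to the vehicle heading
    alpha = math.atan2(target[1] - y, target[0] - x) - heading
    # pure-pursuit law: delta = atan(2 * L * sin(alpha) / lookahead)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# a straight track directly ahead needs no steering correction
traj = [(float(i), 0.0) for i in range(1, 11)]
delta_straight = pure_pursuit_steering((0.0, 0.0, 0.0), traj)
# a target to the left produces a positive (left) steering angle
delta_left = pure_pursuit_steering((0.0, 0.0, 0.0), [(5.0, 5.0)])
```

In a full stack such a geometric law would be wrapped in the feedback loop mentioned above, with throttle and brake handled by a separate longitudinal controller.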
Further, a hybrid vehicle model may also be employed to predict vehicle behavior and state during vehicle control, and the driving strategy may be adjusted based on the predicted behavior and state. The driving style may also be adjusted via key parameters such as speed and acceleration constraints, following time intervals, and the like. For example, if the driving style information given by the large model is an aggressive style, but the current speed and acceleration of the vehicle are determined to be very low from the control parameters during driving, the driving style can be changed to a courteous (yielding) driving style.
According to the vehicle control method provided by the embodiment of the disclosure, the behavior track information of the target object in the environment in which the vehicle is located and the self-vehicle track information are sent to the server, so that the server can generate planning decision information with guiding significance for the vehicle based on the driving experience learned by the pre-trained large model (a generative pre-trained transformer). Therefore, when the vehicle performs track planning, the safety of the generated planning track information and the applicability of the driving strategy information can be improved based on the behavior track information of the target object in the environment and the guiding planning decision information given by the large model.
A process of determining behavior trace information of a target object in an environment in which the vehicle is located based on the environment-aware data, the position data of the vehicle, and the map data will be described in detail.
In some embodiments, the environment-aware data includes image data, and thus, when determining behavior trace information of a target object in an environment in which the vehicle is located based on the environment-aware data, the position data of the vehicle, and the map data, the following (a) to (c) may be included:
(a) Extracting features of the image data to obtain aerial view features of the image data, and determining aerial view features of the map data based on the aerial view features of the image data;
(b) Fusing the aerial view angle characteristic of the image data with the aerial view angle characteristic of the map data based on the position data of the vehicle to obtain a target fusion characteristic;
(c) And determining behavior track information of the target object based on the target fusion characteristics.
The image data may be image data (such as surround-view image data) acquired by a single image acquisition device, or may be image data obtained by performing image acquisition with a plurality of image acquisition devices at different viewing angles and fusing the acquired images.
For example, referring to fig. 2, perspective view (Perspective View, PV) feature extraction may be performed on the image data based on a preset feature extraction network to obtain perspective view features, and the perspective view features may be converted to obtain bird's-eye view (Bird's Eye View, BEV) features of the image data.
The feature extraction network may include, but is not limited to, a RegNet or ResNet neural network, etc. In addition, a Transformer network may be employed to convert the perspective view features. In some embodiments, the image data of the different cameras may also be de-distorted separately, based on their intrinsic parameters, prior to feature extraction.
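As a minimal sketch of the PV-to-BEV view conversion mentioned above, under the simplifying (and assumed) conditions of a flat ground plane and an untilted camera, a pixel can be projected to a ground point using only the camera intrinsics; the learned Transformer-based conversion used in practice is far more general:

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Project an image pixel onto a flat ground plane: the simplest
    form of a PV -> BEV transformation. Camera looks straight ahead
    (z forward, y down); fx, fy, cx, cy are assumed intrinsics."""
    # normalized ray direction through the pixel
    xn = (u - cx) / fx
    yn = (v - cy) / fy
    if yn <= 0:
        return None  # pixel at or above the horizon never hits the ground
    # scale the ray until it descends cam_height metres to the ground
    depth = cam_height / yn
    return depth, xn * depth  # (forward distance, lateral offset) in metres

# a pixel 100 rows below the principal point of a 1000 px-focal camera,
# seen from 1.5 m height, lands about 15 m ahead of the vehicle
pt = pixel_to_ground(640.0, 460.0, 1000.0, 1000.0, 640.0, 360.0, 1.5)
```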
After obtaining the bird's-eye view feature of the image data, the bird's-eye view feature of the map data may be determined based on it. Specifically, the map data may be sampled according to the physical size of the bird's-eye view feature of the image data, so that the physical distance represented by each pixel is consistent with that of the bird's-eye view feature of the image data; feature extraction is then performed on the sampled map data based on a convolutional neural network or a self-attention network, so as to obtain the bird's-eye view feature of the map data.
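The resolution-matching sampling described above reduces to a small computation; the figures used here are assumed for illustration:

```python
def map_sampling_size(bev_range_m, bev_pixels, map_range_m):
    """Number of samples along one axis of the map so that each map
    pixel covers the same physical distance as one BEV feature pixel."""
    metres_per_pixel = bev_range_m / bev_pixels
    return round(map_range_m / metres_per_pixel)

# BEV feature: 100 m represented by 200 pixels -> 0.5 m per pixel;
# a map covering twice the range must then be sampled at 400 pixels
n = map_sampling_size(100.0, 200, 200.0)
```

The doubled map range matches the note below that the map data may cover 2 times the range of the image data.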
It should be noted that the map data may be determined based on the absolute pose data of the vehicle and a system map, which may be a high-precision map or a standard map, and is not particularly limited herein. The absolute pose data of the vehicle can be determined based on the relative pose data of the vehicle and the position data of the vehicle, for example, the position data of the vehicle and the relative pose data of the vehicle can be fused by an optimization method or a filtering method to obtain the absolute pose data of the vehicle.
For example, wheel speed data of the vehicle and inertial navigation data of an inertial measurement unit (Inertial Measurement Unit, IMU) may be acquired, and then relative pose data of the vehicle may be determined according to the wheel speed data and the inertial navigation data of the vehicle, and then absolute pose data of the vehicle may be determined according to the position data of the vehicle and the relative pose data of the vehicle.
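A minimal dead-reckoning sketch of the relative pose determination described above, assuming planar motion and idealized wheel-speed and IMU yaw-rate samples (the real fusion would use an optimization or filtering method as stated earlier):

```python
import math

def integrate_relative_pose(samples, dt):
    """Dead-reckoned pose change (dx, dy, dyaw) in the starting frame,
    from (wheel_speed m/s, yaw_rate rad/s) samples taken every dt seconds."""
    x = y = yaw = 0.0
    for speed, yaw_rate in samples:
        # advance along the current heading, then update the heading
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        yaw += yaw_rate * dt
    return x, y, yaw

# driving straight at 10 m/s for 1 s moves 10 m forward
dx, dy, dyaw = integrate_relative_pose([(10.0, 0.0)] * 10, 0.1)
```

Combining such a relative pose with the GPS position data then yields the absolute pose data used for map lookup.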
In addition, the range covered by the map data is larger than the range covered by the image data; for example, it may be 2 times the latter, without particular limitation.
It will be appreciated that due to the constant motion of the vehicle itself, the BEV features at the two moments are spatially misaligned, and therefore, rotation angle and offset information of the vehicle itself needs to be derived based on the relative pose changes of the vehicle, so that feature alignment is spatially achieved for the BEV feature at the previous moment and the BEV feature at the current moment.
Thus, in some embodiments, fusing the bird's-eye view feature of the image data with the bird's-eye view feature of the map data based on the position data of the vehicle to obtain the target fusion feature may include the following (1) to (2):
(1) Fusing the aerial view angle characteristics of the image data with the aerial view angle characteristics of the map data at corresponding moments to obtain single-frame fused aerial view angle characteristics;
(2) And based on the position data of the vehicle, determining the relative pose data of the adjacent moment of the vehicle, and based on the relative pose data, aligning and fusing the multi-frame fusion aerial view angle features at the current moment and before the current moment to obtain the target fusion feature.
The specific number of the multi-frames may be determined according to actual requirements, which is not limited herein.
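The spatial alignment of fused BEV features across frames, step (2) above, can be sketched as a rigid warp of the feature grid by the relative pose; the nearest-neighbour sampling and cell-based units below are simplifying assumptions:

```python
import math

def align_bev_feature(prev, dx_cells, dyaw):
    """Rotate/translate the previous-frame BEV grid into the current
    frame (nearest-neighbour sampling, zero padding). prev is a square
    list of lists; dx_cells is forward motion in cells, dyaw in radians."""
    n = len(prev)
    c = (n - 1) / 2.0  # grid centre (ego position)
    cos_a, sin_a = math.cos(dyaw), math.sin(dyaw)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # back-project current-frame cell (i, j) into the previous frame
            u, v = i - c, j - c
            pu = cos_a * u - sin_a * v + dx_cells
            pv = sin_a * u + cos_a * v
            si, sj = round(pu + c), round(pv + c)
            if 0 <= si < n and 0 <= sj < n:
                out[i][j] = prev[si][sj]
    return out

# moving forward by one cell shifts the old feature one row toward the ego
prev = [[0.0] * 5 for _ in range(5)]
prev[3][2] = 1.0
aligned = align_bev_feature(prev, dx_cells=1.0, dyaw=0.0)
```

In a real network this warp is applied channel-wise to feature tensors, typically with bilinear rather than nearest-neighbour sampling.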
It will be appreciated that in other embodiments, the BEV features of multi-frame image data may first be subjected to space-time alignment and fusion to obtain a target BEV feature of the image data; the map is then sampled and feature-extracted based on the target BEV feature to obtain a target BEV feature of the map data; and the two target BEV features are fused to obtain the target fusion feature. For example, the multi-scale surround-view image features extracted by the PV-view backbone and neck networks can be queried through a spatial cross-attention module, thereby generating BEV fusion features in BEV space.
After the target fusion characteristics are obtained, the behavior track information of the target object can be determined based on the target fusion characteristics. Specifically, the prediction of the behavior trace information of the target object may be performed based on a machine learning method or a deep learning method. For example, behavior information (e.g., lane change, parking, etc.) of the target object may be predicted from a behavior classification network (convolutional neural network, recurrent neural network, etc.).
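The behavior classification network itself is not specified by the disclosure; as an illustrative stand-in only, a rule-based classifier over a target object's track shows the kind of output (lane change, parking) such a network would produce:

```python
def classify_behavior(track, lane_width=3.5, speed_eps=0.2, dt=0.1):
    """Rule-based stand-in for a behavior classification network:
    'parking' if the object has almost stopped, 'lane_change' if its
    lateral displacement exceeds half a lane, else 'keep'.
    track is a list of (x, y) samples taken every dt seconds."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    duration = (len(track) - 1) * dt
    speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / duration
    if speed < speed_eps:
        return "parking"
    if abs(y1 - y0) > lane_width / 2.0:
        return "lane_change"
    return "keep"

# drifting 2 m laterally over one second reads as a lane change
label = classify_behavior([(i * 1.0, i * 0.2) for i in range(11)])
```

A learned classifier (convolutional or recurrent, as mentioned above) would operate on the target fusion features rather than raw coordinates, but its output space is of this kind.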
It should be noted that, while the vehicle is moving, the objects around the vehicle also move within a certain range; for the alignment of these objects, correction may be performed through learning by the network's own temporal self-attention module.
Referring to fig. 3, in some embodiments, after obtaining the target fusion feature, the method further comprises: determining a plurality of perceptual information based on the target fusion feature, the plurality of perceptual information comprising at least two of:
Three-dimensional object perception information (BEV 3D moving objects), three-dimensional road structure perception information (BEV 3D road structure), occupancy space grid perception information (Occupancy & Flow), traffic light perception information (traffic light identification), three-dimensional map perception information (online mapping), and fusion positioning perception information (model-based fusion positioning).
In the embodiment of the disclosure, after the target fusion feature is obtained, a plurality of pieces of perception information can be determined according to the target fusion feature based on the task network, and the plurality of pieces of perception information are respectively output, so that more references can be provided for subsequent track information planning, and the effectiveness and safety of track planning are improved.
Thus, in some embodiments, generating the planning track information and the driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information may include: and generating planning track information and driving strategy information for the vehicle according to the behavior track information, the traffic signal lamp perception information, the three-dimensional map perception information and the planning decision information of the target object. Therefore, the effectiveness and the comfortableness of the planning track information and the driving strategy information of the vehicle can be further improved, and the experience of automatic driving can be further improved.
Referring to fig. 4, a flowchart of a vehicle control method according to another embodiment of the disclosure is shown, where the vehicle control method is applied to the foregoing server, that is, an execution subject of the vehicle control method is a server, and the vehicle control method includes the following steps S401 to S403:
S401, receiving self-vehicle track information for a target vehicle sent by the target vehicle, and behavior track information of a target object in the environment of the target vehicle; the behavior track information of the target object is determined based on environmental perception data of the environment in which the target vehicle is located, position data of the target vehicle, and map data of that environment.
S402, processing the behavior track information of the target object and the self-vehicle track information based on a trained target model to generate planning decision information; the target model is obtained by performing self-supervised training of a generative pre-trained transformer on various kinds of training sample data.
S403, the planning decision information is sent to the target vehicle, and the planning decision information is used for indicating the target vehicle to generate planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information.
The content of steps S401 to S403 can be referred to the content of steps S101 to S104, and will not be described herein.
In one possible embodiment, the target model is obtained by training the following steps (I) to (II):
(I) Acquiring a basic network to be trained, wherein the model parameter scale of the basic network is larger than a preset scale;
(II) acquiring training sample data, and performing self-supervision training on the basic network based on the training sample data to obtain the trained target model; the training sample data includes at least two of the following:
vehicle track sample data, driving state sample data, weather condition sample data during vehicle driving, traffic condition sample data during vehicle driving, navigation route sample data corresponding to vehicle driving, road structure state sample data perceived by the vehicle, track sample data of dynamic and static obstacles, and internet driving video sample data.
Specifically, referring to fig. 5, in the course of model training, multi-modal feature extraction is first implemented through self-supervised training of the basic network using large-scale vehicle track sample data, driving state sample data (gear, wheel speed, steering wheel angle, acceleration and deceleration, etc.), weather condition sample data during vehicle driving (season; sunny, cloudy, rain and snow, etc.; morning, midday, evening, etc.), traffic condition sample data of roads (unobstructed, morning and evening peaks, congestion, etc.), planned path and navigation information sample data of maps corresponding to vehicle driving, road structure state sample data of the surroundings perceived by the vehicle, track sample data of dynamic and static obstacles, and the like. In addition, related driving videos, comments, labels and the like from the internet can also be used for training the basic network.
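The self-supervised objective is not given in detail; one common form it could take (an assumption, not the disclosed method) is masked reconstruction of trajectory points, sketched here with a trivial fill-forward "model":

```python
import random

def masked_track_loss(track, predictor, mask_ratio=0.25, seed=0):
    """Self-supervised objective sketch: hide a fraction of trajectory
    points and score the predictor's reconstruction by mean squared
    error over the hidden points. `predictor` maps a track with None
    holes to a fully filled track."""
    rng = random.Random(seed)
    k = max(1, int(len(track) * mask_ratio))
    hidden = sorted(rng.sample(range(len(track)), k))
    masked = [None if i in hidden else p for i, p in enumerate(track)]
    recon = predictor(masked)
    err = [(recon[i][0] - track[i][0]) ** 2 + (recon[i][1] - track[i][1]) ** 2
           for i in hidden]
    return sum(err) / len(err)

def interpolate_holes(masked):
    # trivial "model": fill each hole with the last seen point
    out, last = [], (0.0, 0.0)
    for p in masked:
        last = p if p is not None else last
        out.append(last)
    return out

loss = masked_track_loss([(float(i), 0.0) for i in range(8)], interpolate_holes)
```

In actual training the predictor would be the large transformer itself, and the same masking idea extends to the other modalities listed above.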
In the embodiment of the disclosure, the target model is formed by training the large model based on various training data, so that the trained target model has stronger applicability and can improve the prediction accuracy of the model.
In some embodiments, after the basic network is trained, vehicle track and driving decision data of normal human driving can be used, on the basis of the pre-trained basic feature model, to perform non-real-time generation of multiple sets of decision instructions and long planning tracks based on imitation learning.
In addition, extreme cases (corner cases) of the decision instructions and planned tracks can be found through generalization tests. Meanwhile, rules are used to explicitly define a safe-driving frame to constrain the planned tracks of the large model; decision instructions and planned tracks that deviate from the safe-driving frame are added to the corner cases. The corner-case samples are then used to improve decision instruction and planned track generation based on deep reinforcement learning, so that the obtained target model can give planning decision information with stronger guiding significance.
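The rule-defined safe-driving frame used to flag corner cases can be sketched as a set of hard checks on a planned track; the clearance and step thresholds below are assumed values:

```python
def violates_safety_frame(track, obstacles, min_clearance=1.5, max_step=3.0):
    """Rule-based safe-driving frame sketch: a planned track is a
    corner-case candidate if any point comes too close to an obstacle
    or if consecutive points imply an implausible jump."""
    for i, (x, y) in enumerate(track):
        for ox, oy in obstacles:
            if ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5 < min_clearance:
                return True  # too close to an obstacle
        if i > 0:
            px, py = track[i - 1]
            if ((x - px) ** 2 + (y - py) ** 2) ** 0.5 > max_step:
                return True  # kinematically implausible jump
    return False

# a track passing 0.5 m from an obstacle is flagged for the corner-case pool
flagged = violates_safety_frame([(0.0, 0.0), (2.0, 0.0)], [(2.0, 0.5)])
```

Tracks flagged this way would be added to the corner-case sample set driving the reinforcement-learning refinement described above.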
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Referring to fig. 6, a functional block diagram of a vehicle control apparatus according to an embodiment of the present disclosure is shown. The vehicle control apparatus 600 includes:
An information determining module 601, configured to obtain environment awareness data of an environment in which a vehicle is located, position data of the vehicle, and map data of the environment in which the vehicle is located, and determine behavior track information of a target object in the environment in which the vehicle is located based on the environment awareness data, the position data of the vehicle, and the map data;
the first sending module 602 is configured to send the behavior track information of the target object and the self-vehicle track information of the vehicle to a server, so that the server processes the behavior track information of the target object and the self-vehicle track information based on a trained target model to generate planning decision information; the target model is obtained by performing self-supervised training of a generative pre-trained transformer on various kinds of training sample data;
The information generating module 603 is configured to receive planning decision information fed back by the server, and generate planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information;
The vehicle control module 604 is configured to control the vehicle to travel based on the planned trajectory information and the driving strategy information.
In a possible implementation manner, the planning decision information includes predicted track information for the vehicle and driving instruction information, wherein the track length indicated by the predicted track information is greater than a preset length, and the driving instruction information is used for indicating driving behavior of the vehicle; the information generating module 603 is specifically configured to:
And generating planning track information and driving strategy information for the vehicle according to the behavior track information, the predicted track information and the driving instruction information of the target object.
In a possible embodiment, the planning decision information further includes driving style information for indicating a degree of smoothness of driving of the vehicle; the vehicle control module 604 is specifically configured to:
And controlling the vehicle to run based on the planned trajectory information, the driving strategy information and the driving style information.
In one possible implementation, the behavior trace information of the target object includes tracking trace information of the target object, predicted trace information of the target object, and predicted behavior information of the target object.
In one possible implementation manner, the information determining module 601 is specifically configured to:
Extracting features of the image data to obtain aerial view features of the image data, and determining aerial view features of the map data based on the aerial view features of the image data;
Fusing the aerial view angle characteristic of the image data with the aerial view angle characteristic of the map data based on the position data of the vehicle to obtain a target fusion characteristic;
And determining behavior track information of the target object based on the target fusion characteristics.
In one possible implementation manner, the information determining module 601 is specifically configured to:
fusing the aerial view angle characteristics of the image data with the aerial view angle characteristics of the map data at corresponding moments to obtain single-frame fused aerial view angle characteristics;
And based on the position data of the vehicle, determining the relative pose data of the adjacent moment of the vehicle, and based on the relative pose data, aligning and fusing the multi-frame fusion aerial view angle features at the current moment and before the current moment to obtain the target fusion feature.
In a possible implementation manner, the information determining module 601 is further configured to:
Determining a plurality of perceptual information based on the target fusion feature, the plurality of perceptual information comprising at least two of:
three-dimensional object perception information, three-dimensional road structure perception information, occupied space grid perception information, traffic signal lamp perception information, three-dimensional map perception information and fusion positioning perception information.
In one possible implementation manner, the information generating module 603 is specifically configured to:
and generating planning track information and driving strategy information for the vehicle according to the behavior track information, the traffic signal lamp perception information, the three-dimensional map perception information and the planning decision information of the target object.
Referring to fig. 7, a functional block diagram of another vehicle control apparatus according to an embodiment of the present disclosure is shown. The vehicle control device 700 includes:
An information receiving module 701, configured to receive self-vehicle track information for a target vehicle sent by the target vehicle, and behavior track information of a target object in the environment of the target vehicle; the behavior track information of the target object is determined based on environmental perception data of the environment in which the target vehicle is located, position data of the target vehicle, and map data of that environment;
The information processing module 702 is configured to process the behavior track information of the target object and the self-vehicle track information based on the trained target model to generate planning decision information; the target model is obtained by performing self-supervised training of a generative pre-trained transformer on various kinds of training sample data;
The second sending module 703 is configured to send the planning decision information to the target vehicle, where the planning decision information is used to instruct the target vehicle to generate planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information.
In one possible implementation, the vehicle control apparatus 700 shown with reference to fig. 8 further includes a model training module 704, where the model training module 704 is configured to:
Acquiring a basic network to be trained, wherein the model parameter scale of the basic network is larger than a preset scale;
Acquiring training sample data, and performing self-supervision training on the basic network based on the training sample data to obtain the trained target model; the training sample data includes at least two of the following:
vehicle track sample data, driving state sample data, weather condition sample data during vehicle driving, traffic condition sample data during vehicle driving, navigation route sample data corresponding to vehicle driving, road structure state sample data perceived by the vehicle, track sample data of dynamic and static obstacles, and internet driving video sample data.
Based on the same technical concept, the embodiment of the disclosure also provides vehicle-mounted equipment. Referring to fig. 9, a schematic structural diagram of an in-vehicle device 900 according to an embodiment of the disclosure includes a processor 901, a memory 902, and a bus 903. Wherein the memory 902 is configured to store execution instructions.
In the embodiment of the present disclosure, the memory 902 is specifically configured to store the application program code for executing the solution of the present application, and execution is controlled by the processor 901. That is, when the in-vehicle device 900 is running, the processor 901 and the memory 902 communicate through the bus 903, so that the processor 901 executes the application program code stored in the memory 902, thereby performing the method described in any of the foregoing embodiments.
The memory 902 may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), etc.
The processor 901 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The methods, steps, and logical blocks disclosed in the embodiments of the present disclosure may be implemented or executed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should be understood that the configuration illustrated in the embodiments of the present disclosure does not constitute a specific limitation on the in-vehicle device 900. In other embodiments of the present disclosure, the in-vehicle device 900 may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Note that the memory and the processor included in the vehicle controller are similar to the processor 901 and the memory 902 included in the in-vehicle device 900, and are not described here again.
The disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the vehicle control method in the method embodiments described above.
The embodiments of the present disclosure further provide a computer program product, which includes a computer program/instructions; when the computer program/instructions are executed by a processor, the vehicle control method provided in the embodiments of the present disclosure is implemented. For details, reference may be made to the foregoing method embodiments, which are not repeated here.
The above-mentioned computer program product may be specifically implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The methods in the embodiments of the present disclosure may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions of the present disclosure are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user device, a core network device, an OAM, or other programmable apparatus.
The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; an optical medium, such as a digital video disc; or a semiconductor medium, such as a solid-state disk. The computer-readable storage medium may be a volatile or a nonvolatile storage medium, or may include both volatile and nonvolatile types of storage media.
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit the technical solutions of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed by the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features thereof; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (15)

1. A vehicle control method characterized by comprising:
acquiring environment sensing data of an environment where a vehicle is located, position data of the vehicle and map data of the environment where the vehicle is located, and determining behavior track information of a target object in the environment where the vehicle is located based on the environment sensing data, the position data of the vehicle and the map data;
The behavior track information of the target object and the vehicle track information of the vehicle are sent to a server, so that the server processes the behavior track information of the target object and the vehicle track information based on a trained target model to generate planning decision information; the target model is obtained by performing self-supervised training on a generative pre-trained transformer according to various training sample data;
The planning decision information fed back by the server is received, and planning track information and driving strategy information aiming at the vehicle are generated according to the behavior track information of the target object and the planning decision information;
And controlling the vehicle to run based on the planned trajectory information and the driving strategy information.
2. The method according to claim 1, wherein the planning decision information includes predicted track information for the vehicle, the predicted track information indicating a track length greater than a preset length, and driving instruction information for indicating a driving behavior of the vehicle;
the generating the planning track information and the driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information includes:
And generating planning track information and driving strategy information for the vehicle according to the behavior track information, the predicted track information and the driving instruction information of the target object.
3. The method of claim 1, wherein the planning decision information further comprises driving style information for indicating a level of smoothness of vehicle driving; the controlling the vehicle to travel based on the planned trajectory information and the driving strategy information includes:
And controlling the vehicle to run based on the planned trajectory information, the driving strategy information and the driving style information.
4. The method of claim 1, wherein the behavior trace information of the target object comprises tracking trace information of the target object, predicted trace information of the target object, and predicted behavior information of the target object.
5. The method of claim 4, wherein the environment sensing data comprises image data, and wherein the determining behavior track information of a target object in the environment where the vehicle is located based on the environment sensing data, the position data of the vehicle, and the map data comprises:
extracting features of the image data to obtain a bird's-eye view feature of the image data, and determining a bird's-eye view feature of the map data based on the bird's-eye view feature of the image data;
fusing the bird's-eye view feature of the image data with the bird's-eye view feature of the map data based on the position data of the vehicle to obtain a target fusion feature;
and determining the behavior track information of the target object based on the target fusion feature.
6. The method according to claim 5, wherein the fusing the bird's-eye view feature of the image data with the bird's-eye view feature of the map data based on the position data of the vehicle to obtain the target fusion feature comprises:
fusing the bird's-eye view feature of the image data with the bird's-eye view feature of the map data at corresponding moments to obtain a single-frame fused bird's-eye view feature;
and determining relative pose data of the vehicle at adjacent moments based on the position data of the vehicle, and aligning and fusing multiple frames of fused bird's-eye view features at and before the current moment based on the relative pose data to obtain the target fusion feature.
7. The method of claim 5, wherein after obtaining the target fusion feature, the method further comprises:
Determining a plurality of perceptual information based on the target fusion feature, the plurality of perceptual information comprising at least two of:
three-dimensional object perception information, three-dimensional road structure perception information, occupied space grid perception information, traffic signal lamp perception information, three-dimensional map perception information and fusion positioning perception information.
8. The method of claim 7, wherein generating planned trajectory information and driving strategy information for the vehicle from the behavior trajectory information of the target object and the planning decision information comprises:
and generating planning track information and driving strategy information for the vehicle according to the behavior track information, the traffic signal lamp perception information, the three-dimensional map perception information and the planning decision information of the target object.
9. A vehicle control method characterized by comprising:
Receiving self-vehicle track information for a target vehicle and behavior track information of a target object in the environment where the target vehicle is located, both sent by the target vehicle; the behavior track information of the target object is determined based on environment perception data of the environment where the vehicle is located, position data of the vehicle, and map data of the environment where the vehicle is located;
Processing the behavior track information of the target object and the self-vehicle track information based on the trained target model to generate planning decision information; the target model is obtained by performing self-supervised training on a generative pre-trained transformer according to various training sample data;
And sending the planning decision information to the target vehicle, wherein the planning decision information is used for indicating the target vehicle to generate planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information.
10. The method according to claim 9, wherein the target model is trained by:
Acquiring a basic network to be trained, wherein the number of model parameters of the basic network is larger than a preset scale;
Acquiring training sample data, and performing self-supervision training on the basic network based on the training sample data to obtain the trained target model; the training sample data includes at least two of the following:
vehicle track sample data, driving state sample data, weather condition sample data during vehicle travel, traffic condition sample data during vehicle travel, navigation route sample data corresponding to vehicle travel, road structure state sample data perceived by the vehicle, track sample data of dynamic and static obstacles, and Internet driving video sample data.
11. A vehicle control apparatus characterized by comprising:
The information determining module is used for acquiring environment sensing data of an environment where a vehicle is located, position data of the vehicle and map data of the environment where the vehicle is located, and determining behavior track information of a target object in the environment where the vehicle is located based on the environment sensing data, the position data of the vehicle and the map data;
The first sending module is used for sending the behavior track information of the target object and the vehicle track information of the vehicle to a server, so that the server processes the behavior track information of the target object and the vehicle track information based on the trained target model to generate planning decision information; the target model is obtained by performing self-supervised training on a generative pre-trained transformer according to various training sample data;
The information generation module is used for receiving planning decision information fed back by the server and generating planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information;
And the vehicle control module is used for controlling the vehicle to run based on the planned track information and the driving strategy information.
12. A vehicle control apparatus characterized by comprising:
The information receiving module is used for receiving the self-vehicle track information for the target vehicle and the behavior track information of a target object in the environment where the target vehicle is located, both sent by the target vehicle; the behavior track information of the target object is determined based on environment perception data of the environment where the vehicle is located, position data of the vehicle and map data of the environment where the vehicle is located;
the information processing module is used for processing the behavior track information of the target object and the self-vehicle track information based on the trained target model to generate planning decision information; the target model is obtained by performing self-supervised training on a generative pre-trained transformer according to various training sample data;
The second sending module is used for sending the planning decision information to the target vehicle, and the planning decision information is used for indicating the target vehicle to generate planning track information and driving strategy information for the vehicle according to the behavior track information of the target object and the planning decision information.
13. An in-vehicle apparatus, characterized by comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the in-vehicle device is operating, the machine-readable instructions when executed by the processor performing the vehicle control method of any one of claims 1-8.
14. A vehicle comprising a controller, the controller comprising:
A memory configured to store instructions; and
A processor configured to invoke the instructions from the memory and when executing the instructions is capable of implementing the vehicle control method according to any of claims 1-8.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the vehicle control method according to any one of claims 1 to 10.
CN202410210724.2A 2024-02-26 2024-02-26 Vehicle control method, device, vehicle-mounted equipment, vehicle and storage medium Pending CN118046921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410210724.2A CN118046921A (en) 2024-02-26 2024-02-26 Vehicle control method, device, vehicle-mounted equipment, vehicle and storage medium


Publications (1)

Publication Number Publication Date
CN118046921A true CN118046921A (en) 2024-05-17

Family

ID=91048029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410210724.2A Pending CN118046921A (en) 2024-02-26 2024-02-26 Vehicle control method, device, vehicle-mounted equipment, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN118046921A (en)

Similar Documents

Publication Publication Date Title
Van Brummelen et al. Autonomous vehicle perception: The technology of today and tomorrow
JP7070974B2 (en) Sparse map for autonomous vehicle navigation
CN108073170B (en) Automated collaborative driving control for autonomous vehicles
CN111532257B (en) Method and system for compensating for vehicle calibration errors
CN113168708B (en) Lane line tracking method and device
US20200026282A1 (en) Lane/object detection and tracking perception system for autonomous vehicles
CN110914641A (en) Fusion framework and batch alignment of navigation information for autonomous navigation
US11042758B2 (en) Vehicle image generation
DE112019001657T5 (en) SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD, PROGRAM AND MOBILE BODY
CN112698645A (en) Dynamic model with learning-based location correction system
US20210389133A1 (en) Systems and methods for deriving path-prior data using collected trajectories
CN112650220A (en) Automatic vehicle driving method, vehicle-mounted controller and system
DE112018005910T5 (en) CONTROL DEVICE AND CONTROL METHOD, PROGRAM AND MOBILE BODY
CN117056153A (en) Methods, systems, and computer program products for calibrating and verifying driver assistance systems and/or autopilot systems
CN112461249A (en) Sensor localization from external source data
CN116403174A (en) End-to-end automatic driving method, system, simulation system and storage medium
CN113298250A (en) Neural network for localization and object detection
Gao et al. Autonomous driving of vehicles based on artificial intelligence
CN116991104A (en) Automatic driving device for unmanned vehicle
Chipka et al. Estimation and navigation methods with limited information for autonomous urban driving
CN118046921A (en) Vehicle control method, device, vehicle-mounted equipment, vehicle and storage medium
US20220053124A1 (en) System and method for processing information from a rotatable camera
CN110446106B (en) Method for identifying front camera file, electronic equipment and storage medium
US20240239378A1 (en) Systems and Methods for Handling Traffic Signs
US20230415766A1 (en) Lane segment clustering using hybrid distance metrics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination