CN112249033A - Automatic driving system and method for vehicle - Google Patents

Automatic driving system and method for vehicle

Info

Publication number
CN112249033A
CN112249033A (application CN202011198246.6A; granted publication CN112249033B)
Authority
CN
China
Prior art keywords
model
vehicle
automatic driving
submodel
decision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011198246.6A
Other languages
Chinese (zh)
Other versions
CN112249033B (en)
Inventor
焦志锋
肖志光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd filed Critical Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202011198246.6A priority Critical patent/CN112249033B/en
Publication of CN112249033A publication Critical patent/CN112249033A/en
Application granted granted Critical
Publication of CN112249033B publication Critical patent/CN112249033B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001: Planning or execution of driving tasks

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention provides an automatic driving system and method for a vehicle. The automatic driving system may comprise a perception model, a fusion model communicatively connected to the perception model, a planning decision model communicatively connected to the fusion model, and an execution control model communicatively connected to the planning decision model. The perception model determines a target object in the current environment of the vehicle; the fusion model then collects characteristic parameters of the target object; the planning decision model obtains the motion information of the vehicle and generates an automatic driving instruction according to the motion information and the characteristic parameters; and the execution control model controls the corresponding actuator of the vehicle to execute the automatic driving instruction, realizing automatic driving of the vehicle. By carrying out information collection, parameter processing and decision generation through the communication connections between the models in the automatic driving system, vehicle-mounted resources are fully utilized, and the manageability and utilization rate of the resources are improved.

Description

Automatic driving system and method for vehicle
Technical Field
The present invention relates to the field of vehicle technologies, and in particular, to an automatic driving system and an automatic driving method for a vehicle.
Background
With the development of science and technology, the automatic driving technology of vehicles has become more mature. In an automatic driving system of a vehicle, automatic driving is often realized through two-path redundancy: when a single node on one control path fails, the node on the other path continues to work. This redundant arrangement effectively ensures the safety and stability of the vehicle during automatic driving. However, in such a system the resources of each node are provisioned according to the maximum requirement, and not all of those resources are needed during actual operation, so resources are easily wasted and vehicle-mounted resources cannot be used effectively.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed in order to provide an autonomous driving system of a vehicle and a corresponding autonomous driving method of a vehicle that overcome or at least partially solve the above-mentioned problems.
Correspondingly, the embodiment of the invention also provides an automatic driving method of the vehicle, which is used for ensuring the realization and the application of the method.
In order to solve the above problems, an embodiment of the present invention discloses an automatic driving system for a vehicle. The automatic driving system includes a perception model, a fusion model communicatively connected to the perception model, a planning decision model communicatively connected to the fusion model, and an execution control model communicatively connected to the planning decision model, together with an actuator communicatively connected to the execution control model in the automatic driving system;
the perception model is used for determining a target object of the current environment where the vehicle is located;
the fusion model is used for acquiring characteristic parameters of the target object;
the planning decision model is used for acquiring the motion information of the vehicle; generating an automatic driving instruction according to the motion information and the characteristic parameters;
and the execution control model is used for controlling an actuator corresponding to the vehicle to execute the automatic driving instruction so as to realize automatic driving of the vehicle.
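The four-model pipeline just described (perception, fusion, planning decision, execution control, each communicatively connected to the next) can be sketched as plain functions chained together. This is a minimal illustrative sketch, not the patented implementation: the type names, the toy decision rule, and `autopilot_step` are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical data containers; the patent does not specify concrete types.
@dataclass
class TargetObject:
    kind: str          # e.g. "lane_line", "pedestrian", "vehicle"
    distance_m: float

@dataclass
class FeatureParams:
    object_kind: str
    speed_mps: float   # motion parameter of the object (0 for static objects)

def perception_model(sensor_frame: List[dict]) -> List[TargetObject]:
    """Perception: determine the target objects in the current environment."""
    return [TargetObject(d["kind"], d["distance_m"]) for d in sensor_frame]

def fusion_model(objects: List[TargetObject]) -> List[FeatureParams]:
    """Fusion: collect characteristic parameters for each target object."""
    return [FeatureParams(o.kind, 0.0 if o.kind == "lane_line" else 1.2)
            for o in objects]

def planning_decision_model(ego_speed_mps: float,
                            params: List[FeatureParams]) -> str:
    """Planning decision: combine ego motion info with feature parameters."""
    if any(p.object_kind == "pedestrian" for p in params):
        return "decelerate"
    return "keep_speed"

def execution_control_model(command: str) -> dict:
    """Execution control: map the driving instruction onto actuator actions."""
    return {"brake": command == "decelerate",
            "throttle": command == "keep_speed"}

def autopilot_step(sensor_frame: List[dict], ego_speed_mps: float) -> dict:
    """One pass through the communicatively connected models."""
    objects = perception_model(sensor_frame)
    params = fusion_model(objects)
    command = planning_decision_model(ego_speed_mps, params)
    return execution_control_model(command)
```

For example, a frame containing a pedestrian yields a braking action: `autopilot_step([{"kind": "pedestrian", "distance_m": 8.0}], 10.0)` returns `{"brake": True, "throttle": False}`.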
Optionally, the perception model includes a first perception submodel and a second perception submodel, and the target object includes a first target object and a second target object corresponding to the current environment where the vehicle is located;
the first perception submodel is used for identifying, through a preset sensor, a first target object at or within a preset distance in the current environment of the vehicle;
and the second perception submodel is used for identifying, through the sensor, a second target object beyond the preset distance in the current environment of the vehicle.
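As a rough sketch of this near/far split, detections can be partitioned on the preset distance, with the boundary case (exactly the preset distance) routed to the first submodel, as the "smaller than or equal to" wording implies. The threshold value and function name below are assumptions:

```python
# Hypothetical preset distance separating the near-field (first) and
# far-field (second) perception submodels.
PRESET_DISTANCE_M = 50.0

def split_by_distance(detections, threshold_m=PRESET_DISTANCE_M):
    """Route each detection to the first (near) or second (far) submodel."""
    near = [d for d in detections if d["distance_m"] <= threshold_m]
    far = [d for d in detections if d["distance_m"] > threshold_m]
    return near, far
```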
Optionally, the fusion model includes a first fusion submodel and a second fusion submodel, and the target object includes at least an environmental object and a moving object;
the first fusion submodel is used for collecting, through the sensor, a first environment characteristic parameter of a first environment object, and a first motion parameter and a first object type of a first moving object;
and the second fusion submodel is used for collecting, through the sensor, a second environment characteristic parameter of a second environment object, and a second motion parameter and a second object type of a second moving object.
Optionally, the planning decision model comprises a first decision sub-model and a second decision sub-model;
the first decision sub-model is used for acquiring the motion information of the vehicle; generating a first automatic driving instruction for the first moving object by adopting the motion information, the first environment characteristic parameter, a first motion parameter of the first moving object and a first object type;
the second decision sub-model is configured to generate a second automatic driving instruction for the second moving object by using the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object, and the second object type.
Optionally, the execution control model includes a first control submodel and a second control submodel;
the first control submodel is used for controlling an actuator corresponding to a vehicle to execute an operation corresponding to the first automatic driving instruction within a first preset time period so as to realize automatic driving of the vehicle;
the second control submodel is used for controlling an actuator corresponding to the vehicle to execute the operation corresponding to the second automatic driving instruction within a second preset time period so as to realize the automatic driving of the vehicle;
the first preset time period and the second preset time period are mutually connected time periods.
Optionally, the first fused submodel and the second fused submodel are in communication connection through a first sharing layer;
the first fusion sub-model is used for acquiring the second environment object and the second moving object through the first sharing layer, and collecting, through the sensor, the second environment characteristic parameter of the second environment object, the second motion parameter of the second moving object and the second object type identified by the second perception sub-model;
the second fusion submodel is used for acquiring the first environment object and the first moving object through the first sharing layer, and collecting, through the sensor, the first environment characteristic parameter of the first environment object, the first motion parameter of the first moving object and the first object type identified by the first perception submodel.
Optionally, the first decision sub-model and the second decision sub-model are in communication connection through a second sharing layer;
the first decision sub-model is used for acquiring the motion information of the vehicle; acquiring the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type from the second decision sub-model through the second sharing layer, and generating a second automatic driving instruction for the second moving object by adopting the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type;
the second decision sub-model is configured to obtain the first environmental characteristic parameter, the first motion parameter of the first moving object, and the first object type from the first decision sub-model through the first sharing layer, and generate a first automatic driving instruction for the first moving object by using the motion information, the first environmental characteristic parameter, the first motion parameter of the first moving object, and the first object type.
Optionally, the first control submodel and the second control submodel are in communication connection through a third sharing layer;
the first control submodel is used for acquiring a second automatic driving instruction from the second control submodel through the third sharing layer, and controlling an actuator corresponding to a vehicle to execute an operation corresponding to the second automatic driving instruction within the second preset time period so as to realize automatic driving of the vehicle;
and the second control submodel is used for acquiring a first automatic driving instruction from the first control submodel through the third sharing layer, and controlling an actuator corresponding to a vehicle to execute an operation corresponding to the first automatic driving instruction in the first preset time period so as to realize automatic driving of the vehicle.
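A sharing layer of this kind can be pictured as a small buffer that both peer submodels read and write, so each control submodel can fetch the instruction its counterpart produced. This is an illustrative sketch; `SharingLayer` and its method names are invented for the example:

```python
# Minimal sketch of a sharing layer: a buffer both peer submodels can
# publish to and fetch from, so each can pick up the other's output.
class SharingLayer:
    def __init__(self):
        self._store = {}

    def publish(self, source: str, payload) -> None:
        """A submodel publishes its latest output under its own name."""
        self._store[source] = payload

    def fetch(self, source: str):
        """A peer submodel fetches the output published by `source`."""
        return self._store.get(source)

# Example: the second control submodel publishes its instruction on the
# (hypothetical) third sharing layer, and the first submodel retrieves it.
third_layer = SharingLayer()
third_layer.publish("second_control", {"command": "decelerate"})
peer_instruction = third_layer.fetch("second_control")
```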
The embodiment of the invention also discloses an automatic driving method of the vehicle, which is applied to an automatic driving system of the vehicle and an actuator in communication connection with the automatic driving system, and the method comprises the following steps:
determining a target object of the current environment of the vehicle;
collecting characteristic parameters of the target object;
acquiring the motion information of the vehicle, and generating an automatic driving instruction according to the motion information and the characteristic parameters;
and controlling the actuator to execute the automatic driving instruction to realize automatic driving of the vehicle.
Optionally, the automatic driving system includes a perception model comprising a first perception submodel and a second perception submodel, and the determining of the target object in the current environment of the vehicle includes:
identifying, through the first perception submodel and based on a preset sensor, a first target object at or within a preset distance in the current environment of the vehicle;
and identifying, through the second perception submodel and based on the sensor, a second target object beyond the preset distance in the current environment of the vehicle.
Optionally, the automatic driving system further includes a fusion model in communication connection with the perception model, where the fusion model includes a first fusion submodel and a second fusion submodel, the target object includes at least an environmental object and a moving object, and the acquiring characteristic parameters of the target object includes:
collecting, through the first fusion sub-model and based on the sensor, a first environment characteristic parameter of a first environment object, and a first motion parameter and a first object type of a first moving object;
and collecting, through the second fusion sub-model and based on the sensor, a second environment characteristic parameter of a second environment object, and a second motion parameter and a second object type of a second moving object.
Optionally, the automatic driving system further includes a planning decision model in communication connection with the fusion model, where the planning decision model includes a first decision sub-model and a second decision sub-model, and the obtaining motion information of the vehicle and generating an automatic driving instruction according to the motion information and the characteristic parameter includes:
obtaining the motion information of the vehicle through the first decision sub-model, and generating a first automatic driving instruction aiming at the first moving object by adopting the motion information, the first environment characteristic parameter, the first motion parameter of the first moving object and the first object type;
and generating a second automatic driving instruction aiming at the second moving object by adopting the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object and a second object type through the second decision sub-model.
Optionally, the automatic driving system further includes an execution control model communicatively connected to the planning decision model, where the execution control model includes a first control sub-model and a second control sub-model, and the controlling of the actuator to execute the automatic driving instruction to realize automatic driving of the vehicle includes:
controlling an actuator corresponding to a vehicle to execute an operation corresponding to the first automatic driving instruction within a first preset time period through the first control sub-model, so as to realize automatic driving of the vehicle;
controlling an actuator corresponding to the vehicle to execute the operation corresponding to the second automatic driving instruction within a second preset time period through the second control sub-model, so as to realize the automatic driving of the vehicle;
the first preset time period and the second preset time period are mutually connected time periods.
Optionally, the first fusion submodel and the second fusion submodel are communicatively connected through a first sharing layer, and the collecting of the characteristic parameters of the target object includes:
acquiring, through the first fusion sub-model and based on the first sharing layer, the second environment object and the second moving object, and collecting, through the sensor, the second environment characteristic parameter of the second environment object, the second motion parameter of the second moving object and the second object type identified by the second perception sub-model;
and acquiring, through the second fusion sub-model and based on the first sharing layer, the first environment object and the first moving object, and collecting, through the sensor, the first environment characteristic parameter of the first environment object, the first motion parameter of the first moving object and the first object type identified by the first perception sub-model.
Optionally, the first decision sub-model and the second decision sub-model are communicatively connected through a second sharing layer, and the obtaining of the motion information of the vehicle and the generating of the automatic driving instruction according to the motion information and the characteristic parameters include:
acquiring motion information of the vehicle through the first decision sub-model, acquiring the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type from the second decision sub-model through the second sharing layer, and generating a second automatic driving instruction for the second moving object by adopting the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type;
and acquiring, through the second decision submodel and based on the second sharing layer, the first environment characteristic parameter, the first motion parameter of the first moving object and the first object type from the first decision submodel, and generating a first automatic driving instruction for the first moving object by adopting the motion information, the first environment characteristic parameter, the first motion parameter of the first moving object and the first object type.
Optionally, the first control submodel and the second control submodel are in communication connection through a third sharing layer, and the controlling the actuator to execute the automatic driving instruction to realize automatic driving of the vehicle includes:
acquiring a second automatic driving instruction from the second control submodel based on the third sharing layer through the first control submodel, and controlling an actuator corresponding to a vehicle to execute an operation corresponding to the second automatic driving instruction within a second preset time period to realize automatic driving of the vehicle;
and acquiring a first automatic driving instruction from the first control submodel based on the third sharing layer through the second control submodel, and controlling an actuator corresponding to the vehicle to execute an operation corresponding to the first automatic driving instruction within the first preset time period, so as to realize automatic driving of the vehicle.
Optionally, the method further comprises:
obtaining training samples for the autonomous driving system;
inputting the training sample into the automatic driving system for model training, and generating a predicted value corresponding to the training sample;
and comparing the predicted value with a preset reference value, and carrying out reverse training on the automatic driving system according to a comparison result.
Optionally, the inputting a training sample into the automatic driving system for model training, and generating a predicted value corresponding to the training sample includes:
respectively inputting the training samples into the first perception submodel to generate a first perception result, and inputting the training samples into the second perception submodel to generate a second perception result;
inputting the first sensing result into the first fusion submodel to generate a first fusion result, and inputting the second sensing result into a second fusion submodel to generate a second fusion result;
inputting the first fusion result into the first decision submodel to generate a first decision result, and inputting the second fusion result into the second decision submodel to generate a second decision result;
and inputting the first decision result into the first control submodel to generate a first predicted value, and inputting the second decision result into the second control submodel to generate a second predicted value.
Optionally, the comparing the predicted value with a preset reference value, and performing reverse training on the automatic driving system according to a comparison result includes:
comparing the first predicted value with a preset first reference value to obtain a first comparison result;
comparing the second predicted value with a preset second reference value to obtain a second comparison result;
inputting the first comparison result and the second comparison result into a preset initial automatic driving system for iteration, and calculating a plurality of loss functions of the initial automatic driving system after each iteration;
and when the plurality of loss functions of the initial automatic driving system stop decreasing after iteration, i.e. are minimized, stopping the iteration and outputting the target automatic driving system.
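The training loop described above (forward pass through the chained submodels, comparison of the predicted value against a preset reference value, iteration until the losses are minimized) can be caricatured with a single scalar parameter standing in for the whole chain; a real system would backpropagate through each submodel. Everything here, including the learning rate and stopping tolerance, is an assumed illustration:

```python
def train(samples, references, lr=0.1, max_iters=200, tol=1e-6):
    """Iterate a toy forward pass until the squared-error loss is minimized."""
    w = 0.0                     # stand-in for all submodel parameters
    prev_loss = float("inf")
    loss = 0.0
    for _ in range(max_iters):
        loss = 0.0
        grad = 0.0
        for x, y in zip(samples, references):
            pred = w * x        # stand-in forward pass through the model chain
            loss += (pred - y) ** 2          # compare prediction vs. reference
            grad += 2.0 * (pred - y) * x
        if prev_loss - loss < tol:           # loss minimized: stop iterating
            break
        prev_loss = loss
        w -= lr * grad / len(samples)        # reverse-training update
    return w, loss
```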
The embodiment of the invention also discloses a vehicle, which comprises:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform the method as described above.
Embodiments of the invention also disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the methods as described above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the automatic driving system of the vehicle may comprise a perception model, a fusion model communicatively connected to the perception model, a planning decision model communicatively connected to the fusion model, and an execution control model communicatively connected to the planning decision model. The perception model determines a target object in the current environment of the vehicle; the fusion model collects characteristic parameters of the target object; the planning decision model obtains the motion information of the vehicle and generates an automatic driving instruction according to the motion information and the characteristic parameters; and the execution control model controls the corresponding actuator of the vehicle to execute the automatic driving instruction, realizing automatic driving of the vehicle. By carrying out information collection, parameter processing and decision generation through the communication connections between the models in the automatic driving system, vehicle-mounted resources are fully utilized, and the manageability and utilization rate of the resources are improved.
Drawings
FIG. 1 is a block diagram of an embodiment of an autopilot system for a vehicle according to the present invention;
FIG. 2 is a block diagram of the structure of an automatic driving system in an embodiment of the present invention;
FIG. 3 is a block diagram of the structure of an automatic driving system in an embodiment of the present invention;
FIG. 4 is a block diagram of the structure of an automatic driving system in an embodiment of the present invention;
FIG. 5 is a flow chart of steps in an embodiment of a method for automated driving of a vehicle of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to FIG. 1, a block diagram of an embodiment of an automatic driving system of a vehicle according to the present invention is shown. The automatic driving system includes a perception model, a fusion model communicatively connected to the perception model, a planning decision model communicatively connected to the fusion model, and an execution control model communicatively connected to the planning decision model, together with an actuator communicatively connected to the execution control model in the automatic driving system;
the perception model is used for determining a target object of the current environment where the vehicle is located;
the fusion model is used for acquiring characteristic parameters of the target object;
the planning decision model is used for acquiring the motion information of the vehicle; generating an automatic driving instruction according to the motion information and the characteristic parameters;
and the execution control model is used for controlling an actuator corresponding to the vehicle to execute the automatic driving instruction so as to realize automatic driving of the vehicle.
As an example, for an automatic driving system, the stability of the vehicle can be ensured by means of two-path redundancy: when a single node of one control path fails, the node on the other path can continue to operate. Specifically, the redundant lines of the automatic driving system may include a bidirectional backbone layout forming a loop to and from the processor. The backbone network may operate as two independent loops, and if part of the backbone network fails, data from the device may be sent to the slave processor for processing, ensuring stable driving of the vehicle.
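The failover behaviour sketched above can be reduced to a routing rule: frames reach the main processor while either backbone loop is intact, and fall back to the slave processor only when both loops are down. The function name and the two-flag model are illustrative assumptions:

```python
def deliver(frame, loop_a_ok: bool, loop_b_ok: bool):
    """Route a device frame over the dual-loop backbone.

    While at least one of the two independent loops is intact, the frame
    reaches the main processor; if both loops fail, the device hands its
    data to the slave processor so the vehicle keeps driving stably.
    """
    if loop_a_ok or loop_b_ok:
        return ("main_processor", frame)
    return ("slave_processor", frame)
```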
In embodiments of the invention, the vehicle may include an autonomous driving system, the autonomous driving system may include a perception model, a fusion model, a planning decision model, and an execution control model, and the vehicle may include an actuator communicatively coupled to the autonomous driving system. Each model included in the automatic driving system can be a software module, a hardware module, or a module combining software and hardware; the actuators may include actuators for controlling the driving state of the vehicle, such as a steering mechanism, a driving motor, a braking mechanism, and a shift position control mechanism, and may control the lateral direction and the longitudinal direction of the vehicle.
For ease of understanding and explanation, the embodiment of the present invention takes as an example the case where each model in the automatic driving system is a module combining software and hardware, that is, software controlling hardware. The perception, fusion, planning and decision processing may be software or hardware inside the controller: it may control a specific chip inside the controller, be a module inside the chip, or be a software model running on the chip to implement the corresponding functions of the different paths. Actuator control may likewise be handled by a software module inside the controller, which selects a corresponding hardware port or software module to output a control instruction to an actuator.
In a specific implementation, the perception model perceives the environment of the vehicle during actual driving, including identifying target objects in the environment that have an influence on the driving of the vehicle. A target object may be an environment object or a moving object: environment objects include fixed-position objects that direct the driving of the vehicle, such as lane lines, traffic lights and roadblocks, while moving objects include objects in a moving state, such as people, animals and vehicles. It should be noted that people and animals standing in place are still classified as moving objects, to avoid a system misjudgment should they suddenly start moving. The perception model may include sensors for information collection and processing, such as a camera, a millimeter-wave radar, a lidar, a positioning system and an ultrasonic radar; after the automatic driving function of the vehicle is started, the current environment of the vehicle can be subjected to object recognition through these sensors to determine the target object.
After the perception model identifies, through the sensors, a target object around the vehicle that influences its driving, the fusion model can determine the positional relationship between the vehicle and the target object as well as the characteristic parameters of the target object. Optionally, the fusion model may be the data-processing processor within each sensor and may reside in the same sensor as the perception model: the perception model is responsible for identifying the environment objects and moving objects in the vehicle's environment, while the fusion model processes the positional relationships between those objects and the vehicle, yielding environment characteristic parameters, motion parameters and object types that remain reliable across different scenes and environmental conditions, on which driving instructions can then be decided.
In one example, the perception model and the fusion model may be located on the same sensor, such as a millimeter-wave radar, an ultrasonic radar or a lidar. The perception model identifies environment objects and moving objects through the sensor, determining the driving indicators and moving objects in the vehicle's environment that influence its driving, and the fusion model then collects the environment characteristic parameters of the environment objects and the motion parameters and object types of the moving objects. Specifically, the environment characteristic parameters may indicate driving, restricted driving, decelerated driving, prohibited lane changes, permitted lane changes, and the like. For example, when the environment object is a traffic light, a red light means stop and wait, a green light means passage is allowed, and a yellow light means caution; when the environment object is a double yellow line, lane changes and line-pressing driving in the current lane are prohibited; and when the environment object is a speed-limit sign, the current lane is a speed-limited lane.
For moving objects, i.e. objects in a moving state such as vehicles, people and animals, the object type of each moving object may be identified (for example, whether it is a vehicle, a person or an animal), and its motion parameters (including motion direction, motion speed, and the like) may then be obtained through visual perception, in order to follow vehicles, avoid pedestrians and animals, and so on. By identifying environment objects and obtaining their environment characteristic parameters, and by identifying moving objects and obtaining their motion parameters and object types, reliable characteristic parameters are obtained on which automatic driving decisions can be made under various scenes and environmental conditions.
The planning decision model can make automatic driving decisions and plans according to the environmental objects and moving objects identified by the perception model, the environmental characteristic parameters, object types, and motion parameters output by the fusion model, and the motion information of the vehicle. Specifically, the motion information of the vehicle may include its current motion direction, current motion speed, and the like. The planning decision model may also hold preset driving rules, stored driving habits of the user, and the like, so that it can plan and decide based on the obtained characteristic parameters, the driving rules, the driving habits, and so on, and generate the automatic driving instruction.
In one example, the planning decision model may generate an automatic driving instruction based on the environmental characteristic parameters and the driving rules; for example, according to identified lane lines, traffic light indicators, and the like, combined with the preset driving rules, it may generate a corresponding instruction (e.g., forward, stop, accelerate, or change lanes). It may also generate instructions based on the motion parameters and object types of moving objects; for example, it may obtain the motion parameters of vehicles ahead, behind, and to the sides, output the relationship between those vehicles and the host vehicle (including their distances), and generate an instruction accordingly (e.g., decelerate and avoid, or follow the vehicle ahead). Likewise, it may obtain the motion parameters of a pedestrian and, according to the pedestrian's motion direction and speed combined with those of the vehicle itself, output the positional relationship between the pedestrian and the vehicle, and thus an automatic driving instruction (e.g., a deceleration instruction).
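A toy rule-based planner along these lines can be sketched as below. The rule priorities and the thresholds (20 m following gap, 15 m pedestrian gap) are illustrative assumptions only, not values from the patent:

```python
def decide(light: str, lead_gap_m: float, pedestrian_gap_m: float) -> str:
    """Combine environmental features into a single driving instruction.
    Rules are checked in safety-first order: traffic light, then
    pedestrians, then the vehicle ahead."""
    if light == "red":
        return "stop"
    if pedestrian_gap_m < 15.0:   # illustrative pedestrian threshold
        return "decelerate"
    if lead_gap_m < 20.0:         # illustrative following-gap threshold
        return "follow"
    return "proceed"
```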
After the planning decision model outputs the automatic driving instruction, the instruction can be transmitted to the corresponding vehicle actuator, which executes it to realize automatic driving of the vehicle. The actuators may include actuators that control the driving state of the vehicle, such as a steering mechanism, a driving motor, a brake system, and a gear control mechanism. Specifically, the steering of the vehicle can be controlled by the steering mechanism, the driving motor, and the like, and the speed of the vehicle can be controlled by the driving motor, the braking system, the gear control mechanism, and the like, thereby realizing automatic driving of the vehicle.
In an alternative embodiment of the present invention, referring to fig. 2, a schematic diagram of an automatic driving system in an embodiment of the present invention is shown. The automatic driving system may adopt a dual-path redundancy model: the perception model may include a first perception submodel and a second perception submodel, the fusion model may include a first fusion submodel and a second fusion submodel, the planning decision model may include a first decision submodel and a second decision submodel, and the execution control model may include a first control submodel and a second control submodel. The first control link may then consist of the first perception submodel, the first fusion submodel, the first decision submodel, and the first control submodel; the second control link may consist of the second perception submodel, the second fusion submodel, the second decision submodel, and the second control submodel. The arbitration module can transmit the automatic driving instruction to the actuator, and the actuator executes the corresponding instruction.
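The dual-path structure can be sketched as two identical stage pipelines feeding an arbitration step. The stage callables and the fallback arbitration policy below are stand-ins of my own, assumed for illustration:

```python
def run_link(stages, sensor_data):
    """Run one control link: perception -> fusion -> decision -> control.
    Each stage is a callable transforming the previous stage's output."""
    out = sensor_data
    for stage in stages:
        out = stage(out)
    return out

def arbitrate(first_result, second_result):
    """Arbitration sketch: forward the first link's instruction when it
    produced one, otherwise fall back to the second link."""
    return first_result if first_result is not None else second_result

# One link built from trivial stand-in stages that just tag the data:
link = [lambda d: d + ["perceived"], lambda d: d + ["fused"],
        lambda d: d + ["decided"], lambda d: d + ["controlled"]]
result = run_link(link, ["raw"])
```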
Optionally, the automatic driving system may include a redundant mode, a complementary mode, and a learning mode. In the redundant mode, the automatic driving system may control the traveling of the vehicle through one of the links; in the complementary mode, it can control the driving of the vehicle in a complex scene through both links; and in the learning mode, the algorithm models involved in the automatic driving system can be trained and optimized, so that the models remain optimal and more complex and variable road environments can be handled.
For the redundant mode, the data processing process of the automatic driving system may refer to the description above; that is, when a single node in one control link fails, processing can continue through the corresponding node of the other link, ensuring the integrity and stability of the control link. This is not described again here.
In the complementary mode, the two links in the automatic driving system complement each other for a complex driving scene, jointly identifying the scene, processing data, and controlling the vehicle. Complex scenes may include residential or commercial scenes with mixed flows of people and vehicles, congestion on expressways or urban roads, and the like. In such scenes the vehicle must identify both medium-to-long-distance and short-distance target objects, and the targets are relatively complex. The cooperative work of the two control paths therefore makes effective use of on-board resources, calling on the redundant parts while still ensuring safe driving, so that the automatic driving system can make maximal use of the on-board resources.
It should be noted that a complex scene may contain, besides common vehicles, two-wheeled vehicles, pedestrians, special work vehicles, and animals affecting traffic. The vehicle must accurately identify lane lines and guide lines with different meanings among complex ground markings, and at the same time identify traffic signs, traffic lights, vehicle indicators, and the like among complex road buildings or obstacles, so as to make reasonable decisions under the corresponding limiting conditions and ensure safe driving. The limiting conditions may include conditions ensuring that the vehicle follows the traffic rules (speed limits, travel restrictions, passing rules), conditions ensuring safe driving and preventing accidents such as collisions and scrapes, conditions ensuring smooth driving with reduced excessive acceleration, deceleration, and turning, and conditions matching the driver's habits (travel route, driving mode, and the like). The complementary mode thus fully employs both control paths, achieving effective management and utilization of on-board resources; under the premise of safe driving, the computing resources of the two channels are combined to calculate more models and more complex parameters, superimposing software and hardware resources and making full use of the on-board resources.
In a specific implementation, for a vehicle running in a complex scene, the first perception submodel may identify, based on a preset sensor, a first target object within a preset distance of the vehicle's current environment, and the second perception submodel may identify, based on the sensor, a second target object beyond the preset distance. The preset distance is a distance threshold set for the vehicle: an environment within the threshold can be regarded as a medium-to-short-distance scene, and an environment beyond it as a long-distance scene. For example, with a preset distance of 30 meters, the first perception submodel recognizes target objects within 30 meters through the sensor, and the second perception submodel recognizes target objects beyond 30 meters. Calling the two perception models to recognize objects at different distances makes full use of on-board resources and effectively improves the efficiency of information acquisition.
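The distance-based routing between the two perception submodels can be sketched as follows. The `(object_id, distance)` pair shape is an assumption for illustration; only the 30-meter threshold comes from the text's example:

```python
PRESET_DISTANCE_M = 30.0  # threshold from the example in the text

def split_by_distance(detections, threshold=PRESET_DISTANCE_M):
    """Route detections to the two perception submodels by range:
    <= threshold goes to the first (near) submodel, > threshold to the
    second (far) submodel. `detections` is a list of
    (object_id, distance_m) pairs (assumed shape)."""
    near = [d for d in detections if d[1] <= threshold]
    far = [d for d in detections if d[1] > threshold]
    return near, far

near, far = split_by_distance([("car", 10.0), ("sign", 30.0), ("truck", 45.0)])
```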
After the perception model identifies the target objects, the first fusion submodel may acquire, through the sensor, the first environment characteristic parameter of the first environment object and the first motion parameter and first object type of the first moving object, and the second fusion submodel may likewise acquire the second environment characteristic parameter of the second environment object and the second motion parameter and second object type of the second moving object. The first fusion submodel can then transmit the first environment characteristic parameter, the first motion parameter, and the first object type to the first decision submodel, which, after obtaining the motion information of the vehicle, generates a first automatic driving instruction for the first moving object using the motion information, the first environment characteristic parameter, the first motion parameter, and the first object type. The second fusion submodel may transmit the second environment characteristic parameter, the second motion parameter, and the second object type to the second decision submodel, which may generate a second automatic driving instruction for the second moving object using the motion information, the second environment characteristic parameter, the second motion parameter, and the second object type.
It should be noted that, because the first fusion submodel and the second fusion submodel calculate characteristic parameters for target objects at different distances, the automatic driving instructions output by the planning decision model are driving instructions for different distances. Mutually connected automatic driving instructions can thus be generated through the two control paths, ensuring that the vehicle runs smoothly in a complex scene over a future period of time and optimizing the driving experience.
Specifically, the first control submodel may control the vehicle's corresponding actuator to execute the operation of the first automatic driving instruction within a first preset time period, and the second control submodel may control the actuator to execute the operation of the second automatic driving instruction within a second preset time period, realizing automatic driving of the vehicle. The first and second preset time periods are alternating, contiguous time periods, for example 0-5 seconds (first preset time period), 5-10 seconds (second preset time period), 10-15 seconds (first preset time period), 15-20 seconds (second preset time period), and so on. The automatic driving system thus controls the vehicle actuator to execute corresponding operations according to the driving instructions of the two control paths in different time periods, so that the vehicle runs smoothly in a complex scene over a future period of time and the driving experience is optimized.
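The alternation between the two preset time periods can be sketched as a simple schedule. The 5-second period and the half-open interval convention (a boundary instant belongs to the period that starts there) are assumptions for illustration:

```python
def active_link(t_seconds: float, period: float = 5.0) -> str:
    """Return which control submodel owns the actuator at time t, assuming
    the first and second preset time periods simply alternate every
    `period` seconds (0-5s first, 5-10s second, 10-15s first, ...)."""
    return "first" if int(t_seconds // period) % 2 == 0 else "second"
```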
In another optional embodiment of the present invention, referring to fig. 3, a schematic diagram of an automatic driving system in an embodiment of the present invention is shown. The first fusion submodel and the second fusion submodel may be connected through a first sharing layer, the first decision submodel and the second decision submodel through a second sharing layer, and the first control submodel and the second control submodel through a third sharing layer. Adding data sharing channels to the automatic driving system allows the computed models to be scheduled and managed, aggregates the computing resources of the two channels, and improves the flexibility of linked operation between the models, so that the corresponding models can be called for data processing according to different driving scenes.
In a specific implementation, the first fusion submodel may be configured to obtain the second environment object and the second moving object through the first sharing layer, and to acquire, through the sensor, the second environment characteristic parameter, the second motion parameter, and the second object type identified by the second perception submodel; the second fusion submodel can be used to obtain the first environment object and the first moving object through the first sharing layer, and to acquire, through the sensor, the first environment characteristic parameter, the first motion parameter, and the first object type identified by the first perception submodel.
The first decision submodel may be used to obtain the motion information of the vehicle; to acquire the second environment characteristic parameter, the second motion parameter of the second moving object, and the second object type from the second decision submodel through the second sharing layer; and to generate a second automatic driving instruction for the second moving object using the motion information, the second environment characteristic parameter, the second motion parameter, and the second object type. The second decision submodel may be configured to acquire the first environment characteristic parameter, the first motion parameter of the first moving object, and the first object type from the first decision submodel through the second sharing layer, and to generate a first automatic driving instruction for the first moving object using the motion information, the first environment characteristic parameter, the first motion parameter, and the first object type.
The first control submodel can be used for acquiring a second automatic driving instruction from the second control submodel through the third sharing layer, and controlling an actuator corresponding to the vehicle to execute an operation corresponding to the second automatic driving instruction within a second preset time period so as to realize automatic driving of the vehicle; the second control submodel may be configured to obtain the first automatic driving instruction from the first control submodel through the third sharing layer, and control an actuator corresponding to the vehicle to execute an operation corresponding to the first automatic driving instruction within a first preset time period, so as to implement automatic driving of the vehicle.
Specifically, through the data sharing layers between the models, the first fusion submodel can use the sensing result of either path A or path B, and likewise for the second fusion submodel; the first decision submodel can use the fusion result of either path, and likewise for the second decision submodel; and the first control submodel can use the decision result of either path. Through this configuration of data interaction and model scheduling, the perception, fusion, and planning control of the automatic driving system become more flexible: the system can switch between different modes according to the requirements of the actual driving scene and run whichever mode the scene demands, and during a mode switch between path A and path B it can still guarantee the continuity of model calculation.
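A minimal sketch of such a sharing layer is a publish/read channel that lets one path consume the other's stage result. The class and method names are illustrative assumptions, not the patent's API:

```python
class SharingLayer:
    """Each path publishes its stage output; either path may read the
    other's result through the shared channel."""
    def __init__(self):
        self._results = {}

    def publish(self, path: str, result):
        self._results[path] = result

    def read(self, path: str):
        return self._results.get(path)

# A fusion submodel on path A consuming path B's perception result:
layer = SharingLayer()
layer.publish("B", {"objects": ["pedestrian"]})
cross_path_input = layer.read("B")
```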
In the learning mode, the executed effects can be compared and analyzed in real time through two groups of learning parameters, so that the learning parameters are optimized and the automatic driving system is optimized and updated.
In specific implementation, a training sample for the automatic driving system is obtained, the training sample is input into the automatic driving system for model training, a predicted value corresponding to the training sample is generated, the predicted value is compared with a preset reference value, and reverse training is performed on the automatic driving system according to a comparison result.
Specifically, the training samples may be respectively input into a first perception submodel to generate a first perception result, and the training samples may be input into a second perception submodel to generate a second perception result; inputting the first sensing result into a first fusion submodel to generate a first fusion result, and inputting the second sensing result into a second fusion submodel to generate a second fusion result; inputting the first fusion result into a first decision submodel to generate a first decision result, and inputting the second fusion result into a second decision submodel to generate a second decision result; and inputting the first decision result into a first control submodel to generate a first predicted value, and inputting the second decision result into a second control submodel to generate a second predicted value.
Comparing the first predicted value with a preset first reference value to obtain a first comparison result; comparing the second predicted value with a preset second reference value to obtain a second comparison result; inputting the first comparison result and the second comparison result into a preset initial automatic driving system for iteration, and calculating a plurality of loss functions of the initial automatic driving system after each iteration; and when a plurality of loss functions of the initial automatic driving system after iteration are minimized, stopping the iteration and generating the target automatic driving system.
The parameter updating of the models may be based on a gradient descent strategy, updating the parameters along the target gradient direction. In a specific implementation, a learning rate can be preset to control the update step size of the parameters in each iteration, finally yielding each model in the automatic driving system. In addition, because the minimum of the loss function is often difficult to reach in practice, model iteration can also be controlled by setting a number of iterations; when the loss function reaches an expected value or remains essentially unchanged, model training can be regarded as finished, and the automatic driving system is updated accordingly.
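The training loop described above can be sketched as plain gradient descent on a scalar parameter. The stopping rule (fixed iteration budget plus a plateau check on the step size) follows the text; the toy quadratic loss is my own example:

```python
def train(grad, theta, lr=0.1, max_iters=100, tol=1e-6):
    """Gradient descent: the preset learning rate `lr` controls the update
    step each iteration; stop after `max_iters` iterations or when the
    update is essentially unchanged (the loss has plateaued).
    `grad(theta)` returns dLoss/dtheta."""
    for _ in range(max_iters):
        step = lr * grad(theta)
        theta = theta - step
        if abs(step) < tol:
            break
    return theta

# Minimising loss(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3):
theta_star = train(lambda t: 2.0 * (t - 3.0), theta=0.0)
```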
In one example, if path A and path B use the same data and run in the same mode, they should in principle produce the same result; if some specific parameter or group of parameters is modified, the execution effects of different model parameters on the overall operation of the vehicle can be compared under various scenes, realizing optimal updating of the vehicle's automatic driving system. For example, referring to fig. 4, a schematic diagram of an automatic driving system in an embodiment of the present invention is shown, in which the planning decision model undergoes learning optimization. The parameters of the planning decision model of one path (e.g., path B) can be modified; the perception model and the fusion model then output the same results on both paths, while the planning decision models output different results because of their different parameters. The results are fed back to the perception model through the arbitration module, so that one group of parameters is kept in closed-loop learning under the same operating scene, the calculation results of the models at each layer are compared in real time, and the two groups of results can then be compared and optimized. Optimization of the model parameters may cover the state of vehicle motion control under different scenarios, for example desired actions and behaviors based on driving-rule priority, on driving-safety priority, on ride-comfort priority, or on the driver's habits. Because the parameters used by these rule models influence the output of the automatic driving system differently, real-time comparison of the two sets of model parameters yields the model parameters that are optimal for the vehicle's motion performance in each scene.
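The two-parameter-set comparison can be sketched as an A/B evaluation over the same scenes. The `run_with` and `score` callables and the toy error metric below are assumptions for illustration:

```python
def compare_parameter_sets(scenes, run_with, params_a, params_b, score):
    """Run the same scenes through both paths with different planner
    parameters and keep whichever set scores better overall (a hedged
    sketch of the learning-mode closed-loop comparison)."""
    total_a = sum(score(run_with(params_a, s)) for s in scenes)
    total_b = sum(score(run_with(params_b, s)) for s in scenes)
    return params_a if total_a >= total_b else params_b

# Toy demo: the "model" output is its error against the scene's ideal
# value, and a higher score means a smaller error.
run_with = lambda params, scene: abs(params - scene)
score = lambda error: -error
best = compare_parameter_sets([1.0, 2.0, 3.0], run_with, 2.0, 10.0, score)
```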
Optionally, for the learning process of other models in the automatic driving system, reference may be made to the above process, which is not described herein again.
In the embodiment of the invention, the automatic driving system of the vehicle may include a perception model, a fusion model communicatively connected with the perception model, a planning decision model communicatively connected with the fusion model, and an execution control model communicatively connected with the planning decision model. The perception model determines a target object in the vehicle's current environment; the fusion model acquires characteristic parameters of the target object; the planning decision model acquires motion information of the vehicle and generates an automatic driving instruction according to the motion information and the characteristic parameters; and the execution control model controls the corresponding actuator to execute the instruction, realizing automatic driving of the vehicle. Information acquisition, parameter processing, and decision generation for the vehicle's environment are thus carried out through the communication connections among the models in the automatic driving system, making full use of on-board resources and improving their manageability and utilization.
Referring to fig. 5, a flowchart illustrating the steps of an embodiment of an automatic driving method for a vehicle according to the present invention is shown. The method is applied to an automatic driving system of a vehicle and an actuator communicatively connected to the automatic driving system, and may specifically include the following steps:
step 501, determining a target object of the current environment of a vehicle;
step 502, collecting characteristic parameters of the target object;
step 503, obtaining motion information of the vehicle, and generating an automatic driving instruction according to the motion information and the characteristic parameters;
and step 504, controlling the actuator to execute the automatic driving instruction to realize automatic driving of the vehicle.
In an optional embodiment of the invention, the automatic driving system comprises a perception model, the perception model comprises a first perception submodel and a second perception submodel, and the determining the target object of the current environment of the vehicle comprises:
identifying a first target object which is smaller than or equal to a preset distance in the current environment of the vehicle on the basis of a preset sensor through the first perception submodel;
and identifying a second target object which is larger than a preset distance in the current environment of the vehicle based on the sensor through the second perception submodel.
In an optional embodiment of the present invention, the automatic driving system further includes a fusion model in communication with the perception model, the fusion model includes a first fusion submodel and a second fusion submodel, the target object includes at least an environmental object and a moving object, and the acquiring the characteristic parameters of the target object includes:
acquiring a first environment characteristic parameter of a first environment object, a first motion parameter of the first motion object and a first object type based on the sensor through the first fusion sub-model;
and acquiring a second environment characteristic parameter of a second environment object, a second motion parameter of the second motion object and a second object type based on the sensor through the second fusion sub-model.
In an optional embodiment of the present invention, the automatic driving system further includes a planning decision model in communication connection with the fusion model, where the planning decision model includes a first decision sub-model and a second decision sub-model, and the obtaining motion information of the vehicle and generating an automatic driving instruction according to the motion information and the characteristic parameter includes:
obtaining the motion information of the vehicle through the first decision sub-model, and generating a first automatic driving instruction aiming at the first moving object by adopting the motion information, the first environment characteristic parameter, the first motion parameter of the first moving object and the first object type;
and generating a second automatic driving instruction aiming at the second moving object by adopting the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object and a second object type through the second decision sub-model.
In an optional embodiment of the present invention, the automatic driving system further includes an execution control model communicatively connected to the planning decision model, where the execution control model includes a first control sub-model and a second control sub-model, and the controlling the actuator to execute the automatic driving instruction to realize automatic driving of the vehicle includes:
controlling an actuator corresponding to a vehicle to execute an operation corresponding to the first automatic driving instruction within a first preset time period through the first control sub-model, so as to realize automatic driving of the vehicle;
controlling an actuator corresponding to the vehicle to execute the operation corresponding to the second automatic driving instruction within a second preset time period through the second control sub-model, so as to realize the automatic driving of the vehicle;
the first preset time period and the second preset time period are mutually connected time periods.
In an optional embodiment of the present invention, the first fusion sub-model and the second fusion sub-model are communicatively connected through a first sharing layer, and the acquiring the characteristic parameters of the target object includes:
acquiring the second environment object and the second moving object based on the first sharing layer through the first fusion sub-model, and acquiring a second environment characteristic parameter of the second environment object, a second moving parameter of the second moving object and a second object type identified by the second perception sub-model through the sensor;
and acquiring the first environment object and the first moving object based on the first sharing layer through the second fusion sub-model, and acquiring a first environment characteristic parameter of the first environment object, a first moving parameter of the first moving object and a first object type of the first moving object, which are identified by the first perception sub-model, through the sensor.
In an optional embodiment of the present invention, the first decision sub-model and the second decision sub-model are communicatively connected through a second sharing layer, and the obtaining motion information of the vehicle and generating an automatic driving instruction according to the motion information and the characteristic parameters includes:
acquiring motion information of the vehicle through the first decision sub-model, acquiring the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type from the second decision sub-model through the second sharing layer, and generating a second automatic driving instruction for the second moving object by adopting the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type;
and acquiring the first environment characteristic parameter, the first motion parameter of the first moving object, and the first object type from the first decision submodel based on the second sharing layer through the second decision submodel, and generating a first automatic driving instruction for the first moving object by adopting the motion information, the first environment characteristic parameter, the first motion parameter of the first moving object, and the first object type.
In an optional embodiment of the present invention, the first control sub-model and the second control sub-model are communicatively connected through a third sharing layer, and the controlling the actuator to execute the automatic driving instruction to realize automatic driving of the vehicle includes:
acquiring a second automatic driving instruction from the second control submodel based on the third sharing layer through the first control submodel, and controlling an actuator corresponding to a vehicle to execute an operation corresponding to the second automatic driving instruction within a second preset time period to realize automatic driving of the vehicle;
and acquiring a first automatic driving instruction from the first control submodel based on the third sharing layer through the second control submodel, and controlling an actuator corresponding to the vehicle to execute an operation corresponding to the first automatic driving instruction within the first preset time period, so as to realize automatic driving of the vehicle.
In an optional embodiment of the present invention, further comprising:
obtaining training samples for the autonomous driving system;
inputting the training sample into the automatic driving system for model training, and generating a predicted value corresponding to the training sample;
and comparing the predicted value with a preset reference value, and carrying out reverse training on the automatic driving system according to a comparison result.
In an optional embodiment of the present invention, the inputting a training sample into the automatic driving system for model training, and generating a predicted value corresponding to the training sample includes:
respectively inputting the training samples into the first perception submodel to generate a first perception result, and inputting the training samples into the second perception submodel to generate a second perception result;
inputting the first sensing result into the first fusion submodel to generate a first fusion result, and inputting the second sensing result into a second fusion submodel to generate a second fusion result;
inputting the first fusion result into the first decision submodel to generate a first decision result, and inputting the second fusion result into the second decision submodel to generate a second decision result;
and inputting the first decision result into the first control submodel to generate a first predicted value, and inputting the second decision result into the second control submodel to generate a second predicted value.
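The two parallel forward paths enumerated above (training sample → perception → fusion → decision → control → predicted value) can be sketched as stage compositions. The stage stubs below simply label their input; in the patent each stage would be a learned submodel, so all names are illustrative:

```python
# Each stage is a stub transformation that tags its input with the
# stage's name, making the composition order visible in the output.

def make_stage(label):
    def stage(x):
        return f"{label}({x})"
    return stage

def build_path(prefix):
    """Chain perception -> fusion -> decision -> control for one path."""
    perception = make_stage(prefix + "_perception")
    fusion = make_stage(prefix + "_fusion")
    decision = make_stage(prefix + "_decision")
    control = make_stage(prefix + "_control")
    def path(sample):
        return control(decision(fusion(perception(sample))))
    return path

first_path = build_path("first")    # first submodels: first predicted value
second_path = build_path("second")  # second submodels: second predicted value

first_pred = first_path("sample")
second_pred = second_path("sample")
print(first_pred)
```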
In an optional embodiment of the present invention, the comparing the predicted value with a preset reference value, and performing reverse training on the automatic driving system according to a comparison result includes:
comparing the first predicted value with a preset first reference value to obtain a first comparison result;
comparing the second predicted value with a preset second reference value to obtain a second comparison result;
inputting the first comparison result and the second comparison result into a preset initial automatic driving system for iteration, and calculating a plurality of loss functions of the initial automatic driving system after each iteration;
and when a plurality of loss functions of the initial automatic driving system after iteration are minimized, stopping the iteration and generating the target automatic driving system.
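A minimal sketch of iterating until a plurality of loss functions are minimized, as in the stopping criterion above. The two quadratic losses on a shared parameter and the tolerance-based stop are illustrative assumptions, not the patent's loss design:

```python
# Two per-path losses share one parameter w; iteration stops once the
# summed loss no longer improves beyond a small tolerance.

def total_loss(w):
    first_loss = (w - 1.0) ** 2   # loss for the first path
    second_loss = (w - 3.0) ** 2  # loss for the second path
    return first_loss + second_loss

def minimize(lr=0.05, tol=1e-9, max_iter=10000):
    w = 0.0
    prev = total_loss(w)
    for _ in range(max_iter):
        grad = 2 * (w - 1.0) + 2 * (w - 3.0)  # gradient of the summed losses
        w -= lr * grad
        cur = total_loss(w)
        if prev - cur < tol:  # losses minimized: stop iterating
            break
        prev = cur
    return w

w_star = minimize()
print(round(w_star, 3))  # minimum of the summed losses lies at w = 2.0
```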
In the embodiment of the invention, the automatic driving system of the vehicle may comprise a perception model, a fusion model in communication connection with the perception model, a planning decision model in communication connection with the fusion model, and an execution control model in communication connection with the planning decision model. The perception model determines a target object in the current environment of the vehicle; the fusion model then acquires characteristic parameters of the target object; the planning decision model acquires motion information of the vehicle and generates an automatic driving instruction according to the motion information and the characteristic parameters; finally, the execution control model controls an actuator corresponding to the vehicle to execute the automatic driving instruction, thereby realizing automatic driving of the vehicle. In this way, information acquisition, parameter processing and decision generation for the environment in which the vehicle is located are carried out through the communication connections among the models of the automatic driving system, so that vehicle-mounted resources are fully utilized and the manageability and utilization rate of those resources are improved.
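The perception → fusion → planning decision → execution control chain summarized above can be sketched end to end. Every class, field name, and the braking rule below is an illustrative assumption, not the patent's actual implementation:

```python
class PerceptionModel:
    def detect(self, environment):
        # Determine a target object in the vehicle's current environment.
        return {"object": "pedestrian", "position": environment["ahead_m"]}

class FusionModel:
    def characterize(self, target):
        # Acquire characteristic parameters of the target object.
        return {"type": target["object"], "distance_m": target["position"]}

class PlanningDecisionModel:
    def decide(self, motion, features):
        # Generate a driving instruction from motion info + characteristics.
        # Assumed rule: brake if a pedestrian is inside 2 s of travel.
        if features["type"] == "pedestrian" and \
           features["distance_m"] < motion["speed_mps"] * 2:
            return "brake"
        return "keep_lane"

class ExecutionControlModel:
    def execute(self, instruction, actuator):
        # Hand the instruction to the vehicle's actuator.
        return actuator(instruction)

def actuator(instruction):
    return f"actuator:{instruction}"

env = {"ahead_m": 15.0}        # pedestrian 15 m ahead
motion = {"speed_mps": 10.0}   # vehicle motion information
pipeline_out = ExecutionControlModel().execute(
    PlanningDecisionModel().decide(
        motion, FusionModel().characterize(PerceptionModel().detect(env))
    ),
    actuator,
)
print(pipeline_out)
```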
It should be noted that, for simplicity of description, the method embodiments are described as a series or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts described, since some steps may be performed in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required by the present invention.
As for the method embodiment, since it is basically similar to the system embodiment, its description is relatively brief; for relevant details, reference may be made to the description of the system embodiment.
An embodiment of the present invention further provides a vehicle, including:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform a method as described in embodiments of the invention.
Embodiments of the invention also provide one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the methods described in embodiments of the invention.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, EEPROM, Flash, eMMC, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The automatic driving system and method for a vehicle provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (21)

1. An autonomous driving system for a vehicle, the autonomous driving system comprising a perception model, a fusion model in communication with the perception model, a planning decision model in communication with the fusion model, and an execution control model in communication with the planning decision model; and an actuator in communication with the execution control model in the autopilot system;
the perception model is used for determining a target object of the current environment where the vehicle is located;
the fusion model is used for acquiring characteristic parameters of the target object;
the planning decision model is used for acquiring the motion information of the vehicle; generating an automatic driving instruction according to the motion information and the characteristic parameters;
and the execution control model is used for controlling an actuator corresponding to the vehicle to execute the automatic driving instruction so as to realize automatic driving of the vehicle.
2. The autopilot system of claim 1 wherein the perception model includes a first perception submodel and a second perception submodel, the target objects including a first target object and a second target object corresponding to an environment in which the vehicle is currently located;
the first perception submodel is used for identifying, through a preset sensor, a first target object whose distance from the vehicle is smaller than or equal to a preset distance in the current environment of the vehicle;
and the second perception submodel is used for identifying, through the sensor, a second target object whose distance from the vehicle is larger than the preset distance in the current environment of the vehicle.
3. The autopilot system of claim 2 wherein the fusion model comprises a first fusion submodel and a second fusion submodel, the target objects comprising at least an environmental object and a moving object;
the first fusion submodel is used for acquiring a first environment characteristic parameter of a first environment object, a first motion parameter of the first motion object and a first object type through the sensor;
and the second fusion submodel is used for acquiring a second environment characteristic parameter of the second environment object, a second motion parameter of the second motion object and a second object type through the sensor.
4. The autopilot system of claim 3 wherein the planning decision model includes a first decision submodel and a second decision submodel;
the first decision sub-model is used for acquiring the motion information of the vehicle; generating a first automatic driving instruction for the first moving object by adopting the motion information, the first environment characteristic parameter, a first motion parameter of the first moving object and a first object type;
the second decision sub-model is configured to generate a second automatic driving instruction for the second moving object by using the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object, and the second object type.
5. The autopilot system of claim 4 wherein the execution control model includes a first control submodel and a second control submodel;
the first control submodel is used for controlling an actuator corresponding to a vehicle to execute an operation corresponding to the first automatic driving instruction within a first preset time period so as to realize automatic driving of the vehicle;
the second control submodel is used for controlling an actuator corresponding to the vehicle to execute the operation corresponding to the second automatic driving instruction within a second preset time period so as to realize the automatic driving of the vehicle;
the first preset time period and the second preset time period are consecutive, mutually adjoining time periods.
6. The autopilot system of claim 5 wherein the first and second fused submodels are communicatively coupled via a first shared layer;
the first fusion sub-model is used for acquiring the second environment object and the second moving object through the first sharing layer, and acquiring a second environment characteristic parameter of the second environment object, a second moving parameter of the second moving object and a second object type identified by the second perception sub-model through the sensor;
the second fusion submodel is used for acquiring the first environment object and the first moving object through the first sharing layer, and acquiring a first environment characteristic parameter of the first environment object, a first moving parameter of the first moving object and a first object type identified by the first perception submodel through the sensor.
7. The autopilot system of claim 6 wherein the first decision submodel is communicatively coupled to the second decision submodel via a second shared layer;
the first decision sub-model is used for acquiring the motion information of the vehicle; acquiring the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type from the second decision sub-model through the second sharing layer, and generating a second automatic driving instruction for the second moving object by adopting the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type;
the second decision sub-model is configured to obtain the first environmental characteristic parameter, the first motion parameter of the first moving object, and the first object type from the first decision sub-model through the second sharing layer, and generate a first automatic driving instruction for the first moving object by using the motion information, the first environmental characteristic parameter, the first motion parameter of the first moving object, and the first object type.
8. The autopilot system of claim 7 wherein the first control submodel is communicatively coupled to the second control submodel via a third shared layer;
the first control submodel is used for acquiring a second automatic driving instruction from the second control submodel through the third sharing layer, and controlling an actuator corresponding to a vehicle to execute an operation corresponding to the second automatic driving instruction within the second preset time period so as to realize automatic driving of the vehicle;
and the second control submodel is used for acquiring a first automatic driving instruction from the first control submodel through the third sharing layer, and controlling an actuator corresponding to a vehicle to execute an operation corresponding to the first automatic driving instruction in the first preset time period so as to realize automatic driving of the vehicle.
9. An automatic driving method of a vehicle, which is applied to an automatic driving system of the vehicle and an actuator in communication connection with the automatic driving system, the method comprising:
determining a target object of the current environment of the vehicle;
collecting characteristic parameters of the target object;
acquiring the motion information of the vehicle, and generating an automatic driving instruction according to the motion information and the characteristic parameters;
and controlling the actuator to execute the automatic driving instruction to realize automatic driving of the vehicle.
10. The method of claim 9, wherein the autonomous driving system includes a perception model including a first perception submodel and a second perception submodel, the determining a target object of the environment in which the vehicle is currently located, comprising:
identifying, through the first perception submodel on the basis of a preset sensor, a first target object whose distance from the vehicle is smaller than or equal to a preset distance in the current environment of the vehicle;
and identifying, through the second perception submodel on the basis of the sensor, a second target object whose distance from the vehicle is larger than the preset distance in the current environment of the vehicle.
11. The method of claim 10, wherein the autopilot system further comprises a fusion model in communicative connection with the perception model, the fusion model comprising a first fusion submodel and a second fusion submodel, the target object comprising at least an environmental object and a moving object, the acquiring the characteristic parameters of the target object comprising:
acquiring a first environment characteristic parameter of a first environment object, a first motion parameter of the first motion object and a first object type based on the sensor through the first fusion sub-model;
and acquiring a second environment characteristic parameter of a second environment object, a second motion parameter of the second motion object and a second object type based on the sensor through the second fusion sub-model.
12. The method of claim 11, wherein the autopilot system further comprises a planning decision model communicatively coupled to the fusion model, the planning decision model comprising a first decision submodel and a second decision submodel, the obtaining motion information for the vehicle and generating autopilot commands based on the motion information and the characteristic parameters comprises:
obtaining the motion information of the vehicle through the first decision sub-model, and generating a first automatic driving instruction aiming at the first moving object by adopting the motion information, the first environment characteristic parameter, the first motion parameter of the first moving object and the first object type;
and generating a second automatic driving instruction aiming at the second moving object by adopting the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object and a second object type through the second decision sub-model.
13. The method of claim 12, wherein the autonomous driving system further comprises an execution control model communicatively coupled to the planning decision model, the execution control model comprising a first control sub-model and a second control sub-model, the controlling the actuator to execute the autonomous driving instructions to effect autonomous driving of the vehicle, comprising:
controlling an actuator corresponding to a vehicle to execute an operation corresponding to the first automatic driving instruction within a first preset time period through the first control sub-model, so as to realize automatic driving of the vehicle;
controlling an actuator corresponding to the vehicle to execute the operation corresponding to the second automatic driving instruction within a second preset time period through the second control sub-model, so as to realize the automatic driving of the vehicle;
the first preset time period and the second preset time period are consecutive, mutually adjoining time periods.
14. The method according to claim 13, wherein the first fused submodel and the second fused submodel are communicatively connected through a first sharing layer, and the acquiring the characteristic parameters of the target object comprises:
acquiring the second environment object and the second moving object based on the first sharing layer through the first fusion sub-model, and acquiring a second environment characteristic parameter of the second environment object, a second moving parameter of the second moving object and a second object type identified by the second perception sub-model through the sensor;
and acquiring the first environment object and the first moving object based on the first sharing layer through the second fusion sub-model, and acquiring a first environment characteristic parameter of the first environment object, a first moving parameter of the first moving object and a first object type of the first moving object, which are identified by the first perception sub-model, through the sensor.
15. The method of claim 14, wherein the first decision sub-model and the second decision sub-model are connected through a second sharing layer, and the obtaining motion information of the vehicle and generating an automatic driving instruction according to the motion information and the characteristic parameters comprises:
acquiring motion information of the vehicle through the first decision sub-model, acquiring the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type from the second decision sub-model through the second sharing layer, and generating a second automatic driving instruction for the second moving object by adopting the motion information, the second environment characteristic parameter, the second motion parameter of the second moving object and the second object type;
and acquiring the first environment characteristic parameter, the first motion parameter of the first moving object and the first object type from the first decision submodel based on the second sharing layer through the second decision submodel, and generating a first automatic driving instruction aiming at the first moving object by adopting the motion information, the first environment characteristic parameter, the first motion parameter of the first moving object and the first object type.
16. The method of claim 15, wherein the first control submodel and the second control submodel are communicatively coupled through a third shared layer, and the controlling the actuator to execute the automatic driving command to achieve automatic driving of the vehicle comprises:
acquiring, through the first control submodel, a second automatic driving instruction from the second control submodel based on the third sharing layer, and controlling an actuator corresponding to the vehicle to execute an operation corresponding to the second automatic driving instruction within the second preset time period, to realize automatic driving of the vehicle;
and acquiring, through the second control submodel, a first automatic driving instruction from the first control submodel based on the third sharing layer, and controlling the actuator corresponding to the vehicle to execute an operation corresponding to the first automatic driving instruction within the first preset time period, so as to realize automatic driving of the vehicle.
17. The method of claim 13, further comprising:
obtaining training samples for the autonomous driving system;
inputting the training sample into the automatic driving system for model training, and generating a predicted value corresponding to the training sample;
and comparing the predicted value with a preset reference value, and carrying out reverse training on the automatic driving system according to a comparison result.
18. The method of claim 17, wherein inputting training samples into the autopilot system for model training and generating predicted values corresponding to the training samples comprises:
respectively inputting the training samples into the first perception submodel to generate a first perception result, and inputting the training samples into the second perception submodel to generate a second perception result;
inputting the first sensing result into the first fusion submodel to generate a first fusion result, and inputting the second sensing result into the second fusion submodel to generate a second fusion result;
inputting the first fusion result into the first decision submodel to generate a first decision result, and inputting the second fusion result into the second decision submodel to generate a second decision result;
and inputting the first decision result into the first control submodel to generate a first predicted value, and inputting the second decision result into the second control submodel to generate a second predicted value.
19. The method of claim 17, wherein comparing the predicted value with a preset reference value and performing reverse training on the automatic driving system according to the comparison result comprises:
comparing the first predicted value with a preset first reference value to obtain a first comparison result;
comparing the second predicted value with a preset second reference value to obtain a second comparison result;
inputting the first comparison result and the second comparison result into a preset initial automatic driving system for iteration, and calculating a plurality of loss functions of the initial automatic driving system after each iteration;
and when a plurality of loss functions of the initial automatic driving system after iteration are minimized, stopping the iteration and generating the target automatic driving system.
20. A vehicle, characterized by comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform the method of any of claims 9-19.
21. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method of any one of claims 9-19.
CN202011198246.6A 2020-10-30 2020-10-30 Automatic driving system and method for vehicle Active CN112249033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011198246.6A CN112249033B (en) 2020-10-30 2020-10-30 Automatic driving system and method for vehicle

Publications (2)

Publication Number Publication Date
CN112249033A true CN112249033A (en) 2021-01-22
CN112249033B CN112249033B (en) 2022-02-01


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109358614A (en) * 2018-08-30 2019-02-19 深圳市易成自动驾驶技术有限公司 Automatic Pilot method, system, device and readable storage medium storing program for executing
CN111332295A (en) * 2018-12-17 2020-06-26 现代自动车株式会社 Vehicle and control method thereof
KR20200081523A (en) * 2018-12-17 2020-07-08 현대자동차주식회사 Vehicle and control method thereof
CN111295319A (en) * 2018-12-26 2020-06-16 华为技术有限公司 Vehicle control method, related device and computer storage medium
CN109814554A (en) * 2019-01-17 2019-05-28 深兰科技(上海)有限公司 A kind of automatic Pilot bus
CN109941211A (en) * 2019-03-22 2019-06-28 清华大学 A kind of Intelligent Vehicle Driving System structure common type framework and construction method
CN110562265A (en) * 2019-08-19 2019-12-13 中国第一汽车股份有限公司 vehicle driving control system and control method thereof
CN110568852A (en) * 2019-10-12 2019-12-13 深圳市布谷鸟科技有限公司 Automatic driving system and control method thereof
CN111665849A (en) * 2020-06-29 2020-09-15 北京智行者科技有限公司 Automatic driving system

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221636A (en) * 2021-03-29 2021-08-06 北京汽车研究总院有限公司 Automatic marking method for canceling lane change of front vehicle in scene marking
CN113296500A (en) * 2021-04-30 2021-08-24 浙江吉利控股集团有限公司 Local path planning method and system
CN113401139A (en) * 2021-06-21 2021-09-17 安徽江淮汽车集团股份有限公司 Tandem type automatic driving system
CN113401139B (en) * 2021-06-21 2023-02-17 安徽江淮汽车集团股份有限公司 Tandem type automatic driving system
CN113844463A (en) * 2021-09-26 2021-12-28 国汽智控(北京)科技有限公司 Vehicle control method and device based on automatic driving system and vehicle
WO2023077967A1 (en) * 2021-11-04 2023-05-11 武汉路特斯汽车有限公司 Autonomous driving control system and vehicle
CN114244880A (en) * 2021-12-16 2022-03-25 云控智行科技有限公司 Operation method, device, equipment and medium for intelligent internet driving cloud control function
CN114244880B (en) * 2021-12-16 2023-12-26 云控智行科技有限公司 Operation method, device, equipment and medium of intelligent network driving cloud control function
EP4202476A1 (en) * 2021-12-22 2023-06-28 GM Cruise Holdings LLC Anomaly prioritization using dual-mode adaptive radar
CN114419914A (en) * 2022-01-30 2022-04-29 重庆长安汽车股份有限公司 Driving system and method for early warning of traffic restriction and automatic avoidance of restricted number road section
CN114613180A (en) * 2022-02-22 2022-06-10 恒大新能源汽车投资控股集团有限公司 Autonomous parking method, device, vehicle and parking lot end server



Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: Room 46, room 406, No. 1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou, Guangdong 510725

Applicant after: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Address before: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province

Applicant before: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240226

Address after: 510000 No.8 Songgang street, Cencun, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: Room 46, room 406, No. 1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou, Guangdong 510725

Patentee before: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Country or region before: China