CN114684202A - Intelligent system for automatically driving vehicle and integrated control method thereof - Google Patents


Info

Publication number: CN114684202A (application CN202210612201.1A; granted as CN114684202B)
Authority: CN (China)
Prior art keywords: vehicle, agent, intelligent system, information
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 李强 (Li Qiang), 杨爱喜 (Yang Aixi)
Original and Current Assignee: Zhejiang Daqi New Energy Automobile Co., Ltd.
Application filed by: Zhejiang Daqi New Energy Automobile Co., Ltd.
Publications: CN114684202A (application); CN114684202B (grant)


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W30/14 Adaptive cruise control
    • B60W30/16 Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
    • B60W30/165 Automatically following the path of a preceding lead vehicle, e.g. "electronic tow-bar"
    • B60W2420/403 Image sensing, e.g. optical camera
    • B60W2420/408
    • B60W2552/50 Barriers (input parameters relating to infrastructure)
    • B60W2554/4042 Longitudinal speed (characteristics of dynamic objects)
    • B60W2554/802 Longitudinal distance (spatial relation or speed relative to objects)
    • B60W2555/20 Ambient conditions, e.g. wind or rain (input parameters relating to exterior conditions)
    • B60W2720/10 Longitudinal speed (output or target parameters relating to overall vehicle dynamics)
    • B60L PROPULSION OF ELECTRICALLY-PROPELLED VEHICLES; SUPPLYING ELECTRIC POWER FOR AUXILIARY EQUIPMENT OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRODYNAMIC BRAKE SYSTEMS FOR VEHICLES IN GENERAL; MAGNETIC SUSPENSION OR LEVITATION FOR VEHICLES; MONITORING OPERATING VARIABLES OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRIC SAFETY DEVICES FOR ELECTRICALLY-PROPELLED VEHICLES
    • B60L15/20 Control of the vehicle or its driving motor to achieve a desired performance, e.g. speed, torque, programmed variation of speed
    • B60L15/32 Control or regulation of multiple-unit electrically-propelled vehicles
    • B60L2220/42 Electrical machine applications with use of more than one motor
    • B60L2240/12 Speed (vehicle control parameters)
    • Y02T10/72 Electric energy management in electromobility (climate change mitigation technologies related to transportation)

Abstract

The invention discloses an intelligent system for an autonomous vehicle and an integrated control method thereof. The intelligent system comprises a perception fusion layer, a decision layer, a coordination layer and an execution layer. The perception fusion layer comprises a multi-sensor information feature fusion Agent connected to a GPS positioning Agent, a camera Agent, a millimeter-wave radar Agent and a laser radar Agent. The decision layer comprises a system Agent connected to a wireless communication Agent. The coordination layer comprises four motor Agents. The execution layer comprises an ECU (electronic control unit) connected to the wheels through a motor. The invention can coordinate and control autonomous vehicles, reduce cost and improve working efficiency.

Description

Intelligent system for automatically driving vehicle and integrated control method thereof
Technical Field
The invention relates to the technical field of automatic driving of automobiles, in particular to an intelligent system for automatically driving an automobile and an integrated control method thereof.
Background
At present, with the vigorous development of computer technology, coordination and cooperation among intelligent systems are being applied to complex work, which can greatly improve working efficiency, flexibility and robustness while reducing system cost. At the same time, with the rapid development of autonomous vehicles and multi-intelligent-system technology, applying multi-agent techniques to the coordination and control of autonomous vehicles has become a hot research direction. The multi-Agent system, which originated in distributed artificial intelligence, is a technology that has developed rapidly in recent years to provide intelligent solutions to large-scale problems. It involves fields such as parallel computing, distributed systems, knowledge engineering and expert systems, and represents a development of, and leap beyond, traditional object technology. By describing, decomposing and distributing the problem domain, it forms relatively simple, decentralized subsystems oriented to specific problems and coordinates all of them to solve the problem in parallel and in cooperation. It is well suited to the intelligent solution of large-scale diagnosis problems in dynamic, distributed, real-time and uncertain complex systems, shows great advantages in fault resolution, diagnosis and control, and exhibits a certain social intelligence by interacting, coordinating and cooperating with its environment, with people and with other individuals, thereby solving large-scale complex problems that some traditional AI cannot solve.
Based on this, how to adopt different control strategies for different situations, so that autonomous vehicles can be applied in settings such as mines, port transportation, factory parks and warehouse patrol, reducing cost and improving working efficiency, is the technical problem the applicant urgently needs to solve.
Disclosure of Invention
The invention aims to provide an intelligent system for an automatic driving vehicle and an integrated control method thereof. The invention can coordinate and control the automatic driving vehicle, save the cost and improve the working efficiency.
The technical scheme of the invention is as follows: an intelligent system for an autonomous vehicle comprises a perception fusion layer, a decision layer, a coordination layer and an execution layer. The perception fusion layer comprises a multi-sensor information feature fusion Agent connected to a GPS positioning Agent, a camera Agent, a millimeter-wave radar Agent and a laser radar Agent. The decision layer comprises a system Agent connected to a wireless communication Agent. The coordination layer comprises four motor Agents. The execution layer comprises an MCU (microprogrammed control unit) correspondingly connected to its motor Agent; each MCU is connected to a motor whose output end drives a wheel. The perception fusion layer senses the environmental information around the vehicle and the vehicle's position information, and the multi-sensor information feature fusion Agent fuses and outputs these data. The system Agent decomposes and sequentially optimizes the work tasks sent by the wireless communication Agent, receives the data sent by the perception fusion layer, processes them together with the vehicle's own data, and issues work instructions. The coordination layer ensures the information interaction and cooperation between the decision layer and the execution layer. The execution layer acquires real-time working-condition information, executes the work instructions issued by the decision layer, and ensures power output.
According to the above intelligent system, the wireless communication Agent receives a work instruction and sends it to the system Agent, which decomposes and optimizes it. The GPS positioning Agent, camera Agent, millimeter-wave radar Agent and laser radar Agent sense environmental information data and vehicle information data, and the multi-sensor information feature fusion Agent fuses these data and sends them to the system Agent. The system Agent processes them comprehensively and issues instructions to the coordination layer and the execution layer, which receive the instructions and apply the output to the vehicle. Meanwhile, the system Agent records the vehicle's action data, state data and environment information data, performs deep reinforcement learning on the system Agent with the recorded data, and thereafter issues instructions with the trained system Agent.
In the above intelligent system, the multi-sensor information feature fusion Agent fuses and outputs data as follows: jointly calibrate the GPS positioning Agent, camera Agent, millimeter-wave radar Agent and laser radar Agent; synchronize them in time and space; then associate and fuse the vehicle position and attitude information acquired by the GPS positioning Agent, the obstacle position coordinates acquired by the camera Agent, the inter-vehicle distance, speed and angle information acquired by the millimeter-wave radar Agent, and the point cloud map acquired by the laser radar Agent; and finally determine the target.
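As an illustration of the association-and-fusion step, the following minimal Python sketch (not from the patent; the function names, gating threshold and fusion weights are invented for illustration) associates camera and millimeter-wave radar detections that have already been transformed into a common vehicle frame, then fuses each matched pair with a fixed weighting:

```python
import numpy as np

def associate_detections(camera_dets, radar_dets, gate=2.0):
    """Nearest-neighbour association of camera and radar detections.

    camera_dets, radar_dets: (N, 2) arrays of obstacle positions already
    transformed into a common vehicle frame (joint calibration and space
    synchronization are assumed to have been done upstream). Returns a
    list of (camera_index, radar_index) pairs whose distance falls
    within the gating threshold `gate` (meters).
    """
    pairs = []
    used = set()
    for i, c in enumerate(camera_dets):
        dists = np.linalg.norm(radar_dets - c, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < gate and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs

def fuse_pair(cam_pos, radar_pos, w_cam=0.3, w_radar=0.7):
    """Weighted fusion; radar is weighted more here as an illustration."""
    return w_cam * cam_pos + w_radar * radar_pos
```

A real system would replace the fixed weights with covariance-based fusion, but the associate-then-fuse structure is the same.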
In the foregoing intelligent system for an autonomous vehicle, the calibration of the camera Agent is to convert a camera coordinate system in which the camera is located into a pixel coordinate system:
$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} $$

wherein: $X_c$, $Y_c$ and $Z_c$ respectively represent coordinate values of the camera coordinate system, and $u$, $v$ and $1$ represent the (homogeneous) coordinate values of the pixel coordinate system; $K$ is the internal reference (intrinsic) matrix of the camera, defined by the following formula:

$$ K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$

in the formula: $f_x$ and $f_y$ are the focal lengths of the camera along the x-axis and y-axis respectively; $u_0$ and $v_0$ are the optical centers on the u-axis and v-axis of the image coordinate system respectively.

The coordinates are corrected with the radial distortion parameters:

$$ x_r = x \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right), \qquad y_r = y \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) $$

in the formula: $x_r$ and $y_r$ are the abscissa and ordinate after radial distortion correction; $x$ and $y$ are the distorted abscissa and ordinate; $k_1$, $k_2$ and $k_3$ are the radial distortion coefficients of the camera; $r$ is the distance of the point from the imaging center; $p_1$ and $p_2$ are the tangential distortion coefficients of the camera.

The coordinates are corrected with the tangential distortion parameters:

$$ x_t = x + 2 p_1 x y + p_2 \left( r^2 + 2 x^2 \right), \qquad y_t = y + p_1 \left( r^2 + 2 y^2 \right) + 2 p_2 x y $$

in the formula: $x_t$ and $y_t$ are the abscissa and ordinate after tangential distortion correction.
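Assuming the standard pinhole-plus-distortion camera model described in this section, a minimal Python sketch of the projection and the distortion correction might look as follows (the intrinsic values and distortion coefficients are made-up illustrations, not calibration results from the patent):

```python
import numpy as np

# Hypothetical intrinsics: fx, fy are focal lengths, (u0, v0) the optical
# center, matching the layout of the intrinsic matrix K in the text.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def camera_to_pixel(p_cam, K):
    """Project a 3-D point in the camera frame to pixel coordinates."""
    Xc, Yc, Zc = p_cam
    uvw = K @ np.array([Xc, Yc, Zc])
    return uvw[:2] / uvw[2]          # divide by the depth Zc

def undistort(x, y, k1, k2, k3, p1, p2):
    """Apply radial then tangential correction to normalized coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_c = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_c = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_c, y_c
```

For example, a point on the optical axis at depth 2 m projects to the optical center (320, 240).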
the calibration of the laser radar Agent and the millimeter wave radar Agent is divided into internal reference calibration and external reference calibration; the internal reference is calibrated into a distance correction angle, a rotation correction angle, a vertical correction angle and a horizontal offset factor; the external reference calibration is to establish the relationship between the sensor and a world coordinate system or other sensor coordinate systems through calibration.
In the foregoing intelligent system, time synchronization comprises hardware synchronization and software synchronization. Hardware synchronization means that the multiple sensors are triggered to sample at the same instant; software synchronization means that a unified host computer provides a reference time for each sensor, so that every frame of sensor data is aligned to a unified timestamp.
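A minimal sketch of the software-synchronization idea, assuming each sensor Agent delivers timestamped frames and the host computer supplies the unified reference timestamps (the function name, tolerance and data layout are invented for illustration):

```python
import bisect

def sync_to_timestamps(frames, ref_times, tol=0.05):
    """Assign each unified reference timestamp the closest sensor frame.

    frames: list of (timestamp, data) pairs sorted by timestamp, as one
    sensor Agent would produce them; ref_times: unified timestamps from
    the host computer. Frames further than `tol` seconds from a reference
    time are dropped, so every surviving frame carries a unified stamp.
    """
    stamps = [t for t, _ in frames]
    out = []
    for rt in ref_times:
        i = bisect.bisect_left(stamps, rt)
        cands = [j for j in (i - 1, i) if 0 <= j < len(stamps)]
        if not cands:
            continue
        j = min(cands, key=lambda j: abs(stamps[j] - rt))
        if abs(stamps[j] - rt) <= tol:
            out.append((rt, frames[j][1]))
    return out
```

Running this once per sensor stream against the same `ref_times` yields frame sets that share a common timeline, which is the precondition for the association step above.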
The integrated control method for the intelligent system comprises a plurality of autonomous vehicles equipped with the above intelligent system. The vehicles are controlled vertically: the control terminal issues an instruction task to the intelligent system of the first-level (primary) leader vehicle; the primary leader's intelligent system issues it to the intelligent system of the second-level (secondary) leader vehicle; and finally the secondary leader's intelligent system issues it to the intelligent systems of the following vehicles, so that each vehicle executes its corresponding regional task.
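The vertical command chain described above can be sketched as a simple relay in which each intelligent system forwards the task one level down (the class, attribute and task names are invented for illustration):

```python
class VehicleAgentSystem:
    """Minimal sketch of the vertical command chain: control terminal ->
    primary leader -> secondary leader -> followers."""

    def __init__(self, name):
        self.name = name
        self.subordinates = []
        self.task = None

    def receive(self, task, log):
        self.task = task
        log.append(self.name)          # record the dissemination order
        for sub in self.subordinates:  # relay the task one level down
            sub.receive(task, log)

primary = VehicleAgentSystem("primary_leader")
sec = VehicleAgentSystem("secondary_leader")
f1, f2 = VehicleAgentSystem("follower_1"), VehicleAgentSystem("follower_2")
primary.subordinates = [sec]
sec.subordinates = [f1, f2]

log = []
primary.receive("patrol_zone_A", log)  # the control terminal issues the task
```

The dissemination order is leader-first, which mirrors the primary-to-secondary-to-follower sequence in the text.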
In the above integrated control method, the regional tasks include formation tasks for multiple vehicles. The control terminal issues a formation task to the intelligent system of the primary leader vehicle, which receives the message, performs the corresponding formation action, and issues the formation task to the intelligent systems of the secondary leader vehicles; each secondary leader's intelligent system receives the task and determines that secondary leader's position in the formation sequence, then issues the task to the intelligent systems of the following vehicles, which in turn determine the followers' positions in the formation sequence. Once every vehicle's position in the sequence is determined, the formation is complete. The intelligent system of the primary leader vehicle then completes the initial path planning from the position information of the target point, the environment and the obstacles. When the primary leader vehicle starts, it sends a start signal to the intelligent systems of the secondary leader vehicles, which forward it to the intelligent systems of the following vehicles; the secondary leaders and followers then start in formation order. Each following vehicle keeps its distance from the vehicle ahead and passes information backwards, while the power output by each motor is continuously adjusted according to the data transmitted by the perception fusion layer. The vehicles communicate through their wireless communication Agents. After the primary leader vehicle acquires the position coordinates of an obstacle, the information is relayed by the secondary leaders to the following vehicles in sequence, the corresponding actions are taken, and the path planning is updated. Because the secondary leaders and followers learn the obstacle information in advance, they can act in advance and detect whether the obstacle is still in its original position: if it is, the path plan is not updated; otherwise the path is re-planned. The formation ends when all vehicles reach the destination.
In the integrated control method for an intelligent system for automatically driving vehicles, the distance and formation steps of the primary leader vehicle, the secondary leader vehicle and the following vehicles are as follows:
A directed graph $G = (V, E)$ represents the network of the intelligent systems of the plurality of vehicles, where $V$ is the set of vertices of the directed graph and the edge set $E$ represents the interactions between intelligent systems. The adjacency matrix $A = [a_{ij}]$ of $G$ describes the information exchange between nodes of the graph and is defined as:

$$ a_{ij} = \begin{cases} 1, & (v_j, v_i) \in E \\ 0, & \text{otherwise} \end{cases} $$

The dynamic model of the second-order system is described as:

$$ \dot{x}_i = v_i, \qquad \dot{v}_i = u_i $$

in the formula: $x_i$, $v_i$ and $u_i$ respectively represent the position, velocity and acceleration of the $i$-th intelligent system; $u_i$ is the control algorithm; $\dot{x}_i$ is the first derivative of the $i$-th intelligent system's position, and $\dot{v}_i$ is the first derivative of the $i$-th intelligent system's speed.

For the second-order system, the control algorithm is as follows:

$$ u_i = -\sum_{j \in N_i} a_{ij} \big[ (x_i - x_j) + \gamma \, (v_i - v_j) \big] $$

wherein $\gamma > 0$ is the control gain, $a_{ij}$ are the adjacency matrix elements, and $N_i$ denotes the set of neighbor intelligent systems from which intelligent system $i$ can obtain information.

Under this control algorithm, for arbitrary initial states $x_i(0)$ and $v_i(0)$, when $t \to \infty$, $\|x_i - x_j\| \to 0$ and $\|v_i - v_j\| \to 0$, and the network of multiple intelligent systems converges to consensus.

Considering the spacing and formation between an arbitrary intelligent system $i$ and an intelligent system $j$, the control law can be defined as:

$$ u_i = -\sum_{j \in N_i} a_{ij} \big[ (x_i - x_j - d_{ij}) + \gamma \, (v_i - v_j) \big] - b_i \big[ (x_i - x_0 - d_{i0}) + \gamma_0 \, (v_i - v_0) \big] $$

in the formula: $a_{ij}$ are the elements of the adjacency matrix; $b_i$ are the elements of the connection matrix, describing whether the intelligent system of a secondary leader vehicle acquires information from the intelligent system of the primary leader vehicle; $d_{ij}$ is the desired longitudinal distance between intelligent system $i$ and intelligent system $j$; $d_{i0}$ is the desired longitudinal distance between the secondary-leader intelligent system $i$ and the primary-leader intelligent system (indexed $0$); and $\gamma$ and $\gamma_0$ are control gains.
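Under the assumptions above, the formation law can be exercised in a small numerical sketch: two follower systems, fully connected to each other and both pinned to a constant-speed leader. The adjacency matrix, desired gaps and gains below are illustrative values, not from the patent:

```python
import numpy as np

def formation_step(x, v, A, b, d, dL, xL, vL, gamma, gamma0, dt):
    """One Euler step of the second-order spacing/formation control law:
    u_i = -sum_j a_ij[(x_i - x_j - d_ij) + gamma (v_i - v_j)]
          - b_i[(x_i - xL - dL_i) + gamma0 (v_i - vL)]
    x, v: positions/speeds of the n systems; A: adjacency matrix;
    b: connection (pinning) gains to the primary-leader state (xL, vL);
    d[i, j]: desired longitudinal gap between systems i and j; dL[i]:
    desired gap to the leader."""
    n = len(x)
    u = np.zeros(n)
    for i in range(n):
        for j in range(n):
            u[i] -= A[i, j] * ((x[i] - x[j] - d[i, j]) + gamma * (v[i] - v[j]))
        u[i] -= b[i] * ((x[i] - xL - dL[i]) + gamma0 * (v[i] - vL))
    return x + dt * v, v + dt * u

A = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([1.0, 1.0])
d = np.array([[0.0, 5.0], [-5.0, 0.0]])   # system 0 leads system 1 by 5 m
dL = np.array([-5.0, -10.0])              # desired gaps behind the leader
x, v = np.array([0.0, -20.0]), np.array([0.0, 0.0])
xL, vL = 0.0, 1.0                         # leader moves at 1 m/s
for _ in range(4000):                     # 40 s at dt = 0.01
    x, v = formation_step(x, v, A, b, d, dL, xL, vL, 1.5, 1.5, 0.01)
    xL += 0.01 * vL
```

After the run, the two systems settle roughly 5 m and 10 m behind the leader and match its speed, which is the convergence behaviour the text claims.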
According to the above integrated control method, when the vehicles execute a regional task, the intelligent system of any vehicle receives information from the intelligent systems of the other vehicles and generates an expected vehicle speed from the task and that information. After the data transmitted by the vehicle's own multi-sensor information feature fusion Agent are taken into account, the vehicle advances at the specified speed. Meanwhile, the intelligent system computes the distance to the vehicle ahead from the data collected by the millimeter-wave radar Agent; if that distance leaves the threshold range, the intelligent system changes the vehicle speed by adjusting the rotating speed of each motor.
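A minimal sketch of this spacing rule, assuming a proportional correction applied equally to the four motor Agents (the gain, threshold band and rpm limits are invented numbers):

```python
def adjust_motor_speeds(gap, desired_gap, wheel_rpm, band=2.0, k=5.0,
                        rpm_min=0.0, rpm_max=3000.0):
    """Adjust the four wheel-motor speeds from the measured gap to the
    vehicle ahead (millimeter-wave radar). If the gap leaves the
    threshold band around `desired_gap`, all four motor Agents get a
    proportional rpm correction; otherwise the speeds are kept."""
    error = gap - desired_gap
    if abs(error) <= band:           # inside the threshold range: keep speed
        return wheel_rpm
    delta = k * error                # too far -> speed up, too close -> slow down
    return [min(rpm_max, max(rpm_min, r + delta)) for r in wheel_rpm]
```

A per-wheel variant of the same rule (different deltas left/right) would also steer the vehicle, which is how four independent motor Agents enable both speed and heading adjustment.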
In the above integrated control method, when the vehicles execute a regional task, the intelligent system of any vehicle receives information from the intelligent systems of the other vehicles and generates a reference trajectory and/or an expected heading angle from the task and that information. After the data transmitted by the vehicle's own multi-sensor information feature fusion Agent are taken into account, the vehicle advances along its actual trajectory and/or heading angle. Meanwhile, the intelligent system compares the vehicle's actual trajectory and/or heading angle with the reference trajectory and/or expected heading angle using the data collected by the millimeter-wave radar Agent; if the deviation exceeds the threshold range, the vehicle's intelligent system improves the trajectory and/or heading angle through deep reinforcement learning, and if the number of training iterations does not yet meet the requirement, the deep reinforcement learning itself is further improved.
Compared with the prior art, the intelligent system of the invention comprises a perception fusion layer, a decision layer, a coordination layer and an execution layer. The perception fusion layer senses the environmental information around the vehicle and the vehicle's position information, and the multi-sensor information feature fusion Agent fuses and outputs these data. The system Agent decomposes and sequentially optimizes the work tasks sent by the wireless communication Agent, receives the data sent by the perception fusion layer, processes them together with the vehicle's own data, and issues work instructions. The coordination layer ensures the information interaction and cooperation between the decision layer and the execution layer. Finally, the execution layer acquires real-time working-condition information and executes the work instructions issued by the decision layer, ensuring power output. The intelligent systems can adopt different control strategies for different situations: during formation control and the issuing of task instructions they use vertical control, in which the leading intelligent system issues the task instructions, the vehicle speed, detected obstacles and other information to the following intelligent systems; when executing regional tasks (such as patrol), the multiple intelligent systems use flat control, in which the intelligent systems in the same region communicate with each other and relay messages. The invention can also control the multiple intelligent systems so that the heading angles and/or trajectories among them are optimized by reinforcement learning and the spacing among the multiple intelligent systems is adjusted.
Drawings
FIG. 1 is a schematic diagram of an intelligent system of the present invention;
FIG. 2 is a schematic diagram of the operation of the intelligent system of the present invention;
FIG. 3 is a multi-sensor information feature fusion Agent;
FIG. 4 is a schematic diagram of verticalization control of an intelligent system;
FIG. 5 is a schematic diagram of flat control when the intelligent systems execute regional tasks;
FIG. 6 is a vehicle formation control flow;
FIG. 7 is a schematic diagram of a vehicle formation control sequence;
FIG. 8 is a schematic diagram of intelligent system vertical control;
FIG. 9 is a schematic diagram of the intelligent system lateral control;
FIG. 10 is a diagram of intelligent system ROS control.
Detailed Description
The invention is further illustrated by the following figures and examples, which are not to be construed as limiting the invention.
Example: an intelligent system for an autonomous vehicle, as shown in FIG. 1, includes a perception fusion layer, a decision layer, a coordination layer and an execution layer. The perception fusion layer comprises a multi-sensor information feature fusion Agent connected to a GPS positioning Agent, a camera Agent, a millimeter-wave radar Agent and a laser radar Agent. The decision layer comprises a system Agent connected to a wireless communication Agent. The coordination layer comprises four motor Agents. The execution layer comprises an MCU (microprogrammed control unit) correspondingly connected to its motor Agent; each MCU is connected to a motor whose output end drives a wheel. The perception fusion layer senses the environmental information around the vehicle and the vehicle's position information, and the multi-sensor information feature fusion Agent fuses and outputs these data. The wireless communication Agent mainly receives the work tasks sent by the control terminal (a computer terminal) to the vehicle's intelligent system, as well as the work tasks, obstacle position information, vehicle speed and similar information sent by one vehicle to the others. The system Agent decomposes and sequentially optimizes the work tasks sent by the wireless communication Agent, receives the data sent by the perception fusion layer, processes them together with the vehicle's own data, and issues work instructions. The coordination layer ensures the information interaction and cooperation between the decision layer and the execution layer. The execution layer acquires real-time working-condition information, executes the work instructions issued by the decision layer, and ensures power output.
As shown in FIG. 2, the wireless communication Agent of the intelligent system receives a work instruction and sends it to the system Agent; the system Agent decomposes and optimizes the work instruction; the GPS positioning Agent, camera Agent, millimeter wave radar Agent and laser radar Agent then sense environmental information data and vehicle information data, and the multi-sensor information feature fusion Agent fuses these data and sends them to the system Agent; the system Agent performs comprehensive processing and then issues instructions to the coordination layer and the execution layer; finally, the coordination layer and the execution layer receive the instructions and output them to the vehicle. Meanwhile, each Agent exchanges information with the other functional Agents through a CAN bus; the system Agent records the action data, state data and environmental information data of the vehicle, deep reinforcement learning is performed on the system Agent using the recorded data, and the system Agent after deep reinforcement learning is then used to issue instructions.
In this embodiment, as shown in fig. 3, the process by which the multi-sensor information feature fusion Agent fuses and outputs the data is as follows: first perform joint calibration of the GPS positioning Agent, the camera Agent, the millimeter wave radar Agent and the laser radar Agent; then perform time and space synchronization; then perform data association and fusion on the vehicle position and attitude information acquired by the GPS positioning Agent, the obstacle position coordinate information acquired by the camera Agent, the vehicle distance, vehicle speed and angle information acquired by the millimeter wave radar Agent, and the point cloud map acquired by the laser radar Agent; and finally perform target determination.
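One common way to realize the "data association" step above is nearest-neighbor matching with a distance gate between detections from two sensors. The sketch below is an illustrative assumption, not the patent's specified method: 2-D positions and the 2-meter gate are made up for the example.

```python
# Illustrative nearest-neighbor data association with a gating threshold.
import math

def associate(camera_targets, radar_targets, gate=2.0):
    """Pair each camera target with the closest unused radar target within `gate` meters.

    Returns a list of (camera_index, radar_index) pairs.
    """
    pairs = []
    used = set()
    for ci, (cx, cy) in enumerate(camera_targets):
        best, best_d = None, gate
        for ri, (rx, ry) in enumerate(radar_targets):
            if ri in used:
                continue
            d = math.hypot(cx - rx, cy - ry)
            if d < best_d:          # closest detection inside the gate wins
                best, best_d = ri, d
        if best is not None:
            used.add(best)
            pairs.append((ci, best))
    return pairs
```

A camera target with no radar return inside the gate is simply left unpaired, which is where the subsequent "target determination" step would decide whether to trust the single-sensor detection.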
The calibration of the camera Agent converts the camera coordinate system in which the camera is located into the pixel coordinate system:

Z_c · [u, v, 1]^T = K · [X_c, Y_c, Z_c]^T

wherein: X_c, Y_c and Z_c respectively represent the coordinate values of the camera coordinate system, and u and v represent the coordinate values of the pixel coordinate system; K is the internal reference (intrinsic) matrix of the camera, which is defined by the formula:

K = [ f_x, 0, c_x ; 0, f_y, c_y ; 0, 0, 1 ]

in the formula: f_x and f_y are the focal lengths of the camera x axis and y axis, respectively; c_x and c_y are the optical centers of the u axis and v axis of the image coordinate system, respectively. The pixel coordinate system describes the position of a pixel point on the imaging chip of the camera; its origin O is the corner point at the upper left corner of the image. The camera coordinate system describes the three-dimensional coordinates of an object in space relative to the camera body; its origin O_c is the optical center of the camera, X_c and Y_c are axes parallel to the pixel plane, and Z_c is the camera optical axis, in meters;
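A quick numeric check of the pinhole projection above: dividing by the depth Z_c and applying the intrinsics gives u = f_x·X_c/Z_c + c_x and v = f_y·Y_c/Z_c + c_y. The intrinsic values in the example are arbitrary, chosen only for illustration.

```python
# Minimal pinhole projection: camera-frame point -> pixel coordinates.

def project_to_pixel(fx, fy, cx, cy, Xc, Yc, Zc):
    """Project a 3-D camera-frame point (Xc, Yc, Zc) to pixel coordinates (u, v)."""
    if Zc <= 0:
        raise ValueError("point must be in front of the camera")
    return fx * Xc / Zc + cx, fy * Yc / Zc + cy
```

For instance, with f_x = f_y = 500, optical center (320, 240) and a point (1, 2, 10) in meters, `project_to_pixel` returns the pixel (370.0, 340.0).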
the coordinates are corrected using the radial distortion parameters:

x_corrected = x · (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_corrected = y · (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

in the formula: x_corrected and y_corrected are the abscissa and ordinate after radial distortion correction; x and y are the distorted abscissa and ordinate; k_1, k_2 and k_3 are the radial distortion coefficients of the camera; r is the distance of the point from the imaging center; p_1 and p_2 are the tangential distortion coefficients of the camera;
the coordinates are corrected using the tangential distortion parameters:

x_corrected = x + 2 p_1 x y + p_2 (r^2 + 2 x^2)
y_corrected = y + p_1 (r^2 + 2 y^2) + 2 p_2 x y

in the formula: x_corrected and y_corrected are the abscissa and ordinate after tangential distortion correction.
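The radial and tangential corrections above can be applied in one pass (the usual Brown distortion model). The sketch below combines both formulas; the coefficient values in the check are arbitrary illustrations.

```python
# Combined radial + tangential distortion correction for a normalized point (x, y).

def undistort(x, y, k1, k2, k3, p1, p2):
    """Apply the radial terms (k1, k2, k3) and tangential terms (p1, p2)."""
    r2 = x * x + y * y                      # r^2, squared distance from center
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_c = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_c = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_c, y_c
```

With all five coefficients zero the correction is the identity, which is a useful sanity check when wiring calibration results into the fusion pipeline.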
The calibration of the laser radar Agent and the millimeter wave radar Agent is divided into intrinsic (internal reference) calibration and extrinsic (external reference) calibration. The intrinsic parameters to be calibrated are the distance correction, rotation correction angle, vertical correction angle and horizontal offset factor; the internal hardware parameters of a typical laser radar or millimeter wave radar are set by the manufacturer, so the point cloud data acquired by the laser radar can be applied directly to mapping and target detection, and the millimeter wave radar can detect obstacles directly. The extrinsic calibration establishes, through calibration, the relationship between the sensor and the world coordinate system or the coordinate systems of other sensors. The calibration principle is as follows: joint calibration is completed by a calibration-object feature point correlation method, that is, the two heterogeneous sensors both acquire the same chessboard calibration plate, the coordinates of the feature points on the calibration plate are extracted in the respective sensor coordinate systems, and the extrinsic parameter matrix of the joint calibration is solved.
The point cloud coordinates of a point on the target object surface in the lidar coordinate system may be expressed as P = (X, Y, Z)^T, and the pixel coordinates of the same feature point in the pixel coordinate system may be expressed as p = (u, v)^T, i.e., the same feature point represented in the image plane after the distance information is removed. The calibration process comprises: recording sensor data, including time stamps, point cloud data, image data and the like, through a rosbag package; outputting the recorded data to a calibration package on an upper computer, and manually selecting the correspondences between three-dimensional space feature points and two-dimensional image points in the calibration package; and finally selecting a certain number of feature point pairs for iterative computation, solving the rotation matrix and the translation matrix.

The conversion relation between the point cloud coordinates P = (X, Y, Z)^T and the pixel coordinate system is as follows:

Z_c · [u, v, 1]^T = K · (R · P + T)

in the formula, R is the rotation matrix and T is the translation matrix.
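The conversion relation above can be checked end to end: transform a lidar point with the extrinsics (R, T) into the camera frame, then project it with the intrinsic matrix K and divide by the homogeneous depth. Plain nested lists stand in for matrices; the numbers are illustrative.

```python
# Lidar point -> pixel via extrinsics (R, T) and intrinsics K.

def mat_vec(M, v):
    """Multiply a 3x3 matrix (nested lists) by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def lidar_to_pixel(K, R, T, P):
    """Return pixel (u, v) for lidar point P = [X, Y, Z]."""
    pc = [a + b for a, b in zip(mat_vec(R, P), T)]  # camera-frame point R*P + T
    uvw = mat_vec(K, pc)                            # homogeneous pixel coordinates
    return uvw[0] / uvw[2], uvw[1] / uvw[2]         # divide by depth Z_c
```

With an identity rotation and zero translation this reduces to the plain pinhole projection, so the same test point lands on the same pixel, which is a handy regression check after re-running the joint calibration.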
In this embodiment, the time synchronization includes hardware synchronization and software synchronization. Hardware synchronization means that the multiple sensors trigger sampling at the same moment; software synchronization means that a unified upper computer provides a reference time for each sensor, so that each frame of sensor data is synchronized to a unified timestamp.
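Software synchronization as described above is often implemented by aligning each frame of one sensor to the nearest-in-time frame of a reference sensor using the host-provided timestamps. A minimal sketch, assuming sorted timestamp lists in seconds:

```python
# Align sensor frames to the nearest reference-sensor frame by timestamp.
import bisect

def sync_to_reference(ref_stamps, sensor_stamps):
    """For each sensor stamp, return the index of the nearest reference stamp.

    `ref_stamps` must be sorted ascending.
    """
    matches = []
    for t in sensor_stamps:
        i = bisect.bisect_left(ref_stamps, t)
        # Candidates are the reference frames immediately before and after t.
        candidates = [c for c in (i - 1, i) if 0 <= c < len(ref_stamps)]
        matches.append(min(candidates, key=lambda c: abs(ref_stamps[c] - t)))
    return matches
```

In a real deployment one would also reject matches whose time gap exceeds half the sensor period, rather than always accepting the nearest frame.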
The integrated control method for the intelligent system of the automatic driving vehicle comprises a plurality of automatic driving vehicles with the intelligent system, and a manager can monitor and control the intelligent system of the vehicle through a computer terminal and issue work tasks. As shown in fig. 4, the vehicles are controlled vertically, the control terminal issues an instruction task to the intelligent system of the first-level leader vehicle, the intelligent system of the first-level leader vehicle issues an instruction task to the intelligent system of the second-level leader vehicle, and the intelligent system of the second-level leader vehicle issues an instruction task to the intelligent system of the follower vehicle, so that each vehicle executes a corresponding regional task.
When the computer terminal issues regional work tasks to the intelligent systems of the vehicles, the intelligent system of the primary leader vehicle issues the work tasks to all the vehicles according to the verticalized control mode, so that after the tasks are transferred layer by layer, the vehicles can execute the regional tasks. When regional tasks are executed, the vehicle intelligent systems are automatically combined and divided into combination A, combination B, and so on; as shown in FIG. 5, each combination (A, B, C, ...) adopts flat control: the intelligent systems of any vehicles within one combination can communicate with each other and send information such as environment and obstacle data. For example, when a regional patrol task is executed and the intelligent system of any vehicle in a combination senses an obstacle through its radar/camera, the obstacle information is sent to the system Agent, and the system Agent, after processing the obstacle coordinate position information, vehicle parameter information, environment and other information, controls the outputs of the four motors of the vehicle respectively. At the same time, this vehicle intelligent system sends the obstacle information to the other vehicle intelligent systems in the combination, so that they know the obstacle information in advance; the adjacent vehicle intelligent systems can thus make corresponding actions in advance and detect whether the obstacle is still in its original position. If it is still in the original position, the obstacle information is not updated; otherwise, the updated obstacle information is sent to the other vehicle intelligent systems, so that they know the latest obstacle information in advance.
Meanwhile, when the vehicles execute a material transport task in the area, the intelligent system of any vehicle in any combination can obtain the position information and task progress of the other vehicles in the combination, and plan ahead according to the speed, steering and other information of those vehicles, thereby shortening the time cost. When the vehicles are controlled in formation, the leading intelligent system also sends information such as encountered obstacles to the following intelligent systems in advance; a following intelligent system makes a plan in advance and detects whether the obstacle is still in its original position. If it is, the obstacle coordinates are not updated; otherwise, the obstacle coordinates are updated and sent to the intelligent systems following it.
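The obstacle re-check rule described above (update and re-broadcast only if the obstacle has moved) can be sketched as a small predicate. The tuple representation and the 0.5 m tolerance are assumptions for the sketch, not values from the patent.

```python
# Re-check a shared obstacle: broadcast an update only if it moved.

def recheck_obstacle(shared_pos, observed_pos, tol=0.5):
    """Return (needs_update, position_to_share) for a re-observed obstacle."""
    dx = observed_pos[0] - shared_pos[0]
    dy = observed_pos[1] - shared_pos[1]
    if dx * dx + dy * dy <= tol * tol:
        return False, shared_pos       # still in the original position: no update
    return True, observed_pos          # moved: share the new position
```

Gating updates this way keeps the intra-combination broadcast traffic proportional to actual changes in the environment rather than to the re-observation rate.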
Taking the regional task as a formation task of multiple vehicles as an example, as shown in fig. 6, the control terminal issues a formation task to the intelligent system of the first-level leader vehicle; the intelligent system of the first-level leader vehicle receives the message, makes the corresponding formation action, and simultaneously issues the formation task to the intelligent systems of the second-level leader vehicles; the intelligent systems of the second-level leader vehicles receive the formation task, make the formation actions that determine the formation position sequence of each second-level leader vehicle, and simultaneously issue the formation task to the intelligent systems of the following vehicles; and the intelligent systems of the following vehicles receive the formation task and determine the formation position sequence of each following vehicle. As shown in fig. 7, formation is completed after the position sequence of each vehicle is determined. The intelligent system of the first-level leader vehicle completes a preliminary path plan according to the position information of the target point, the environment and the obstacles; when the first-level leader vehicle starts, it sends a start signal to the intelligent systems of the second-level leader vehicles, which in turn send start signals to the intelligent systems of the following vehicles; the second-level leader vehicles and the following vehicles then start in the formation order, each following vehicle keeping its distance from the vehicle in front and passing information backwards, while the power output by each motor is constantly adjusted according to the data transmitted by the perception fusion layer. The vehicles are connected in communication through their wireless communication Agents: after the first-level leader vehicle acquires the position coordinate information of an obstacle, it is transmitted in sequence through the second-level leader vehicles to the following vehicles, which make the corresponding actions and update their path plans. Since the second-level leader vehicles and the following vehicles know the obstacle information in advance, they can act in advance and detect whether the obstacle is still in its original position; if it is, the path plan is not updated, otherwise the path plan is recomputed, until all vehicles reach the destination and the formation task ends.
In this embodiment, the steps of spacing and formation of the primary leader vehicle, the secondary leader vehicle, and the following vehicles are as follows:
A directed graph G = (V, E) is used to represent the network of the intelligent systems of the multiple vehicles. The adjacency matrix A = [a_ij] of the directed graph G describes the information exchange between the nodes of the graph, and is defined as:

a_ij = 1 if (v_j, v_i) ∈ E; a_ij = 0 otherwise

in the formula: V represents the vertex set of the directed graph, and the edge set E represents the interactions between the intelligent systems;
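Building A = [a_ij] from the edge set is mechanical: a_ij = 1 exactly when vehicle i can receive information from vehicle j, i.e. when (v_j, v_i) ∈ E. The 3-vehicle chain topology below is an invented example.

```python
# Adjacency matrix of the directed communication graph from its edge set.

def adjacency(n, edges):
    """n vehicles; edges is an iterable of (j, i) meaning vehicle i hears vehicle j."""
    A = [[0] * n for _ in range(n)]
    for j, i in edges:
        A[i][j] = 1
    return A
```

For a chain where vehicle 1 hears vehicle 0 and vehicle 2 hears vehicle 1, `adjacency(3, [(0, 1), (1, 2)])` gives a lower-shift matrix, matching the layer-by-layer leader/follower information flow described earlier.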
the dynamic model of the second-order system is described as:

ẋ_i(t) = v_i(t)
v̇_i(t) = u_i(t)

in the formula: x_i(t), v_i(t) and u_i(t) respectively represent the position, velocity and acceleration of the i-th intelligent system; u_i(t) is the control algorithm; ẋ_i(t) is the first derivative of the i-th intelligent system position, and v̇_i(t) is the first derivative of the i-th intelligent system velocity;
for the second-order system, the control algorithm is as follows:

u_i = Σ_{j∈N_i} a_ij [ k_1 (x_j − x_i) + k_2 (v_j − v_i) ]

wherein k_1 and k_2 are the control gains, a_ij are the elements of the adjacency matrix, and N_i denotes the set of neighbor intelligent systems from which intelligent system i can obtain information;
the consistency of a second order system is a more general and more practical control problem for multiple intelligent systems. The second-order system model adopts the acceleration as the control input quantity, and not only considers the influence among the individual position states, but also considers the action relation among the speed states. By control algorithms, for arbitrary
Figure 582305DEST_PATH_IMAGE044
And
Figure 944016DEST_PATH_IMAGE045
when is coming into contact with
Figure 186778DEST_PATH_IMAGE046
Figure 848704DEST_PATH_IMAGE047
Figure 518720DEST_PATH_IMAGE048
Then, then
Figure 633306DEST_PATH_IMAGE049
And is provided with
Figure 148601DEST_PATH_IMAGE050
The network convergence of the multiple intelligent systems is consistent;
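The consistency condition can be exercised numerically: Euler-integrating the second-order consensus law for two mutually connected vehicles drives their positions and velocities together. The gains, step size and initial states below are illustrative choices, not values from the patent.

```python
# Euler simulation of second-order consensus for two coupled vehicles.

def simulate(steps=20000, dt=0.005, k1=1.0, k2=2.0):
    x = [0.0, 10.0]        # initial positions
    v = [1.0, -1.0]        # initial velocities
    a = [[0, 1], [1, 0]]   # adjacency: the two vehicles hear each other
    for _ in range(steps):
        # u_i = sum_j a_ij [k1 (x_j - x_i) + k2 (v_j - v_i)]
        u = [sum(a[i][j] * (k1 * (x[j] - x[i]) + k2 * (v[j] - v[i]))
                 for j in range(2)) for i in range(2)]
        x = [x[i] + dt * v[i] for i in range(2)]
        v = [v[i] + dt * u[i] for i in range(2)]
    return x, v
```

After 100 simulated seconds the position and velocity differences are negligible, i.e. the pair has reached consensus in both states, as the condition above requires.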
In the formation control problem of a multi-intelligent system, the concepts of a head vehicle and following vehicles are often involved, similar to the relationship between the leader and the followers in multi-agent research. The head vehicle (or leader) has a special status among the vehicles, and its motion state is not affected by the following vehicles. By constantly updating the controller during driving, the speed and displacement of each following vehicle track, as closely as possible, the reference speed and displacement of the head vehicle. Thus, the intelligent vehicle formation problem can be regarded as a leader-following consistency problem in a multi-intelligent system.
Based on the consistency of the second-order system, the spacing and formation of any intelligent system i and intelligent system j can be defined as:

u_i = Σ_{j∈N_i} a_ij [ k_1 ((x_j − x_i) − d_ij) + k_2 (v_j − v_i) ] + b_i [ k_1 ((x_0 − x_i) − d_i0) + k_2 (v_0 − v_i) ]

in the formula: a_ij are the elements of the adjacency matrix; b_i are the elements of the connection matrix, describing whether the intelligent system of a secondary leader vehicle acquires the information of the intelligent system of the primary leader vehicle; d_ij is the desired longitudinal distance between intelligent system i and intelligent system j; d_i0 is the desired longitudinal distance between the secondary leader intelligent system i and the primary leader intelligent system 0; and k_1 and k_2 are the control gains, respectively.
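The leader term of the spacing law can be exercised in isolation: a secondary leader tracks the primary leader at a desired longitudinal gap d under u = k_1(x_0 − x − d) + k_2(v_0 − v). The leader cruising at constant speed and all numeric values are illustrative assumptions.

```python
# One follower tracking a constant-speed leader at desired gap d (Euler steps).

def follow(steps=20000, dt=0.005, k1=1.0, k2=2.0, d=5.0):
    x0, v0 = 0.0, 1.0       # primary leader: constant-speed cruise
    x, v = -20.0, 0.0       # secondary leader starts 20 m behind, at rest
    for _ in range(steps):
        u = k1 * (x0 - x - d) + k2 * (v0 - v)  # spacing control law
        x0 += dt * v0
        x += dt * v
        v += dt * u
    return x0 - x            # achieved longitudinal gap
```

The gap error obeys a damped second-order dynamic (ë + k_2·ė + k_1·e = 0 for constant leader speed), so the achieved gap converges to d; the simulation settles within a centimeter of the 5 m target.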
Further, as shown in fig. 8, when the vehicles execute a regional task, the intelligent system of any vehicle receives the information of the intelligent systems of the other vehicles and generates an expected speed according to the task and that information; after the data transmitted by the vehicle's multi-sensor information feature fusion Agent are fused, the vehicle advances at the specified speed. Meanwhile, the intelligent system calculates the distance between the vehicle and the vehicle ahead from the data collected by the millimeter wave radar Agent (together with the IMU inertial sensor); if the distance to the vehicle ahead exceeds the threshold range, the intelligent system changes the speed of the vehicle by adjusting the rotating speed of each motor.
Further, as shown in fig. 9, when the vehicles execute a regional task, the intelligent system of any vehicle receives the information of the intelligent systems of the other vehicles and generates a reference track and/or expected heading angle according to the task and that information; after the data transmitted by the vehicle's multi-sensor information feature fusion Agent are fused, the vehicle advances along its actual track and/or actual heading angle. The intelligent system calculates, from the data acquired by the millimeter wave radar Agent, the deviation of the actual track and/or actual heading angle from the reference track and/or expected heading angle; if the deviation exceeds the threshold range, the intelligent system of the vehicle improves the track and/or heading angle through reinforcement learning, and if the number of training iterations does not yet meet the requirement, the reinforcement learning is continued.
In this embodiment, the control terminal and each vehicle intelligent system communicate through a local area network based on the TCP/IP protocol; the control terminal connects to the IP address of each vehicle intelligent system's server, realizing data communication between client and server. The transmitted data include detected obstacle positions, heading angles, speeds, path planning tracks and the like.
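The terminal-to-vehicle link described above can be sketched with nothing but the standard library: a JSON task message sent over a TCP socket on the local network. The host, port handling and message fields here are illustrative assumptions, not the patent's protocol.

```python
# Minimal TCP/IP task hand-off: control terminal (client) -> vehicle (server).
import json
import socket
import threading

def vehicle_server(srv, received):
    """Vehicle side: accept one connection and record the JSON task it carries."""
    conn, _ = srv.accept()
    with conn:
        received.append(json.loads(conn.recv(4096).decode()))

def send_task(port, message):
    """Control-terminal side: push one JSON task to a vehicle server."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(json.dumps(message).encode())

# Vehicle side listens on an ephemeral local port before the client connects.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
received = []
t = threading.Thread(target=vehicle_server, args=(srv, received))
t.start()
send_task(srv.getsockname()[1], {"task": "patrol", "obstacle": [12.5, 3.0], "speed": 1.5})
t.join()
srv.close()
```

Binding before starting the accept thread avoids the connect/listen race; in a deployment each vehicle would run a persistent server loop and the terminal would keep one connection per vehicle.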
In this embodiment, as shown in fig. 10, when the vehicles execute a regional task, the required ROS nodes/packages include:
1. Vehicle intelligent system chassis control package (movebase_control): plans a path according to the reference message so that the mobile robot reaches the designated position.
2. Vehicle intelligent system 2D/3D mapping package (mapping): builds a map from the point cloud after lidar deep learning and the image output by camera deep learning.
3. Vehicle intelligent system localization package (amcl): the intelligent system localizes itself against an existing map.
4. Global path planning node (global_planer): the intelligent system plans the overall path according to a given target position.
5. Local path planning node (local_plane): the intelligent system avoids nearby obstacles and completes the route planning.
6. tf transform node (sensors_transform): when executing a regional task, the intelligent system must publish the transform relations of each relevant reference frame in the form of a tf tree.
7. Sensor information fusion package (sensors_sourcefusion): when the intelligent system executes regional tasks, the sensor information of the robot must be acquired to achieve real-time obstacle avoidance. The sensor nodes (laser radar, millimeter wave radar, camera and the like) must be able to publish messages in formats such as sensor_msgs/LaserScan, sensor_msgs/PointCloud and sensor_msgs/Image through ROS, i.e., two-dimensional radar information, three-dimensional point cloud data, image data and the like; these messages are fused and then sent to the package executing the regional task.
8. Odometry information node (odometry_source): when the intelligent system executes regional tasks, the robot must publish odometry information in the nav_msgs/Odometry format, and at the same time publish the corresponding tf transform from odom to base_link. The odometry contains two kinds of information: pose on the one hand and velocity on the other.
9. Vehicle intelligent system control node (robot_control): the intelligent system publishes geometry_msgs/Twist messages on cmd_vel; the message is expressed in the base coordinate system of the intelligent system and carries motion commands. This means there must be a node subscribing to the cmd_vel topic that converts the speed commands on that topic into motor commands for the mobile base.
10. Map node (map_server): provides the environment map when the intelligent system performs regional tasks.
11. Intelligent system communication node (robot_communication): the intelligent system receives/sends messages from/to other intelligent systems, such as tasks, obstacles, etc.
In conclusion, the invention can adopt different control strategies according to different conditions, so that the automatic driving vehicle can be applied to places such as mines, port transportation, factory parks, warehouse patrol and the like, and the effects of saving cost and improving working efficiency are achieved.

Claims (10)

1. An intelligent system for autonomous vehicles, characterized by: the system comprises a perception fusion layer, a decision layer, a coordination layer and an execution layer; the sensing fusion layer comprises a multi-sensor information characteristic fusion Agent, and the multi-sensor information characteristic fusion Agent is connected with a GPS positioning Agent, a camera Agent, a millimeter wave radar Agent and a laser radar Agent; the decision layer comprises a system Agent which is connected with a wireless communication Agent; the coordination layer comprises four motor agents; the execution layer comprises an MCU (microprogrammed control unit) correspondingly connected with the corresponding motor Agent, the MCU is connected with a motor, and the output end of the motor is connected with a wheel; the sensing fusion layer is used for sensing environmental information data around the vehicle and position information data of the vehicle, and the multi-sensor information characteristic fusion Agent is used for fusing and outputting the data; the system Agent is used for decomposing and sequentially optimizing the work tasks sent by the wireless communication Agent, receiving the data sent by the perception fusion layer, comprehensively processing the data with the data of the vehicle and sending a work instruction; the coordination layer is used for ensuring the information interaction and cooperation relationship between the system layer and the execution layer; the execution layer is used for acquiring real-time working condition information, executing a working instruction issued by the system layer and ensuring power output.
2. The intelligent system for autonomous vehicles according to claim 1, wherein: the wireless communication Agent receives the working instruction and sends the working instruction to the system Agent, then the system Agent decomposes and optimizes the working instruction, the GPS positioning Agent, the camera Agent, the millimeter wave radar Agent and the laser radar Agent sense environmental information data and vehicle information data, the multi-sensor information characteristic fusion Agent fuses the data, the system Agent carries out comprehensive processing, then the instruction is issued and output to the coordination layer and the execution layer, and finally the coordination layer and the execution layer receive the instruction and output the instruction to the vehicle; and meanwhile, the system Agent records the action data, the state data and the environment information data of the vehicle, performs deep reinforcement learning on the system Agent by using the recorded data, and issues instructions by using the system Agent after the deep reinforcement learning.
3. The intelligent system for autonomous vehicles according to claim 2, characterized in that: the process of fusing and outputting the data by the multi-sensor information characteristic fusion Agent comprises the steps of firstly carrying out combined calibration on a GPS positioning Agent, a camera Agent, a millimeter wave radar Agent and a laser radar Agent, then carrying out time and space synchronization, then carrying out data association and fusion on vehicle position and attitude information acquired by the GPS positioning Agent, obstacle position coordinate information acquired by the camera Agent, vehicle distance, vehicle speed and angle information acquired by the millimeter wave radar Agent and a cloud point map acquired by the laser radar Agent, and further carrying out target judgment.
4. The intelligent system for autonomous vehicles as claimed in claim 3, wherein: the calibration of the camera Agent converts the camera coordinate system in which the camera is located into the pixel coordinate system:

Z_c · [u, v, 1]^T = K · [X_c, Y_c, Z_c]^T

wherein: X_c, Y_c and Z_c respectively represent the coordinate values of the camera coordinate system, and u and v represent the coordinate values of the pixel coordinate system; K is the internal reference matrix of the camera, defined by the following formula:

K = [ f_x, 0, c_x ; 0, f_y, c_y ; 0, 0, 1 ]

in the formula: f_x and f_y are the focal lengths of the camera x axis and y axis, respectively; c_x and c_y are the optical centers of the u axis and v axis of the image coordinate system, respectively;

the coordinates are corrected using the radial distortion parameters:

x_corrected = x · (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_corrected = y · (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

in the formula: x_corrected and y_corrected are the abscissa and ordinate after radial distortion correction; x and y are the distorted abscissa and ordinate; k_1, k_2 and k_3 are the radial distortion coefficients of the camera; r is the distance of the point from the imaging center; p_1 and p_2 are the tangential distortion coefficients of the camera;

the coordinates are corrected using the tangential distortion parameters:

x_corrected = x + 2 p_1 x y + p_2 (r^2 + 2 x^2)
y_corrected = y + p_1 (r^2 + 2 y^2) + 2 p_2 x y

in the formula: x_corrected and y_corrected are the abscissa and ordinate after tangential distortion correction;

the calibration of the laser radar Agent and the millimeter wave radar Agent is divided into intrinsic (internal reference) calibration and extrinsic (external reference) calibration; the intrinsic parameters to be calibrated are the distance correction, rotation correction angle, vertical correction angle and horizontal offset factor; the extrinsic calibration establishes, through calibration, the relationship between the sensor and the world coordinate system or the coordinate systems of other sensors.
5. The intelligent system for autonomous vehicles as claimed in claim 3, wherein: the time synchronization comprises hardware synchronization and software synchronization; the hardware synchronization refers to that multiple sensors trigger sampling at the same time; the software synchronization means that reference time is provided for each sensor through a unified upper computer, so that each frame of data of the sensors is synchronized to a unified timestamp.
6. The integrated control method of an intelligent system for autonomous vehicles according to any of claims 1 to 5, characterized in that: the method comprises the steps that a plurality of automatic driving vehicles with intelligent systems are adopted, vertical control is adopted among the vehicles, a control terminal issues an instruction task to the intelligent system of a first-level leader vehicle, then the intelligent system of the first-level leader vehicle issues an instruction task to the intelligent system of a second-level leader vehicle, and finally the intelligent system of the second-level leader vehicle issues an instruction task to the intelligent system of a follower vehicle, so that each vehicle can execute a corresponding regional task.
7. The integrated control method of an intelligent system for autonomous vehicles according to claim 6, characterized in that: the regional tasks comprise a formation task for multiple vehicles; the control terminal issues the formation task to the intelligent system of the primary leader vehicle, which receives the message, performs the corresponding formation action, and simultaneously issues the formation task to the intelligent systems of the secondary leader vehicles; the intelligent system of each secondary leader vehicle receives the formation task and performs a formation action that determines the formation position sequence of the secondary leader vehicles, then issues the formation task to the intelligent systems of the following vehicles, which receive it and determine the formation position sequence of the following vehicles; the formation is established once the position sequence of every vehicle is determined; the intelligent system of the primary leader vehicle completes the initial path planning according to the position information of the target point, the environment and the obstacles; when the primary leader vehicle starts, it sends a start signal to the intelligent systems of the secondary leader vehicles, which forward the start signal to the intelligent systems of the following vehicles, whereupon the secondary leader vehicles and the following vehicles start in formation order; each following vehicle keeps its distance from the vehicle ahead and relays information backwards, while the power output of each motor is continuously adjusted according to the data transmitted by the perception fusion layer; the vehicles are communicatively connected through their wireless communication Agents; after the primary leader vehicle acquires the position coordinates of an obstacle, the information is relayed via the secondary leader vehicles to the following vehicles in sequence, the corresponding avoidance actions are performed, and the path planning is updated; because the intelligent systems of the secondary leader vehicles and the following vehicles know the obstacle information in advance, they act in advance and detect whether the obstacle is still in its original position: if it is, the path planning is not updated again, otherwise the path planning is re-updated; the formation ends when all vehicles reach the destination.
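The hierarchical relay described in claim 7 — control terminal to primary leader, primary leader to secondary leaders, secondary leaders to followers, with position sequences assigned on the way down — can be sketched as a small message-passing tree. The class and method names below are illustrative assumptions, not the patent's API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VehicleAgent:
    """One vehicle's intelligent system in the leader/follower hierarchy."""
    name: str
    subordinates: List["VehicleAgent"] = field(default_factory=list)
    slot: int = -1                          # assigned formation position sequence
    log: List[str] = field(default_factory=list)

    def receive_formation_task(self, task: dict, next_slot: int = 0) -> int:
        """Accept the task, take the next free position slot, then relay downward."""
        self.slot = next_slot
        self.log.append(f"task {task['id']}: slot {self.slot}")
        slot = next_slot + 1
        for v in self.subordinates:
            slot = v.receive_formation_task(task, slot)
        return slot                          # next unassigned slot

    def broadcast_obstacle(self, obstacle: tuple) -> None:
        """Relay obstacle coordinates down the hierarchy so followers react early."""
        self.log.append(f"obstacle at {obstacle}")
        for v in self.subordinates:
            v.broadcast_obstacle(obstacle)

# Primary leader -> one secondary leader -> two following vehicles.
followers = [VehicleAgent("follower-1"), VehicleAgent("follower-2")]
secondary = VehicleAgent("secondary-leader", subordinates=followers)
primary = VehicleAgent("primary-leader", subordinates=[secondary])

total = primary.receive_formation_task({"id": "F1"})   # assigns slots 0..3
primary.broadcast_obstacle((12.5, 3.0))                # relayed to every vehicle
```

A depth-first relay like this reproduces the claim's ordering: the primary leader always acts first, and every follower's position sequence is fixed before the formation starts moving.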
8. The integrated control method for an intelligent system of an autonomous vehicle according to claim 7, characterized in that: the steps of spacing and formation of the primary leader vehicle, the secondary leader vehicle and the following vehicles are as follows:
a directed graph $G = (V, E)$ represents the network of the intelligent systems of the multiple vehicles; the adjacency matrix $A = [a_{ij}]$ of the directed graph $G$ describes the information exchange between the nodes of the graph, and is defined as:

$$a_{ij} = \begin{cases} 1, & (j, i) \in E \\ 0, & \text{otherwise} \end{cases}$$

in the formula: $V$ represents the set of vertices of the directed graph and $E$ its set of edges; an edge $(j, i) \in E$ represents the interaction through which intelligent system $i$ obtains information from intelligent system $j$;

the dynamic model of the second-order system is described as:

$$\dot{x}_i = v_i, \qquad \dot{v}_i = u_i$$

in the formula: $x_i$, $v_i$ and $u_i$ respectively represent the position, velocity and control input (acceleration) of the $i$-th intelligent system; $\dot{x}_i$ is the first derivative of the $i$-th intelligent system's position, and $\dot{v}_i$ is the first derivative of the $i$-th intelligent system's velocity;

for the second-order system, the control algorithm is:

$$u_i = \sum_{j \in N_i} a_{ij}\left[(x_j - x_i) + \gamma\,(v_j - v_i)\right]$$

wherein $a_{ij}$ are the adjacency matrix elements, $\gamma > 0$ is the control gain, and $N_i$ represents the set of neighbor intelligent systems from which intelligent system $i$ can obtain information;

under this control algorithm, for arbitrary initial states $x_i(0)$ and $v_i(0)$, when $t \to \infty$, $\lVert x_i - x_j \rVert \to 0$ and $\lVert v_i - v_j \rVert \to 0$, i.e. the network of the multiple intelligent systems converges to consensus;

by considering an arbitrary intelligent system $i$ and an intelligent system $j$, the spacing and formation control law can be defined as:

$$u_i = k_1 \sum_{j \in N_i} a_{ij}\left[(x_j - x_i - d_{ij}) + \gamma\,(v_j - v_i)\right] + k_2\, b_i \left[(x_0 - x_i - d_{i0}) + \gamma\,(v_0 - v_i)\right]$$

in the formula: $a_{ij}$ are the adjacency matrix elements; $b_i$ are the elements of the connection matrix, describing whether the intelligent system of a secondary leader vehicle obtains information from the intelligent system of the primary leader vehicle; $d_{ij}$ is the desired longitudinal distance between intelligent system $i$ and intelligent system $j$; $d_{i0}$ is the desired longitudinal distance between the intelligent system $i$ of a secondary leader vehicle and the intelligent system $0$ of the primary leader vehicle; $k_1$ and $k_2$ are respectively the control gains.
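The spacing and consensus behaviour claim 8 describes can be checked numerically. The sketch below is a simplified illustration under assumed gains and geometry (not the patent's implementation): a three-vehicle string where vehicle 0 leads, vehicle 1 listens to 0, and vehicle 2 listens to 1, integrated under a second-order consensus law with desired longitudinal offsets:

```python
import numpy as np

# Adjacency matrix: row i lists which vehicles i obtains information from.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
d = np.array([0., -10., -20.])    # desired positions relative to the leader (m)
gamma, dt = 1.5, 0.01             # velocity-coupling gain and Euler step (s)

x = np.array([0., -30., -70.])    # initial positions (m)
v = np.array([15., 10., 5.])      # initial speeds (m/s)

# Integrate xdot = v, vdot = u with
# u_i = sum_j a_ij [ (x_j - x_i - d_ij) + gamma (v_j - v_i) ].
for _ in range(20000):            # 200 s of simulated time
    u = np.zeros(3)
    for i in range(3):
        for j in range(3):
            u[i] += A[i, j] * ((x[j] - x[i]) - (d[j] - d[i])
                               + gamma * (v[j] - v[i]))
    x, v = x + dt * v, v + dt * u

gaps = x[:-1] - x[1:]             # converges toward the 10 m desired spacing
```

With these (assumed) values the characteristic polynomial of each spacing error is $s^2 + \gamma s + 1$, which is stable, so both gaps settle at the desired 10 m and all three vehicles converge to the leader's 15 m/s.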
9. The integrated control method for an intelligent system of an autonomous vehicle according to claim 6, characterized in that: when the vehicles execute a regional task, the intelligent system of any vehicle receives the information of the intelligent systems of the other vehicles and generates an expected speed according to the task and that information; after the data transmitted by the Agents are fused with the vehicle's own multi-sensor information features, the vehicle advances at the specified speed; meanwhile, the intelligent system acquires data from the millimeter-wave radar Agent and calculates the distance between the vehicle and the vehicle ahead, and if that distance exceeds the threshold range, the intelligent system changes the vehicle's speed by adjusting the rotating speed of each motor.
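The gap check in claim 9 amounts to comparing the radar-measured headway against a threshold band and nudging the commanded motor speed accordingly. The function below is an illustrative sketch; the band limits, gain, and names are assumptions, not values from the patent:

```python
def adjust_speed(expected_speed: float, gap_m: float,
                 gap_min: float = 8.0, gap_max: float = 15.0,
                 k: float = 0.4) -> float:
    """Return the corrected speed command (m/s) for the wheel motors,
    given the radar-measured gap to the vehicle ahead (m)."""
    if gap_m < gap_min:                      # too close: slow down
        return max(0.0, expected_speed - k * (gap_min - gap_m))
    if gap_m > gap_max:                      # too far: speed up
        return expected_speed + k * (gap_m - gap_max)
    return expected_speed                    # inside the threshold band: hold

cmd_close = adjust_speed(10.0, gap_m=5.0)    # below the band -> reduced command
cmd_ok = adjust_speed(10.0, gap_m=12.0)      # inside the band -> unchanged
cmd_far = adjust_speed(10.0, gap_m=20.0)     # above the band -> increased command
```

A dead band like this avoids constant motor-speed corrections when the headway is only slightly off, which matters when each wheel motor is adjusted individually.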
10. The integrated control method for an intelligent system of an autonomous vehicle according to claim 6, characterized in that: when the vehicles execute a regional task, the intelligent system of any vehicle receives the information of the intelligent systems of the other vehicles and generates a reference trajectory and/or an expected heading angle according to the task and that information; combining the vehicle's own multi-sensor information features with the fused data transmitted by the Agents, the vehicle advances along the actual trajectory and/or expected heading angle; the intelligent system acquires data from the millimeter-wave radar Agent and calculates the deviation between the vehicle's actual trajectory and/or heading angle and the reference trajectory and/or expected heading angle; if the deviation exceeds the threshold range, the intelligent system of the vehicle improves the trajectory and/or heading angle through deep reinforcement learning, and if the number of training iterations does not meet the requirement, the intelligent system further refines the deep reinforcement learning.
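The supervision loop in claim 10 can be reduced to a deviation check that decides when the tracking policy needs refinement. The sketch below is hypothetical (the patent performs the refinement with deep reinforcement learning, which is not reproduced here); the threshold value and function names are assumptions:

```python
import math

def max_deviation(actual, reference):
    """Largest point-wise Euclidean distance between two sampled tracks,
    each given as a list of (x, y) points at matching sample times."""
    return max(math.dist(a, r) for a, r in zip(actual, reference))

def needs_refinement(actual, reference, threshold: float = 0.5) -> bool:
    """True when tracking error exceeds the threshold range, i.e. when the
    trajectory/heading policy should be improved (e.g. by further training)."""
    return max_deviation(actual, reference) > threshold

reference = [(t * 1.0, 0.0) for t in range(10)]   # straight reference track
good = [(t * 1.0, 0.1) for t in range(10)]        # 0.1 m lateral error: in band
bad = [(t * 1.0, 0.8) for t in range(10)]         # 0.8 m error: exceeds 0.5 m
```

The same predicate works for heading angles by substituting angular differences for Euclidean distances.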
CN202210612201.1A 2022-06-01 2022-06-01 Intelligent system for automatically driving vehicle and integrated control method thereof Active CN114684202B (en)


Publications (2)

Publication Number Publication Date
CN114684202A true CN114684202A (en) 2022-07-01
CN114684202B CN114684202B (en) 2023-03-10

Family

ID=82131030


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115410399A (en) * 2022-08-09 2022-11-29 北京科技大学 Truck parking method and device and electronic equipment
CN115840234A (en) * 2022-10-28 2023-03-24 苏州知至科技有限公司 Radar data acquisition method and device and storage medium
CN117048365A (en) * 2023-10-12 2023-11-14 江西五十铃汽车有限公司 Automobile torque control method, system, storage medium and equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108445885A (en) * 2018-04-20 2018-08-24 鹤山东风新能源科技有限公司 A kind of automated driving system and its control method based on pure electric vehicle logistic car
CN109146980A (en) * 2018-08-12 2019-01-04 浙江农林大学 The depth extraction and passive ranging method of optimization based on monocular vision
CN109857115A (en) * 2019-02-27 2019-06-07 华南理工大学 A kind of finite time formation control method of the mobile robot of view-based access control model feedback
CN110162065A (en) * 2019-06-18 2019-08-23 东北大学 It is a kind of based on the oriented adaptive multiple agent formation control method followed
CN110286694A (en) * 2019-08-05 2019-09-27 重庆邮电大学 A kind of unmanned plane formation cooperative control method of more leaders
CN111245953A (en) * 2020-02-26 2020-06-05 洛阳智能农业装备研究院有限公司 Intelligent networking system of unmanned electric tractor and cluster driving method
US20200264634A1 (en) * 2019-02-15 2020-08-20 DRiV Automotive Inc. Autonomous vehicle platooning system and method
CN111766879A (en) * 2020-06-24 2020-10-13 天津大学 Intelligent vehicle formation system based on autonomous collaborative navigation
CN112445229A (en) * 2020-11-04 2021-03-05 清华大学 Single-lane multi-queue hierarchical control method for piloting motorcade cooperation
CN113282083A (en) * 2021-05-17 2021-08-20 北京航空航天大学 Unmanned vehicle formation experiment platform based on robot operating system
CN113359752A (en) * 2021-06-24 2021-09-07 中煤科工开采研究院有限公司 Automatic driving method for underground coal mine skip car


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Kezhi (李科志): "Research on Feedback-Based Intelligent Vehicle Formation Control in an Internet-of-Vehicles Environment", China Master's Theses Full-Text Database, Engineering Science and Technology II *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant