CN116373902A - Method and device for predicting and controlling environmental information of automatic driving vehicle and vehicle


Info

Publication number
CN116373902A
Authority
CN
China
Prior art keywords
vehicle
automatic driving
lane
driving vehicle
acquiring
Prior art date
Legal status
Pending
Application number
CN202310311949.2A
Other languages
Chinese (zh)
Inventor
叶光湖
马骁
Current Assignee
Weilai Automobile Technology Anhui Co Ltd
Original Assignee
Weilai Automobile Technology Anhui Co Ltd
Priority date
Filing date
Publication date
Application filed by Weilai Automobile Technology Anhui Co Ltd
Priority to CN202310311949.2A
Publication of CN116373902A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W60/0017 Planning or execution of driving tasks specially adapted for safety of other traffic participants
    • B60W60/0027 Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0019 Control system elements or transfer functions
    • B60W2050/0028 Mathematical models, e.g. for simulation
    • B60W2050/0031 Mathematical model of the vehicle
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/80 Spatial relation or speed relative to objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The invention relates to the technical field of automatic driving, and in particular to a method and device for predicting and controlling environmental information of an automatic driving vehicle, and a vehicle. It aims to solve the problem of how to construct an environment model of an automatic driving vehicle by combining various kinds of information, so that the environmental information is fully considered and the robustness of the model is improved. According to the invention, a state transition model of a Kalman filter equation is constructed according to the self-vehicle motion information of the automatic driving vehicle, and a predicted value of the positional relationship of an environmental target at the current moment is obtained from it. The predicted value is then updated by applying the Kalman filter equation based on the predicted value and the perceived measurement result at the current moment, so as to obtain a final prediction result. The perceived measurement result is obtained through the self-vehicle motion system, the vehicle-end perception device and the non-vehicle-end perception device, so that comprehensive prediction of the motion trend of environmental targets can be realized with higher robustness, and a more comprehensive, stable and reliable control input can be provided for the control process of the automatic driving vehicle.

Description

Method and device for predicting and controlling environmental information of automatic driving vehicle and vehicle
Technical Field
The invention relates to the technical field of automatic driving, and particularly provides an automatic driving vehicle environment information prediction and control method and device and a vehicle.
Background
With the development of autonomous driving technology, more and more sensors, such as radar, lidar, high-precision maps and high-precision inertial measurement units (IMU), are being installed on automatic driving vehicles. Thanks to breakthrough developments in image processing, target fusion and positioning technology, multidimensional perception information has become readily available, and making use of this abundant information provides the conditions for obtaining a more accurate and stable vehicle environment model. On the basis of such environment models, a basis is also provided for predicting the behavior of surrounding traffic participants on the road, so that an automatic driving vehicle running on a public road can be given a more stable and reliable guarantee.
The vehicle environment model plays a very important role in the behavior planning and control of the automatic driving vehicle: it can predict the running tracks of the own vehicle and surrounding vehicles, acquire road feature and obstacle information on the running track and predict their trajectories, provide input for the decision-making motion of the vehicle in the next stage, and finally support control processes such as steering control, driving control and braking control of the automatic driving vehicle.
At present, there is no good standardized form for the vehicle environment model; most models do not consider prediction of the motion states of traffic participants, and early models relied only on a single type of information for prediction. For example, vehicle body motion state information such as vehicle speed and yaw rate supports only steady-state estimation based on the current vehicle state and does not represent future vehicle motion trends well; perception information based on road attributes and the like can represent road information within a certain future time and space, but is limited by obstacles, weather and so on, so the robustness of the obtained environment model is limited. Meanwhile, the use of multiple kinds of information to build the environment model is limited by the fact that the information acquired by different types of perception is not uniform, which makes the fusion and use of multiple input information difficult.
Accordingly, there is a need in the art for a new autonomous vehicle environment model building solution to the above-described problems.
Disclosure of Invention
The present invention has been made to overcome the above-mentioned drawbacks, and an object of the present invention is to provide a method and apparatus for constructing an environment model of an autonomous vehicle by combining various information, thereby fully considering the environment information and improving the robustness of the model.
In a first aspect, the present invention provides a method of predicting environmental information of an autonomous vehicle, the method comprising:
constructing a state transition model of a Kalman filter equation according to the self-vehicle motion information of the self-driving vehicle;
acquiring a predicted value of the position relation of an environmental target in the surrounding environment of the automatic driving vehicle at the current moment according to the state transition model; the position relation is the relation between the environment target and the position of the automatic driving vehicle;
updating the predicted value based on a Kalman filtering equation according to a perceived measurement result of the position relation of the environmental target at the current moment and the predicted value obtained by the automatic driving vehicle, and obtaining a final predicted result of the position relation of the environmental target at the current moment;
the perceived measurement result is obtained through a self-vehicle motion system and/or a vehicle-end sensing device and/or a non-vehicle-end sensing device of the automatic driving vehicle.
In one technical scheme of the above method for predicting environmental information of an automatic driving vehicle, the environmental target includes a predicted track point of the automatic driving vehicle within a preset pre-aiming time, and the positional relationship includes a longitudinal distance and an orientation angle of the predicted track point relative to the automatic driving vehicle;
The method further comprises the steps of:
acquiring the vehicle speed and yaw angle of the automatic driving vehicle according to the self-vehicle motion system of the automatic driving vehicle;
acquiring a self-vehicle running radius of the self-driving vehicle according to the vehicle speed and the yaw angle;
acquiring a vehicle running track of the automatic driving vehicle according to the self-vehicle running radius;
and according to the vehicle running track, acquiring the longitudinal distance and the orientation angle of the predicted track points ahead of and behind the automatic driving vehicle within the forward and backward pre-aiming time along the vehicle running track, and taking the longitudinal distance and the orientation angle as the perceived measurement result of the predicted track points.
In one aspect of the above-described method for predicting environmental information of an autonomous vehicle, the step of acquiring an autonomous running radius of the autonomous vehicle according to the vehicle speed and the yaw angle includes:
and based on the circular motion assumption of the automatic driving vehicle, acquiring the self-vehicle running radius at each moment according to the vehicle speed and the yaw angle.
In one aspect of the above-described method for predicting environmental information of an automatically driven vehicle, the environmental target includes a road attribute, and the positional relationship includes a longitudinal distance and an orientation angle of the road attribute with respect to the automatically driven vehicle;
The method further comprises the steps of:
acquiring perception data of the road attribute according to the vehicle end perception device of the automatic driving vehicle;
discretizing the perception data according to a plurality of discrete points of the road attribute;
and according to the discrete points, acquiring longitudinal distances and orientation angles of the discrete points relative to the automatic driving vehicle as perception measurement results of the road attribute.
In one aspect of the above-described method for predicting environmental information of an autonomous vehicle, the environmental target includes a traffic participant, and the positional relationship includes a longitudinal distance and an orientation angle of the traffic participant with respect to the autonomous vehicle;
the method further comprises the steps of:
acquiring perception information of the traffic participant according to the vehicle end perception device of the automatic driving vehicle;
according to the perception information of the traffic participants, obtaining the displacement of the traffic participants in two frames of perception information of adjacent sampling time;
and according to the current position and the displacement of the automatic driving vehicle, acquiring the longitudinal distance and the orientation angle of the traffic participant relative to the automatic driving vehicle as a perception measurement result of the traffic participant.
In one aspect of the above method for predicting environmental information of an autonomous vehicle, the traffic participant includes a non-lane-changing vehicle;
the method further comprises the steps of:
acquiring the movement speed of surrounding vehicles of the automatic driving vehicle and the included angle between the orientation of each surrounding vehicle at the current moment and at the previous moment;
and taking the surrounding vehicles with the movement speed greater than the preset speed and the included angle smaller than the preset included angle as the non-lane-changing vehicles.
In one technical solution of the above-mentioned method for predicting environmental information of an automatically driven vehicle, the step of "updating the predicted value based on a kalman filter equation according to the perceived measurement result of the current moment of the automatically driven vehicle and the predicted value" includes:
according to a plurality of direction angles of the non-lane-changing vehicle, acquiring the mean value and variance of the direction angles;
according to the average value and a preset average value, determining an effective non-lane-changing vehicle;
and updating the predicted value according to the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle.
In one aspect of the above method for predicting environmental information of an automatically driven vehicle, the step of updating the predicted value according to the longitudinal distance and the heading angle of the effective non-lane-changing vehicle includes:
Taking the variance of the orientation angle of the non-lane-changing vehicle as the noise updated by the Kalman filtering equation;
and updating the predicted value according to the noise, the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle.
In one aspect of the above method for predicting environmental information of an autonomous vehicle, the environmental target includes a lane path, and the positional relationship includes a longitudinal distance and an orientation angle between the lane path and the autonomous vehicle;
the method further comprises the steps of:
acquiring a driving path of the automatic driving vehicle according to non-vehicle-end sensing equipment of the automatic driving vehicle;
according to the current position of the automatic driving vehicle and the driving path, lane path discrete points of a lane where the automatic driving vehicle is located and surrounding lanes are obtained;
and taking the longitudinal distance and the orientation angle of the lane path discrete point relative to the automatic driving vehicle as a perception measurement result of the lane path.
In one aspect of the above method for predicting environmental information of an autonomous vehicle, the step of acquiring lane path discrete points of a lane where the autonomous vehicle is currently located and a lane around according to the current position of the autonomous vehicle and the travel path includes:
When the roads are merged, acquiring lane path discrete points at the rear of the automatic driving vehicle according to the current position;
the step of "taking the longitudinal distance and the heading angle of the lane path discrete point with respect to the autonomous vehicle as the perceived measurement of the lane path" includes:
and taking the longitudinal distance and the orientation angle of the rear lane path discrete point relative to the automatic driving vehicle as a perception measurement result of the lane path.
In one aspect of the above method for predicting environmental information of an automatically driven vehicle, before the step of "obtaining the predicted value of the positional relationship of the environmental target in the surrounding environment of the automatically driven vehicle at the current time according to the state transition model", the method further includes:
initializing perception measurement information of the environmental target;
based on the initialization result, the effectiveness of the acquisition device of the perception measurement information is judged.
In one aspect of the above-described method for predicting environmental information of an automatically driven vehicle, the step of "determining the validity of the acquisition device of the perception measurement information" includes:
When the acquisition device is a vehicle-end sensing device of the automatic driving vehicle, acquiring a comprehensive confidence of the vehicle-end sensing device according to the initial confidence of the vehicle-end sensing device and the environment information;
judging the validity of the vehicle-end sensing device according to the comprehensive confidence; and/or,
when the acquisition equipment is a map matching system, the effectiveness is judged according to the communication conditions among different maps in the map matching system, whether a positioning system of the automatic driving vehicle works normally or not and whether a current road has high-precision map coverage or not.
In one technical solution of the above-mentioned environmental information prediction method of an automatic driving vehicle, before the step of "obtaining a final prediction result of the positional relationship of the environmental target at the current time by updating the prediction value based on a kalman filter equation according to the perceived measurement result of the positional relationship of the environmental target and the prediction value obtained by the automatic driving vehicle", the method includes:
obtaining the confidence coefficient of the perception measurement result;
and selectively updating the predicted value by using the perception measurement result according to the confidence level so as to obtain the final predicted result.
In one technical solution of the above method for predicting environmental information of an automatic driving vehicle, the environmental targets include a moving target and a stationary target, and the step of "updating the predicted value based on a Kalman filter equation according to the perceived measurement result of the positional relationship of the environmental targets acquired by the automatic driving vehicle and the predicted value, to obtain a final prediction result of the positional relationship of the environmental targets at the current moment" includes:
updating the predicted value according to the perception measurement result of the moving object to obtain an updated predicted value of the moving object;
and acquiring the final prediction result according to the updated prediction value of the moving target and the position relation of the position of the static target relative to the automatic driving vehicle.
In a second aspect, the present invention provides a control method of an autonomous vehicle, the method comprising:
obtaining, according to the environmental information prediction method of an automatic driving vehicle in any one of the above technical solutions, a final prediction result of an environmental target in the surrounding environment of the automatic driving vehicle;
And controlling the automatic driving vehicle according to the final prediction result.
In one aspect of the above control method for an autonomous vehicle, the step of "controlling the autonomous vehicle according to the final prediction result" includes:
encoding the environmental target according to the final prediction result;
and controlling the automatic driving vehicle according to the coding result.
In one aspect of the above control method for an autonomous vehicle, the environmental target is a surrounding vehicle of the autonomous vehicle; the step of "encoding the environmental target based on the final prediction result" includes:
acquiring the orientation angle of the surrounding vehicles according to the final prediction result of the surrounding vehicles;
and coding the surrounding vehicles according to the orientation angle and a preset included angle threshold value.
In a third aspect, a control device is provided, the control device including a processor and a storage device, the storage device being adapted to store a plurality of program codes, the program codes being adapted to be loaded and executed by the processor to perform the method for predicting environmental information of an autonomous vehicle according to any one of the above-described methods for predicting environmental information of an autonomous vehicle or the method for controlling an autonomous vehicle according to any one of the above-described methods for controlling an autonomous vehicle.
In a fourth aspect, there is provided a computer readable storage medium having stored therein a plurality of program codes adapted to be loaded and executed by a processor to perform the method of predicting environmental information of an autonomous vehicle as set forth in any one of the above-described methods of predicting environmental information of an autonomous vehicle or the method of controlling an autonomous vehicle as set forth in any one of the above-described methods of controlling an autonomous vehicle.
In a fifth aspect, a vehicle is provided, which includes the control device in the above control device technical solution.
The technical scheme provided by the invention has at least one or more of the following beneficial effects:
In the technical scheme of implementing the invention, a state transition model of a Kalman filter equation is constructed according to the self-vehicle motion information of the automatic driving vehicle; a predicted value of the positional relationship of an environmental target in the surrounding environment of the automatic driving vehicle at the current moment is obtained according to the state transition model; and the predicted value is updated by applying the Kalman filter equation based on the predicted value and the perceived measurement result of the positional relationship at the current moment, so as to obtain a final prediction result of the positional relationship of the environmental target at the current moment, wherein the perceived measurement result can be obtained through the self-vehicle motion system, the vehicle-end perception device and the non-vehicle-end perception device. With this configuration, the positional relationship between an environmental target in the surrounding environment of the automatic driving vehicle and the automatic driving vehicle is predicted based on the Kalman filter model, and the predicted value of the positional relationship of the environmental target is updated based on the multidimensional perceived measurement results of the automatic driving vehicle, so that the motion trend of environmental targets in the surrounding environment can be predicted more comprehensively and with higher robustness. Meanwhile, the automatic driving vehicle can be controlled based on the final prediction result of the environmental information, and a more comprehensive, stable and reliable control input can be provided for the control process of the automatic driving vehicle, thereby realizing more effective control.
Aspect 1. An environmental information prediction method for an automatic driving vehicle, the method comprising:
constructing a state transition model of a Kalman filter equation according to the self-vehicle motion information of the self-driving vehicle;
acquiring a predicted value of the position relation of an environmental target in the surrounding environment of the automatic driving vehicle at the current moment according to the state transition model; the position relation is the relation between the environment target and the position of the automatic driving vehicle;
updating the predicted value based on a Kalman filtering equation according to a perceived measurement result of the position relation of the environmental target at the current moment and the predicted value obtained by the automatic driving vehicle, and obtaining a final predicted result of the position relation of the environmental target at the current moment;
the perceived measurement result is obtained through a self-vehicle motion system and/or a vehicle-end sensing device and/or a non-vehicle-end sensing device of the automatic driving vehicle.
Aspect 2. The environmental information prediction method of an automatic driving vehicle according to Aspect 1, wherein the environmental target includes a predicted track point of the automatic driving vehicle within a preset pre-aiming time, and the positional relationship includes a longitudinal distance and an orientation angle of the predicted track point relative to the automatic driving vehicle;
The method further comprises the steps of:
acquiring the vehicle speed and yaw angle of the automatic driving vehicle according to the self-vehicle motion system of the automatic driving vehicle;
acquiring a self-vehicle running radius of the self-driving vehicle according to the vehicle speed and the yaw angle;
acquiring a vehicle running track of the automatic driving vehicle according to the self-vehicle running radius;
and according to the vehicle running track, acquiring the longitudinal distance and the orientation angle of the predicted track points ahead of and behind the automatic driving vehicle within the forward and backward pre-aiming time along the vehicle running track, and taking the longitudinal distance and the orientation angle as the perceived measurement result of the predicted track points.
Aspect 3. The environmental information prediction method of an automatic driving vehicle according to Aspect 2, wherein the step of "acquiring the self-vehicle running radius of the automatic driving vehicle according to the vehicle speed and the yaw angle" includes:
and based on the circular motion assumption of the automatic driving vehicle, acquiring the self-vehicle running radius at each moment according to the vehicle speed and the yaw angle.
Aspect 4. The environmental information prediction method of an automatic driving vehicle according to Aspect 1, wherein the environmental target includes a road attribute, and the positional relationship includes a longitudinal distance and an orientation angle of the road attribute with respect to the automatic driving vehicle;
The method further comprises the steps of:
acquiring perception data of the road attribute according to the vehicle end perception device of the automatic driving vehicle;
discretizing the perception data according to a plurality of discrete points of the road attribute;
and according to the discrete points, acquiring longitudinal distances and orientation angles of the discrete points relative to the automatic driving vehicle as perception measurement results of the road attribute.
Aspect 5. The environmental information prediction method of an automatic driving vehicle according to Aspect 1, wherein the environmental target includes a traffic participant, and the positional relationship includes a longitudinal distance and an orientation angle of the traffic participant with respect to the automatic driving vehicle;
the method further comprises the steps of:
acquiring perception information of the traffic participant according to the vehicle end perception device of the automatic driving vehicle;
according to the perception information of the traffic participants, obtaining the displacement of the traffic participants in two frames of perception information of adjacent sampling time;
and according to the current position and the displacement of the automatic driving vehicle, acquiring the longitudinal distance and the orientation angle of the traffic participant relative to the automatic driving vehicle as a perception measurement result of the traffic participant.
Aspect 6. The environmental information prediction method of an automatic driving vehicle according to Aspect 5, wherein the traffic participant includes a non-lane-changing vehicle;
the method further comprises the steps of:
acquiring the movement speed of surrounding vehicles of the automatic driving vehicle and the included angle between the orientation of each surrounding vehicle at the current moment and at the previous moment;
and taking the surrounding vehicles with the movement speed greater than the preset speed and the included angle smaller than the preset included angle as the non-lane-changing vehicles.
Aspect 7. The environmental information prediction method of an automatic driving vehicle according to Aspect 6, wherein the step of "updating the predicted value based on a Kalman filter equation according to the perceived measurement result of the current moment acquired by the automatic driving vehicle and the predicted value" includes:
according to a plurality of direction angles of the non-lane-changing vehicle, acquiring the mean value and variance of the direction angles;
according to the average value and a preset average value, determining an effective non-lane-changing vehicle;
and updating the predicted value according to the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle.
Aspect 8. The environmental information prediction method of an automatic driving vehicle according to Aspect 7, wherein the step of "updating the predicted value according to the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle" includes:
Taking the variance of the orientation angle of the non-lane-changing vehicle as the noise updated by the Kalman filtering equation;
and updating the predicted value according to the noise, the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle.
Aspect 9. The environmental information prediction method of an automatic driving vehicle according to Aspect 1, wherein the environmental target includes a lane path, and the positional relationship includes a longitudinal distance and an orientation angle between the lane path and the automatic driving vehicle;
the method further comprises the steps of:
acquiring a driving path of the automatic driving vehicle according to non-vehicle-end sensing equipment of the automatic driving vehicle;
according to the current position of the automatic driving vehicle and the driving path, lane path discrete points of a lane where the automatic driving vehicle is located and surrounding lanes are obtained;
and taking the longitudinal distance and the orientation angle of the lane path discrete point relative to the automatic driving vehicle as a perception measurement result of the lane path.
Aspect 10. The environmental information prediction method of an automatic driving vehicle according to Aspect 9, wherein
the step of acquiring the lane path discrete points of the lane where the autonomous vehicle is currently located and the surrounding lanes according to the current position of the autonomous vehicle and the travel path includes:
When the roads are merged, acquiring lane path discrete points at the rear of the automatic driving vehicle according to the current position;
the step of "taking the longitudinal distance and the heading angle of the lane path discrete point with respect to the autonomous vehicle as the perceived measurement of the lane path" includes:
and taking the longitudinal distance and the orientation angle of the rear lane path discrete point relative to the automatic driving vehicle as a perception measurement result of the lane path.
Aspect 11. The environmental information prediction method of an automatic driving vehicle according to Aspect 1, wherein, before the step of "acquiring the predicted value of the positional relationship of the environmental target in the surrounding environment of the automatic driving vehicle at the current moment according to the state transition model", the method further includes:
initializing perception measurement information of the environmental target;
based on the initialization result, the effectiveness of the acquisition device of the perception measurement information is judged.
Aspect 12. The environmental information prediction method of an automatic driving vehicle according to Aspect 11, wherein the step of "judging the validity of the acquisition device of the perceived measurement information" includes:
When the acquisition device is a vehicle-end sensing device of the automatic driving vehicle, acquiring a comprehensive confidence of the vehicle-end sensing device according to the initial confidence of the vehicle-end sensing device and the environment information;
judging the validity of the vehicle-end sensing device according to the comprehensive confidence; and/or,
when the acquisition equipment is a map matching system, the effectiveness is judged according to the communication conditions among different maps in the map matching system, whether a positioning system of the automatic driving vehicle works normally or not and whether a current road has high-precision map coverage or not.
Aspect 13. The environmental information prediction method of an automatic driving vehicle according to Aspect 1, wherein before the step of "updating the predicted value based on a Kalman filter equation according to the perceived measurement result of the positional relationship of the environmental target at the current moment acquired by the automatic driving vehicle and the predicted value, to obtain a final prediction result of the positional relationship of the environmental target at the current moment", the method includes:
obtaining the confidence coefficient of the perception measurement result;
and selectively updating the predicted value by using the perception measurement result according to the confidence level so as to obtain the final predicted result.
Aspect 14. The environmental information prediction method of an automatic driving vehicle according to Aspect 1, wherein the environmental targets include a moving target and a stationary target, and the step of "updating the predicted value based on a Kalman filter equation according to the perceived measurement result of the positional relationship of the environmental targets acquired by the automatic driving vehicle and the predicted value, to obtain a final prediction result of the positional relationship of the environmental targets at the current moment" includes:
updating the predicted value according to the perception measurement result of the moving object to obtain an updated predicted value of the moving object;
and acquiring the final prediction result according to the updated prediction value of the moving target and the position relation of the position of the static target relative to the automatic driving vehicle.
Aspect 15. A control method for an automatic driving vehicle, the method comprising:
obtaining, according to the environmental information prediction method of an automatic driving vehicle of any one of Aspects 1 to 14, a final prediction result of an environmental target in the surrounding environment of the automatic driving vehicle;
and controlling the automatic driving vehicle according to the final prediction result.
Aspect 16. The control method for an automatic driving vehicle according to Aspect 15, wherein the step of "controlling the automatic driving vehicle according to the final prediction result" includes:
encoding the environmental target according to the final prediction result;
and controlling the automatic driving vehicle according to the coding result.
Aspect 17. The control method for an automatic driving vehicle according to Aspect 16, wherein the environmental target is a surrounding vehicle of the automatic driving vehicle, and the step of "encoding the environmental target according to the final prediction result" includes:
acquiring the orientation angle of the surrounding vehicles according to the final prediction result of the surrounding vehicles;
and coding the surrounding vehicles according to the orientation angle and a preset included angle threshold value.
Aspect 18. A control device comprising at least one processor and at least one storage device, the storage device being adapted to store a plurality of program codes, wherein the program codes are adapted to be loaded and run by the processor to perform the environmental information prediction method of an automatic driving vehicle according to any one of Aspects 1 to 14 or the control method of an automatic driving vehicle according to any one of Aspects 15 to 17.
Aspect 19. A computer readable storage medium having a plurality of program codes stored therein, wherein the program codes are adapted to be loaded and executed by a processor to perform the environmental information prediction method of an automatic driving vehicle according to any one of Aspects 1 to 14 or the control method of an automatic driving vehicle according to any one of Aspects 15 to 17.
Aspect 20. A vehicle comprising the control device according to Aspect 18.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. As will be readily appreciated by those skilled in the art: the drawings are for illustrative purposes only and are not intended to limit the scope of the present invention. Moreover, like numerals in the figures are used to designate like parts, wherein:
FIG. 1 is a flow chart illustrating the main steps of a method for predicting environmental information of an autonomous vehicle according to one embodiment of the present invention;
FIG. 2 is a flow chart of the main steps of a method of controlling an autonomous vehicle according to one embodiment of the invention;
FIG. 3 is a flow chart of the main steps of a method of controlling an autonomous vehicle according to one implementation of an embodiment of the invention;
FIG. 4 is a schematic diagram of the encoding results of an environmental objective according to one implementation of an embodiment of the invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "module" or "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, memory, or software components such as program code, or a combination of software and hardware. The processor may be a central processor, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions. The processor may be implemented in software, hardware, or a combination of both. Non-transitory computer readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random access memory, and the like. The term "A and/or B" means all possible combinations of A and B, such as A alone, B alone, or A and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone, or A and B. The singular forms "a", "an" and "the" include plural referents.
Referring to fig. 1, fig. 1 is a flowchart illustrating main steps of an environment information prediction method for an automatic driving vehicle according to an embodiment of the present invention. As shown in fig. 1, the method for predicting environmental information of an autonomous vehicle in an embodiment of the present invention mainly includes the following steps S101 to S103.
Step S101: and constructing a state transition model of a Kalman filter equation according to the self-vehicle motion information of the self-driving vehicle.
In the present embodiment, a state transition model of a kalman filter equation may be constructed from own-vehicle motion information of an autonomous vehicle. The state transition model is used for describing the propagation relationship between the predicted value of the last moment and the predicted value of the current moment.
In one embodiment, the vehicle movement information may include a current vehicle speed of the autonomous vehicle.
Step S102: acquiring a predicted value of the position relation of an environmental target in the surrounding environment of the automatic driving vehicle at the current moment according to the state transition model; the position relationship is the relationship between the environmental target and the position of the automatic driving vehicle.
In this embodiment, the predicted value of the positional relationship of the environmental target in the surroundings of the automatically driven vehicle at the present time may be obtained according to the state transition model. I.e. the prediction value for the current time is deduced based on the prediction value for the previous time.
In one embodiment, the environmental targets may include predicted trajectory points of the autonomous vehicle itself, road attributes, traffic participants, lane paths, and the like. The ambient environment may include the front environment and the rear environment of the autonomous vehicle. The positional relationship may include a longitudinal distance and an orientation angle (heading) of the environmental target relative to the autonomous vehicle; the orientation angle (heading) refers to the included angle between the direction of the environmental target and that of the autonomous vehicle (own vehicle), and the longitudinal distance refers to the distance of the environmental target relative to the own vehicle along the longitudinal axis of the vehicle coordinate system.
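As an illustration only, and not part of the patent, the positional relationship described above could be represented by a small data structure such as the following; the type names and fields are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto

class TargetType(Enum):
    PREDICTED_TRACK_POINT = auto()
    ROAD_ATTRIBUTE = auto()
    TRAFFIC_PARTICIPANT = auto()
    LANE_PATH = auto()

@dataclass
class PositionalRelation:
    target_type: TargetType
    longitudinal_distance: float   # distance along the ego longitudinal axis, in meters
    heading: float                 # orientation angle relative to the ego vehicle, in radians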
Step S103: according to a perceived measurement result and a predicted value of the position relation of the environmental target at the current moment, which are acquired by an automatic driving vehicle, updating the predicted value based on a Kalman filtering equation to acquire a final predicted result of the position relation of the environmental target at the current moment; the sensing measurement result is obtained through a self-vehicle motion system and/or a vehicle-end sensing device and/or a non-vehicle-end sensing device of the self-driving vehicle.
In this embodiment, the predicted value may be updated based on the perception measurement information acquired by the self-vehicle motion system, the vehicle-end perception device, and the non-vehicle-end perception device of the self-driving vehicle, so as to obtain the predicted result of the positional relationship of the environmental target. Wherein the updating process is realized based on a Kalman filtering equation. The vehicle motion system may provide motion information of the autonomous vehicle such as vehicle speed, yaw angle (yaw), steering wheel angle, etc. The vehicle-end sensing device is a device which is arranged on the vehicle and senses the surrounding environment, such as a camera, a millimeter wave radar, a laser radar and the like. The non-vehicle-end sensing device is a device that relies on information outside the vehicle to obtain sensing measurements, such as high-precision maps, navigation maps, map matching systems, V2X (vehicle to everything), and the like.
In one embodiment, a predicted road model of the surroundings of the automatic driving vehicle may be constructed according to the following equation (1):

y(x) = y_0 + η×x + (c_0/2)×x^2 + (c_1/6)×x^3   (1)

wherein y_0 is a transverse position parameter; η is an orientation angle parameter; c_0 is a curvature parameter; c_1 is a curvature change rate parameter.

The orientation angle (heading) between the tangent line of the road at the longitudinal distance x and the own vehicle can then be obtained from equation (1), as shown in equation (2):

heading(x) = η + c_0×x + (c_1/2)×x^2   (2)
the transfer function of the kalman filter equation can be constructed from the above polynomial as shown in equation (3) and equation (4):
theta_k = F×theta_(k-1) + Q   (3)
z_k = H×theta_k + R   (4)
wherein theta_k is the predicted value at time k; F is the state transition matrix; Q is the system process noise matrix; z_k is the perceived measurement result at time k; H is the transmission matrix between the perceived measurement result and the predicted value; and R is the measurement noise.
theta can be expressed by the following formula (5):
theta = [y_0, η, c_0, c_1]^T   (5)
The state transition matrix can be represented by the following formula (6), where Δx denotes the longitudinal displacement of the own vehicle within one operation period:

F =
| 1  Δx  Δx^2/2  Δx^3/6 |
| 0  1   Δx      Δx^2/2 |
| 0  0   1       Δx     |
| 0  0   0       1      |   (6)

If the current vehicle speed is assumed to be v and the operation period of the system is T, so that Δx = v×T, the state transition matrix can be expressed by the following formula (7):

F =
| 1  v×T  (v×T)^2/2  (v×T)^3/6 |
| 0  1    v×T        (v×T)^2/2 |
| 0  0    1          v×T       |
| 0  0    0          1         |   (7)

The transfer matrix, which maps the state to the heading measurement at longitudinal distance x, can be represented by the following formula (8):

H = [0, 1, x, x^2/2]   (8)
the prediction equation of the kalman filter based on the above procedure can be expressed according to the following equation (9) and equation (10):
theta_k(-) = F×theta_(k-1)   (9)
P_k(-) = F×P_(k-1)×F^T + Q   (10)
wherein theta_k(-) is the predicted value at time k obtained on the basis of the final predicted value at time k-1; theta_(k-1) is the final predicted value at time k-1; P_k(-) is the prediction covariance matrix at time k; P_(k-1) is the final covariance matrix at time k-1; and F^T is the transpose of the state transition matrix.
The system update equation of the kalman filter can be expressed according to the following formulas (11) to (13):
K_k = P_k(-)×H^T / (H×P_k(-)×H^T + R)   (11)
theta_k = theta_k(-) + K_k×(z_k - H×theta_k(-))   (12)
P_k = (I - K_k×H)×P_k(-)×(I - K_k×H)^T + K_k×R×K_k^T   (13)
wherein K_k is the Kalman gain at time k; theta_k is the final predicted value at time k; and I is an identity matrix.
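For concreteness, the prediction and update steps of equations (9) to (13) can be sketched in Python as follows. This is only an illustrative sketch, not code from the patent: the state is assumed to be theta = [y_0, η, c_0, c_1], the state transition matrix follows the form of equation (7), a single scalar heading measurement taken at longitudinal distance x is assumed for the transfer matrix of equation (8), and all function and variable names are hypothetical.

import numpy as np

def kf_predict(theta_prev, P_prev, v, T, Q):
    # Prediction step, equations (9)-(10); dx is the longitudinal displacement per cycle.
    dx = v * T
    F = np.array([[1.0, dx,  dx**2 / 2, dx**3 / 6],
                  [0.0, 1.0, dx,        dx**2 / 2],
                  [0.0, 0.0, 1.0,       dx],
                  [0.0, 0.0, 0.0,       1.0]])
    theta_pred = F @ theta_prev
    P_pred = F @ P_prev @ F.T + Q
    return theta_pred, P_pred

def kf_update(theta_pred, P_pred, x, z_heading, r):
    # Update step, equations (11)-(13), with one scalar heading measurement at distance x.
    H = np.array([0.0, 1.0, x, x**2 / 2])            # measurement row vector, equation (8)
    s = H @ P_pred @ H + r                            # innovation variance (scalar)
    K = (P_pred @ H) / s                              # Kalman gain, equation (11)
    theta = theta_pred + K * (z_heading - H @ theta_pred)   # equation (12)
    IKH = np.eye(4) - np.outer(K, H)
    P = IKH @ P_pred @ IKH.T + r * np.outer(K, K)     # Joseph form, equation (13)
    return theta, P

Each perceived measurement of an environmental target, given as a longitudinal distance x and an orientation angle, can be fed to kf_update in turn after a kf_predict call for the current cycle.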
In one embodiment, the confidence level of a plurality of perception measurements may be obtained, and the prediction value is updated by selectively applying the perception measurements according to the confidence level, so as to obtain a final prediction result.
In one embodiment, a confidence threshold may be set, and when the confidence of the perceived measurement is less than the confidence threshold, the perceived measurement is not applied to update the predicted value; when the confidence level of the sensing measurement result is greater than or equal to the confidence level threshold, the sensing measurement result can be applied to update the predicted value.
In one embodiment, weights may be set for different confidence levels, e.g., a perceived measurement with a higher confidence level corresponds to a higher weight and a perceived measurement with a lower confidence level corresponds to a lower weight, so that the predicted value is updated according to the perceived measurement and the corresponding weight.
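A minimal sketch of such confidence-based gating and weighting is given below; it is an assumed illustration rather than the implementation described in the patent, and the threshold and the mapping from confidence to measurement noise are hypothetical choices.

def measurement_noise_from_confidence(confidence, conf_threshold, base_noise):
    # Returns None when the measurement should be skipped; otherwise a measurement
    # noise value: higher confidence -> smaller noise -> larger effective weight.
    if confidence < conf_threshold:
        return None
    return base_noise / max(confidence, 1e-6)

A returned noise value can be passed as r to the update step sketched above, so that low-confidence measurements are either ignored or pull the predicted value less strongly.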
Based on the above steps S101 to S103, the embodiment of the present invention constructs a state transition model of a Kalman filter equation according to the self-vehicle motion information of the automatic driving vehicle, obtains from the state transition model a predicted value of the positional relationship of an environmental target in the surrounding environment of the automatic driving vehicle at the current moment, and updates the predicted value by applying the Kalman filter equation based on the predicted value and the perceived measurement result of the positional relationship at the current moment, so as to obtain a final prediction result of the positional relationship of the environmental target at the current moment, wherein the perceived measurement result can be obtained through the self-vehicle motion system, the vehicle-end perception device and the non-vehicle-end perception device. With this configuration, the embodiment of the present invention predicts the positional relationship between an environmental target in the surrounding environment of the automatic driving vehicle and the automatic driving vehicle based on the Kalman filter model, and updates the predicted value of the positional relationship of the environmental target based on the multidimensional perceived measurement results of the automatic driving vehicle, so that the motion trend of environmental targets in the surrounding environment can be predicted more comprehensively and with higher robustness. Meanwhile, the automatic driving vehicle can be controlled based on the final prediction result of the environmental information, and a more comprehensive, stable and reliable control input can be provided for the control process of the automatic driving vehicle, thereby realizing more effective control.
The step of acquiring the sensing measurement result in step S103 will be described in detail.
In one implementation of the embodiment of the present invention, the environmental target may include a predicted track point of the automatic driving vehicle within a preset pre-aiming time, and the positional relationship includes a longitudinal distance and an orientation angle of the predicted track point with respect to the automatic driving vehicle; the perceived measurement result of the predicted track point may be acquired through the following steps S201 to S204.
Step S201: according to the self-vehicle movement system of the self-driving vehicle, the speed and the yaw angle of the self-driving vehicle are obtained.
Step S202: and acquiring the self-driving radius of the self-driving vehicle according to the vehicle speed and the yaw angle.
In this embodiment, step S202 may be further configured to:
based on the circle-defining motion assumption of the automatic driving vehicle, the self-driving radius at each moment is obtained according to the speed and the yaw angle.
In one embodiment, the running radius of the vehicle may be obtained according to the following equation (14):
ROC = host_speed / host_yawrate   (14)
where ROC is the running radius, host_speed is the speed of the vehicle, and host_yawrate is the yaw rate of the vehicle.
Step S203; and acquiring the vehicle running track of the autonomous vehicle according to the vehicle running radius.
Step S204: and according to the vehicle running track, acquiring the longitudinal distance and the orientation angle of the predicted track points of the front and rear of the automatic driving vehicle in the front and rear pre-aiming time of the vehicle running track, and taking the longitudinal distance and the orientation angle as the perception measurement result of the predicted track points.
In this embodiment, track points within a certain range ahead of and behind the vehicle, together with the corresponding heading values, may be obtained by multiplying the forward and backward pre-aiming time of the vehicle running track by the current vehicle speed:
[(-v×t_prd, heading(-v×t_prd)), (v×t_prd, heading(v×t_prd))]
where t_prd is the pre-aiming time. The above points may be taken as the perceived measurements of the predicted track points.
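The following sketch illustrates steps S201 to S204 under the circular motion assumption; it is illustrative only, assumes ROC = host_speed / host_yawrate as in equation (14), approximates the longitudinal distance of a track point by its arc length, and uses hypothetical names.

def ego_track_point_measurements(v, yaw_rate, t_prd):
    # Perceived measurements (longitudinal distance, heading) of the predicted track
    # points at -v*t_prd and +v*t_prd along the ego running track.
    if abs(yaw_rate) < 1e-6:
        return [(-v * t_prd, 0.0), (v * t_prd, 0.0)]   # straight driving: heading stays 0
    roc = v / yaw_rate                                  # running radius, equation (14)
    def heading(arc_length):
        return arc_length / roc                         # heading change over the travelled arc
    return [(-v * t_prd, heading(-v * t_prd)),
            (v * t_prd, heading(v * t_prd))]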
In one implementation of the present embodiment, the environmental targets may include road attributes, and the positional relationship may include a longitudinal distance and an orientation angle of the road attributes with respect to the autonomous vehicle; the perceived measurement of the road attribute may be acquired according to the following steps S301 to S303.
Step S301: and acquiring the perception data of the road attribute according to the vehicle end perception equipment of the automatic driving vehicle.
Step S302: the perceived data is discretized according to a plurality of discrete points of the road attribute.
Step S303: and according to the plurality of discrete points, acquiring the longitudinal distance and the orientation angle of the discrete points relative to the automatic driving vehicle as a perception measurement result of the road attribute. The road attribute may include information such as lane lines, edges, fences, and the like.
In the present embodiment, the surrounding road environment model based on the road attribute can be obtained according to the following equation (15):
y = c_0 + c_1×x + c_2×x^2 + c_3×x^3   (15)
where x is the longitudinal distance of the road attribute relative to the own vehicle in the vehicle coordinate system; the heading angle at longitudinal distance x relative to the own vehicle can then be further obtained from formula (15), as shown in formula (16):
heading(x) = c_1 + 2×c_2×x + 3×c_3×x^2   (16)
based on the formula (15) and the formula (16), the perception data can be discretized to obtain a plurality of discrete points of the road attribute, and the relation between the longitudinal distance and the orientation angle is as follows:
[(x_1,heading(x_1)),(x_2,heading(x_2)),(x_3,heading(x_3)),…,(x_n,heading(x_n))]。
the predicted value may be updated based on the discretized points described above.
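A short sketch of steps S301 to S303 is given below; it assumes the lane-line polynomial coefficients c0 to c3 of equation (15) are provided by the vehicle-end perception device, and the sampling positions x_points are an arbitrary illustrative choice.

def road_attribute_measurements(c0, c1, c2, c3, x_points):
    # Discretize a road attribute into (longitudinal distance, heading) pairs, equation (16).
    def heading(x):
        return c1 + 2 * c2 * x + 3 * c3 * x ** 2
    return [(x, heading(x)) for x in x_points]

# example: road_attribute_measurements(0.1, 0.01, 1e-4, 1e-6, [10, 20, 30, 40])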
In one implementation of an embodiment of the present invention, the environmental objectives include traffic participants, and the positional relationship includes a longitudinal distance and an orientation angle of the traffic participants relative to the autonomous vehicle. The perception measurements of the traffic participant may be obtained according to the following steps S401 to S403.
Step S401: and acquiring the perception information of the traffic participants according to the vehicle end perception equipment of the automatic driving vehicle.
Step S402: and acquiring the displacement of the traffic participant in the two frames of sensing information of adjacent sampling time according to the sensing information of the traffic participant.
Step S403: based on the current position and displacement of the autonomous vehicle, the longitudinal distance and heading angle of the traffic participant relative to the autonomous vehicle is obtained as a perceived measurement of the traffic participant.
In this embodiment, the traffic participant may include information of pedestrians, surrounding vehicles, and the like. The perception information of the traffic participant can be obtained according to the vehicle-end perception device, and the direction angle of the traffic participant can be obtained based on the displacement of the front and rear frame perception information, as shown in the following formulas (17) to (19):
flow_dx = x_current - x_prev (17)
flow_dy = y_current - y_prev (18)
flow_angle = arctan(flow_dy / flow_dx) (19)
where flow_dx is the displacement in the x direction, flow_dy is the displacement in the y direction, x_current and y_current are the x and y coordinates of the current frame, x_prev and y_prev are the x and y coordinates of the previous frame, and flow_angle is the orientation angle.
The longitudinal distance and heading angle of all traffic participants around the own vehicle, relative to the own vehicle, can be obtained according to the above formulas (17) to (19):
[(Tgt1.long,Tgt1.heading),(Tgt2.long,Tgt2.heading),(Tgt3.long,Tgt3.heading),…(Tgtn.long,Tgtn.heading)]
where Tgt denotes a traffic participant, Tgt.long is its longitudinal distance, and Tgt.heading is its orientation angle.
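A small sketch of formulas (17) to (19) follows; the use of atan2 rather than a plain arctangent, and the assumption that the ego-frame x axis is the longitudinal axis, are choices made here for illustration:

```python
import math

def participant_measurement(curr_xy, prev_xy):
    """Sketch of formulas (17)-(19): derive a traffic participant's orientation angle from
    its displacement between two adjacent perception frames, and pair it with the
    longitudinal distance, taking the ego-frame x axis as the longitudinal axis."""
    flow_dx = curr_xy[0] - prev_xy[0]          # displacement in x, formula (17)
    flow_dy = curr_xy[1] - prev_xy[1]          # displacement in y, formula (18)
    flow_angle = math.atan2(flow_dy, flow_dx)  # orientation angle, formula (19)
    longitudinal = curr_xy[0]
    return longitudinal, flow_angle

# Example: a target that moved 2 m forward and 0.1 m laterally between frames
print(participant_measurement((35.0, 1.6), (33.0, 1.5)))
```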
In one embodiment, the traffic participant may include a non-lane-changing vehicle, which may be determined according to the following steps S501 and S502.
Step S501: the movement speed of surrounding vehicles of the automatic driving vehicle is obtained, and the included angle between the current moment and the last moment of the surrounding vehicles is obtained.
Step S502: surrounding vehicles with the movement speed being greater than the preset speed and the included angle being smaller than the preset included angle are used as non-lane-changing vehicles.
In the present embodiment, it is possible to determine whether or not the vehicle is a non-lane-changing vehicle based on the movement speed of the surrounding vehicle and the angle between the current time and the previous time.
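A possible reading of steps S501 and S502 is sketched below; the speed and angle thresholds are illustrative placeholders, not values given by the patent:

```python
def is_non_lane_changing(speed, heading_now, heading_prev,
                         speed_threshold=3.0, angle_threshold=0.05):
    """Sketch: a surrounding vehicle is treated as non-lane-changing when its movement
    speed exceeds the preset speed and the included angle between its orientation at the
    current and previous moments stays below the preset included angle."""
    included_angle = abs(heading_now - heading_prev)
    return speed > speed_threshold and included_angle < angle_threshold

# Example: fast vehicle whose orientation barely changed between the two moments
print(is_non_lane_changing(15.0, 0.021, 0.019))  # True
```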
In one implementation of the present embodiment, the environmental targets may include lane paths, and the positional relationship may include a longitudinal distance and an orientation angle of the lane paths from the autonomous vehicle; the perceived measurement of the lane path may be acquired according to the following steps S601 to S603.
Step S601: and acquiring a driving path of the automatic driving vehicle according to the non-vehicle-end sensing equipment of the automatic driving vehicle.
Step S602: and acquiring lane path discrete points of the lane where the automatic driving vehicle is currently located and surrounding lanes according to the current position and the driving path of the automatic driving vehicle.
Step S603: the longitudinal distance and the orientation angle of the discrete points of the lane path relative to the autonomous vehicle are used as the sensing measurement result of the lane path.
In the present embodiment, the travel path of the autonomous vehicle may be acquired from non-vehicle-end sensing devices, which may include a navigation map and a high-precision map. When the vehicle runs within the high-precision map coverage area, high-precision map information of the running path can be obtained by fusing the navigation map and the high-precision map; this information contains detailed path-level, road-level and lane-level information. The lane-level information ahead of and behind the vehicle, such as the current lane ID and the longitudinal distance and orientation angle of the lane path points of the current and surrounding lanes, can be obtained according to the current position of the automatic driving vehicle, specifically:
[(pos1.long, pos1.heading), (pos2.long, pos2.heading), (pos3.long, pos3.heading), …, (posn.long, posn.heading)]
where pos.long is the longitudinal distance of the lane path discrete point and pos.heading is the heading angle of the lane path discrete point.
In one embodiment, when the roads merge, a lane path discrete point at the rear of the autonomous vehicle may be acquired according to the current position; and the longitudinal distance and the orientation angle of the rear lane path discrete point relative to the automatic driving vehicle are used as the sensing measurement result of the lane path.
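The following sketch illustrates one way such lane path discrete points, once expressed in the vehicle coordinate frame, could be turned into (longitudinal distance, heading angle) pairs; approximating each point's heading from the direction to the next point is an assumption made here:

```python
import math

def lane_path_measurements(lane_points_ego):
    """Sketch: convert lane path discrete points, already expressed in the vehicle
    coordinate frame as (x, y) tuples, into (longitudinal distance, heading angle)
    measurement pairs. Each point's heading is approximated from the direction to the
    next point, so the last point is dropped."""
    measurements = []
    for (x0, y0), (x1, y1) in zip(lane_points_ego, lane_points_ego[1:]):
        heading = math.atan2(y1 - y0, x1 - x0)
        measurements.append((x0, heading))
    return measurements

# Example: points of the current lane centre line ahead of the vehicle
print(lane_path_measurements([(10.0, 0.1), (20.0, 0.3), (30.0, 0.7)]))
```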
In one implementation of the embodiment of the present invention, before step S102, the present invention may further include step S104 and step S105.
Step S104: and initializing the perception measurement information of the environmental target.
Step S105: based on the initialization result, the validity of the acquisition device that perceives the measurement information is determined.
In this embodiment, before predicting the positional relationship of the environmental target, the sensing measurement information of the environmental target may be initialized, the validity of the acquisition device for the sensing measurement result may be determined based on the initialization result, and the step of predicting the positional relationship of the environmental target may be performed when the determination is passed.
In one embodiment, when the obtaining device is a vehicle-end sensing device of an automatic driving vehicle, the comprehensive confidence coefficient of the vehicle-end sensing device may be obtained according to the initial confidence coefficient and the environmental information of the vehicle-end sensing device; and judging the effectiveness of the vehicle-end sensing equipment according to the comprehensive confidence.
In this embodiment, if the acquisition device of the sensing measurement information is a vehicle-end sensing device, such as a camera, a laser radar or a millimeter wave radar, it can first be determined, according to the initial confidence of the vehicle-end sensing device, whether the device can be used to predict the position relationship of the environmental target; the comprehensive confidence of the sensing device is then obtained based on the environmental information, and the validity of the vehicle-end sensing device is judged according to the comprehensive confidence. The initial confidence refers to the default confidence of the vehicle-end sensing device. The comprehensive confidence is the confidence obtained by integrating the initial confidence and the environmental information. Taking a lane line as an example, the environmental information may include its detected length, clarity, curvature, whether it crosses other lines, and the like.
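As an illustration of combining the initial confidence with environmental information, the sketch below multiplies the default confidence by environment-derived factors and compares the result to a threshold; the combination rule and the threshold are assumptions, not the patent's method:

```python
def comprehensive_confidence(initial_conf, env_factors):
    """Sketch: combine the device's default (initial) confidence with environment-derived
    factors in [0, 1] (e.g. scores for lane-line length, clarity, curvature) by simple
    multiplication; the combination rule is an assumption."""
    conf = initial_conf
    for factor in env_factors:
        conf *= factor
    return conf

def device_valid(initial_conf, env_factors, threshold=0.6):
    # The 0.6 validity threshold is an assumed example value.
    return comprehensive_confidence(initial_conf, env_factors) >= threshold

# Example: a camera with default confidence 0.9 observing a long, clear lane line
print(device_valid(0.9, [0.95, 0.9]))  # True for these assumed scores (0.7695 >= 0.6)
```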
In one embodiment, when the acquiring device is a map matching system, the validity is determined according to the communication conditions between different maps in the map matching system, whether the positioning system of the automatic driving vehicle works normally or not, and whether the current road has high-precision map coverage or not.
In one implementation of the embodiment of the present invention, step S103 may further include the following steps S1031 and S1032:
Step S1031: and updating the predicted value according to the perception measurement result of the moving object to obtain the updated predicted value of the moving object.
Step S1032: and acquiring a final prediction result according to the updated prediction value of the moving target and the position relation of the position of the stationary target relative to the automatic driving vehicle.
In this embodiment, in the process of predicting and updating the position relationship of the environmental targets, the stationary targets are not involved, and only the moving targets are predicted and updated; the final prediction result is then obtained according to the updated prediction value of the moving targets and the positional relation between the stationary targets and the own vehicle.
In one embodiment, when the traffic participant is a non-lane-changing vehicle, step S103 may further include the following steps S1033 to S1035:
step S1033: and acquiring the mean value and the variance of the heading angles according to a plurality of heading angles of the non-lane-changing vehicle.
In the present embodiment, the mean and variance of the heading angles may be obtained from a plurality of heading angles of the non-lane-changing vehicle.
Step S1034: and determining the effective non-lane-changing vehicle according to the average value and the preset average value.
In this embodiment, a preset average value may be set, and the heading angle average value is compared with the preset average value to determine an effective non-lane-changing vehicle.
Step S1035: and updating the predicted value according to the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle.
In the present embodiment, step S1035 may further include step S10351 and step S10352:
step S10351: and taking the variance of the orientation angle of the non-lane-changing vehicle as the noise updated by the Kalman filtering equation.
Step S10352: and updating the predicted value according to the noise, the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle.
In the present embodiment, the variance of the heading angle of the non-lane-changing vehicle may be used as noise for updating the kalman filter equation, and the predicted value of the traffic participant may be updated based on the longitudinal distance and the heading angle of the effective non-lane-changing vehicle.
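The following sketch ties steps S1033 to S1035 together: the mean and variance of the observed heading angles are computed, the mean is compared with an assumed preset mean, and the variance is used as the heading measurement noise in a standard Kalman update; the observation model, noise values and thresholds are illustrative assumptions:

```python
import numpy as np

def heading_statistics(heading_angles):
    """Compute the mean and variance of a non-lane-changing vehicle's orientation angles
    over several frames; the variance is reused as measurement noise below."""
    angles = np.asarray(heading_angles, dtype=float)
    return float(angles.mean()), float(angles.var())

def kalman_update(x_pred, P_pred, z, R):
    """Update step only: x_pred/P_pred are the predicted state and covariance, z is the
    measurement (longitudinal distance, orientation angle) of an effective
    non-lane-changing vehicle, and R is the measurement noise."""
    H = np.eye(len(x_pred))                    # direct observation of the state (assumption)
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)      # corrected state estimate
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# Example: the heading variance of the observed vehicle becomes the heading noise entry of R
mean, var = heading_statistics([0.02, 0.03, 0.025, 0.028])
if abs(mean) < 0.05:                           # 0.05 is an assumed preset mean value
    x_pred = np.array([30.0, 0.02])            # predicted longitudinal distance and heading
    P_pred = np.diag([4.0, 0.01])
    z = np.array([29.2, 0.028])
    R = np.diag([1.0, max(var, 1e-4)])         # longitudinal noise assumed; heading noise = variance
    x_new, P_new = kalman_update(x_pred, P_pred, z, R)
```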
Further, the invention also provides a control method of the automatic driving vehicle.
Referring to fig. 2, fig. 2 is a flow chart illustrating main steps of a control method of an autonomous vehicle according to an embodiment of the present invention. As shown in fig. 2, the control method of the autonomous vehicle in the embodiment of the present invention mainly includes the following steps S701 to S702.
Step S701: according to the method for predicting the environmental information of the automatic driving vehicle in the embodiment of the method for predicting the environmental information of the automatic driving vehicle, a final prediction result of the environmental target in the surrounding environment of the automatic driving vehicle is obtained.
In the present embodiment, the final prediction result of the environmental target in the surrounding environment may be obtained by the environmental information prediction method of the automatically driven vehicle.
Step S702: and controlling the automatic driving vehicle according to the final prediction result.
In this embodiment, the behavior planning of the autonomous vehicle may be implemented based on the final prediction result of the environmental target, so as to further implement the control procedures of the autonomous vehicle, such as steering control, driving control, braking control, and the like.
In one embodiment, step S702 may further include the following steps S7021 and S7022.
Step S7021: and encoding the environmental target according to the final prediction result.
Step S7022: and controlling the automatic driving vehicle according to the coding result.
In this embodiment, reference may be made to fig. 4, and fig. 4 is a schematic diagram of an encoding result of an environmental target according to an embodiment of the present invention. As shown in fig. 4, the environmental targets may be encoded for different environmental targets and final prediction results of the environmental targets, thereby achieving control of the autonomous vehicle according to the encoding results.
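For illustration, the encoding of an environmental target could be represented by a simple record such as the one below; the field names and attribute values are placeholders and do not reproduce the actual encoding of fig. 4:

```python
from dataclasses import dataclass

@dataclass
class EncodedTarget:
    """Placeholder encoding record; the actual encoding of fig. 4 is not reproduced."""
    target_type: str     # e.g. "lane_line", "traffic_participant", "lane_path"
    longitudinal: float  # final predicted longitudinal distance to the ego vehicle
    heading: float       # final predicted orientation angle
    attribute: str       # e.g. "non_lane_changing" or "lane_changing" for surrounding vehicles

def encode_targets(final_predictions):
    """final_predictions: iterable of (type, longitudinal, heading, attribute) tuples."""
    return [EncodedTarget(*p) for p in final_predictions]

# Example input built from assumed final prediction results
codes = encode_targets([("traffic_participant", 28.5, 0.02, "non_lane_changing")])
```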
In one embodiment, the environmental target is a surrounding vehicle of the autonomous vehicle, and step S7021 may further include the following steps S70211 and S70212.
Step S70211: and acquiring the orientation angle of the surrounding vehicles according to the final prediction result of the surrounding vehicles.
Step S70212: and coding surrounding vehicles according to the orientation angle and a preset included angle threshold value.
In this embodiment, it may be determined whether the orientation angle of the surrounding vehicle is smaller than the angle threshold; when the included angle is smaller than the included angle threshold value, the surrounding vehicles can be encoded into non-lane-changing attributes; and when the included angle is greater than or equal to the included angle threshold value, the surrounding vehicles can be encoded into lane changing attributes.
In one embodiment, reference may be made to fig. 3, and fig. 3 is a schematic flow chart of main steps of a control method of an autonomous vehicle according to an embodiment of the present invention. As shown in fig. 3, the control method of the autonomous vehicle may include the following steps S801 to S823.
Step S801: and initializing a self-vehicle motion system.
In the present embodiment, the vehicle motion system may be initialized first.
Step S802: whether the data of the self-vehicle movement system is valid; if yes, go to step S803; if not, go to step S801.
In the present embodiment, it is possible to determine whether or not data (such as the vehicle speed, yaw angle, etc.) of the own vehicle movement system is valid. If so, step S803 may be performed, and the valid vehicle motion system data may be applied to step S822 to update the predicted value.
Step S803: whether the own vehicle runs along the lane line or not; if yes, go to step S804; if not, go to step S801.
In the present embodiment, it is possible to determine whether or not the host vehicle is traveling along the lane line.
Step S804: the front and back environmental models are updated based on the rounding attributes (rounding motion assumptions).
In this embodiment, step S804 is similar to the methods described in step S202 and step S203, and is not described herein for simplicity.
Step S805: based on the pre-aiming time, the longitudinal distance and the heading information (heading angle) are acquired, and the process goes to step S822.
In this embodiment, step S805 is similar to the method described in step S204, and is not described herein for simplicity.
Step S806: and initializing a self-vehicle sensing system (vehicle-end sensing equipment).
In this embodiment, the vehicle sensing system may be initialized first.
Step S807: the self-vehicle sensing system diagnoses whether the vehicle passes or not; if yes, go to step S808 and step S812; if not, go to step S806.
In the present embodiment, it can be determined whether the diagnosis of the vehicle sensing system passes.
Step S808: sensing to obtain whether the lane lines, the road edges and the fence check are qualified; if yes, jump to step S809; if not, go to step S807.
In this embodiment, whether the lane lines, the road edges, the fences, and the like are checked to be qualified or not may be checked.
Step S809: whether lanes intersect, merge and lose; if not, jumping to step S810; if yes, go to step S807.
In the present embodiment, it is possible to determine whether or not there is a lane crossing, merging, or loss.
Step S810: the perceptual data is discretized.
In this embodiment, step S810 is similar to the method described in step S302, and is not repeated here for simplicity of description.
Step S811: the effective lane line, road edge, position of fence information (longitudinal distance) and heading information (heading angle) are selected, and the process goes to step S822.
In this embodiment, step S811 is similar to the method described in step S303, and is not described herein for simplicity.
Step S812: the traffic participant movement information is perceived.
In this embodiment, step S812 is similar to the method described in step S401, and is not described herein for simplicity.
Step S813: judging whether the traffic participant is a moving target, if so, jumping to step S814; if not, go to step S812.
In this embodiment, it may be determined whether or not the traffic participant is a moving object. If it is the moving object, step S814 is performed. The moving object that is valid at the same time is also used for updating the predicted value in step S822.
Step S814: and updating the obtained motion head information, and acquiring the mean value and the variance.
In this embodiment, step S814 is similar to the methods described in step S402 and step S403, and is not described herein for simplicity.
Step S815: the effective moving object azimuth (longitudinal distance) and head information are selected, and the process goes to step S822.
In this embodiment, an effective moving object may be selected for prediction.
Step S816: non-vehicle-end aware system (device) initialization.
In this embodiment, the non-vehicle-end sensing system may be initialized.
Step S817: the system diagnoses whether the system passes or not; if yes, go to step S818; if not, go to step S816.
In the present embodiment, it is possible to determine whether or not the system diagnosis is passed.
Step S818: the vehicle position is matched with the high-precision map information.
Step S819: and matching the navigation map with the future path information.
Step S820: and outputting the map information of the front road section and the rear road section of the current vehicle.
In this embodiment, the methods described in step S818, step S819 and step S820 are similar to those described in step S601, and are not repeated here for simplicity of description.
Step S821: and extracting lane path discrete points of the high-precision map.
In this embodiment, the method of step S821 is similar to that of step S602, and is not repeated here for simplicity of description.
Step S822: and updating the predicted value of the environmental target according to the Kalman state equation to obtain the type and the confidence of the environmental target.
In this embodiment, the predicted value of the environmental target may be updated according to the Kalman state equation, so as to obtain the type and the confidence of the environmental target.
Step S823: and obtaining the coding result of the environmental target according to the relation between the vehicle and the environmental target.
In this embodiment, the method described in step S823 is similar to that of step S7021, and is not described here again for simplicity.
It is noted that although the above embodiments describe the steps in a specific order, it will be understood by those skilled in the art that, in order to achieve the effects of the present invention, the steps are not necessarily performed in such order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of the present invention.
It should be noted that, the data (including, but not limited to, data for analysis, stored data, displayed data, vehicle usage data, data collected by the vehicle, etc.) according to the embodiments of the present disclosure are all data fully authorized by each party. The data acquisition, collection and other actions involved in the embodiments of the present disclosure are performed after user and object authorization or after full authorization by each party.
It will be appreciated by those skilled in the art that the present invention may implement all or part of the above-described methods according to the above-described embodiments, or may be implemented by means of a computer program for instructing relevant hardware, where the computer program may be stored in a computer readable storage medium, and where the computer program may implement the steps of the above-described method embodiments when executed by a processor. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, electrical carrier wave signals, telecommunications signals, software distribution media, and the like. It should be noted that the content included in the computer readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable storage medium does not include electrical carrier signals and telecommunications signals.
Further, the invention also provides a control device. In one control device embodiment according to the present invention, the control device includes a processor and a storage device, the storage device may be configured to store a program for executing the environment information prediction method of the autonomous vehicle of the above-described method embodiment, and the processor may be configured to execute the program in the storage device, including, but not limited to, the program for executing the environment information prediction method of the autonomous vehicle of the above-described method embodiment. For convenience of explanation, only those portions of the embodiments of the present invention that are relevant to the embodiments of the present invention are shown, and specific technical details are not disclosed, please refer to the method portions of the embodiments of the present invention. The control device may be a control device formed of various electronic devices.
The control device in the embodiment of the invention can be a control device formed by various electronic devices. In some possible embodiments, the control device may include a plurality of memory devices and a plurality of processors. And the program for executing the method for predicting the environmental information of the autonomous vehicle according to the above-mentioned method embodiment may be divided into a plurality of sub-programs, and each sub-program may be loaded and executed by the processor to execute the different steps of the method for predicting the environmental information of the autonomous vehicle according to the above-mentioned method embodiment. Specifically, each of the sub-programs may be stored in different storage devices, and each of the processors may be configured to execute the programs in one or more storage devices to collectively implement the method for predicting the environmental information of the autonomous vehicle of the above method embodiment, that is, each of the processors executes different steps of the method for predicting the environmental information of the autonomous vehicle of the above method embodiment, respectively, to collectively implement the method for predicting the environmental information of the autonomous vehicle of the above method embodiment.
The plurality of processors may be processors disposed on the same device, and for example, the control means may be a high-performance device composed of a plurality of processors, and the plurality of processors may be processors disposed on the high-performance device. In addition, the plurality of processors may be processors disposed on different devices, for example, the control apparatus may be a server cluster, and the plurality of processors may be processors on different servers in the server cluster.
Further, the invention also provides a computer readable storage medium. In one embodiment of the computer-readable storage medium according to the present invention, the computer-readable storage medium may be configured to store a program for executing the method of predicting the environmental information of the autonomous vehicle of the above-described method embodiment, the program being loadable and executable by a processor to implement the method of predicting the environmental information of the autonomous vehicle described above. For convenience of explanation, only those portions of the embodiments of the present invention that are relevant to the embodiments of the present invention are shown, and specific technical details are not disclosed, please refer to the method portions of the embodiments of the present invention. The computer readable storage medium may be a storage device including various electronic devices, and optionally, the computer readable storage medium in the embodiments of the present invention is a non-transitory computer readable storage medium.
Further, the invention also provides a control device. In one control device embodiment according to the present invention, the control device includes a processor and a storage device, the storage device may be configured to store a program for executing the control method of the autonomous vehicle of the above-described method embodiment, and the processor may be configured to execute the program in the storage device, including, but not limited to, the program for executing the control method of the autonomous vehicle of the above-described method embodiment. For convenience of explanation, only those portions of the embodiments of the present invention that are relevant to the embodiments of the present invention are shown, and specific technical details are not disclosed, please refer to the method portions of the embodiments of the present invention. The control device may be a control device formed of various electronic devices.
The control device in the embodiment of the invention can be a control device formed by various electronic devices. In some possible embodiments, the control device may include a plurality of memory devices and a plurality of processors. And the program for executing the method for controlling the autonomous vehicle of the above-described method embodiment may be divided into a plurality of sub-programs, each of which may be loaded and executed by the processor to perform different steps of the method for controlling the autonomous vehicle of the above-described method embodiment, respectively. Specifically, each of the subroutines may be stored in different storage devices, respectively, and each of the processors may be configured to execute the programs in one or more storage devices to collectively implement the method for controlling an autonomous vehicle of the above method embodiment, that is, each of the processors executes different steps of the method for controlling an autonomous vehicle of the above method embodiment, respectively, to collectively implement the method for controlling an autonomous vehicle of the above method embodiment.
The plurality of processors may be processors disposed on the same device, and for example, the control means may be a high-performance device composed of a plurality of processors, and the plurality of processors may be processors disposed on the high-performance device. In addition, the plurality of processors may be processors disposed on different devices, for example, the control apparatus may be a server cluster, and the plurality of processors may be processors on different servers in the server cluster.
Further, the invention also provides a computer readable storage medium. In one embodiment of the computer-readable storage medium according to the present invention, the computer-readable storage medium may be configured to store a program for executing the control method of the autonomous vehicle of the above-described method embodiment, which program may be loaded and executed by a processor to implement the control method of the autonomous vehicle described above. For convenience of explanation, only those portions of the embodiments of the present invention that are relevant to the embodiments of the present invention are shown, and specific technical details are not disclosed, please refer to the method portions of the embodiments of the present invention. The computer readable storage medium may be a storage device including various electronic devices, and optionally, the computer readable storage medium in the embodiments of the present invention is a non-transitory computer readable storage medium.
Further, the invention also provides a vehicle. In one vehicle embodiment according to the invention, the vehicle may comprise control means in an embodiment of the control means.
Further, it should be understood that, since the respective modules are merely set to illustrate the functional units of the apparatus of the present invention, the physical devices corresponding to the modules may be the processor itself, or a part of software in the processor, a part of hardware, or a part of a combination of software and hardware. Accordingly, the number of individual modules in the figures is merely illustrative.
Those skilled in the art will appreciate that the various modules in the apparatus may be adaptively split or combined. Such splitting or combining of specific modules does not cause the technical solution to deviate from the principle of the present invention, and therefore, the technical solution after splitting or combining falls within the protection scope of the present invention.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will fall within the scope of the present invention.

Claims (10)

1. A method of predicting environmental information of an autonomous vehicle, the method comprising:
constructing a state transition model of a Kalman filter equation according to the self-vehicle motion information of the self-driving vehicle;
acquiring a predicted value of the position relation of an environmental target in the surrounding environment of the automatic driving vehicle at the current moment according to the state transition model; the position relation is the relation between the environment target and the position of the automatic driving vehicle;
updating the predicted value based on a Kalman filtering equation according to a perceived measurement result, obtained by the automatic driving vehicle, of the position relation of the environmental target at the current moment and according to the predicted value, and obtaining a final prediction result of the position relation of the environmental target at the current moment;
wherein the sensing measurement result is obtained through an own-vehicle motion system and/or a vehicle-end sensing device and/or a non-vehicle-end sensing device of the automatic driving vehicle.
2. The environmental information prediction method of an autonomous vehicle according to claim 1, wherein the environmental target includes a predicted trajectory point of the autonomous vehicle within a preset pre-aiming time, and the positional relationship includes a longitudinal distance and an orientation angle of the predicted trajectory point with respect to the autonomous vehicle;
The method further comprises the steps of:
acquiring the vehicle speed and yaw angle of the autonomous vehicle according to the own-vehicle motion system of the autonomous vehicle;
acquiring a self-vehicle running radius of the self-driving vehicle according to the vehicle speed and the yaw angle;
acquiring a vehicle running track of the automatic driving vehicle according to the self-vehicle running radius;
and according to the vehicle running track, acquiring the longitudinal distance and the orientation angle of the predicted track points in front of and behind the automatic driving vehicle within the front-rear pre-aiming time of the vehicle running track, and taking them as the sensing measurement result of the predicted track points.
3. The environmental information prediction method of an autonomous vehicle according to claim 2, wherein the step of acquiring an autonomous running radius of the autonomous vehicle based on the vehicle speed and the yaw angle includes:
and based on the circular motion assumption of the automatic driving vehicle, acquiring the self-driving radius at each moment according to the vehicle speed and the yaw angle.
4. The environmental information prediction method of an autonomous vehicle according to claim 1, wherein the environmental target includes a road attribute, and the positional relationship includes a longitudinal distance and an orientation angle of the road attribute with respect to the autonomous vehicle;
The method further comprises the steps of:
acquiring perception data of the road attribute according to the vehicle end perception device of the automatic driving vehicle;
discretizing the perception data according to a plurality of discrete points of the road attribute;
and according to the discrete points, acquiring longitudinal distances and orientation angles of the discrete points relative to the automatic driving vehicle as perception measurement results of the road attribute.
5. The method of predicting environmental information of an autonomous vehicle of claim 1, wherein the environmental target comprises a traffic participant, and the positional relationship comprises a longitudinal distance and an orientation angle of the traffic participant relative to the autonomous vehicle;
the method further comprises the steps of:
acquiring perception information of the traffic participant according to the vehicle end perception device of the automatic driving vehicle;
according to the perception information of the traffic participants, obtaining the displacement of the traffic participants in two frames of perception information of adjacent sampling time;
and according to the current position and the displacement of the automatic driving vehicle, acquiring the longitudinal distance and the orientation angle of the traffic participant relative to the automatic driving vehicle as a perception measurement result of the traffic participant.
6. The method for predicting environmental information of an autonomous vehicle of claim 5, wherein said traffic participant comprises a non-lane-changing vehicle;
the method further comprises the steps of:
acquiring the movement speed of surrounding vehicles of the automatic driving vehicle and the included angle of each surrounding vehicle's orientation between the current moment and the previous moment;
and taking the surrounding vehicles with the movement speed greater than the preset speed and the included angle smaller than the preset included angle as the non-lane-changing vehicles.
7. The method according to claim 6, wherein the step of updating the predicted value based on a Kalman filtering equation according to the perceived measurement result of the current moment obtained by the autonomous vehicle and the predicted value includes:
according to a plurality of direction angles of the non-lane-changing vehicle, acquiring the mean value and variance of the direction angles;
according to the average value and a preset average value, determining an effective non-lane-changing vehicle;
and updating the predicted value according to the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle.
8. The method according to claim 7, characterized in that the step of updating the predicted value according to the longitudinal distance and the heading angle of the effective non-lane-changing vehicle includes:
Taking the variance of the orientation angle of the non-lane-changing vehicle as the noise updated by the Kalman filtering equation;
and updating the predicted value according to the noise, the longitudinal distance and the orientation angle of the effective non-lane-changing vehicle.
9. The environmental information prediction method of an autonomous vehicle according to claim 1, wherein the environmental target includes a lane path, and the positional relationship includes a longitudinal distance and an orientation angle of the lane path from the autonomous vehicle;
the method further comprises the steps of:
acquiring a driving path of the automatic driving vehicle according to non-vehicle-end sensing equipment of the automatic driving vehicle;
according to the current position of the automatic driving vehicle and the driving path, lane path discrete points of a lane where the automatic driving vehicle is located and surrounding lanes are obtained;
and taking the longitudinal distance and the orientation angle of the lane path discrete point relative to the automatic driving vehicle as a perception measurement result of the lane path.
10. The method for predicting environmental information of an autonomous vehicle according to claim 9, wherein,
the step of acquiring the lane path discrete points of the lane where the autonomous vehicle is currently located and the surrounding lanes according to the current position of the autonomous vehicle and the travel path includes:
When the roads are merged, acquiring lane path discrete points at the rear of the automatic driving vehicle according to the current position;
the step of "taking the longitudinal distance and the heading angle of the lane path discrete point with respect to the autonomous vehicle as the perceived measurement of the lane path" includes:
and taking the longitudinal distance and the orientation angle of the rear lane path discrete point relative to the automatic driving vehicle as a perception measurement result of the lane path.
CN202310311949.2A 2023-03-27 2023-03-27 Method and device for predicting and controlling environmental information of automatic driving vehicle and vehicle Pending CN116373902A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310311949.2A CN116373902A (en) 2023-03-27 2023-03-27 Method and device for predicting and controlling environmental information of automatic driving vehicle and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310311949.2A CN116373902A (en) 2023-03-27 2023-03-27 Method and device for predicting and controlling environmental information of automatic driving vehicle and vehicle

Publications (1)

Publication Number Publication Date
CN116373902A true CN116373902A (en) 2023-07-04

Family

ID=86974387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310311949.2A Pending CN116373902A (en) 2023-03-27 2023-03-27 Method and device for predicting and controlling environmental information of automatic driving vehicle and vehicle

Country Status (1)

Country Link
CN (1) CN116373902A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116872926A (en) * 2023-08-16 2023-10-13 北京斯年智驾科技有限公司 Automatic driving lane keeping method, system, device and storage medium



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination